I think it goes beyond techno-primitivism. IDEs and LSPs have always been deterministic: their output is always what you expect. They don't "make" non-deterministic decisions for you.
LLMs take a large part of the 'thinking' step out of the equation, but they don't necessarily 'think' for you, since that's not how LLMs work. The problem is, they encourage people to stop thinking and "let the magic black box handle it". The code they produce, and how they understand the context and the systemic interactions you want to happen, are frequently wrong, yet presented as a solution. I'm talking about 80% of what I've tried; and trust me, I have tried. If you've ever worked on any repository that's expected to be maintained for longer than two days, and tried to use LLMs to "boost productivity", you'll know what I mean.
My reasons for being against LLMs go beyond a moral imperative. From a pragmatic standpoint, they're affecting my work. I've had to review so many PRs of obviously LLM-generated code that doesn't even work properly. When I talk to the owners of those PRs, they assure me they heavily modified the generated code, which is even worse to think about, since what's supposed to be a time saver ends up wasting everybody's time.
Yeah, I don't care about originality. But you can't put code you don't understand into a complex system without causing problems that waste time in the long run; LLMs can produce bad designs that look plausible. Throwaway scripts, "how do I do X" questions, and code review are where LLMs can provide value. But when people have the option to be lazy, there will be laziness.
@@pik910 Not sure where you're getting the originality point from, since personally I don't care for it either. It's why I think Go is such a good language: each type of task is usually written in a very specific manner, which is good in my book.
I completely agree on throwaway scripts. Recently, a coworker unfamiliar with Python needed to cobble something together in less than an hour, which he managed to do with the help of an LLM. Keep in mind he's an experienced developer, and Python reads like pseudo-code, so it's not like someone less experienced could've achieved it in the same time. That script will never be used again nor maintained, so it's the perfect use case.
Code reviews, however, I disagree with, since it's a slippery slope. I've lost count of how many times non-senior devs have approved a review based on a senior's approval without doing their due diligence. People will default to trusting an LLM blindly, which is a problem considering its tendency to hallucinate.
In your experience, has LLM-generated code been worse or better than that of a fresh grad first writing industrial code? What domain and language are you in?
@@VivekHaldar My domain is currently broad. Our main product for stakeholders is written in hundreds of thousands of lines of embedded C code, but most of our time is spent on a myriad of tools for our own team and other development teams. These can be web-based tools using a common tech stack, C++ applications for running simulated environments, or C#/Rust command line utilities.
Regarding how LLMs fare against fresh grads, it's not a simple comparison. Fresh grads with little coding experience usually commit smaller chunks of code, with their PRs growing larger as they familiarize themselves with our toolset and workflows. The amount of code an LLM can produce is astounding, but it's usually quite bad, and it gets worse the bigger the change is. Since fresh grads won't make big PRs, it's hard to compare in that department.
However, for very small chunks (a couple of lines for an ultra-specific purpose), yes, an LLM might produce better code than a literal fresh grad, but that doesn't hold up after a couple of months. You can rely on a person being self-sufficient and learning over time, which can't be said of LLMs.
LLMs have been a godsend for getting me started with languages I am not familiar with.
They are also really, really good for framework migrations and simple but powerful scripts.
But for complex applications, there is still a ways to go. Wanna see what the next generation is able to do.
Pro tip: giving the documentation to LLMs improves performance, and can even get you some idiomatic code. Often what you want is for it to forage through the documentation, which it is pretty good at, probably at a generally superhuman level. Asking it for improvement ideas, or whether it sees any bugs, can be nice too. I also like to discuss ideas with LLMs when I'm unsure, instead of just writing them down. It gives a baseline level of quality: it usually catches when you're doing something obviously wrong, and it's good at the creativity part of problem solving, or just knowing lots of approaches.
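The tip above can be sketched in code. This is a minimal, hypothetical example of grounding a model in the docs: it only builds the chat messages (in the common role/content shape most LLM APIs use) by prepending the documentation to the prompt; the actual client call is omitted since it depends on your provider, and the `retry` doc line is made up for illustration.

```python
def build_doc_grounded_prompt(docs: str, question: str) -> list[dict]:
    """Return chat messages that ground the model in the supplied docs."""
    system = (
        "You are a coding assistant. Answer using ONLY the documentation "
        "below; say so explicitly if it does not cover the question.\n\n"
        "--- DOCUMENTATION ---\n" + docs
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Example: pass the docs for a (hypothetical) helper along with the question.
messages = build_doc_grounded_prompt(
    docs="retry(fn, attempts=3): call fn, retrying on exception.",
    question="How do I retry a flaky call five times?",
)
```

The point is that the model forages through the text you supply instead of guessing from training data, which is where the idiomatic-code gains come from.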
@@pik910 With something like Perplexity it can literally look up the documentation. They need to start attaching more tools to LLMs. A next-generation LLM with access to a browser, IDE, graphing calculator, etc. will be quite helpful.
What a piece of art (the video)!
When I started programming I already had auto-completion, because it boosts your performance dramatically. I'm from the generation of "auto-completion" programmers, and I bet there will be a generation of "LLM prompters". LLMs feel like cheating to me, the same way VSCode felt like cheating to the authors of the "no vscode" article.
So, I totally support your opinion. Nice comparisons, supporting articles, and the art about lines. Lovely video!
You got to the heart of it. This whole "I'm an artist" thing is nothing more than an ego trip. Art requires tools. Pen and paper are tools. The computer is a tool. So are LLMs.
So is generative AI a tool????
@@atiedebee1020 yes, it is, and a good tool
@@atiedebee1020 Not the most accessible one (in its current avatar) but yes
Progress isn't forever. Yes tech has been becoming more abstract... to some degree. There's also movements focused on going back on some abstraction.
Also, there's no such thing as optional variety; every aspect of a product and its quality creates a specific experience and connection with the audience. Anything artisan will generally be better because there's more intentionality.
AI lacks intentionality, and as people are no longer able to tell what is AI and what isn't, no one will know what was intentional and they'll become cynical.
AI Rorschach test: Could I/you make art using only llms?
There are artists already making art with image generation. They have an artistic vision, they use the model to realize it.
Bravo. Codegen turns programming into an art for millions of people. Compare that to the small number of unassisted coding artists there are today.