Hot take: If LLMs make you 10x faster at coding, that says more about your coding ability than it does about how good LLMs are. The tweet: x.com/neetcode1/status/1814919711437508899 This video may change your mind about the AI hype in general: th-cam.com/video/uB9yZenVLzg/w-d-xo.html
Skill issue. You're probably shit at prompting. It's the equivalent of making a StackOverflow post where people need to ask 50 follow-ups just to get the context of your issue.
AI can seemingly dominate the world, but not from an engineering perspective; from a mathematical perspective, AI can really only code medium-difficulty tasks. These companies pour billions into transformer models, while the best compact brilliance on the other side was never encouraged by those communities at any real scale.
Years ago I read somewhere on the internet that the field was broken, totally broken. But that's ok, I am from the medical field, I brought my stethoscope. Also duct tape, and WD-40 if that can help.
@@agnescroteau8960 Medicine won't lose work, but it will be dying of overwork though. Especially if you have "universal healthcare," which will take away your negotiating power and income.
You nailed it. Non-technical people don't understand that the last 10% takes 90% of the time… and the problem with developing with LLMs is that they give you the impression you are almost done, when you still have a long way to go. And if you started with an LLM and don't know what you are doing… good luck with that last 10% 😂
Like me implementing a web server for AWS in basically a single day, then spending the rest of the week at least figuring out the deployment and configuration that was missing. Gotta love how "helpful" AWS errors can be.
I honestly love when ChatGPT makes throwaway Python scripts for me when I feel lazy; but man, maintaining that code going forward? I'd have to rewrite most of it!
Q: Why would people lie/exaggerate like this? A: To generate traffic on their channel/feed/blog via hype, and/or because they have financial interests that benefit from hyping up the technology.
@@Icedanon From this equation, 10 * 0 (positivity) = 0 (productivity), which is true since 0 = 0, LHS = RHS. So let's take 0 = x, so 10 * x (positivity) = x (productivity). Since x productivity is 0, we divide both sides by 10 and get x (positivity) = x (productivity), since dividing 0 by any number equals 0 unless the denominator is 0. Going further, x/x = productivity/positivity, which is both 1 and infinity.

1. Taking x/x as 1: we conclude that productivity and positivity are inversely proportional; the more productivity u have, the less positivity u get, and vice versa.

2. Taking x/x as infinity: we can also conclude that productivity/positivity = infinity, or productivity = infinity/positivity. If your productivity was 2 units, then your positivity would be infinity/2, which is a very large number, so we can take it as infinite. Therefore no matter what your positivity is, your productivity is actually infinite; if your positivity is infinity, then your productivity would actually suffer.

From the whole answer, we conclude two things: the more productivity u have, the less positivity u get, and vice versa; and if your positivity is less than your productivity, then your productivity is actually infinite, while if your positivity is infinity, then your productivity would actually suffer. So the moral of the story: keep your positivity low, be depressed, take some pills, do drugs, etc. to further lower your positivity and increase productivity. MATH SUCKS, SINCE INFINITY IS NOT A FRICKING NUMBER, IT'S UNDEFINED IN MATHS, SO WHATEVER I COOKED IS INVALID.
The reason why they're lying is money. I'm a senior engineer and I've been in the industry for 25-plus years; LLMs just waste time for anything that isn't the most trivial app on the planet. The AI hype is based on a bunch of VCs misrepresenting the difficulty of programming/engineering for the sake of selling a product.

I feel like the Twitter guy doesn't understand what 10x even means. If you can implement this stuff 10 times as fast, then you can literally work for one day, take the rest of the week off, and no one will notice a difference. Naturally, I don't think he's a programmer in the first place, which is probably why he sees a 10x improvement. This is just a big old case of the Dunning-Kruger effect.

The funniest part of all of this is that it just doesn't make sense logically for these LLMs to ever become proficient at writing code. You need enough data to train the LLM on every use case, but there are plenty of use cases with maybe only one or two examples in open source. These AIs have no ability to create anything new, so there will always be a distribution where the majority of problems simply can't be solved by the LLM because it doesn't have enough data to understand them. At the same time, they'll become really proficient at writing todo apps, because there are thousands of those.
Sadly, most non-tech employers have now started to underestimate engineers, just like my former boss, who said my $500 salary as a fullstack dev is enough because AI can help. Hahaha.
Like I've mentioned elsewhere, it also really depends on the language: the more popular the language, the easier it is, or perhaps the more options you get; the less popular the language... well, good luck getting AI to help you write in older languages or in custom ones. It's only sort of helpful for problem solving anyway because, like you said, it's based on already-existing examples, which might even be for the wrong problem, leading you down rabbit holes if you don't realize it. The biggest problem with AI, though, is the cost of maintenance, from both a technical and an environmental viewpoint. It's like how some NFTs are supposed to "solve" climate change; good luck getting "green" AI.
The productivity gains vary as massively as workflows do, say for a specific repo maintainer versus a full-stack engineer. I don't think LLMs can help much if you've been doing the same thing for 10+ years.
I've been doing this stuff for even longer. There have always been people gaslighting us about the difficulty of producing quality software. This is just the same people latching onto a new tool. Before, it was Agile, then ISO 9001, and on and on.
@@yashghatti That's just not true. You have Webflow and Elementor, both widely used and battle-tested; they were also marketed as replacements for all developers, but they eventually found their own place and got accepted by everyone as good tools for some cases.
We've been through *several cycles* of this. It was CASE tools in the 1980s, and then UML in the 90s/early 2000s. Both were supposed to obviate the need for coding; you would just go from requirements to running program. The problem is, any symbols you use to express a spec specifically enough that it's executable by a computer are isomorphic to constructs in some programming language. They just might not be easily diffable, version-controllable, or viewable or editable except by specialized tools.
Nearly every TH-cam coding influencer whose entire business model is pretending to be an actual professional Software Engineer, while all of their projects are forks of somebody else's public GitHub project, has entered the chat.
Engagement farming is a real thing on Twitter, and that's what's been going on. People just post anything, and if your post contains the word "AI" and fear-mongers among the general public, it's sure to get reactions from left and right.
They only make a few bucks too, so it's really pathetic when actual humans do it. The bots are excused; they're raking in money by annoying others. That's genius-level hustling. The American way.
I'm absolutely sure that if I post anything, even if it contains the word "AI", it will get at most 4 views, because that's what every tweet I've ever posted since Twitter started has gotten. Unless you pay them or something, there's no way to get views or followers on that thing.
This is not how you use LLMs to aid coding. You use it to write small self-contained functions, regexps, throwaway scripts, prototypes and non-customer facing utility code for stuff like data visualisations etc. It's not for non-technical people, it's for technical people that want to get through non-critical or simple parts of a task quicker.
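For example, the kind of small self-contained function I'd hand off (a hypothetical sketch; the log format here is made up):

```python
import re

# Typical throwaway ask: "pull the timestamp, level, and message out of
# log lines like '2024-07-21 13:05:09 [ERROR] disk full'".
LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[(?P<level>\w+)\] (?P<msg>.*)$"
)

def parse_log_line(line: str):
    """Return the line's fields as a dict, or None if it doesn't match."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None
```

If the generated regex is off by a character, that's a 10-second fix; the point is the task is small enough to verify at a glance.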
Exactly. I've been using it heavily for some DevOps tasks. Python, bash: don't know them and don't care. I have enough developer knowledge to debug the output, but not to learn all the syntax and niche libs, frameworks, and language quirks.
They're still really useful for "dumb tasks". I can tell GPT-4o, "Look at this project I made; now I need X project with x and x, make it based on my other code," and it will make me a working CRUD in less than a minute. Sure, it might have some issues or be missing features, but it still saved me half an hour of coding if not more. I've done that a few times, and personally I find it pretty satisfying to be able to generate a basic CRUD with 5 working endpoints in a few seconds.
@@SoyGriff Very much so :) I love them for learning new spoken languages too; I doubt there's a better tool other than actually practicing with other people. They have many uses, but the message I was trying to reinforce was Neetcode's opinion that they're not as advanced coding-wise as they are made out to be. In your case, the CRUD part can be found basically anywhere, since so many people have already implemented it. For implementing specific business logic, their usefulness basically depends on your ability to modularize the problem. If you can break your problem into small enough chunks that you can ask ChatGPT how to implement them, you've already done a lot of the "programming" yourself. They're definitely useful in their own right.
@@Vancha112 The CRUD part can't be found easily because it's specific to my project, and yet it can generate it in seconds based on my instructions; it saves me a lot of time. I agree I'm doing most of the programming and just telling the AI to implement it, but that's the beauty of it. That's what AI is for. I only have to think and explain while it does all the heavy lifting. That's why my productivity has increased so much since I started using it. I'm building in a month what my old team would build in 6, and I'm alone.
Been coding for 20 years here. The point is, even if you don't "need help" with that part, the LLM will do the job faster than you can, thus your productivity is improved. In my opinion, if you are not figuring out how to include LLMs in your workflow, you are going to be left behind by those who do. Is it a 10x increase? For tasks the LLM can do, it's much more than a 10x increase!
@@betadevb I will never forget the incident in NYC near Central Park where someone yelled out "Vaporeon is here" and people jumped out of their vehicles to catch this Pokemon. IN NYC / CENTRAL PARK !!! th-cam.com/video/MLdWbwQJWI0/w-d-xo.html Vaporeon Central Park Stampede
They’re lying because they’re trying to get rid of competition. Propaganda, basically. Yes, I know this sounds crazy, but it worked on me. When AI was first released, there were millions of videos and articles floating around about how AI was going to replace humans, and me, who was at the time learning how to code, gave up on coding because AI scared me. I chose to go on a different path. I’m sure there are more people who gave up on coding because of AI propaganda. Fortunately, though, stopping learning how to code didn’t have a big impact on me since I was 13 at the time and even though I wasted almost two years not learning how to code, I’m back at it and will not give up no matter what 💪 You shouldn’t either. AI will not replace software engineers. Period.
NGL, I was gonna use AI as an excuse to finally quit capitalism and move to a mountain with my savings (not trolling, this was going to happen). But the only thing that ended up happening is that once again greed won in the industry, and with the layoffs a lot of us devs are being exploited af. We are trapped in the promise of a transition that will take decades, with CEOs who just want to keep cutting numbers on one side and the AI bubble on the other. In the meantime, tons of really good people cannot even find internships, because interviewers also fell for the AI bubble trap and are now asking freshly graduated kids to code Skynet on a blackboard in 15 minutes. The industry really sucks rn.
I think 90% of my job is figuring out how to solve the issue I have, 5-6% is bug fixing and testing what I added, and the rest is typing code. Even if I could magically have all the code in my head appear on my computer in a second, it would save me a couple of working hours per week. I think the people who create these tools don't actually understand what programmers need. If, for example, I could have an AI that quickly tests my code, then we could start talking; that would probably save me lots of time.
Yes! Automatic testing would be fantastic. If someone could train an AI to do *just that*, and nothing else, it would be amazing. In general I think AI tries to be too much. It would be more practical to have an AI that was really good at something very, very specific and worthless outside of that.
The biggest issue with these LLMs is that they lose context SO FAST. Three prompts in and you need to tell them again and again to keep in mind what you mentioned in the previous prompts. I was using ChatGPT and Copilot for the LeetCode problem "flight assignment", and I accidentally forgot to mention "flights in this coding question" in my 3rd or 4th prompt, and it started giving me airline flight info. Which is completely bonkers, because how could it think I am talking about airlines instead of the coding problem we were working on a few seconds ago!?
I find the more I know about a specific task the less useful an LLM is. When I’m new to something it’s a great place to start by talking to chatgpt or something.
I'm not a programmer but I've made a few JS sites and Python apps for fun, and one thing I learnt to do is to start new chats. Once you get too deep it starts going batshit. Granted this is all very basic level, so it probably wouldn't help on anything too big or technical anyway, but basically if you spend some time starting new chats and being very specific and detailed with your prompts it does help. With Claude I'll tell it I've updated the files in the project knowledge section and for it to refer to the newest version. There are ways of getting it to stay on track but it probably is a waste of time for an actual programmer.
Agree with what you're saying. I've been doing software for 10+ years, and I do think it has made my productivity go up like 10x, but the difference is that I know what I need, and I use ChatGPT-4o as a rubber duck, especially when making architecture decisions and weighing tradeoffs. I'll have a vague idea of, let's say, 3 different ways of building X product, so I just ask for pros/cons, describe my ideas, and so on, and it works. The thing I've noticed is that if I spend 2+ hours discussing/bouncing ideas with an LLM, it becomes stale really fast, forgets my previous input, and just hallucinates. But for initial technical document writing, or small stuff like basic components, it works VERY well.
This. I agree with this a million times over. I treat it like a rubber duck that has 130 IQ. At the end of the day it's *my* hand that is writing the code; the LLM just provides input and feedback. The claim made by the tweet OP is definitely exaggerated, but if you strip out the hyperbole and 'zoom out' a little, it's pretty realistic.
It's about pain vs complexity. Like he said, if it can handle snippets, it can handle big projects in chunks. That's how I use it. I edit more code than I write, but my jumping-off point is always an AI. It just physically writes code faster… I can do the thinking and editing, but it writes 500-1000 lines a minute.
A saying that's always valid: "Stupid people are the loudest." That's how I see all those Twitter "influencers/founders" with their takes on AI, LLMs, careers, etc. They need to get good themselves before talking. Wake me up when Primeagen agrees with their nonsense. Good take, Neetcode!
Except it's the opposite. Most of these takes against AI for dev productivity are from people who haven't progressed beyond senior engineer, including Primeagen.
Totally agree. That is the major difference between looking at it from a non-tech person's and a tech person's point of view. From a non-tech person's point of view, they are now able to create a "non-working, working-looking site" (lol), whereas before they would need a UI designer/engineer to create it for them, which cost money and meeting time. From a tech person's point of view, the LLM is just a snippet tool that means I don't need to go to StackOverflow; using it for more than that is just wasting time, as mentioned in your video. And the most hyped people going around talking shit are the non-tech people who work for a tech company and know nothing about systems, but think they do, so they start using these LLM tools thinking they can replace engineers. The worst part is that they use these tools to create so-called prototypes, then hand them to the engineers to make production-ready, and don't understand why that takes longer than the traditional way (*cough* CEOs/project managers *cough*).
Yes, the problem is that they can get close to the spec you give them, but it's not close *enough* and has to be rewritten. This has been frustrating for me many times where I tell the LLM to change one small detail and it goes round in circles before finally admitting something can't be done without starting from scratch. Huge waste of time in a lot of cases
That's part of you learning. If you learn what tools exist and what libraries can actually do, it should be able to help you code just fine. It's literally translating your prompt from English into code. You asking it to do something impossible is partly your fault.
@@wforbes87 100% As someone using it to write basic code, it's a godsend. I don't need to wait a day or submit a ticket or whatever just to talk to an engineer. These guys are vastly overestimating the amount of mundane work that goes on outside of FAANG lol; most coders or code jobs are not frontier.
That's been my experience as well. Even with snippets, it works best when I effectively solve the core logic first and just ask for code, or give it complete code and ask for suggestions. For anything beyond snippets, I've spent more time holding the LLM's hand to not go x or y route, and eventually just figure it out myself. LLMs are definitely far, far away from getting to the point where a lot of people praise them, like 10xing. They definitely are very handy tools, but have a lot of limitations.
@@strigoiu13 , I did not. You have the option to remove your data from being part of the training set. Then, for security purposes, I delete conversations as well. Even then, they have plenty of training examples from other sources.
@@strigoiu13 Also, if it actually learned from its users automatically, it would be saying slurs constantly within days of launch. We've seen that happen to chatbots like that repeatedly.
Ah yes, the old "AI made me a 10x engineer." It's always cap... chances are that the individuals claiming this are the ones pushing absolute dog water to production, because they don't actually understand the code or know how to debug. Personally, if I'm prompting an LLM to write something, then double-checking it, and if it's wrong prompting it again, repeating that whole process until it gets it right, it would have been faster to do it all myself in the first place.
I don't know, man. Personally, I find that it's much easier to edit 'half-way there' code than to write from scratch. It might take a while to get used to the peculiarities and bad habits of the LLM and figure out the best point to stop prompting and start coding by yourself, but once you figure it out, I do find that relying on AI makes me a lot more productive. Not 10x, but definitely at least 3x on a good day. (Although there are obviously also bad days where it's barely 1x.) I find that it's great at data visualization code, complicated refactorings, explaining an existing (not too large) project I'm trying to get started with, and basically speeding up any annoying, slightly complex, tedious process. And it really shines for quick, dirty projects in languages you're unfamiliar with (need to google how to init an array) but can read just fine once the code's there in front of you, since you can basically wing it as long as you've got an LLM to watch your back.
@@ReiyICN Oh boy, I'd never ever rely on AI for "complicated refactoring." That sounds strikingly similar to shooting yourself in the foot. To be fair, I've only found AI useful for common boilerplate you don't want to write, or, in the case of Copilot, when you're creating a structure; it is quite good at completing the structure, for example switch or else statements.
The issue for a lot of people is PROMPTING; you don't "prompt" LLMs. You don't have to find the correct prompts or keywords. You just talk to them as if they were a human being, a dumb one. It works really well in my experience. It's better to write 2 paragraphs explaining what you want than to try to make it work 10 times while only writing basic prompts and not providing the whole context.
Everybody nowadays is a "Founder" or "building X" with no technical background. A few years ago it was the hype ride with no-code tools, and now it's LLMs.
Important to keep in mind that a lot of the hype is either manufactured by folks that have invested a lot of money into the current AI boom or folks that have fallen for said marketing.
The only people who can fall for said marketing are those who haven't actually tried the product. The rest, like the guy writing this article, are CLEARLY stakeholders. I bet this guy bought some Anthropic stock beforehand, or is just a paid actor.
It is overhyped, but at the same time it does make my work much faster. It can't build entire systems or even big parts of a system, but it can work on small parts: writing simple functions, components, UI elements, etc. I mainly use it to speed up my work; instead of coding, it's mostly me checking over the generated code and fixing small things. Sometimes it's frustrating and gets things very wrong, but I usually just have to fix the prompt. Overall it's definitely sped up my workflow; maybe not 10x, but 2-3x is reasonable.
And that's enough for it to be a massive change. Now your company can make you do 3x the work instead of hiring one or two more people. They will absolutely do that. And AI will advance to the point where eventually you will not be needed. There are no arcane or unknown laws to coding libraries; they are all manmade and documented. The AI will get better.
You're thinking in zero-sum terms. The demand for code will simply increase... the reality is, most companies want to use a lot more software than they currently do, so they will simply create more applications and better tools for users.
The thing is, AI is like a fast worker that catapults to an answer quickly, so you have to steer it with the right kind of questions so its output isn't ambiguous. I had to code some features for a task component (add task, remove tasks with a cross button, add a due date with a calendar, etc.). I had its Figma file and gave Claude 3.5 all the details to remove ambiguity, and it made a surprisingly good boilerplate component, as I knew its training data would have something similar. For run-of-the-mill tasks it is a game changer, but for something requiring a spark of imagination (nil training data) it fails pretty badly.
There are around 10 gazillion implementations of "my first task list" on GitHub; of course it managed to do that. Now ask it to design an async cache that fits into the constraints of your existing application...
I like your style: calm, composed, and very genuine and non-toxic. You know you're seeing bullshit, yet you respond respectfully and give everyone the benefit of the doubt.
Man, I have been an artist for nearly the last three decades, and I feel exactly like you when I listen to other artists praising LLMs for art. I've tried many image generators, and they work great... for people who just want a random picture to jump out of them :D The more specific the need, the more problems with generating even a simple picture. You will just waste time trying to describe what you need and getting random pictures that are sometimes not even remotely connected to it. And that's just simple pictures. When it comes to 3D models, the generators are laughably simplistic. I see so many YT videos where people are AMAZED by the results, while showing something absurdly simple that still needs manual fixing. They can't even get good topology, and people keep talking about how they will replace us. More so, some people claim they already lost a job to a generator... HOW? What the hell were they doing? How simple a thing were they doing that they could be replaced by something so deeply flawed?

I recently started to learn a bit of coding for a simple proof-of-concept game I am making. I haven't even tried an LLM because I don't want to waste time. I'd rather ACTUALLY LEARN and understand how the code works instead of copy-pasting, then repeating it 1000 times because something isn't working and I won't know why, while the LLM tells me "Oh, I am sorry, let me fix it. Here's an improved solution!" and then spits out something wrong once again :D
The generator doesn't have to actually be good to replace people, see. All it has to do is be shiny enough for the people marketing it to convince people's upper management that it can replace them. Or be a convenient excuse to have mass layoffs and rehire at lower price or overseas.
@@tlilmiztli Hi fellow artist. Before the AI hype, fortunately, I had already switched to fullstack dev, and I think being artistic gives something different to what you build.
@@minhuang8848 It's a good analogy if you look at it this way: a microwave heats up food. You can heat up leftover pasta. You can heat up microwaveable food. It's alright and it'll fill the stomach, but it's not that great compared to the food you make in an oven or on a stove. LLMs will give you small working code snippets, but they won't solve your complicated application. They don't come up with novel ideas. In that sense it's a microwave: you give it a prompt and it gives you mediocre code in a short time. Making food yourself is like programming, while pushing a button on a microwave is just like prompting. I just don't see how LLMs are like CNC machines or 3D printers; if anything, they would be helpless and inconsistent CNC machine or 3D printer operators. I don't see them as tools in that sense, perhaps assistants at best.
I'm starting to think the bots are manufactured by Twitter. What benefit does anyone outside the company get from running bots that respond to posts like that? Not to mention the captcha when registering: I literally could not pass it myself after 3 tries of having to get 20/20 answers correct, to the point that I gave up. Maybe I'm stupid and AI can solve it better than me, I don't know; seems fishy. Probably 90% of the posts I see are AI.
I work a lot with writing quick utility tools and API integrations for enterprise tool marketplaces, and this is extremely useful for making hyper-specific private apps that help a team handle one tiny piece of straightforward automation, plus hooking together a couple of APIs and maybe a super quick interface. LLMs are really powerful for things like this and have probably made me 10x faster for certain easy but tedious tasks.
I have to say, I built an app that uses HTML5 canvas, vanilla JS, SQLite, and Heroku. I use the project knowledge base, which I update each session with my entire codebase, and I give it a directory map of the app. For each task I begin a new chat. I write an intro and a definition of today's task, WHICH I KEEP SMALL. One at a time. I have built something good, but it's still hard work. I am considering writing an ebook about my method, and then some tooling that makes writing code with Claude easier. But I do see your point: I'm not a coder, I can't solve complex things, and neither can the LLM. But I'm learning to code by doing it with the LLM, and the familiarity I have with my codebase (around 1 MB now and 20-ish files) is the real superpower. I'm learning fast. I don't think LLMs will do it all in 5 years; I think a broader scope of people will be able to do it all.
I think this is the expected behavior of non-technical people: they will be defensive and want to believe they can do anything a software developer/engineer can with the help of an LLM. It's just human nature.
It's not human nature, it's what they've been told by the people they're paying for the service. The error is blindly believing what the salesmen tell you
it's expected behavior of people with no common sense and a thought process of an elementary school kid on a good day.. which describes most of these parasites "working" in management
Sad part is you're all wrong… LLMs will create a revolution where non-technical founders CAN build a company, and one that will rival companies as large as Microsoft and bigger. 💎
I've recently had a very nice experience with Claude. The only downside is that the amount of time you can interact with it is limited, even on the Pro plan; every now and then it will tell you to wait a few hours to continue. But aside from that, I'm building an app I could not have done in the time I had without it. I'm an expert JS dev, but there are some things I don't understand at all, like audio engineering. I'm building a music-based app using JS, so I prompted Claude to teach me tonejs (not build the app) through a series of small lessons, building up from there until, through the lessons, I had a working prototype of what I'm after. Major game changer.
It's actually quite simple, man: non-technical people don't really understand the complexity of the application. It looks the same, so it must be the same! Edge cases?! What are those?
Thank you for calling this out. Adding to this, I heard another engineer recently call LLMs "fancy autocomplete." That's kind of what it feels like. It's amazing (but I suppose not surprising) that so many non-engineering folks are trying to tell engineers what LLMs are. The irony! Granted, there is complexity to LLMs and how they work, but I don't think most engineers saying that LLMs aren't "all that" is a matter of us trying to "save our jobs"; it's a matter of trying to tell the truth. It just feels like another example of a non-engineer telling us why our job isn't "hard." Well, that and a bunch of marketing nonsense by big tech trying to cash in on the next big thing.
I've realized that using AI for a small function, or even for an issue where I ask it to make what I want just to get "ideas" for another way to do it, has led me to waste a lot of time trying to get the right answer out of it instead of looking on StackOverflow, for example.
Agree with every word you said. I've been learning coding constantly over the past 2 years, and while I do use AI, it is a small part of what I overall do. And I'm still relatively a total beginner. I work with a few people who are way less technical than they think they are, and they believe that coding will be dead soon, and that they could do what I could do using AI, but it would take them a little more time. None of them have attempted anything more advanced than setting up a spreadsheet to ingest Google Calendar events.
I agree with the first tweet after trying to work on a project using Claude 3.5. It's true that it doesn't get complex stuff like your entire app, but if you constantly ask it questions about small parts, it gets those small parts done very fast. For example, my UI was very bad, so I took a screenshot of it, gave it that plus the code for the component, and told it to make the UI better, and it just did it in 1 try. Same with asking for specific small changes one at a time. You don't ask "write an app that does X"; you write "change this function to also do Y," and it does way better if you give it the minimal context that's actually necessary instead of the entire app.
The people that succeed in this industry are the ones that embrace change and figure out how to use new tools. I still know people that use oldschool vi in their coding, and never adopted IDEs.. or said git offered "nothing new". In reality.. these folks simply didn't want to do the work to learn new things.
I've been an engineer for 20 years, and I've been building a new SaaS product with Claude 3.5; my experience lets me ask the exact questions and give it the exact context I need to create what I want. So far it's helped me build Vue frontend components and a Node.js backend, helped me configure TypeScript and Vercel, and helped me build out authentication and the middleware. The Firebase integration wasn't smooth, but it helped. It helped me debug CORS issues and also build out the copy. I think the development process has been at least 5-8x faster.
LLMs and LMMs are currently effective for generating boilerplate code or providing insights into topics I'm unfamiliar with, without needing to sift through documentation.
I completely agree with the video. It makes sense that LLMs would be able to reproduce things that are easy for an experienced engineer, because those would exist in their training data. There's no reason to expect that LLMs can reason about the logic in the code they output, so the correctness of the output will be based on either the training data or complete coincidence. There may still be ways to use this technology to work smarter, not harder: writing documentation, suggesting names for functions, generating boilerplate, writing HTML snippets for UI components that don't require context (a submit button is pretty similar to any other submit button). Basically, things that are language-based or copy-paste and don't require logic. Maybe one day more intelligent AI models that combine logic and language will exist and be more capable of writing novel code, and anyone familiar with LLM workflows may have a head start. But these don't exist yet.
I'm an engineer in my fifties. I've used GPT-4o to help me control our test and measurement equipment from inside Excel. We already use semi-automated Excel templates to produce certification, and I am fairly handy with VBA in Excel. But what I am now doing with automation is something I would never do without an LLM. I barely have the time to do my job; I most certainly don't have the time to learn the APIs that GPT-4o 'knows'. So bear in mind the transformative nature of this new technology for those of us who use coding as just one tool in the box, and not as our main skill base.
@@ryan-skeldon You'd be shocked how many companies are reliant on 20 year old excel files that just do all the data collection. It works and it works well, esp if they have really old equipment that's difficult to interface with.
Thank you for bringing this up. I saw this post on Twitter! And now I see you here; it got suggested to me on YT. Since that post I have tried Claude for Lisp. It's "not bad," but it still doesn't understand human context no matter how much I hack at the prompt. I find these people talking absolute horseshit. I'd love to see them do it on video.
Well, I am an LLM engineer, and truth be told, as Microsoft too responded after the Copilot outrage, these are just tools to help professionals in their domains. People from non-programming or beginner-level programming backgrounds always get it wrong; they get baffled by the smallest code snippet. If you have no knowledge of the background of the task you are trying to solve, an LLM sure can waste your time; they are specifically designed to assist people with background. An LLM sure can save you time and help you as a tool, but it is not intended to replace an engineer, and 10x is an exaggerated number. That said, this is only the current state of the art; it does not mean LLMs won't be better in the future. As a personal example, I use LLMs all the time to prototype by creating interfaces, but I have a degree in Computer Science, and many times I have to rewrite prompts. Overall, I'd say it saves you 1.5x to 2x time at most, maybe more on rare occasions, but that cannot be generalized.
This. If you know what you're doing and looking at, and understand what LLMs are and are not, they are fantastic, fantastic tools for speed and productivity. They are insanely helpful for documentation. The code and documents aren't perfect, but I can iterate so fast that ultimately I've pushed better code, and faster.
IMO it's the exact opposite. If I work on an existing project, know the tech stack well, and duplication is minimal, the gain from an LLM is really small or even negative. Negative, because accepting and modifying a suggested solution often ends up time-wise worse than just doing it from scratch, and you can also let through bugs you'd never write yourself but won't notice in generated code. Also, sometimes I make up special cases for Copilot to prove itself, because it's kind of satisfying... lol. It's different when prototyping, working with an unknown tech stack, or where duplication is by design (huge CRUD services) or inherited as bad design, or for unit testing where simplicity and duplication are desired. And I love Copilot for PowerShell, exactly because I don't know it well; it's a 10x speed-up in some cases there, and 5% in my core activity.
Hell yeah! I just started learning coding for data science, and man, it's scary. All these LLMs coming out looking like they will take over jobs, all these companies laying off engineers, plus all these people showing off what they built using Claude and Cursor on Twitter without understanding a thing of what it's made of. It's a breath of fresh air having this perspective come from a seasoned and respected programmer. Thank you so much for saying this!
It's not 10x faster, but it is often around 1.2x to 2x, depending on the level of expertise you have with the programming that needs to be done. Doing stuff like:
- "I have this code {code paste here} and I want to test it for x, y, and z; write a unit test for it."
- "Rewrite this code to do the same thing but async, or for a different kind of object, etc."
- "Write an algorithm for this class {paste code} which should do {something boilerplate-y}."
- A lot of graph rendering done with python/matplotlib is IMO way faster as a first draft from an LLM, with certain things optimized afterwards, as opposed to reading documentation. If I last used matplotlib 6 months ago to plot a scatter plot with color-coded disks, I won't remember that the cmap param for the scatter function is called cmap, for example (see the sketch below).
- Porting code between languages (yes, it still makes sense to read and test it).
The list isn't really exhaustive.
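A minimal sketch of the kind of matplotlib first draft I mean (the data here is made up):

```python
import numpy as np
import matplotlib.pyplot as plt

# Scatter plot with color-coded disks; 'cmap' is the parameter name
# I never remember until the draft is in front of me.
x, y = np.random.rand(2, 100)
values = np.random.rand(100)

plt.scatter(x, y, c=values, cmap="viridis", s=60)
plt.colorbar(label="value")
plt.show()
```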
Agree on all of these, especially porting code. I'm very familiar with C and Python, but my Go is very rusty; still, I can have it convert entire parsing pipelines from Python into Go with minimal issue. It's a godsend.
Bro, I kid you not, I thought the same as you, but recently I have been getting so frustrated with it not being able to complete even these simple tasks optimally.
ChatGPT made my work slower yesterday. I tried to use Python to fill product descriptions in a .csv file using the ChatGPT API, but the code it gave errored, and it couldn't find and fix the problem. I had to read the documentation for the library I was using and found out my .csv file was separated by semicolons, not commas, which has to be explicitly configured in Python's csv tooling. I would rate that kind of task as easy, yet the LLM failed.
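The whole fix was one argument (a minimal sketch; "products.csv" is a stand-in filename):

```python
import csv

# The detail the LLM missed: the file is semicolon-separated,
# so the reader needs an explicit delimiter.
with open("products.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter=";")
    for row in reader:
        print(row)
```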
Thanks for this. Honestly, I was starting to think I'm either an idiot or taking crazy pills. I'm not a coder/developer; I'm a research scientist. I've been trying to use AI to help me with some literature study. Mind you, this is what you do at the outset of a project to get an overview of what's been done before, so basically it's before all the actually difficult and frustrating stuff happens. The sad truth is, I couldn't even get AI to meaningfully assist me with that, let alone any actual scientific research. In the end I spent more time getting the AI to do work for me than it would have taken to just do it myself. So whenever anyone brings up how amazing AI has been for their job, I'm baffled. I have slowly come to the conclusion that all these people do all day is write emails and make PPT presentations about corporate stuff. I just don't see any use case for AI where it does serious intellectual work.
I did commit 3 PRs last week that were coded entirely with an LLM: describe the problem and provide similar sample code, review the solution, maybe go a couple rounds with the LLM iterating on it, request tests, put everything in the repo, run the tests, and feed errors into the LLM until the code is fixed. I am the person who would have coded this anyway, so I have the needed technical skills; the idea of a non-technical person doing this today (or soon) is risible. However, I did get a huge improvement: days of work condensed into a day. Also, the idea that engineers spend most of their time on "hard" problems is strange, tbh. I spend most of my time finding existing solutions to non-novel issues. Maybe we work on very different problems, idk. Have you considered that maybe people are not lying but are seeing different time-wasters disappear overnight due to LLMs?
@@gershommaes902 A management script I wrote for a manager who had a last-minute question about data (it took 30 minutes between create, test, iterate, submit); a Django query to retrieve the roots of the subforest that remains when you apply RBAC to a forest in the db (mind you, minimizing data access and avoiding unnecessary fetches); and a pair of mixin classes to decorate models and querysets so they emit a signal any time the underlying data changes in the db, plus a handler to track that on a separate model. None of these really worked out of the box or were perfect, but I had a good sense of what I wanted and of the test cases (which I generated via Claude itself), and I iterated several times over requirements and even over design options (I tried several before settling on the mixins). I got working results in a fraction of the time and with more coverage than I would have otherwise. This is a revolution, and it's only going to get better. I'm waiting for better Claude-IDE integration and a more agentic workflow. Also, live testing on a dev or stg environment is a time drain I wish I could automate soon, with some sort of bot that reads the PR and runs some "manual" tests on a local version of the whole site.
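Something in this vein for the model-mixin half (a generic sketch of the pattern, not the exact code; all names are made up):

```python
from django.db import models
from django.dispatch import Signal

# Hypothetical signal fired whenever a tracked model's row changes.
data_changed = Signal()

class ChangeTrackingMixin(models.Model):
    """Emit data_changed on every save/delete of the underlying row."""

    class Meta:
        abstract = True

    def save(self, *args, **kwargs):
        super().save(*args, **kwargs)
        data_changed.send(sender=self.__class__, instance=self)

    def delete(self, *args, **kwargs):
        data_changed.send(sender=self.__class__, instance=self)
        super().delete(*args, **kwargs)
```

A receiver hooked up to data_changed would then write the audit rows to the separate tracking model.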
You're absolutely right. For small things, I breeze through. When I was trying to build multi-state logic, not only did it waste my time, it literally ruined the code. When you try to guide it, it will look like it's agreeing with you, then literally disconnect parts of the code it was supposed to be building. What I've been working on is not mission-critical; it was essentially a test. But there is clearly a limitation, and we need to figure out how to integrate these tools while understanding how they are limited.
People tend to forget that in essence LLMs are just fancy-shmancy search engines which translate prompt to output in one flyover. As long as you stay within the range of whatever prompt->something translations they were trained on, it can work pretty well. When you leave that area, they break down horribly.
Actually, I agree with the OG post: it does make you considerably faster, and it will only get better. When GPT-4 came out, I was writing a PHP API and designing an SQL database. I asked myself: assuming all I have is GPT-4, can I complete this task? It took me a little over 4 hours to do the whole thing. Yes, if I knew exactly what I wanted I would've coded it myself way faster, but this idea of brainstorming the framework and steps with LLMs, creating a plan, executing the tasks with ample context, attaching the parts together, and debugging actually worked. I ended up with very functional code at the end of the day using only natural language. This is a new direction in application development, one that involves LLMs throughout the whole development journey. Especially since the first post said "technical founder," which implies you need a pretty varied, wide-ranging stack where you're not very familiar with some technologies but need to work with them. 10x seems like a lot, but as I've seen in many cases, it's actually true: what takes 20 minutes of tweaking and coding becomes a 1-minute prompt and a 1-minute Ctrl+C, Ctrl+V.
Totally agree. It cannot develop medium or hard projects. The way I use LLMs is to first architect the project and break it down into smaller, manageable chunks. Once that's done, I ask the model to code those pieces, with specific interfaces. With current capability, LLMs cannot replace developers.
I think Claude really helps speed things up in a few ways. It helps as another pair of eyes for bugfixes. It helps when you have no idea how to even get started in a domain. It's really good at variable and function naming. And it can type faster than me, so I can often tell it exactly what I want a function to do, and it will be done about twice as fast as I could write it. Claude is not going to write your app, but it is a pretty good copilot.
@GoodByeSkyHarborLive Yes, it's better than GPT. GPT is still useful, but Claude seems much more with it and able to correct its mistakes, whereas GPT gets things wrong a lot more and gets stuck. For example, Claude will start suggesting debugging techniques when you keep getting the same error. It will even ask you to share other classes or methods. It seems to think creatively about the problem. GPT just gets into a fail loop and can't get out.
Your bugs must be extremely trivial. IME, a bugfix where "you have no idea how to even get started" means you start with a bug in a MLoC codebase (with no idea which part of the code is even called without spending hours), only to discover the bug is caused by a call to an external site along the way that returns a warning code that isn't documented anywhere other than in the source of a DLL written in 2010. (And by IME I mean what happened this morning; at least it wasn't the evening.)
I completely agree with the main point of the video, with one caveat. I have seen people polarizing pretty fast on this topic, between people thinking that LLMs can _already_ substitute for junior engineers and people thinking they will never be an issue for their jobs. You are perfectly right: we can observe that camp A is wrong. But I am as sure that camp B is wrong too. Even believing your claim that "in the next 5 years LLMs will not be able to substitute for a JE," 5 years is _very little_ time. I have 30 years of work in front of me, years that 3 years ago I thought I would spend coding. Whether this revolution happens today or in 5 years, my choices are pretty much the same: I have to adapt to the change fast. And honestly, I do not have the same confidence you have in the 5-year claim. Today it looks like a far-off target, but considering where it was last year, and the continuous tech revolutions of the past two years, I would not rule out that next year an LLM will be able to code neetcode from scratch. Sure, I would be surprised. But I have been surprised many times by the speed of LLM evolution.
I agree. As someone who loves LLMs and has been using them for my work as a junior dev, they save time versus StackOverflow and googling syntax, boilerplate, and code snippets. They have saved me from bugging my senior engineers plenty of times as well. But I would be amazed if in 5 years things improve significantly, let alone replace a whole dev team. Things already look to have some level of diminishing returns, so if we get EVEN a 2x gain in "effectiveness" within the coming years and it can solve medium-complexity tasks, I would be thrilled.
Man, I've never seen content from you before. The algo brought it up and I let it play in the background. You are so based, man; I really feel you on this topic. I see it the exact same way: all this buzz comes from people swimming in the hype who don't get that LLMs aren't doing the work and have to be seen as a tool. Real innovations come from creativity, which comes from intelligence. People may build their next [insert software innovation here] but will not break through the barrier of actually covering all the other areas that come with it, because they think Claude will do it for them. But there is a positive side to the hype: among these masses will be some 5% of people who actually get inspired and become developers, because the tools helped them discover their talents.
That is absolutely true! I am tired of having to explain this to people over and over again just because some people keep over-exaggerating what current LLMs can actually do.
I've worked with high-performing juniors who couldn't build that, and in the real world seniority has a lot more to do with your ability to communicate, organize, and lead projects than it does with pure coding ability. Keep at it!
I'm not a coder, but I work with a lot of scripting and IaC (which I guess makes me a very junior coder in a way). No LLM has been able to whip me up a decent script that I don't have to spend the whole day cleaning up. Best results so far have been to request the code in parts, and piece them together myself afterwards. I think you're right, 5 years and it still won't be able to do what a human can do. But it will eliminate basically all low level data entry/data manipulation jobs.
The biggest win I've had with AI was when I was working on a feature to add telemetry to our software to track which reports our clients are using. All of our reports were defined in one bigass 3000+ line file. I needed to add a string to each report with an English version of its name, because the actual name would get translated if you switched to French, for example, and I needed to make sure I always sent the same name for each report when sending out the report-click event. I dreaded the literal hours of mind-numbing copy-pasting for hundreds of reports, but instead I just pasted the whole file into ChatGPT and got it all done in less than 10 minutes. Now, could I have also done the same with some scripting? Yeah. But it wouldn't have been nearly as fast to develop the script, test it, then handle all the inevitable edge cases. And it was way easier to just explain in English that I wanted this very simple thing done.
And the basic reason why an LLM can't code up something as relatively complex as the neetcode site is that THEY DON'T UNDERSTAND, THEY REGURGITATE, and more compute or more data (which they seem to have run out of) can't fix that. Until the AI system can somehow reason about what an app like that might need, and then work on it, it won't work. That would require a complete change in architecture; LLMs won't replace even half-decent junior devs. As they are now, it's just a glorified autocorrect: helpful for very simple stuff that's been replicated a million times, but it can't do more than that.
To say LLMs don't understand is an oversimplification of a model family that I don't think you quite understand yourself. You would be surprised with the level of intelligence at which LLMs operate.
You are wrong, and you don't understand how they work. LLMs can complete unique tasks; that alone should tell you it's not regurgitation. Look into Geoffrey Hinton.
@@robotron26 Actually, they can complete tasks that fit a template they're given, based off their large corpus of data. See the ARC test: they actually can't solve unique tasks. If they do solve one, it's very likely there's an almost identical, complete template they're solving from.
@@jpfdjsldfji No, you are completely wrong. LLMs are not intelligent, because they just predict the next word. If you indeed understand what you're writing, then you're not really PREDICTING anything, are you?
I saw a CEO of some company bragging that AI created a guest check-in app for some event he was hosting. It was basically a to-do list. Add the person's name and check them off when they arrive. Everyone in the comments was gushing about AI. And tbf, I'm not sure how many of the commenters are actually real and not AI bots because that's where we are on social media these days, but it was still ridiculous. The only cool thing about it was the app he used to prompt the AI also ran the code in a sandbox so you could just prompt and use whatever it created immediately. But that doesn't make up for the fact that anything beyond the most basic of apps is impossible to build with AI.
@@Dom-zy1qy Maybe :) I tried making a program that could generate graphical representations of trees some time ago, but failed because I thought it was too complicated. But now I'm curious again; maybe I should take another shot ^^
I'm 50/50 on this take. I don't believe everyone is lying; I've used LLMs to help me solve complex problems despite their limitations, and I personally love them as a part of my workflow. But what I will say is that what you get from them highly depends on your skill level. The only reason I'm able to get them to help with complex coding tasks is that: 1. I narrow the scope of what I want them to solve, and 2. I provide really detailed, long prompts to get them to do what I need. Because I'm used to building software, I know the specific things I want them to implement when I work with them. The leap in productivity comes from knowing how to iterate on what you're given. I've been building software for 10 years, so it comes naturally to me to know what to look for. If you aren't a coder, sure, you might be able to make some progress where you wouldn't have before, but what you're able to accomplish with LLMs will always reflect the skill level of the prompter. Even as they improve, it's up to us to figure out how to check them when they're incorrect and get meaningful responses from them. There will be times when they slow you down just because they're not perfect, but I've found that on the whole I'm more productive with them, because the time they save when the output is good can amount to many hours.
You are correct that LLMs have difficulty with more complex projects, but the whole idea of good clean code in the first place is to separate your complex architecture into simple snippets of code that interoperate but run independently of each other. This is basically what functions are: they don't need to know each other's internals. And LLMs can definitely help you write simple functions quicker than before. If you are an engineer at heart, you won't notice much of a difference in speed, but if you are an architect at heart, you suddenly have a bricklayer at your service helping you build cathedrals one brick at a time. Engineers, photographers, novelists, and artists don't seem to grasp that it's not about the skill behind the individual pieces (humans are way better) but about the composition of the whole (80% of the quality at 10x the speed). It's perhaps easier to see if you look outside your own profession, where you aren't hindered by your own standards but merely judge the outcome. What is 10x more efficient: hiring a photographer, or generating a couple of photos from your favorite AI tool?
I can understand where the tweet is coming from, that when you are first starting out on a project where you don't know the technologies well then LLMs can make you feel 10x. After a day or two, that's gone. Anyway, I think you're right. It's just that people like that do a tiny bit of work and go WOW I SHOULD POST THIS, THIS IS AMAZING
I don't think he's lying; his experience mirrors mine. You don't ask the LLM to design your app for you. There are a few ways in which they help. 1. When you're trying to do something you're unfamiliar with, ask for guidelines on the task and give it as much context as possible. This helps you get up to speed quicker with relevant information; you can then either ask follow-up questions or Google specific parts you need more clarity on. 2. They automate grunt work: stuff that's not complex but still takes a lot of effort, pattern-matching stuff, like converting SQL to query-builder or ORM code and vice versa (see the sketch below). 3. They can explain stuff that's hard to Google. If you give one a regular expression, it can tell you exactly what it does and break it down into parts for you, so you can edit it the way you need to. Explaining complex bash commands works well too; you can't easily Google those, but an LLM can explain them very well.
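For point 2, the kind of SQL-to-ORM translation I mean, shown here with SQLAlchemy (a rough sketch; the table and column names are made up):

```python
# SQL I'd hand the LLM:
#   SELECT name, email FROM users
#   WHERE active = true AND created_at > '2024-01-01'
#   ORDER BY created_at DESC;

from sqlalchemy import select
from myapp.models import User  # hypothetical declarative model

stmt = (
    select(User.name, User.email)
    .where(User.active.is_(True), User.created_at > "2024-01-01")
    .order_by(User.created_at.desc())
)
```

Mechanical to write, tedious by hand, and trivial to verify, which is exactly the sweet spot.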
Dude I'm another software engineer (I'm technically a security engineer with a SE background) and I felt THE EXACT SAME WAY you described - any time there is a problem that is more complex than "show me a basic example of ...", LLMs completely fail and waste your time. I have spent 45 minutes to an hour trying to get something from an LLM that took me 5-10 minutes to do after simply googling or looking at StackOverflow. I had the same feelings when ChatGPT first got big and I still echo the same sentiment now. In fact, as a security engineer, I've seen LLMs introduce critical vulnerabilities in code silently...
ok bro, if you think AI itself is a pump-and-dump scheme, then you're clearly biased for a reason. AI helps a lot. you're missing out if you don't use it
I honestly can't wait for the AI bubble to burst. It seriously can't burst soon enough. But only because I'm selfish. I want cheap GPUs. Nvidia been hoarding them VRAM chips for their "AI" shovels. Everyone is in a gold mining rush rn with "AI" and Nvidia is selling the shovels. The pickaxes. It's sickening. And they're completely ignoring the gamers, the people who they actually BUILT their empire off of. 16GB cards should have been standard with the RTX 3000 series. Instead, with the "Ada Lovelace" cards (4000 series) they had the lowest GPU sales in over 20 years. Gee, I wonder why! When the "4070 SUPER" is really a 60-class and the "real" 70-class is now $800. Nvidia can suck it.
AI can't code or solve novel math problems, but it can make trippy videos, songs, and images. Code is as good as useless if there is one major bug or a few minor bugs, but the same is not true for videos, because they only have to be played.
you nailed it at 5:18, exactly what I was thinking: when it would be better to start from scratch than try to "fix" what the AI gives you. It's like getting a 90% discount on backup software that has a 2% chance of permanently corrupting your data. Useless. Or a fake Rolex for 1/10 the cost when you need a real one: it would be more effective to build a real Rolex from scratch than to try to turn the fake one into a real one. Many of these AI solutions are just picking the low-hanging fruit. The fallacy is when they try to extrapolate those results into a real use case. It doesn't matter how efficiently it can pick low-hanging fruit if it has no viable path to harvesting the fruit that's harder to reach.
I can’t code, but I made a Facebook marketplace copycat using ai that’s fully functional with messaging and everything. It would be stupid to make a super complex startup with ai, but I am interested in business and ai helps me code enough to where I can get started, and worry about hiring a coder later.
Where it is 10x for me is understanding. I can ask a question and get feedback, instead of sifting. I'm not asking for the code itself, but for understanding of what I am doing. I ask it more questions about its answers, and sometimes cross-reference with another AI. I mostly use Claude, and use Grok as a backup. I'm not in there going, "Make Me an Auth Component". I'm asking, "What are things to keep in mind when looking into auth solutions?"
You're definitely using it wrong if it makes you slower, not faster. Here's how to use it properly: 1. Decide yourself what the file should do; consider the design choices, technologies, structure. 2. Write up everything you thought of in step 1 as bullet points. 3. Provide pseudocode for anything non-boilerplate. 4. If you have another file in the project with a structure or code style you want maintained, provide that as context. 5. Use either GPT-4, Claude 3.5 Sonnet, or DeepSeek Coder v2 to generate the code. 6. (not yet readily available) Write test cases, and use an AI coding IDE to iteratively debug its code until it passes the test cases. As a person who has many years of experience coding in Python, but doesn't know every library under the sun and every syntax perfectly, the LLM's ability to produce bug-free code is amazing. I am at least 2-3x faster with it.
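A minimal sketch of what step 6 could look like today, done by hand rather than by an AI IDE: encode the spec as tests and only accept the generated code once they pass. The `slugify` function and the `generated` module are hypothetical stand-ins for whatever the LLM produced.

```python
# Hypothetical: `generated.py` holds the LLM's output; these tests encode the spec.
from generated import slugify  # made-up module/function produced by the LLM

def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_whitespace():
    assert slugify("a   b") == "a-b"

def test_empty_string():
    assert slugify("") == ""

# Run with `pytest`; on failure, paste the assertion output back into the chat
# and ask for a fix, repeating until the suite is green.
```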
LLMs actually have something called an effective context window, and this is not the maximum context window they can support. There is also a limit on the number of logical steps they can take, which is proportional to the number of transformer layers in the model. This puts a limit on how much information they can effectively process in context. This means the right way to use an LLM to code is to shrink the context if you find it cannot solve a task you give it, i.e. by breaking down a complex problem into smaller problems. This is the skill that all architects have, and it is the correct way to use an LLM to code. I have personally found that using Claude has greatly increased my productivity. What used to take me a few days now only takes a few hours. If you don't see this productivity gain, then you have not mastered the skill of using LLMs correctly, i.e. you have not done your part of the thinking properly and broken the problem down into small enough, well-defined chunks
I think you’re just bad at prompting. I’m a .NET dev and ChatGPT 4o has easily made my work 10x faster. You just have to VERY clearly explain what you want, how you want it to go about performing the task, provide all the necessary context/background, and then iterate on the LLM’s first response over and over until it’s perfect. Tell it what it did wrong, what you don’t like, what you want improved, and keep going. It’s like having an ultra-fast programmer working for me who writes all the code and all I have to do is clearly explain what I want and then review it. I’m sorry you haven’t gotten good results using AI for programming work, but if you’re not getting good results, I tend to think that’s on you, not the LLMs. I think you’re bad at prompting, and probably pretty bad at explaining things interpersonally as well.
That part about explaining things interpersonally is actually interesting, because that is a common problem many of us programmers have. After all, when working at a lower level (not UI design or things like that), we are working with abstractions that are difficult to verbalize. And at some point you just say... let me do it myself. Because if you have to invest time defining the functionality of a piece of code in great detail, then you are not being that efficient. You are just pushing a car through a supermarket aisle because you have become too dependent on that technology.
If LLMs make your work 10x faster, then your work is extremely simple to begin with. That's why you're finding success with your prompting and others don't.
@@pelly5742 Such is the life of a full-stack dev. Some of my tasks are insanely complex, most are not. I don't have a junior programmer working for me who I can give all the grunt work to so that I can just do the fun stuff. I have to do everything myself. GPT-4o has become that junior programmer who does all of the routine stuff, and does it incredibly fast, so that I can work on the more complex aspects that humans are still better at, and that's how it has 10x'd my workflow. GPT-4o is like having a full-time junior programmer who has come to me right out of school with a master's in computer science, writes code with superhuman speed in any language, and works for me for only $20/month. It's revolutionary. If you're not getting good results using the tool, then you're probably just not very good at using the tool. It takes an especially narrow mind to believe that everyone who is getting better results with the tool than you is just lying about it.
Broo, you are genuinely correct, I completely agree with you. They are saying all that stuff just to attract more investments and just copying each other (I mean the founders of the LLM companies, each with slightly different graphs showing that their LLM outperformed the others). I recently used an LLM to build the frontend of an application; after spending several hours working on it, it was a failure. I learned that with no knowledge of programming, LLMs are just a waste of time.
the best devs I know tend to even delete AI plugins, because they waste time and are a distraction. personally I go to an LLM only for simple snippets and idea generation; anything more and it's a waste of time
I'm with you man, it largely just wastes my time. I use it (for coding specifically) to introduce me to concepts I'm not familiar with on a personalized basis, like learning a language I haven't used before. But that's it. For any actual work it's legitimately dumb. Then when I say that on LinkedIn, people who have no idea how to do their job nor mine tell me how I'm wrong.
yay! i love watching people fight over arguments on Twitter! One shares their opinion (even if it's incorrect), some guy criticizes the opinion, people take it as a challenge, then the guy makes a video on the tweet, then the situation moves further. i love watching this! lemme get my popcorn
You're completely right. Even as a Master's student in Aerospace Engineering, LLMs can't help me with my problems beyond the most basic outlines. When you need to get more niche or technical, their answers make zero sense, and you're better off doing your own literature search.
you are 10000% using it wrong. I set up orchestrated docker containers, terraform deployments with beautifully designed reusable components, and an open source vector store in a container with volume claims. All deployed to Azure, pulling from my own private docker image container registry, provisioning an Azure resource group into the Azure Container Apps service, which is managed Kubernetes behind the scenes. yes, I have working knowledge, but I wrote zero code, just worked with the chat system while referencing and pasting actual documentation. it helps to use an editor like vim for quickly editing sections and pages of code without always having to use a mouse. Claude is literally a game changer for programmer/founder hybrids like me
You're right: as a mediocre developer (and that's being generous), I'm able to 10x relative to my previous pathetic capability. But I'm still able to create working applications that would have taken me forever without LLMs. Some will say I'm cheating myself out of learning to code, but to that I would say: 1. I'm learning an extraordinary amount just via observation; my coding knowledge has definitely increased dramatically. 2. An analogy: using a calculator may degrade my arithmetic skills, but I'm able to work faster at a higher abstraction level. LLMs will only get better, so I wouldn't rule out their taking on more complex tasks in time. No matter how good you are at coding, you might want to keep investing some amount of time in experimentation. All this being said, I've learned to love coding so much. I always found it painful because I perceived it as taking too long to produce meaningful results. Now I'm getting more instant gratification, which motivates me to generate, debug, refactor, and type more code. And it motivates me to reinforce my learning in a more structured manner. Therefore, I'm taking a look at your site and will most likely sign up.
@@ryan-skeldon Mostly structuring code in classes and functions, becoming more familiar with syntax and reading code, identifying the source of bugs, making performance improvements, and reusing code.
I pretty much agree, though I do kinda know where that "10x faster" argument is coming from. A little backstory: I have over a decade of experience programming, so I'm not a beginner by any measure, but one thing I have always hated doing is UIs... I don't mind designing them in Photoshop or Figma or whatever! But I hate coding HTML/CSS/JS. I absolutely go numb every time I look at an Android screen XML, or even most Flutter code (though I do like that a bit more). And this is where I absolutely like that I can get the skeleton code and components out of an LLM, when the design would otherwise take me forever to make (mostly because I would avoid doing it till the last moment). So having something decent-looking that I can modify, bend, and then build the actual fun backend stuff for definitely feels like it's suddenly 10x easier to make stuff. Or maybe closer to 5x, but whatever... But when it comes to the stuff I'm actually good at, it can't provide any benefit other than small bursts of completion snippets that I would've already written myself, but now I just need to hit TAB. That's like a 1.05x speed improvement at best, and sometimes I find myself disabling AI completion because it outright gets in the way. So when you hear someone say it improves their productivity 10x, you can assume that they either: 1. aren't very good at it themselves to begin with, or 2. really hate doing it and their life is a drag because they work with tech they hate. I like the LLM tech, but I really dislike the "average user"... xD My ears bleed every time I hear a manager talk about how much it improved their life...
Claude 3.5 is actually very good at UI tasks, if you know what you are doing. Example: 1) create a project, 2) describe your packages, 3) describe your workflow, 4) add some snippets in a txt file, 5) start a new chat using that project, 6) prompt away: "Create a header and footer using this data". This is 10 times faster, yes
I wouldn't say 10x, but it has 100% enabled me to build stuff that would have taken MUCH longer before. I dunno if I can say this here, but I've copied the "Jamie" app, a meeting-minute taker that gives insights, action items, tasks etc... Over time it meta-analyzes meetings and creates daily checklists and follow-ups for the tasks, as well as initialising some of the tasks I need to finish, like compiling metrics reports. LLMs are fantastic advanced language parsers. I think most people misunderstand what kind of tool they are. Let me add, I am NOT a coder. The fact that me, a non-coder, can now build these things is the important part.
Finally someone who isn't smoking weed has spoken! I am not even an SDE but a BI/data guy, and LLMs cannot yet solve some of my complex data transformation asks either! Thank You!
I think your viewpoint is valid, and the ballad of LLMs wasting the time of mid-senior devs has been well told at this point. In answer to your sincere question, I do believe that you have not yet found the ceiling of LLMs' ability to handle complex reasoning tasks, which I would class as medium-difficulty problems. My top tip is to meet an LLM where its strengths lie, which is in the realm of natural language. Second is not to necessarily rely on the agent to produce large outputs in one shot, but instead to perform complex pivots on one aspect of a codebase at a time, making sure to provide ALL relevant context in a lean format.
It's great. Been developing a game without any knowledge of programming. Easy stuff. Just like art, you need to give ai the right prompts for code. For everything else there's indian youtubers
« All the hard parts had to be solved by me »: yes. That's the issue. LLMs are amazing when they solve problems which are simple to YOU. The interface you showed is simple to me as a senior engineer; I just don't want to spend all that time implementing a front end I've already implemented too many times. So I'll just guide the LLM and tell it exactly what to do. If you guide it to the point where it can't even go wrong, it's an amazing tool. And that's also why I disagree with another statement: it's an amazing junior dev. But a junior dev is only amazing with someone to guide them. Otherwise they're just lost and not sure what to do, so they hallucinate and do stupid things 24/7. But, well guided? Claude made me at least 3x faster in my day-to-day job. Where I agree though: what makes LLMs efficient is being technical myself, and very good at that. I've always worked using a TDD workflow, which works very well with LLMs (think / design first, write assertions which will prove that the problem is solved > now implement). I don't think anyone who knows nothing about code or isn't technical at all could ever be efficient at coding using LLMs. Just like a client can't come and tell me « make me a website ». Errr ok, I have like 50 questions for you now, most of which you have no idea how to answer. And that's very surface-level stuff, on a greenfield project, where you actually have the best chance of being successful with that profile in the first place.
Whenever I run into a coding issue that I have a hard time solving, LLMs have been completely useless. Mostly just hallucinating nonsense. Not even difficult problems, just things like baseline Rust. Like you said, it's powerful for snippets and streamlining repetitive and simple things: the "monkeywork". That's where it genuinely makes me faster. For anything requiring thought, it always is a waste of time, sometimes even misleading.
I believe that LLMs won't ever replace Software Engineers because, to get quality outputs, the time and effort you need to detail your problem and how you want it solved is, for the most part, already the job software people are hired to do. I deal with machine learning, and many times I have opened a conversation and realized that I already knew the answer to the problem just by framing it and putting constraints on the solution, and, no shock, that's called thinking! On the other hand, when you plug in the entire script of your model and ask "Why is the gradient not backpropagating correctly?", the LLM will provide a fancy list of overly smart solutions totally ignoring your specific problem, resulting in a massive waste of time. That said, removing all those time-consuming moments when you are solving low-level problems, like finding the correct function for the job in a cool library, is a massive quality-of-life improvement and allows you to focus on the interesting aspects of the job.
can you list some complex apps? i'm still learning i feel underpowered
you just made me feel good for getting shit code from gpt
W Neet
The alliance of "CEOs who hate paying salaries" and "students who hate doing homework" both wanting LLMs to code perfectly
It's a perfect circle, since the latter drops out of college after meeting a potential investor to become the former
It could even be a Venn diagram
trust me, student here, we really don't want LLMs to code perfectly or even close to it. Our futures aren't worth the few hours of homework 😭
Pre LLM => Devs expected to work 45 hours per week.
Post LLM => Devs expected to work 60 hours per week.
Somebody's gotta fix all of that LLM spaghetti that looked like it could work but just doesn't 😂
The last 10% also makes 90% of the value. It's the edge that makes an app competitive, not the part that's the same as any other app.
“Founder” just means your side project has an LLC and a bank account
😂
And you're probably already looking for investors, lol
@@XHackManiacX actually no, I just want to put my app on the app store
You talk about me i take offense 😂
Bank account with a couple bucks is optional😂
"I made the rookie mistake of opening up Twitter" LOL :)
Yes, a "mistake" he made. And instead of quickly closing it and forgetting like a bad dream, he published a whole video about it.
🤣
@@TheCronix1 - the tweet author, probably.
😂😂😂😂
But I got this video on Twitter 😅
Basically your typical permutation of your typical social-media-borne scam.
10 x 0 positivity is still 0 productivity
Keep coping…
@@J3R3MI6 Stay useless
Actually, it's 0 positivity. You forgot to double check your math.
Terrence Howard: "Hold my abacus whilst I deal with this punk! "
The reason why they're lying is because of money. I'm a senior engineer and I've been in the industry for 25 plus years, LLMs just waste time for anything that isn't the most trivial app on the planet. The AI hype is based around a bunch of VCs misrepresenting the difficulty of programming/engineering for the sake of selling a product.
I feel like the Twitter guy doesn't understand what 10x even means. If you could implement this stuff 10 times as fast, you could literally work for one day, take the rest of the week off, and no one would notice a difference. Naturally, I don't think he's a programmer in the first place, which is probably why he sees a 10x improvement. This is just a big old case of the Dunning-Kruger effect.
The funniest part of all of this stuff is that it just doesn't make sense logically for these LLMs to ever become proficient at writing code. The reason why I say this is because you need to have enough data to train the LLM on every use case. But the problem is that there are plenty of use cases where maybe there's only one or two examples in open source. These AIs have no ability to create anything new and so there's always going to be a distribution where the majority of problems simply can't be solved by the LLM because it doesn't have enough data to understand those problems. At the same time, they'll become really proficient at writing todo apps because there are thousands of those.
sadly, many non-tech employers have started to underestimate engineers. just like my former boss, saying that my $500 salary as a fullstack dev is enough because AI can help. hahaha.
Like I've mentioned elsewhere, it also really depends on the language: the more popular the language, the more options you get; the less popular the language... well, good luck getting AI to help you write in older languages or custom ones. It's only sort of helpful for problem solving anyway, because like you said, it's based on already-existing examples, which might even be solving the wrong problem, leading you down rabbit holes if you don't realize it. The biggest problem with AI, though, is the cost of maintenance, from both technical and environmental viewpoints. It's like how some NFTs were supposed to "solve" climate change; good luck getting "green" AI.
The productivity gains vary as massively as the workflows do: a maintainer of one specific repo will see something very different from a full-stack engineer.
I don't think LLMs can help much if you've been doing the same thing for 10+ years
Understand what LLM/AI tech represents: a now-inescapable global arms race for technical superiority and upgrading national defence.
I've been doing this stuff for even longer. There have always been people gaslighting us about the difficulty of producing quality software. This is just the same people latching onto a new tool. Before it was Agile, then ISO 9001, and on and on.
Weren't "no-code" tools hyped up in the exact same way? Or am I misremembering?
They did, now only governments use them.
Yeaaap :) that ship sank so hard no one's even talking about those any more
@@Jia-Tan Can you please explain what those mean? (The “no-code” tools)
@@yashghatti That's just not true. You have Webflow and Elementor, both widely used and battle-tested. They were also marketed as a replacement for all developers, but they eventually found their own place and got accepted by everyone as a good tool for some cases.
We've been through *several cycles* of this. It was CASE tools in the 1980s, and then UML in the 90s/early 2000s. Both of these were supposed to obviate the need for coding, you just had to go from requirements to running program. The problem is, any symbols you use to express a spec in such a way that it's specific enough to be executable by computer are isomorphic to constructs in some programming language. They just might not be easily diffable, version-controllable, or viewable or editable except by specialized tools.
"LLMs will replace all developers" said a person who's the most major accomplishment is a hello world app.
🤣
Nearly every TH-cam coding influencer whose entire business model is pretending to be an actual professional Software Engineer, while all of their projects are forks of somebody else's public GitHub project, has entered the chat.
a 'Snake' game
@@AnimeGIFfy don't compare your hello world app to mine. Mine is like an amber alert and notifies every phone on planet earth of my presence🤣.
🤣
Engagement farming is a real thing on Twitter, and that's what's been going on. People just post anything, and if your post contains the word "AI" by any means and fear-mongers among the general public, it's sure to get reactions from left and right.
I believe X is paying based on your posts' interactions; that's why that thing is full of bots
they only make like a few bucks too. so it's really pathetic when actual humans do it.
now the bots are excused, they are raking in money by annoying others. that's a genius level of hustling. the american way.
I'm absolutely sure that if I post anything, even if it contains the word "AI", it will get at most 4 views, because that's what every tweet I've ever posted since Twitter started has got. Unless you pay them or something, there's no way to get views or followers on that thing.
@@ronilevarez901 The only way is to post something really eye-catching or just spam every day.
Enragement farming
This is not how you use LLMs to aid coding. You use it to write small self-contained functions, regexps, throwaway scripts, prototypes and non-customer facing utility code for stuff like data visualisations etc. It's not for non-technical people, it's for technical people that want to get through non-critical or simple parts of a task quicker.
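As a concrete (made-up) instance of that category: a throwaway visualisation script of the sort an LLM handles well. The file name and column name are invented for the example.

```python
# Sketch of a self-contained, non-customer-facing utility: plot request
# latencies from a CSV. "requests.csv" and "latency_ms" are placeholders.
import csv
import matplotlib.pyplot as plt

def load_latencies(path: str) -> list[float]:
    """Read the 'latency_ms' column from a CSV file."""
    with open(path, newline="") as f:
        return [float(row["latency_ms"]) for row in csv.DictReader(f)]

latencies = load_latencies("requests.csv")
plt.hist(latencies, bins=50)
plt.xlabel("latency (ms)")
plt.ylabel("count")
plt.title("Request latency distribution")
plt.savefig("latencies.png")
```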
Basically a replacement for StackOverflow and not much else.
You clearly get it.
Exactly. I've been using it heavily for some DevOps tasks. Python, bash - don't know and don't care. I have enough developer knowledge to debug it, but not to learn all the syntax and niche libs, frameworks, and language quirks.
It's like having a developer buddy on Discord ALWAYS ready to go. I agree completely with you. Consulting, sharing snippets, etc. is the way.
i.e. a novelty of little consequence to all but grifters and sheep
It's an 80-20 thing. LLMs suck ass for the parts that actually take the time, and help with what most people don't need help with.
they're still really useful for "dumb tasks". i can tell gpt 4o to "Look at this project i made, now i need X project with x and x, make it based on my other code" and it will make me a working crud in less than a minute. sure, it might have some issues or be missing features. But it still saved me like half an hour of coding if not more.
i've done that a few times and personally i find it pretty satisfying to be able to generate a basic crud with 5 working endpoints in a few seconds.
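For scale, "a basic CRUD with 5 working endpoints" is roughly this much code. A hedged sketch using FastAPI with an in-memory store; the Item model and routes are invented for illustration, not the commenter's actual project.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

items: dict[int, Item] = {}  # in-memory store; resets on restart
next_id = 0

@app.post("/items")
def create_item(item: Item):
    global next_id
    next_id += 1
    items[next_id] = item
    return {"id": next_id}

@app.get("/items")
def list_items():
    return items

@app.get("/items/{item_id}")
def get_item(item_id: int):
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    return items[item_id]

@app.put("/items/{item_id}")
def update_item(item_id: int, item: Item):
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    items[item_id] = item
    return items[item_id]

@app.delete("/items/{item_id}")
def delete_item(item_id: int):
    if items.pop(item_id, None) is None:
        raise HTTPException(status_code=404, detail="Item not found")
    return {"deleted": item_id}
```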
@@SoyGriff Very much so :) I love them for learning new spoken languages too, i doubt there's a better tool other than actually practicing with other people. They have many uses, but the message I was trying to reinforce was neetcode's opinion on how they're not as advanced coding wise as they are made out to be.
In your case, the CRUD part can be found basically anywhere, since so many people have already implemented it. For implementing specific business logic, their usefulness basically depends on your ability to modularize the problem. If you can break your problem down into small enough chunks that you can ask ChatGPT how to implement them, you've already done a lot of the "programming" yourself.
They're definitely useful in their own right.
@@Vancha112 the crud part can't be found easily because it's specific to my project and yet it can generate it in seconds based on my instructions, it saves me a lot of time.
i agree i'm doing most of the programming and just telling the AI to implement it, but that's the beauty of it. that's what AI is for. i only have to think and explain while it does all the heavy work. that's why my productivity increased so much since i started using it. i'm building in a month what my old team would build in 6, and i'm alone.
Been coding for 20 years here. The point is, even if you don't "need help" with that part, the LLM will do the job faster than you can, thus your productivity is improved. In my opinion, if you are not figuring out how to include LLMs in your workflow, you are going to be left behind by those who do. Is it a 10x increase? For tasks the LLM can do, it's much more than a 10x increase!
It's not about "needing help with it"; it's that it can do a bunch of tedious stuff in a few keystrokes rather than you having to type it out
Every 4 years silicon valley gets caught being shady and people just Pikachu face through it like it's the first time
Not limited to silicon valley. Any areas with great amount of cash flow are full of cheating and lying.
I am old enough to remember when they told us the future of gaming is pikachu hunting with your smartphone camera.
@@betadevb I will never forget the incident in NYC near central park where someone yelled out "Vaporeon is here" and people jumping out of their vehicles to catch this Pokemon. IN NYC / CENTRAL PARK !!! th-cam.com/video/MLdWbwQJWI0/w-d-xo.html Vaporeon Central park Stampede
@@betadevb To be fair, I still play pokemon Go
This is my favorite hot take. Thanks for this! 😂😂
They’re lying because they’re trying to get rid of competition. Propaganda, basically. Yes, I know this sounds crazy, but it worked on me. When AI was first released, there were millions of videos and articles floating around about how AI was going to replace humans, and me, who was at the time learning how to code, gave up on coding because AI scared me. I chose to go on a different path. I’m sure there are more people who gave up on coding because of AI propaganda. Fortunately, though, stopping learning how to code didn’t have a big impact on me since I was 13 at the time and even though I wasted almost two years not learning how to code, I’m back at it and will not give up no matter what 💪 You shouldn’t either. AI will not replace software engineers. Period.
Understandable, felt that too. I'm a little decent at coding and felt totally replaceable when Devin was on the hype train.
NGL I was gonna use AI as an excuse to finally quit capitalism and move to a mountain with my savings (not trolling, this was going to happen). But the only thing that ended up happening is that once again greed won in the industry, and with the layoffs a lot of us devs are being exploited af. We are trapped in the promise of a transition that will take decades, with CEOs who just want to keep cutting numbers on one side and the AI bubble on the other.
In the meantime, tons of really good people cannot even find internships, because interviewers also fell for the AI bubble trap and are now asking freshly graduated kids to code Skynet on a blackboard in 15 minutes.
The industry really sucks rn.
Good on you, I only managed to start learning at age 17
Why does your profile pic look like you're 35
It's not him, that's Robert De Niro from the movie "Taxi Driver" @@anon3118
I think 90% of my job is figuring out how to solve the issue I have, 5-6% is bug fixing and testing what I added, and the rest is typing code. Even if I could magically have all the code in my head appear on my computer in a second, it would save me a couple of working hours per week. I think the people who create these tools don't actually understand what programmers need. If, for example, I could have an AI that quickly tests my code, then we could start talking; that would probably save me lots of time.
Yes! Automatic testing would be fantastic. If someone could train an AI to do *just that*, and nothing else, it would be amazing. In general I think AI tries to be too much. It would be more practical to have an AI that was really good at something very, very specific and worthless outside of that.
The biggest issue with these LLMs is that they lose context SO FAST. 3 prompts in and you need to tell them again and again to keep in mind what you mentioned in the previous prompts. I was using ChatGPT and Copilot for the leetcode problem "flight assignment", and I forgot to mention "flights in this coding question" in my 3rd or 4th prompt, and it started giving me airline flight info. Which is completely bonkers: how could it think I was talking about airlines instead of the coding problem we were working on a few seconds ago!!
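One plausible mitigation for the context fall-off described above: keep a running token count and trim the oldest turns before the conversation outgrows the model's effective context. A sketch assuming the tiktoken tokenizer library; the 8000-token budget is illustrative, not a documented limit.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def trim_history(messages: list[str], budget: int = 8000) -> list[str]:
    """Drop the oldest messages until the total token count fits the budget."""
    counts = [len(enc.encode(m)) for m in messages]
    while len(messages) > 1 and sum(counts) > budget:
        messages, counts = messages[1:], counts[1:]
    return messages
```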
You should increase token count.
@@PanicAtProduction it will be crazy. Even LSPs start to struggle on bigger projects.
I find the more I know about a specific task the less useful an LLM is. When I’m new to something it’s a great place to start by talking to chatgpt or something.
I'm not a programmer but I've made a few JS sites and Python apps for fun, and one thing I learnt to do is to start new chats. Once you get too deep it starts going batshit. Granted this is all very basic level, so it probably wouldn't help on anything too big or technical anyway, but basically if you spend some time starting new chats and being very specific and detailed with your prompts it does help. With Claude I'll tell it I've updated the files in the project knowledge section and for it to refer to the newest version. There are ways of getting it to stay on track but it probably is a waste of time for an actual programmer.
lmao 😂 it giving you flight info is wild
agree with what you're saying. i've been doing software for 10+ years and I do think it has made my productivity go like 10x up, but the difference is that I know what I need, and I use ChatGPT 4o as a rubber duck, especially when making architecture decisions and weighing tradeoffs. I have a vague idea of, let's say, 3 different ways of building x product, so I just ask for pros/cons, describe my ideas, and so on, and it works. The thing i've noticed is that if I spend 2+ hours discussing/bouncing ideas with an LLM, it becomes stale really fast, forgets my previous input, and just hallucinates. but for initial technical document writing or small shit like basic components it works VERY well
This. I agree with this a million times over. I treat it like a rubber duck that has 130 IQ. At the end of the day it's *my* hand that is writing the code. The LLM just provides input and feedback. The claim made by the tweet OP is definitely exaggerated, but if you strip out the hyperbole and 'zoom out' a little, it's pretty realistic.
It’s about pain vs complexity.
Like he said, if it can handle snippets, it can handle big projects in chunks. That's how I use it. I edit more code than I write, but my jumping-off point is always an AI.
It just physically writes code faster… I can do the thinking and editing, but it writes 500-1000 lines a minute.
The problem with this video is that he starts with his emotional opinion and then finds examples that prove him right
@@rocketPower047 literally autistic
A saying that's always valid: "Stupid people are the loudest". That's how I see all those Twitter "influencers/founders" with their takes on AI, LLMs, careers, etc... They need to get good themselves before talking. Wake me up when Primeagen agrees with their nonsense.
Good take Neetcode!
Except it's the opposite. Most of these takes against AI for dev productivity are from people who haven't progressed beyond senior engineer, including Primeagen.
@@OCamlChad maybe because the AI itself has not progressed beyond the level of an intern.
@@OCamlChad since that's not good enough for you maybe you should ask John Carmack next time.
You are definitely right... Now, how do we shut Elon up?
@@OCamlChad Lol okay Mr Junior Dev
Totally agree. That is the major difference between looking from a non-tech person and a tech person's point of view.
From a non-tech person's point of view, they are now able to create a "non-working, working-looking site" (lol), whereas before, they would need to have a UI designer/engineer create it for them, which cost them money and meeting time.
From a tech person's point of view, the LLM is just a snippet tool, so now I don't need to go to StackOverflow. Using it for more than that is just wasting time, as mentioned in your video.
And the most hyped people going around talking shit are the non-tech people who work for a tech company, know nothing about systems, but think they do, so they start using these LLM tools thinking they can replace engineers.
The worst part is that they use these tools to create so-called prototypes and then hand them to the engineers to make production-ready, but don't understand why it takes longer than the traditional way (*cough* CEOs/project managers *cough*)
Yes, the problem is that they can get close to the spec you give them, but it's not close *enough* and has to be rewritten. This has been frustrating for me many times where I tell the LLM to change one small detail and it goes round in circles before finally admitting something can't be done without starting from scratch. Huge waste of time in a lot of cases
Pretty sure my project manager could say the same about our dev team 😂😂
That's part of your learning
If you learn what tools exist and what libraries can actually do, it should be able to help you code just fine
It's literally translating your prompt from English into code
Asking it to do something impossible is partly your fault
@@wforbes87 100%
As someone using it to write basic code, it's a godsend. I don't need to wait a day or submit a ticket or whatever to talk to an engineer
These guys are vastly underestimating the amount of mundane work that goes on outside of FAANG lol, most coders or code jobs are not frontier
@@robotron26 Sure, but the LLMs sometimes think something impossible is in fact possible and lead you on.
i don't think you used it properly..
That's been my experience as well. Even with snippets, it works best when I effectively solve the core logic first and just ask for code, or give it complete code and ask for suggestions. For anything beyond snippets, I've spent more time holding the LLM's hand to keep it from going down the x or y route, and eventually just figured it out myself. LLMs are definitely far, far away from the point where a lot of people praise them, like 10xing. They are definitely very handy tools, but they have a lot of limitations.
so, you basically trained it for free...good job! do it more often, please, we love free work!
@@strigoiu13 , I did not. You have the option to remove your data from being part of the training set. Then, for security purposes, I delete conversations as well. Even then, they have plenty of training examples from other sources.
@@strigoiu13 Also, if it actually learned from its users automatically, it would be saying slurs constantly within days of launch. We've seen that happen to chatbots like that repeatedly.
@@strigoiu13 So what? It was useful to him, sounds like a fair trade
Ah yes, the old "AI made me a 10x engineer". It's always cap... chances are that the individuals who claim this are the ones who push absolute dog water to production, because they don't actually understand the code or know how to debug. Personally, if I'm prompting this LLM to write something, then having to double-check it, and if it's wrong prompting it again, repeating that whole process till it gets it right, it would have been faster to do it all myself in the first place.
I don't know man. Personally, I find that it's much easier to edit 'half-way there' code than to write from scratch. It might take a while to get used to the peculiarities and bad habits of the LLM and figure out the best point to stop prompting and start coding by yourself, but once you figure it out, I do find that relying on AI makes me a lot more productive. Not 10x, but definitely at least 3x on a good day. (Although there are obviously also bad days where it's barely 1x.) I find that it's great at data visualization code, complicated refactorings, explaining an existing (not too large) project I'm trying to get started with, and basically speeding up any annoying, slightly complex, tedious process. And it really shines for quick, dirty projects in languages you're unfamiliar with (need to google how to init an array) but can read just fine once the code's in front of you, since you can basically just wing it, as long as you've got an LLM to watch your back.
@@ReiyICN Oh boy, I'd never ever rely on AI for "complicated refactoring". That sounds strikingly similar to shooting yourself in the foot. To be fair, I've only found AI useful for common boilerplate you don't want to write, or, in the case of Copilot, when you're creating a structure: it is quite good at completing the structure, for example switch or else statements
@@ReiyICN more like 1.3x
Even if an LLM can do 90% of the code, the remaining 10% will take 90% of your time. It's the "last mile problem" pattern
The issue with a lot of people is PROMPTING: you don't "prompt" LLMs. You don't have to find the correct prompts or keywords. You just talk to them as if they were a human being, a dumb one. It works really well in my experience.
It's better to write 2 paragraphs explaining what you want than trying to make it work 10 times while only writing basic prompts and not providing the whole context
@@SoyGriff some projects are too complex to explain the entire context. I find that once I've explained it, I already know how to solve it anyway
Everybody nowadays is a "Founder" or "building X" with no technical background. A few years ago the hype ride was no-code tools; now it's LLMs.
Important to keep in mind that a lot of the hype is either manufactured by folks that have invested a lot of money into the current AI boom or folks that have fallen for said marketing.
The only people who can fall for said marketing are those who haven't actually tried the product. The rest, like the guy writing this article, they're CLEARLY stakeholders. I bet this guy bought some Anthropic stock beforehand, or is just a paid actor
It is overhyped, but at the same time it does make my work much faster. It can't build entire systems or even big parts of a system, but it can work on small parts. For example, writing simple functions, components, UI elements etc. I mainly use it to speed up my work: instead of coding, it's mostly me checking over the generated code and fixing small things. Sometimes it's frustrating and gets it very wrong, but I usually just have to fix the prompt. Overall it's definitely sped up my workflow, maybe not 10x, but 2-3x is reasonable.
And thats enough for it to be a massive change
Now your company can make you do 3x the work instead of hiring one or two more people
They will absolutely do that
And AI will advance more to the point where eventually, you will not be needed
There are no arcane or unknown laws of coding libraries, they are all manmade and documented, the AI will get better
You're thinking in zero-sum terms. The demand for code will simply increase... the reality is, most companies want to use a lot more software than they currently do, so they will simply create more applications and better tools for users.
@@robotron26 And your point is? If developers are replaced, it would mean that pretty much all intellectual jobs are done.
The thing is, AI is like a fast worker that catapults to an answer quickly, so you have to steer it with the right type of questions so it is not ambiguous in its output. I had to code a task component with some features (add a task, remove one with a cross button, add a due date with a calendar, etc.); I had its Figma file and gave Claude 3.5 all the details to remove ambiguity, and it made a surprisingly good boilerplate component, as I knew its training data would have something similar.
For run-of-the-mill tasks it is a game changer, but for something requiring a spark of imagination (nil training data) it fails pretty badly.
There are around 10 gazillion implementations of "my first task list" on GitHub; of course it managed to do that. Now ask it to design an async cache that fits into the constraints of your existing application...
i like your style: calm, composed, and very genuine and non-toxic.
you know you're seeing bullshit, yet you respond respectfully and give everyone the benefit of the doubt.
I wouldn't call him calm, rather hysteric. :D
@@Neomadra that's not hysterical at all
Man, I have been an artist for nearly the last three decades, and I feel exactly like you when I listen to other artists praising the use of LLMs for art. I have tried many image generators, and they work great... for people who just want a random picture to jump out of them :D The more specific the need, the more problems you have generating even a simple picture. You will just waste time trying to describe what you need while getting random pictures that are sometimes not even remotely connected to it. And that's just simple pictures. When it comes to 3D models, these generators are laughably simplistic. I see so many YT videos where people are AMAZED by the results, while showing something absurdly simple and still in need of manual fixing. The tools can't even get good topology, and people keep talking about how they will replace us. More so, some people claim they have already lost a job to a generator... HOW? What the hell were they doing? How simple a thing were they doing that they could be replaced by something so deeply flawed? I recently started to learn a bit of coding for a simple proof-of-concept game I am making. I didn't even try an LLM, because I don't want to waste time. I'd rather ACTUALLY LEARN and understand how the code works instead of copy-pasting it and then repeating the cycle 1000 times because something isn't working and I won't know why, while the LLM tells me "oh, I am sorry, let me fix it. Here's the improved solution!" and then spits out something wrong once again :D
The generator doesn't have to actually be good to replace people, see. All it has to do is be shiny enough for the people marketing it to convince people's upper management that it can replace them. Or be a convenient excuse to have mass layoffs and rehire at lower price or overseas.
@@tlilmiztli hi fellow artist. Before the AI hype, I fortunately had already switched to fullstack dev, and I think being artistic gives something different to what you build.
Totally with you, Neetcode! The AI hype is like calling a microwave a personal chef. Thanks for cutting through the noise!
ooh i like that analogy
@@minhuang8848 it's a good analogy if you look at it this way: a microwave heats up food. you can heat up leftover pasta. you can heat up microwaveable food. it's alright and it'll fill the stomach, but it's not that great compared to the food you make in an oven or on a stove.
LLMs will give you small working code snippets, but they won't solve your complicated application. they don't come up with novel ideas. in that sense it's a microwave. you give it a prompt and it gives you mediocre code in a short time.
making food by yourself is like programming, while pushing a button on a microwave is just like prompting.
i just don't see how LLMs are like CNC machines or 3D printers. if anything, they would be helpless and inconsistent CNC machine or 3D printer operators. i don't see them as tools in that sense, perhaps assistants at best
Man those replies and comments are AI themselves. :(
That's what I thought. Bots are hyping things up deceiving humans into the trend. If you tell a lie too many times...
I'm starting to think that the bots are manufactured by twitter. What benefit does anyone outside the company have to run bots that respond to posts like that. Not to mention the captcha when registering, I literally could not pass it myself after like 3 tries of having to get 20/20 answers correct to the point that I gave up. Maybe I'm stupid and AI can solve that better than me, I don't know, seems fishy. It's probably 90% of posts I see are AI.
@@ltpfdev Well if these are indeed Twitter's own bots, then they'd just bypass the captcha and probably post via API
what even is real anymore ;(
I do a lot of writing quick utility tools and API integrations for enterprise tool marketplaces, and this is extremely useful for making hyper-specific private apps that help a team handle one tiny piece of straightforward automation: hooking together a couple of APIs, plus maybe a super quick interface. LLMs are really powerful for things like this and have probably made me 10x faster at certain easy but tedious tasks.
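The shape of that kind of glue code, sketched with invented endpoints and field names (the ticketing API, webhook URL, and token are all hypothetical):

```python
import requests

def open_tickets(base: str, token: str) -> list[dict]:
    """Fetch open tickets from a hypothetical ticketing API."""
    r = requests.get(f"{base}/tickets", params={"status": "open"},
                     headers={"Authorization": f"Bearer {token}"}, timeout=10)
    r.raise_for_status()
    return r.json()

def post_summary(webhook_url: str, tickets: list[dict]) -> None:
    """Post a one-line summary to a chat webhook (Slack-style payload assumed)."""
    text = f"{len(tickets)} open tickets: " + ", ".join(t["title"] for t in tickets[:5])
    requests.post(webhook_url, json={"text": text}, timeout=10).raise_for_status()

# Placeholder URLs/token for illustration only.
tickets = open_tickets("https://example.com/api", "TICKET_TOKEN")
post_summary("https://example.com/webhook", tickets)
```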
6:13 - Here's the bottom line. That's exactly what I think. Great video, BTW!
I have to say, I built an app that uses HTML5 canvas, vanilla JS, SQLite, and Heroku.
I use the project knowledge base, which I update each session with my entire codebase. I give it a directory map of the app.
For each task I begin a new chat. I write an intro and a definition of today's task, WHICH I KEEP SMALL. One at a time.
I have built something good. But it's still hard work. I am considering writing an ebook about my method, and then some tooling that makes writing code with Claude easier.
But I do see your point: I'm not a coder, I can't solve complex things, and neither can the LLM. But I'm learning to code by doing it with the LLM, and the familiarity I have with my codebase (around 1MB now and 20-ish files) is the real superpower. I'm learning fast. I don't think LLMs will do it all in 5 years; I think a broader scope of people will be able to do it all.
I think this is the expected behavior of non-technical people: they will be defensive and want to believe that they can do anything a software developer/engineer can do with the help of an LLM,
it's just human nature
Lmao it's even worse than that, they believe software engineers and programmers are gatekeeping coding from the common people lmao
@@vishnu2407 Yeah exactly 😂
It's not human nature, it's what they've been told by the people they're paying for the service. The error is blindly believing what the salesmen tell you
it's expected behavior of people with no common sense and a thought process of an elementary school kid on a good day.. which describes most of these parasites "working" in management
Sad part is you're all wrong… LLMs will create a revolution where non-technical founders CAN build a company, one that will rival companies as large as Microsoft and bigger. 💎
I've recently had a very nice experience with Claude. The only downside is that the amount of time you can interact with it is limited, even on the pro plan. Every now and then it will tell you to wait a few hours to continue. But aside from that, I'm building an app I could not have built in the time I had without it. I'm an expert JS dev, but there are some things I don't understand at all, like audio engineering. I'm building a music-based app using JS, so I prompted Claude to teach me tonejs (not build the app) through a series of small lessons, building up from there until I had a working prototype of what I'm after. Major game changer
It's actually quite simple man, non-technical people don't really understand the complexity of the application. They see it looks the same, so it must be the same!
Edge cases?! what is that
if i don't know what an edge case is, then i ask the ai? simple. it's so funny, why are all the comments like this? calm down
@@playversetv3877 yeah good luck with that!
You actually think the AI can give you a viable answer to this? At some point, it's time to use your own brain to solve problems.
@@playversetv3877 use your own brain. do you think AI can spoonfeed you everything you want?
Thank you for calling this out. Adding to this, I heard another engineer recently call LLMs "fancy autocomplete". That's kind of what it feels like. It's amazing (but I suppose not surprising) that so many non-engineering folks are trying to tell engineers what LLMs are. The irony! Granted, there is complexity to LLMs and how they work, but I don't think most engineers saying that LLMs aren't "all that" is a matter of us trying to "save our jobs"; it's a matter of trying to tell the truth.
I guess it just feels like another example of a non-engineer trying to tell us why our job isn't "hard". Well, that and a bunch of marketing nonsense by big tech to cash in on the next big thing.
I've realized that using AI for a small function, or even an issue where I ask it to make what I want to give me "ideas" (I guess) of another way to do it, has led me to waste a lot of time trying to get the right answer out of it instead of looking on StackOverflow, for example
it's when I find myself basically yelling at it, asking it if it's stupid, that kind of thing
Agree with every word you said. I've been learning coding constantly over the past 2 years, and while I do use AI, it is a small part of what I overall do. And I'm still relatively a total beginner.
I work with a few people who are way less technical than they think they are, and they believe that coding will be dead soon, and that they could do what I could do using AI, but it would take them a little more time. None of them have attempted anything more advanced than setting up a spreadsheet to ingest Google Calendar events.
i agree with the first tweet after trying to work on a project using Claude 3.5. it's true it doesn't get complex stuff like your entire app, but if you just constantly ask it questions about small parts, it gets those small parts done very fast. for example, my UI was very bad, so i took a screenshot of it, gave it that plus the code for the component, and told it to make the UI better, and it just did it in 1 try. same with asking for specific small changes one at a time. you don't ask "write an app that does x" but "change this function to also do y", and it does way better if you give it the minimal context that's actually necessary instead of the entire app
The people that succeed in this industry are the ones that embrace change and figure out how to use new tools. I still know people that use old-school vi in their coding and never adopted IDEs, or said git offered "nothing new". In reality, these folks simply didn't want to do the work to learn new things.
Yeah, that makes more sense. Just be specific, how hard is that?
I’ve been an engineer for 20 years and I’ve been building a new SaaS product with Claude 3.5; my experience lets me ask the exact questions and give it the exact context I need to create what I want. So far it’s helped me build Vue frontend components and a Node.js backend, helped me configure TypeScript, and helped me configure Vercel. It helped me build out authentication and the middleware; the Firebase integration wasn’t smooth, but it helped. It helped me debug CORS issues and also build out the copy.
I think the development process has been at least 5-8x faster.
LLMs and LMMs are currently effective for generating boilerplate code or providing insights into topics I'm unfamiliar with, without needing to sift through documentation.
Yeah, so much better than documentation, because documentation has so many gaps in its coverage.
I completely agree with the video. It makes sense that LLMs would be able to reproduce things that are easy for an experienced engineer, because those would exist in their training data. There's no reason to expect that LLMs can reason about the logic in the code they output, so the correctness of their output will be based on either their training data or complete coincidence. There may be ways to still use this technology to work smarter, not harder, e.g. writing documentation, suggesting names for functions, generating boilerplate, writing HTML snippets of UI components that don't require context (a submit button might be pretty similar to any other submit button). Basically things that are language-based, or copy-paste, and don't require logic.
Maybe one day, more intelligent AI models that combine logic and language will exist and be more capable of writing novel code. Anyone familiar with LLM workflows may have a head start. But these don't exist yet.
I'm an engineer in my fifties. I've used GPT4O to help me control our test and measurement equipment from inside Excel. We already use semi-automated Excel templates to produce certification.
I am fairly handy with VBA in Excel. But what I am now doing with automation is something I would never do without an LLM. I barely have the time to do my job. I most certainly don't have the time to learn to use the APIs that GPT4O 'knows'.
So bear in mind the transformative nature of this new technology for those of us who use coding as just one of the tools in the box, and not their main skill base.
Sounds like your company should hire a SWE to work on better tools for you so you can focus on your job.
@@ryan-skeldon You'd be shocked how many companies are reliant on 20 year old excel files that just do all the data collection. It works and it works well, esp if they have really old equipment that's difficult to interface with.
@@ryan-skeldon Until the employees protest it cause they're used to Excel and basically just want Excel
Thank you for bringing this up. I saw this post on Twitter! And now I see you here, it got suggested to me on YT. Since that post I have tried Claude for Lisp; it's "not bad", but it still doesn't understand human context no matter how much I hack at the prompt. I find these people are talking absolute horseshit. I'd love to see them do it on video.
Well, I am an LLM engineer, and truth be told, as Microsoft too responded after the Copilot outrage, these are just tools to help professionals in their domains. People from non-programming or beginner-level programming backgrounds always get it wrong; they get baffled by the smallest code snippet. If you have no knowledge of the background of the task you are trying to solve, an LLM sure can waste your time; they are specifically designed to assist people who have that background. An LLM can save you time and help you as a tool, but it is not intended to replace an engineer, and the number 10x is an exaggeration. That said, this is only the current state of the art; it does not mean LLMs won't be better in the future. As a personal example, I use LLMs all the time to prototype by creating interfaces, but I have a degree in Computer Science, and many times I have to rewrite prompts. Overall I'd say it saves you 1.5x to 2x time at most, maybe more on rare occasions, but that cannot be generalized.
This. If you know what you’re doing and what you're looking at, and you understand what LLMs are and are not, they are fantastic, fantastic tools for speed and productivity. They are insanely helpful for documentation. The code and documents aren’t perfect, but I can iterate so fast that ultimately I’ve pushed better code, faster.
a knife in a chef's hands is not the same as one in a child's hands.
IMO it's the exact opposite: if I work on an existing project, know the tech stack well, and duplication is kept low, the gain from an LLM is really minimal or even negative. Negative, because accepting and modifying a suggested solution often ends up time-wise worse than just doing it from scratch, and you can also let through bugs you'd never write yourself but won't notice in generated code. Also, sometimes I make up special cases for Copilot to prove itself, because it's kind of satisfying... lol
It's different when prototyping, working with an unknown tech stack, or where duplication is by design (huge CRUD services), or inherited as bad design, or for e.g. unit testing where simplicity and duplication are desired. And I love Copilot for PowerShell, exactly because I don't know it well; it's a 10x speed-up in some cases there, and 5% in my core activity.
@@kocot. that 100% makes sense. At my job, I’m normally prototyping or building from scratch.
Hell yeah! I just started learning coding for data science, and man, it's scary. All these LLMs coming out looking like they will take over jobs, companies laying off engineers, plus all these people showing off what they built using Claude and Cursor on Twitter without understanding a thing of what it's made of. It's a breath of fresh air having this perspective come from a seasoned and respected programmer. Thank you so much for saying this!
It's not 10x faster, but it is often around 1.2x to 2x, depending on the level of expertise you have with the programming that needs to be done.
Doing stuff like:
- "I have this code {code paste here} and I want to test it for x y and z, write a unit test for it."
- "rewrite this code to do the same thing but async, or for a different kind of object, etc."
- "write an algorithm for this class {paste code} which should do: {something boilerplate-y}"
- A lot of graph rendering with python/matplotlib is IMO way faster as a first draft from an LLM, then optimizing certain things, as opposed to reading documentation. If I last used matplotlib 6 months ago to plot a scatter plot with color-coded disks, I won't remember that the colormap param for the scatter function is called cmap, for example (see the sketch after this list).
- Porting code between languages (yes, it still makes sense to read and test it)
The list isn't really exhaustive.
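For the matplotlib point above, here's a tiny illustration (mine, not the commenter's) of exactly the kind of parameter name you forget after six months away from the library:

```python
import matplotlib.pyplot as plt
import numpy as np

# 50 random points plus a per-point value to color-code.
rng = np.random.default_rng(0)
x, y, values = rng.random(50), rng.random(50), rng.random(50)

# The colormap really is passed as `cmap`; `c` carries the per-point values.
plt.scatter(x, y, c=values, cmap="viridis", s=80)
plt.colorbar(label="value")
plt.show()
```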
Agree on all of these, especially porting code. I'm very familiar with C and Python but my Go is very rusty, but I can have it convert entire parsing pipelines from Python into Go with minimal issue. It's a godsend
Bro I kid you not I thought the same as you, but recently I have been getting so frustrated with it not being able to complete even these simple tasks optimally.
ChatGPT made my work slower yesterday. I tried to use Python to fill product descriptions in a .csv file using the ChatGPT API, but the code it gave errored, and it couldn't find and fix the problem. I had to read the documentation for the library I was using and found out my .csv file was separated by semicolons, not commas, which has to be configured explicitly in Python's csv module. I would rate that kind of task as easy, yet the LLM failed.
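A minimal sketch of the fix the commenter describes: Python's csv module defaults to commas, so a semicolon-separated file needs an explicit delimiter (the filename and column name here are hypothetical stand-ins).

```python
import csv

# "products.csv" and the "description" column are stand-ins for the real file.
with open("products.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter=";")  # the crucial, easy-to-miss argument
    for row in reader:
        print(row["description"])
```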
@@qbek_san Sometimes working on your own is the most efficient way to go. AI still has huge recognition problems. It's not advanced.
Idk if I'd trust AI unit tests xD And if I have to read through them anyway to make sure they're correct, I might as well write them myself, idk.
Thanks for this. Honestly I was starting to think I’m either an idiot or taking crazy pills.
I’m not a coder/developer, I’m a research scientist. I’ve been trying to use AI to help me with some literature study. Mind you this is what you do at the outset of a project to get an overview of what’s been done before. So basically it’s before all the actually difficult and frustrating stuff happens.
The sad truth is, I couldn’t even get AI to meaningfully assist me with that, let alone any actual scientific research. In the end I spent more time getting the AI to do the work than it would have taken me to just do it myself.
So whenever anyone brings up how amazing AI has been for their job, I’m baffled. I have slowly come to the conclusion that all these people do all day is write emails and make PPT presentations about corporate stuff.
I just don’t see any use case for AI where it does any serious intellectual work.
I did commit 3 PRs last week that were coded entirely with an LLM.
Describe the problem and provide similar sample code, review the solution, maybe go a couple rounds back and forth with the LLM iterating on the solution, request tests, put everything in the repo, run the tests, and feed errors into the LLM until the code is fixed. I am the person who would have coded this anyway, so I have the needed technical skills. The idea of a non-technical person doing this today (or soon) is risible; however, I did get a huge improvement, days of work condensed into a day. Also, the idea that engineers spend most of their time on “hard” problems is strange, tbh. I spend most of my time finding existing solutions to non-novel issues. Maybe we work on very different problems, idk.
Have you considered maybe people are not lying but are seeing different time wasters disappear overnight due to LLMs?
I'm curious, what were the solutions implemented in your 3 PRs?
LLMs work. Especially ones trained on large amounts of code, that can handle large context, and when the person using them is good at prompting.
@@gershommaes902 A management script I wrote for a manager who had a last-minute question about data (I took 30 minutes between create, test, iterate, submit); a Django query to retrieve the roots of the subforest that remains when you apply RBAC to a forest on the db (mind you, minimizing data access and avoiding unnecessary data fetches); and a pair of mixin classes to decorate models and querysets so they emit a signal any time you make any change to the underlying data on the db, plus a handler to track that on a separate model. None of these really worked out of the box or were perfect, but I had a good sense of what I wanted and of the test cases (which I generated via Claude itself), and I iterated several times over requirements and even over design options (I tried several before I settled on the mixins). I got working results in a fraction of the time and with more coverage than I would have otherwise.
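For readers wondering what the mixin idea might look like, here's a minimal sketch; it's my own reconstruction, not the commenter's actual code, and the signal name and handler are hypothetical.

```python
from django.db import models
import django.dispatch

# Hypothetical custom signal fired on any data change.
data_changed = django.dispatch.Signal()

class ChangeSignalMixin(models.Model):
    """Model mixin that emits data_changed on save() and delete()."""

    class Meta:
        abstract = True

    def save(self, *args, **kwargs):
        super().save(*args, **kwargs)
        data_changed.send(sender=self.__class__, instance=self, action="save")

    def delete(self, *args, **kwargs):
        # Send before deleting so the primary key is still available.
        data_changed.send(sender=self.__class__, instance=self, action="delete")
        return super().delete(*args, **kwargs)

# A receiver could then log every change to a separate audit model; note that
# bulk queryset updates bypass save(), which is why a queryset mixin is also needed.
@django.dispatch.receiver(data_changed)
def track_change(sender, instance, action, **kwargs):
    print(f"{sender.__name__} pk={instance.pk} {action}")  # stand-in for an audit write
```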
This is a revolution and it’s only going to get better. I’m waiting for better Claude-IDE integration, a more agentic workflow. Also, live testing on a dev or staging environment is a time drain I hope to automate soon with some sort of bot that reads the PR and runs some “manual” tests on a local version of the whole site.
You’re absolutely right.
For small things, I breeze through.
When I was trying to have multistate logic, not only did it waste my time, but it literally ruined the code.
When you try to guide it, it would look like it was agreeing with you, then it would literally disconnect parts of the code it was supposed to be building.
What I’ve been working on is not mission critical; it was possibly a test. But this is clearly a limitation, and we need to figure out how to integrate these tools with an understanding of how they are limited.
People tend to forget that in essence LLMs are just fancy-shmancy search engines which translate prompt to output in one flyover. As long as you stay within the range of whatever prompt->something translations they were trained on, it can work pretty well. When you leave that area, they break down horribly.
Actually, I agree with the OG post; it does make you considerably faster and will only get better. When GPT-4 came out, I was writing a PHP API and designing an SQL database. I asked myself: let's assume all I have is GPT-4, can I complete this task? It took me a little over 4 hours to do the whole thing. Yes, if I knew exactly what I wanted I would've coded it myself way faster, but this approach of brainstorming the framework and steps with LLMs, creating a plan, executing the tasks with ample context, attaching the parts together, and debugging actually worked. I ended up with very functional code at the end of the day using only natural language.
This is a new direction in application development which involves LLMs throughout the whole development journey. Especially as the first post said "technical founder", since it implies you need to use a pretty varied, wide-ranging stack where you're not very familiar with some of the technologies but need to work with them anyway. 10x seems a lot, but as I've seen in many cases, it's actually true. What would take 20 minutes of tweaking and coding takes a 1-min prompt and a 1-min Ctrl+C, Ctrl+V.
Totally agree. It cannot develop medium or hard projects. The way I use LLMs is to first architect the project and break it down into smaller, manageable chunks. Once that's done, I ask the model to code those pieces, with specific interfaces. With current capabilities, LLMs cannot replace developers.
I think Claude really helps speed things up in a few ways. It helps as another pair of eyes for bugfixes. It helps when you have no idea how to even get started in an area. It's really good at variable and function naming. And it can type faster than me, so I can often tell it exactly what I want a function to do and it will be done about twice as fast as I could write it. Claude is not going to write your app, but it is a pretty good copilot.
So it's better than GPT and Copilot? What are those good at?
@GoodByeSkyHarborLive Yes, it's better than GPT. GPT is still useful, but Claude seems much more with it and able to correct its mistakes, where GPT gets things wrong a lot more and gets stuck. For example, Claude will start to suggest debugging techniques when you keep getting the same error. It will even ask you to share other classes or methods. It seems to think creatively about the problem. GPT just gets into a fail loop and can't get out.
Your bugs must be extremely trivial. IME, a bugfix where "you have no idea how to even get started" means you start with a bug in a million-line codebase (and no idea which part of the code is even called without spending hours), only to discover that the bug is caused by a call to an external site along the way, which returns a warning code that is not even documented, and there is no information about that site anywhere other than the source code of a DLL written in 2010. (And by IME I mean what happened this very morning.)
I completely agree with the main point of the video, with one caveat. I have seen people polarizing pretty fast on this topic, between people thinking that LLMs can _already_ substitute for junior engineers and people thinking that they will never be an issue for their jobs. You are perfectly right: we can observe that camp A is wrong. But I am as sure that camp B is wrong too. Even granting your claim that "in the next 5 years LLMs will not be able to substitute for a junior engineer", 5 years is _very little_ time. I have 30 years of work in front of me, years that 3 years ago I thought I would spend coding. Whether this revolution happens today or in 5 years, my choices are pretty much the same: I have to adapt to the change fast.
And honestly, I do not have the same confidence you have in the 5-year claim. Today it looks like a distant target, but considering where it was last year, and the continuous tech revolutions of the past two years, I would not rule out that next year an LLM will be able to code neetcode from scratch. Sure, I would be surprised. But I have been surprised many times by the speed of LLM evolution.
I agree. As someone who loves LLMs and has been using them in my work as a junior dev, they save time on Stack Overflowing and googling syntax, boilerplate, and code snippets. They have saved me from bugging my senior engineers plenty of times as well. But I would be amazed if in 5 years things improve significantly, let alone replace a whole dev team. Things already look to have some level of diminishing returns, so if we get EVEN 2x more "effectiveness" within the coming years and it can solve medium-complexity tasks, I would be thrilled.
Man, never seen content from you before. The algo brought it up and I let it play in the background. You are so based, man, I really feel you on this topic. I see it the exact same way: all this buzz comes from people swimming in the hype who don't get that LLMs aren't doing the work and have to be seen as a tool. Real innovation comes from creativity, which comes from intelligence. People may build their next [insert software innovation here] but will not break through the barrier of actually covering all the other areas that come with it, because they think Claude will do it for them. But there is a positive side to the hype: among these masses will be some 5% of people who actually get inspired and become developers, because the tools helped them discover their talents.
Yes, you're not doing it right. Breaking those complex tasks into simple tasks and then feeding them to an LLM is a skill too.
So without LLMs you just do one file, one singleton god object?
That is absolutely true! I am tired of having to explain this to people over and over again just because some people keep over-exaggerating what current LLMs can actually do.
You built that as a junior? I'm finished!
I've worked with high-performing juniors that couldn't build that, and in the real world seniority has a lot more to do with your ability to communicate, organize, and lead projects than it does with pure coding ability. Keep at it!
@@dehancedmedia2900 thanks!
@@dehancedmedia2900 Right? seems tough for a junior
He is not saying that he coded exactly "that" as a junior. He is saying that this platform started at the hands of a junior developer.
I'm not a coder, but I work with a lot of scripting and IaC (which I guess makes me a very junior coder in a way). No LLM has been able to whip me up a decent script that I don't have to spend the whole day cleaning up. Best results so far have been to request the code in parts, and piece them together myself afterwards. I think you're right, 5 years and it still won't be able to do what a human can do. But it will eliminate basically all low level data entry/data manipulation jobs.
The biggest win I've had with AI was when I was working on a feature to add some telemetry to our software, to track which reports our clients are using.
All our reports were defined in one bigass 3000+ line file. I needed to add a string to each report with the English version of the report name, because the actual name would get translated if you switched to French, for example, and I needed to make sure I always sent the same name for each report when sending out the report-click event.
I dreaded that I would have to do literal hours of mindnumbing copy pasting for hundreds of reports, but instead I just pasted that whole file to ChatGPT and got it all done in less than 10mins.
Now, could I have also done the same with some scripting? Yeah, something like the sketch below. But it wouldn't have been nearly as fast to develop the script, test it, then handle all the inevitable edge cases. And it was way easier to just explain in English that I wanted this very simple thing done.
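For context, the scripting alternative might have looked something like this sketch; the file format here is entirely hypothetical (the real 3000+ line report file isn't shown), which is partly why handing the job to the LLM was easier.

```python
import re

# Assume each report is declared as: Report(id="sales", name=tr("Sales Summary"))
# and we want to add english_name="Sales Summary" so telemetry always sends the
# untranslated name.
pattern = re.compile(r'Report\(id="(?P<id>[^"]+)",\s*name=tr\("(?P<name>[^"]+)"\)\)')

def add_english_name(line: str) -> str:
    # Copy the untranslated string into a new english_name field.
    return pattern.sub(
        lambda m: (f'Report(id="{m.group("id")}", name=tr("{m.group("name")}"), '
                   f'english_name="{m.group("name")}")'),
        line,
    )

with open("reports.py") as src:                 # hypothetical filename
    patched = [add_english_name(line) for line in src]

with open("reports.py", "w") as dst:
    dst.writelines(patched)
```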
LLMs provide some help, but to think that you can replace a junior developer with an LLM... Well... Just give it a try and see how it goes.
And the basic reason why an LLM can't code up something as relatively complex as the neetcode site is that THEY DON'T UNDERSTAND, THEY REGURGITATE, and more compute or more data (which they seem to have run out of) can't fix that.
Until an AI system can somehow reason about what an app like that might need, and then work on it, it won't work. That would require a complete change in architecture. LLMs won't replace even half-decent junior devs.
As they are now, it's just glorified autocorrect. Helpful for very simple stuff that's been replicated a million times, but it can't do more than that.
To say LLMs don't understand is an oversimplification of a model family that I don't think you quite understand yourself. You would be surprised with the level of intelligence at which LLMs operate.
You are wrong, and you don't understand how they work.
LLMs can complete unique tasks; that alone should tell you it's not regurgitation.
Look into Geoffrey Hinton.
@@robotron26 Actually, they can complete tasks that fit a template they're given, based on their large corpus of data. See the ARC test. They actually can't solve unique tasks. If they do solve one, it's very likely there's an almost identical complete template that they're solving.
@@jpfdjsldfji No, you are completely wrong. LLMs are not intelligent because they just predict the next word. If you truly understand what you're writing, then you're not really PREDICTING anything, are you?
@@jpfdjsldfji what they do is not intelligence. But their design is absolutely intelligent.
I saw a CEO of some company bragging that AI created a guest check-in app for some event he was hosting. It was basically a to-do list: add the person's name and check them off when they arrive. Everyone in the comments was gushing about AI. And tbf, I'm not sure how many of the commenters are actually real and not AI bots, because that's where we are on social media these days, but it was still ridiculous. The only cool thing about it was that the app he used to prompt the AI also ran the code in a sandbox, so you could just prompt and immediately use whatever it created. But that doesn't make up for the fact that anything beyond the most basic of apps is impossible to build with AI.
Interesting point, how *did* you make that fancy directed graph? :D
Perhaps he procedurally generates SVGs, was curious myself. Probably gonna try and replicate it.
@@Dom-zy1qy maybe :) I tried making a program that could generate graphical representations of trees some time ago, but failed because I thought it was too complicated. But now I'm curious again maybe I should take another shot ^^
I'm 50/50 on this take. I don't believe everyone is lying. I've used LLMs to help me solve complex problems despite their limitations. I personally love them as a part of my workflow. But what I will say is that what you get from them highly depends on your skill level. The only reason I'm able to get them to help with complex coding tasks is because:
1. I narrow the scope of what I want them to solve.
2. I am providing really detailed and long prompts to get them to do what I need.
Because I'm used to building software, I know the specific things that I want them to implement when I work with them. The leap in productivity is going to come from knowing how to iterate on what you're given. I've been building software for 10 years, so it comes naturally to me to know what to look for. If you aren't a coder, sure, you might be able to make some progress where you wouldn't have before. But what you're able to accomplish with LLMs will always reflect the skill level of the prompter. Even as they improve, it's up to us to figure out how to check them when they're incorrect and get meaningful responses from them. There will be times when they slow you down, just because they're not perfect, but I've found on the whole that I'm more productive with them, because the time they save me when the output is good adds up to many hours.
You are correct that LLMs have difficulty with more complex projects, but the whole idea of good, clean code in the first place is to separate your complex architecture into simple snippets of code that interoperate but run independently of each other. This is basically what functions are: they don't need to know what the other functions' internals are. And LLMs can definitely help you write simple functions quicker than before.
If you are an engineer at heart, you won't notice that much of a difference in speed, but if you are an architect at heart, suddenly you have a bricklayer at your service that helps you build cathedrals one brick at a time. Engineers, photographers, novelists, and artists don't seem to grasp that it's not about the skill behind the individual pieces (humans are way better), but about the composition of the whole (80% of the quality at 10x the speed).
It's perhaps easier to see if you look outside your own profession, where you aren't hindered by your own standards but merely judge the outcome. Which is 10x more efficient: hiring a photographer, or generating a couple of photos from your favorite AI tool?
I can understand where the tweet is coming from, that when you are first starting out on a project where you don't know the technologies well then LLMs can make you feel 10x. After a day or two, that's gone. Anyway, I think you're right. It's just that people like that do a tiny bit of work and go WOW I SHOULD POST THIS, THIS IS AMAZING
I don't think he's lying. His experience mirrors mine. You don't ask the LLM to design your app for you.
There are a few ways in which they help.
1. When you're trying to do something you're unfamiliar with, ask for guidelines on the task. Give it as much context as possible. This helps you get up to speed quicker with relevant information. You can then either ask follow up questions or Google specific parts that you need more clarity on.
2. They automate grunt work. Stuff that's not complex, but still takes a lot of effort. Pattern matching stuff. Like converting SQL to query builder or ORM code and vice versa.
3. They can explain stuff that's hard to Google. If you give one a regular expression, it can tell you exactly what it does and break it down into parts for you, so that you can edit it the way you need to. Explaining complex bash commands works well too. You can't easily Google this, but an LLM can explain it very well. (See the sketch below for the regex case.)
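As a small illustration of the regex case (my example, not the commenter's), the kind of part-by-part breakdown an LLM gives maps naturally onto Python's re.VERBOSE mode, where each piece carries its own comment:

```python
import re

# Opaque one-liner form: r"^(?P<user>[\w.+-]+)@(?P<host>[\w-]+(?:\.[\w-]+)+)$"
email = re.compile(
    r"""
    ^(?P<user>[\w.+-]+)      # local part: word chars, dots, plus, hyphen
    @                        # literal @
    (?P<host>[\w-]+          # first domain label
        (?:\.[\w-]+)+        # one or more .label groups, e.g. .example.com
    )$
    """,
    re.VERBOSE,
)

m = email.match("jane.doe+news@mail.example.com")
print(m.group("user"), m.group("host"))  # -> jane.doe+news mail.example.com
```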
Dude I'm another software engineer (I'm technically a security engineer with a SE background) and I felt THE EXACT SAME WAY you described - any time there is a problem that is more complex than "show me a basic example of ...", LLMs completely fail and waste your time. I have spent 45 minutes to an hour trying to get something from an LLM that took me 5-10 minutes to do after simply googling or looking at StackOverflow. I had the same feelings when ChatGPT first got big and I still echo the same sentiment now. In fact, as a security engineer, I've seen LLMs introduce critical vulnerabilities in code silently...
Anyone at this point in tech who still thinks AI is not a stock pump and dump scheme is probably still a toddler.
Wishing all ML Engineers and AI "experts" a merry AI Winter
OK bro, if you think AI itself is a pump and dump scheme, then you're clearly biased for a reason. AI helps a lot. You're missing out if you don't use it.
@@usernamesrbacknowthx ai development is da best
I honestly can't wait for the AI bubble to burst. It seriously can't burst soon enough.
But only because I'm selfish. I want cheap GPUs.
Nvidia been hoarding them VRAM chips for their "AI" shovels. Everyone is in a gold mining rush rn with "AI" and Nvidia is selling the shovels. The pickaxes. It's sickening. And they're completely ignoring the gamers, the people who they actually BUILT their empire off of. 16GB cards should have been standard with the RTX 3000 series. Instead, with the "Ada Lovelace" cards (4000 series) they had the lowest GPU sales in over 20 years. Gee, I wonder why! When the "4070 SUPER" is really a 60-class and the "real" 70-class is now $800. Nvidia can suck it.
AI can't code or solve novel math problems, but it can make trippy videos, songs, and images. Code is as good as useless if there is one major bug or a few minor ones, but the same is not true for videos, because they only have to be played.
You nailed it at 5:18, exactly what I was thinking: the point when it would be better to start from scratch than to try to "fix" what the AI gives you. It's like getting a 90% discount on backup software that has a 2% chance of permanently corrupting your data. Useless. Or a fake Rolex for 1/10 the cost when you need a real one: it would be more effective to build a real Rolex from scratch than to try to turn the fake one into a real one.
Many of these AI solutions are just picking the low-hanging fruit. The fallacy is when they try to extrapolate those results to a real use case. It doesn't matter how efficiently it can pick low-hanging fruit if it has no viable path to harvesting the fruit that's harder to reach.
I can’t code, but I made a Facebook Marketplace copycat using AI that’s fully functional, with messaging and everything. It would be stupid to make a super complex startup with AI, but I am interested in business, and AI helps me code enough that I can get started and worry about hiring a coder later.
Where it is 10x for me is understanding. I can ask a question and get feedback, instead of sifting. I'm not asking for code itself, but for understanding behind what I am doing. I ask it more questions about its answers, and sometimes cross-reference with another AI. I mostly use Claude, and use Grok as a backup. I'm not in there going, "Make me an auth component". I'm asking, "What are things to keep in mind when looking into auth solutions?"
You're definitely using it wrong, if it makes you slower not faster. Here's how to use it properly:
1. Decide yourself what the file should do; consider the design choices, technologies, structure.
2. Write up everything you thought of in step 1, as bullet points.
3. Provide pseudocode for anything non-boilerplate
4. If you have another file in the project with a structure or code style you want maintained, provide that as context.
5. Use GPT-4, Claude 3.5 Sonnet, or DeepSeek Coder v2 to generate the code.
6. (not yet readily available) Write test cases and use an AI coding IDE to iteratively debug its code until it passes the test cases (a rough sketch of this loop follows below).
As a person with many years of experience coding in Python, but who doesn't know every library under the sun or every bit of syntax perfectly, the LLM's ability to write bug-free code is amazing. I am at least 2-3x faster with it.
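Step 6 above isn't packaged up in mainstream tools yet, but a rough sketch of the loop is easy to imagine; ask_llm below is a placeholder for whatever model API you use, not a real library call, and the filenames are hypothetical.

```python
import pathlib
import subprocess

def ask_llm(prompt: str) -> str:
    """Placeholder: call your model of choice and return the revised file contents."""
    raise NotImplementedError

target = pathlib.Path("generated_module.py")  # file the model wrote

for attempt in range(5):  # cap the iterations so it can't loop forever
    result = subprocess.run(["pytest", "tests/", "-x", "-q"],
                            capture_output=True, text=True)
    if result.returncode == 0:
        print(f"tests green after {attempt} fix round(s)")
        break
    # Feed the current code plus the failing output back and take the rewrite.
    target.write_text(ask_llm(
        f"Fix this file so the tests pass.\n\n--- code ---\n{target.read_text()}"
        f"\n\n--- pytest output ---\n{result.stdout}{result.stderr}"
    ))
else:
    print("still failing after 5 rounds; needs a human")
```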
Yeah, a lot of people get off talking about what it can't do instead of just utilizing it lol!!!
The point is that for a non-technical person the LLMs are useless even in that limited context. Because to get to step 5 you need to be a programmer.
LLMs actually have something called an effective context window, which is not the same as the maximum context window they support. There is also a limit on the number of logical steps a model can take, which is proportional to the number of transformer layers in the model. This puts a limit on how much information it can effectively process in the context. It means the right way to use an LLM to code is to shrink the context when you find it cannot solve the task you give it, i.e. by breaking a complex problem down into smaller problems. This is a skill all architects have, and it is the correct way to use an LLM to code.
I have personally found that using Claude has greatly increased my productivity. What used to take me a few days now only takes a few hours. If you don't see this productivity gain, then you have not mastered the skill of using LLMs correctly, i.e. you have not done your part of the thinking properly and broken the problem down into small enough, well-defined chunks.
I think you’re just bad at prompting. I’m a .NET dev and ChatGPT 4o has easily made my work 10x faster. You just have to VERY clearly explain what you want, how you want it to go about performing the task, provide all the necessary context/background, and then iterate on the LLM’s first response over and over until it’s perfect. Tell it what it did wrong, what you don’t like, what you want improved, and keep going. It’s like having an ultra-fast programmer working for me who writes all the code and all I have to do is clearly explain what I want and then review it. I’m sorry you haven’t gotten good results using AI for programming work, but if you’re not getting good results, I tend to think that’s on you, not the LLMs. I think you’re bad at prompting, and probably pretty bad at explaining things interpersonally as well.
That part about explaining things interpersonally is actually interesting because that is a common problem that many of us programmers have. After all, when working at a lower level (not UI design or things like that) we are working with abstractions that are difficult to verbalize. And at some point you just say... let me do it myself.
Because... if you have to invest time defining the functionality of a piece of code in great detail, then you are not being that efficient. You are just pushing a car through a supermarket aisle because you have become too dependent on that technology.
If LLMs make your work 10x faster, then your work was extremely simple to begin with. That's why you're finding success with your prompting and others don't.
@@pelly5742 Such is the life of a full-stack dev. Some of my tasks are insanely complex, most are not. I don’t have a junior programmer working for me who I can give all the grunt work to so that I can just do the fun stuff. I have to do everything myself. GPT 4o has become that junior programmer who does all of the routine stuff, and does it incredibly fast, so that I can work on the more complex aspects that humans are still better at, and that’s how it has 10x’d my workflow. GPT 4o is like having a full time junior programmer who has come to me right out of school with a masters in computer science, writes code with superhuman speed in any language, and works for me for only $20/month. It’s revolutionary. If you’re not getting good results using the tool then you’re probably just not very good at using the tool. It takes an especially narrow mind to believe that all the people who are getting better results with the tool than you are are all just lying about it.
The art of coding involves breaking things down into simple subtasks. Once that is done, an LLM can work on that extremely simple stuff.
.NET dev, explains it all
Broo, you are genuinely correct, I completely agree with you.
They are saying all that stuff just to attract more investment, and they're just copying each other (I mean the founders of the LLMs, each with slightly different graphs showing that one LLM outperformed another).
I recently used an LLM to build the frontend of an application; after spending several hours working on it, it was a failure.
I learned that with no knowledge of programming, LLMs are just a waste of time.
10x devs can get another 10x out of the best LLMs (myself included). The smarter you are, the better the effect you get from LLMs.
The best devs I know tend to outright delete AI plugins, because they waste time and are a distraction.
Personally, I go to an LLM only for simple snippets and idea generation; anything more and it's a waste of time.
So you have no idea of how to use LLMs...
@@vitalyl1327 sure, like I need to waste more time prompting to get them to work, instead of just getting it done myself
Me (a 20x dev, don't ask my secrets) using self-reasoning AI (AutoGPT) to get another 50x
I'm with you, man, it largely just wastes my time. I use it (for coding specifically) to introduce me to concepts I'm not familiar with on a personalized basis, like learning a language I haven't used before.
But that's it. For any actual work, it's legitimately dumb.
Then when I say that on LinkedIn, people who have no idea how to do their job, or mine, tell me I'm wrong.
Keep coping, AI will take over jobs. Stay in denial, if it helps you sleep at night.
the sad truth
Yay! I love watching people fight over arguments on Twitter! One shares their opinion (even if it's incorrect), some guy criticizes the opinion, people take it as a challenge, then the guy makes a video on the tweet, and the situation escalates further. I love watching this! Lemme get my popcorn.
Two words "Skill issue"
@@minhuang8848 False, I use SOTA and it sucks dick. It can generate 1-2 complex functions and that's it
You're completely right. Even as a Master's student in Aerospace Engineering, LLMs can't help me with my problems beyond the most basic outlines. When you need to get more niche or technical, their answers make zero sense, and you're better off doing your own literature search.
Aww, first it was the artists' turn to be mad, and now it's the coders'.
You are bang on point. Most people, especially "Founders" and self-proclaimed "CEOs", think they know it all after one day of prompting. :)
You are 10000% using it wrong. I set up orchestrated Docker containers; Terraform deployments with beautifully designed reusable components; an open-source vector store in a container with volume claims; all deployed to Azure, pulling from my own private Docker image registry, provisioning an Azure resource group into the Azure Container Apps service, which is managed Kubernetes behind the scenes.
Yes, I have working knowledge, but I wrote zero code; I just worked with the chat system while referencing and pasting actual documentation.
it helps to use an editor like vim for quickly editing sections and pages of code without always having to use a mouse.
Claude is literally a game changer for programmer/founder hybrids like me
none of those things require code to do in the first place lol
You're right: as a mediocre developer (and that's being generous), I'm able to 10x relative to my previous pathetic capability. But I'm now able to create working applications that would have taken me forever without LLMs. Some will say I'm cheating myself out of learning to code, but to that I would say:
1. I'm learning an extraordinary amount, just via observation. My coding knowledge has definitely increased dramatically.
2. An analogy: Using a calculator may degrade my arithmetic skills, but I'm able to work faster at a higher abstraction level.
LLMs will only get better, so I wouldn't rule out their taking on more complex tasks in time. No matter how good you are at coding, you might want to keep investing some amount of time in experimentation.
All this being said, I've learned to love coding so much. I've always found it painful because I perceived it as taking too long to produce meaningful results. Now, I'm getting more instant gratification, which motivates me to generate, debug, refactor, and type more code. And it motivates me to reinforce my learning in a more structured manner. Therefore, I'm taking a look at your site and will most likely sign up.
What have you actually learned?
@@ryan-skeldon Mostly structuring code in classes and functions, becoming more familiar with syntax and reading code, identifying the source of bugs, making performance improvements, and reusing code.
I pretty much agree, though I do kind of know where that "10x faster" argument is coming from. A little backstory: I have over a decade of experience programming, so I'm not a beginner by any measure; however, one thing I always hated doing was UIs... I don't mind designing them in Photoshop or Figma or whatever! But I hate coding HTML/CSS/JS. I absolutely go numb every time I look at an Android screen XML, or even Flutter code (though I do like that a bit more). And this is where I absolutely like that I can get the skeleton code and components out of an LLM, for a design that would otherwise take me forever to make (mostly because I would avoid doing it until the last moment). Having something decent-looking that I can modify and bend, and then build the actual fun backend stuff for, definitely makes it feel suddenly 10x easier to make stuff. Or maybe closer to 5x, but whatever... But when it comes to the stuff I'm actually good at, it can't provide any benefit other than small bursts of completion snippets that I would've written myself anyway, but now I just need to hit TAB. That's like a 1.05x speed improvement at best, and sometimes I find myself disabling AI completion because it outright gets in the way.
So when you hear someone say it improves their productivity 10x times, you can assume that they either:
1. aren't very good at it themselves to begin with or
2. really hate doing it and their life is a drag because they work with tech they hate
I like the LLM tech, but I really dislike the "average user"... xD My ears bleed every time I hear a manager talk about how much it improved their life...
Claude 3.5 is actually very good at UI tasks, if you know what you are doing. Example: 1) create a project, 2) describe your packages, 3) describe your workflow, 4) add some snippets in a txt file, 5) start a new chat using that project, 6) prompt away: "Create a header and footer using this data". This is 10 times faster, yes.
I wouldn't say 10x, but it has 100% enabled me to build stuff that would have taken MUCH longer before. I dunno if I can say this here, but I've copied the "Jamie" app, a meeting-minute taker that gives insights, action items, tasks, etc... Over time it meta-analyzes meetings and creates daily checklists and follow-ups for the tasks, as well as initializing some of the tasks I need to finish, like compiling metrics reports.
LLMs are fantastic advanced language parsers. I think most people misunderstand what kind of tool they are.
Let me add, I am NOT a coder. The fact that me, a non-coder, can now build these things is the important part.
Finally, someone who isn't smoking weed has spoken! I'm not even an SDE but a BI/data guy, and LLMs cannot yet solve some of my complex data transformation asks either! Thank you!
I think your viewpoint is valid, and the ballad of LLMs wasting the time of mid-to-senior devs has been well told at this point. In answer to your sincere question, I do believe that you have not yet found the ceiling of LLMs' ability to handle complex reasoning tasks of what I would class as medium difficulty. My top tip is to meet an LLM where its strengths lie, which is in the realm of natural language. Second is not to rely on the agent to produce large outputs in one shot, but instead to perform complex pivots on one aspect of a codebase at a time, making sure to provide ALL relevant context in a lean format.
It's great. Been developing a game without any knowledge of programming. Easy stuff. Just like art, you need to give the AI the right prompts for code. For everything else, there are Indian YouTubers.
"All the hard parts had to be solved by me": yes. That's the issue. LLMs are amazing when they solve problems which are simple to YOU. The interface you showed is simple to me as a senior engineer; I just don't want to spend all that time implementing a frontend I've already implemented too many times. So I'll just guide the LLM and tell it exactly what to do. If you guide it to the point where it can't even go wrong, it's an amazing tool.
And that's also why I disagree with another statement: that it's an amazing junior dev. A junior dev is only amazing with someone to guide them. Otherwise they're just lost and unsure what to do, so they hallucinate and do stupid things 24/7. But well guided? Claude made me at least 3x faster in my day-to-day job.
Where I agree, though: what makes LLMs efficient is being technical myself, and very good at it. I've always worked with a TDD workflow, which works very well with LLMs (think and design first, write assertions that will prove the problem is solved, then implement).
I don't think anyone who knows nothing about code, or isn't technical at all, could ever be efficient at coding with LLMs. Just like a client can't come and tell me "make me a website". Errr, OK, I have like 50 questions for you now, most of which you have no idea how to answer. And that's very surface-level stuff on a greenfield project, where that profile actually has the best chance of success in the first place.
Whenever I run into a coding issue that I have a hard time solving, LLMs have been completely useless. Mostly just hallucinating nonsense. Not even difficult problems, just things like baseline Rust. Like you said, it's powerful for snippets and streamlining repetitive and simple things: the "monkeywork". That's where it genuinely makes me faster. For anything requiring thought, it always is a waste of time, sometimes even misleading.
I believe that LLMs won't ever replace software engineers because, to get quality outputs, the time and effort you need to spend detailing your problem and how you want it solved is, for the most part, already the job software people are hired to do. I work in machine learning, and many times I have opened a conversation and realized that I already knew the answer to the problem just by framing it and putting constraints on the solution; no shock, that's called thinking!
On the other hand, when you plug in the entire script of your model and ask "Why is the gradient not backpropagating correctly?", the LLM will provide a fancy list of overly smart solutions that totally ignore your specific problem, resulting in a massive waste of time.
That said, removing all those time-consuming moments when you are solving low-level problems, like finding the correct function for the job in a cool library, is a massive quality-of-life improvement and lets you focus on the interesting aspects of the job.