Hot take: If LLMs make you 10x faster at coding, that says more about your coding ability than it does about how good LLMs are. The tweet: x.com/neetcode1/status/1814919711437508899 This video may change your mind about the AI hype in general: th-cam.com/video/uB9yZenVLzg/w-d-xo.html
Skill issue. You’re probably shit at prompting. It's the equivalent of making a StackOverflow post where people need to ask 50 follow-ups just to get the context of your issue.
AI can seemingly dominate the world from a mathematical perspective, not an engineering one; in practice it can really only code medium-difficulty tasks. These companies pour billions into transformer models, while the most compact brilliance on the other side was never encouraged by those communities at any real scale.
Years ago I read somewhere on the internet that the field was broken, totally broken. But that’s ok, I am from the medical field, I brought my stethoscope. Also, duct tape, and WD40 if that can help.
@@agnescroteau8960 Medicine won't lose work, but it will be dying of overwork, though. Especially if you have "universal healthcare", which will take away your negotiating power and income.
@@hungrybeaverontheleaver I have seen that code in one of the client projects. If I'm being honest, it would not pass any code review. The technical debt 📈😅😂😅
You nailed it. Non-technical people don’t understand that the last 10% takes 90% of the time… and the problem with developing with LLMs is that they give you the impression you are almost done, when you still have a long way to go. And if you started with an LLM and don’t know what you are doing… good luck with that last 10% 😂
Like me implementing a web server for AWS in basically a single day, then spending the rest of the week at least figuring out the deployment and configurations that are missing. Gotta love how "helpful" AWS errors can be.
I honestly love when chatgpt makes throwaway python scripts for me when I feel lazy; but man, maintaining that code going forward? I’d have to rewrite most of it!
Q: Why would people lie/exaggerate like this? A: To generate traffic on their channel/feed/blog via hype and/or because they have financial interests which benefit by hyping up the technology.
just like this loser youtuber is doing. AI and especially claude has objectively saved me months of work in my business. he's just baiting anti-ai idiots and soon-to-be-replaced programmers who have nothing else going for them
@@Icedanon From this equation, 10 * 0 (positivity) = 0 (productivity), which is true since 0 = 0, LHS = RHS. So let's take 0 = x, so 10 * x (positivity) = x (productivity). Since x productivity is 0, we divide both sides by 10 and get x (positivity) = x (productivity), since dividing 0 by any number equals 0 unless the denominator is 0. Going further: x/x = productivity/positivity, which is both 1 and infinity.
1. Taking x/x as 1: we conclude that productivity and positivity are inversely proportional; the more productivity u have, the less positivity u get, and vice versa.
2. Taking x/x as infinity: we can also conclude that productivity/positivity = infinity, or productivity = infinity/positivity. If your productivity was 2 units, then your positivity would be infinity/2, which is a very large number, so we can take it as infinite. Therefore, no matter what your positivity is, your productivity is actually infinite; if your positivity is infinity, then your productivity would actually suffer.
From the whole answer, we conclude two things: the more productivity u have, the less positivity u get, and vice versa; and if your positivity is less than your productivity, then your productivity is actually infinite, while if your positivity is infinity, your productivity would actually suffer.
So, moral of the story: keep your positivity low, be depressed, take some pills, do drugs, etc. to further lower your positivity and increase productivity. MATH SUCKS, SINCE INFINITY IS NOT A FRICKING NUMBER, IT'S UNDEFINED IN MATHS, SO WHATEVER I COOKED IS INVALID.
This is not how you use LLMs to aid coding. You use it to write small self-contained functions, regexps, throwaway scripts, prototypes and non-customer facing utility code for stuff like data visualisations etc. It's not for non-technical people, it's for technical people that want to get through non-critical or simple parts of a task quicker.
Exactly. Been using it heavily for some DevOps tasks. Python, bash: don't know and don't care. I have enough developer knowledge to debug it, but not to learn all the syntax and niche libs, frameworks, and language quirks.
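To make "small self-contained functions, regexps, throwaway scripts" concrete, here's a minimal sketch of the kind of helper being described; the function and regex are hypothetical examples, not anything from either commenter:

```python
import re

# The kind of small, self-contained helper an LLM handles well:
# pull ISO-8601 dates out of free-form log lines.
ISO_DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def extract_dates(lines):
    """Return every YYYY-MM-DD date found in an iterable of strings."""
    return [m.group(0) for line in lines for m in ISO_DATE.finditer(line)]

print(extract_dates(["deployed 2024-07-01", "rollback 2024-07-03 02:14"]))
# -> ['2024-07-01', '2024-07-03']
```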
@@yashghatti That's just not true; look at Webflow and Elementor, both widely used and battle-tested. They were also marketed as a replacement for all developers, but they eventually found their own place and got accepted by everyone as good tools in some cases.
We've been through *several cycles* of this. It was CASE tools in the 1980s, and then UML in the 90s/early 2000s. Both were supposed to obviate the need for coding: you'd go straight from requirements to a running program. The problem is, any symbols you use to express a spec specifically enough for it to be executable by a computer are isomorphic to constructs in some programming language. They just might not be easily diffable, version-controllable, or viewable and editable except by specialized tools.
Nearly every TH-cam coding influencer whose entire business model is pretending to be an actual professional Software Engineer, while all of their projects are forks of somebody else's public GitHub project, has entered the chat.
The reason why they're lying is money. I'm a senior engineer and I've been in the industry for 25-plus years; LLMs just waste time for anything that isn't the most trivial app on the planet. The AI hype is based around a bunch of VCs misrepresenting the difficulty of programming/engineering for the sake of selling a product.

I feel like the Twitter guy doesn't understand what 10x even means. If you can implement this stuff 10 times as fast, then you can literally work for one day, take the rest of the week off, and no one will notice a difference. Naturally, I don't think he's a programmer in the first place, which is probably why he sees a 10x improvement. This is just a big old case of the Dunning-Kruger effect.

The funniest part of all of this is that it just doesn't make sense logically for these LLMs to ever become proficient at writing code. You need enough data to train the LLM on every use case, but there are plenty of use cases with maybe only one or two examples in open source. These AIs have no ability to create anything new, so there will always be a distribution where the majority of problems simply can't be solved by the LLM, because it doesn't have enough data to understand them. At the same time, they'll become really proficient at writing todo apps, because there are thousands of those.
Sadly, most non-tech employers have started to underestimate engineers, just like my former boss, who said my $500 salary as a fullstack dev was enough because AI can help. Hahaha.
Like I've mentioned elsewhere, it also really depends on the language: the more popular the language, the easier it is, or perhaps the more options you get; the less popular the language… well, good luck getting AI to help you write in older languages or in custom ones. It's only sort of helpful for problem solving anyway, because like you said it's based on already-existing examples, which might even be solving the wrong problem, leading you down rabbit holes if you don't realize it. The biggest problem with AI, though, is the cost of maintenance, from both technical and environmental viewpoints. It's like how some NFTs are supposed to "solve" climate change; good luck getting "green" AI.
The productivity gains vary as massively as the workflows do, say for a specific repo maintainer versus a full-stack engineer. I don't think LLMs can help much if you've been doing the same thing for 10+ years.
I've been doing this stuff for even longer. There have always been people gaslighting us about the difficulty of producing quality software. This is just the same people latching onto a new tool. Before it was Agile, then ISO 9001, and on and on.
@@betadevb I will never forget the incident in NYC near Central Park where someone yelled out "Vaporeon is here" and people jumped out of their vehicles to catch this Pokemon. IN NYC / CENTRAL PARK !!! th-cam.com/video/MLdWbwQJWI0/w-d-xo.html Vaporeon Central Park Stampede
They're still really useful for "dumb tasks". I can tell GPT-4o to "Look at this project I made; now I need X project with x and x, make it based on my other code" and it will make me a working CRUD in less than a minute. Sure, it might have some issues or be missing features, but it still saved me like half an hour of coding, if not more. I've done that a few times, and personally I find it pretty satisfying to be able to generate a basic CRUD with 5 working endpoints in a few seconds.
@@SoyGriff Very much so :) I love them for learning new spoken languages too; I doubt there's a better tool, other than actually practicing with other people. They have many uses, but the message I was trying to reinforce was neetcode's opinion that they're not as advanced coding-wise as they are made out to be. In your case, the CRUD part can be found basically anywhere, since so many people have already implemented it. For implementing specific business logic, their usefulness basically depends on your ability to modularize the problem. If you can break down your problem into small enough chunks that you can ask ChatGPT how to implement them, you've already done a lot of the "programming" yourself. They're definitely useful in their own right.
@@Vancha112 The CRUD part can't be found easily, because it's specific to my project, and yet it can generate it in seconds based on my instructions; it saves me a lot of time. I agree I'm doing most of the programming and just telling the AI to implement it, but that's the beauty of it. That's what AI is for. I only have to think and explain while it does all the heavy work. That's why my productivity increased so much since I started using it. I'm building in a month what my old team would build in 6, and I'm alone.
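For a sense of scale, a "basic CRUD with 5 working endpoints" is roughly the sketch below, here assuming FastAPI and an in-memory store (all names hypothetical); the commenter's point is that an LLM tailors this shape to your project's own models:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    id: int
    name: str

items: dict[int, Item] = {}  # in-memory store; a real project would use a DB

@app.post("/items")
def create_item(item: Item):
    items[item.id] = item
    return item

@app.get("/items")
def list_items():
    return list(items.values())

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in items:
        raise HTTPException(status_code=404)
    return items[item_id]

@app.put("/items/{item_id}")
def update_item(item_id: int, item: Item):
    items[item_id] = item
    return item

@app.delete("/items/{item_id}")
def delete_item(item_id: int):
    items.pop(item_id, None)
    return {"deleted": item_id}
```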
Been coding for 20 years here. The point is.. even if you don't "need help" with that part.. the LLM will do the job faster than you can do it.. thus your productivity is improved. In my opinion, if you are not figuring out how to include LLMs in your workflow, you are going to be left behind by those that do. Is it a 10x increase? For tasks the LLM can do.. it's much more than a 10x increase!
I think 90% of my job is figuring out how to solve the issue I have, 5% - 6% is bug fixing and testing what I added, and the rest is typing code. I think, even if I could magically get all the code in my head onto my computer in like a second, it would save me a couple of working hours per week. The people who create these tools don't actually understand what programmers actually need. If, for example, I could have an AI that quickly tests my code, then we could start talking; that would probably save me lots of time.
Yes! Automatic testing would be fantastic. If someone could train an AI to do *just that*, and nothing else, it would be amazing. In general I think AI tries to be too much. It would be more practical to have an AI that was really good at something very, very specific and worthless outside of that.
Engagement farming is a real thing on Twitter, and that's what has been going on. People just post anything, and if your post contains the word "AI" and fear-mongers among the general public, it's sure to get reactions from left and right.
They only make like a few bucks too, so it's really pathetic when actual humans do it. Now, the bots are excused; they are raking in money through annoying others. That's genius-level hustling. The American way.
I'm absolutely sure that if I post anything, even if it contains the word "AI", it will get at most 4 views, because that's what every tweet I've ever posted since Twitter started has got. Unless you pay them or something, there's no way to get views or followers on that thing.
Everybody nowadays is a "Founder" or "building X" with no technical background. A few years ago it was the hype ride with no-code tools; now it's with LLMs.
Apple just published a study confirming that LLMs can't reason; they can only replicate information based on pattern recognition over the data they were trained on. That's why they can't handle medium-difficulty tasks that may require complex problem-solving, especially anything nuanced and particular to your project, and therefore maybe not found in any outside project. You're not crazy; you're just not trying to be an AI consultant or guru, and therefore you aren't lying to yourself constantly about what these LLMs can do.
I work as an AI implementer and have been in IT for over 30 years, and while I agree that yes, LLM reasoning is weak... I also think most people are missing the point. We, the people paid professionally to do this, use it for tasks that it's good at. We come from a world in which relational databases do all the heavy lifting. Unstructured data was a dumping ground only really navigable by humans. LLMs are good at mining this dump and following logic. More specifically, you use LLMs for tasks with a narrow focus, unprepared data, and a tight definition of outcomes. That approach yields the optimal results. ML then is, in essence, the same as a weakly coded 'on the spot' application, the same as what a human does. So, these are early days of a new world in which 'applications' don't exist and data is enjoined as needed on the fly.

I wrote an LLM-based app that acts as a log-enhanced protocol gateway. Many to one. Normally you would have an ESB do this, right? Nope: shove in the crap, tell it what to do, give it the schema for the destination JSON API, and poof, it generates the cleaned-up stream. TERRIBLE efficiency, but I have no input structure schema to worry about. So, just lob data at it; it worked out how to clean it into a consumable stream. We're only scratching the surface of applications based on current tech, and the tech is moving faster than our ability to seize all the opportunities it offers. "Yes... I'm a consultant." But I don't even find the use cases; I'm just involved in making it happen.

LLMs for coding are, IMHO, a TOTAL mixed bag. Right now getting one to do what you really want takes longer than doing it yourself, and the hard parts of code... no chance. But it has had moments of brilliance where I learnt new ways of doing things.
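The gateway idea described above ("shove in the crap, give it the destination schema") reduces to something like this loose sketch, assuming the OpenAI Python client purely for illustration; the model name and schema are placeholders:

```python
from openai import OpenAI  # any LLM API would do; this client is an assumption

client = OpenAI()

# Placeholder destination schema; the real one came from the target JSON API.
DEST_SCHEMA = ('{"type": "object", "properties": {"device": {"type": "string"}, '
               '"reading": {"type": "number"}, "timestamp": {"type": "string"}}}')

def to_clean_json(raw_blob: str) -> str:
    """Hand the LLM unstructured input plus the target schema and get JSON back.
    No input schema is needed; that's the whole trick (and the inefficiency)."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Reformat the user's data as JSON matching this schema, "
                        f"and output JSON only: {DEST_SCHEMA}"},
            {"role": "user", "content": raw_blob},
        ],
    )
    return resp.choices[0].message.content
```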
@@TelithsRage I understand your sentiment. I personally use it for game development, and I even created a plugin in Godot so I can send prompts to ChatGPT without leaving Godot and have the output populate my scripts directly, so I can quickly see any issues and fix them from there. BUT: I still have to design the game. ChatGPT can't design a game for me, and there are major companies (like EA) trying to tell people it can. Design and asset creation are not the same thing. Not only that, but as someone who works at one of the big Silicon Valley tech companies, I have witnessed the lack of breadth that AI creation has when we work with agencies. The work of six different creative agencies ended up looking like it all came from the same one. LLMs work really well for all the purposes you mentioned, but there are companies jumping the gun to say AI can be in the driver's seat and not just the assistant, and I haven't seen a single piece of evidence that that's true.
@@NickKeighley The point is that there are too many professional consultants trying to sell LLMs by saying they do more than what they're designed to do. We wouldn't need Apple to run a whole report if people weren't constantly being misleading and mystifying the technology.
The biggest issue with these LLMs is that they lose context SO FAST. Three prompts in, and you need to tell them again and again to keep in mind what you mentioned in the previous prompts. I was using ChatGPT and Copilot for the leetcode problem "flight assignment", and I accidentally forgot to mention the flights from this coding question in my 3rd or 4th prompt, and it started giving me airline flight info. Which is completely bonkers, because how could it think I was talking about airlines instead of the coding problem we were working on a few seconds ago!!
I find the more I know about a specific task the less useful an LLM is. When I’m new to something it’s a great place to start by talking to chatgpt or something.
I'm not a programmer but I've made a few JS sites and Python apps for fun, and one thing I learnt to do is to start new chats. Once you get too deep it starts going batshit. Granted this is all very basic level, so it probably wouldn't help on anything too big or technical anyway, but basically if you spend some time starting new chats and being very specific and detailed with your prompts it does help. With Claude I'll tell it I've updated the files in the project knowledge section and for it to refer to the newest version. There are ways of getting it to stay on track but it probably is a waste of time for an actual programmer.
I made an app for work entirely with AI. It's a 400-line CSV parser, in essence. This is about the max AI is capable of: making something you already could have made, but with the downside of having to verbally explain computer science to a toddler.
They’re lying because they’re trying to get rid of competition. Propaganda, basically. Yes, I know this sounds crazy, but it worked on me. When AI was first released, there were millions of videos and articles floating around about how AI was going to replace humans, and I, who was learning how to code at the time, gave up on coding because AI scared me. I chose to go down a different path. I’m sure there are more people who gave up on coding because of AI propaganda. Fortunately, though, stopping learning how to code didn’t have a big impact on me, since I was 13 at the time, and even though I wasted almost two years not learning how to code, I’m back at it and will not give up no matter what 💪 You shouldn’t either. AI will not replace software engineers. Period.
NGL, I was gonna use AI as an excuse to finally quit capitalism and move to a mountain with my savings (not trolling, this was going to happen). But the only thing that ended up happening is that, once again, greed won in the industry, and with the layoffs a lot of us devs are being exploited af. We are trapped between the promise of a transition that will take decades, CEOs who just want to keep cutting numbers on one side, and the AI bubble on the other. In the meantime, tons of really good people cannot even find internships, because interviewers also fell into the AI bubble trap and are now asking freshly graduated kids to code Skynet on a blackboard in 15 minutes. The industry really sucks rn.
Totally agree. That is the major difference between a non-tech person's and a tech person's point of view. From a non-tech person's point of view, they are now able to create a "non-working, working-looking site" (lol), whereas before they would need a UI designer/engineer to create it for them, which cost money and meeting time. From a tech person's point of view, an LLM is just a snippet machine: now I don't need to go to StackOverflow. Using it for more than that is just wasting time, as mentioned in your video. And the most hyped people going around talking shit are the non-tech people who work for a tech company, know nothing about systems but think they do, and start using these LLM tools thinking they can replace engineers. The worst part is that they use these tools to create so-called prototypes, then hand them to the engineers to make production-ready, and don't understand why that takes longer than the traditional way (*cough* CEOs/project managers *cough*).
A saying that's always valid: "Stupid people are the loudest." That's how I see all those Twitter "influencers/founders" with their takes on AI, LLMs, careers, etc... They need to get good themselves before talking. Wake me up when Primeagen agrees with their nonsense. Good take Neetcode!
Except it's the opposite. Most of these takes against AI for dev productivity come from people who haven't progressed beyond senior engineer, including Primeagen.
Agree with what you're saying. I've been doing software for 10+ years, and I do think it has made my productivity go up, like 10x, but the difference is that I know what I need, and I use ChatGPT-4o as a rubber duck, especially when making architecture decisions and weighing tradeoffs. I'll have a vague idea of, let's say, 3 different ways of building X product, so I just ask for pros/cons, describe my ideas, and so on, and it works. The thing I've noticed is that if I spend 2+ hours discussing/bouncing ideas with an LLM, it becomes stale really fast, forgets my previous input, and just hallucinates. But for initial technical document writing, or small stuff like basic components, it works VERY well.
This. I agree with this a million times over. I treat it like a rubber duck that has 130 IQ. At the end of the day it's *my* hand that is writing the code; the LLM just provides input and feedback. The claim made by the tweet OP is definitely exaggerated, but if you strip out the hyperbole and 'zoom out' a little, it's pretty realistic.
It’s about pain vs complexity. Like he said, if it can handle snippets, it can handle big projects in chunks. That’s how I use it. I edit more code than I write, but my jumping-off point is always an AI. It just physically writes code faster… I can do the thinking and editing, but it writes 500-1000 lines a minute.
Ah yes, the old "AI made me a 10x engineer." It's always cap... chances are that the individuals who claim this are the ones who push absolute dog water to production, because they don't actually understand the code or know how to debug. Personally, if I'm prompting an LLM to write something, then having to double-check it, and, if it's wrong, prompting it again and repeating that whole process till it gets it right, it would have been faster to do it all myself in the first place.
I don't know, man. Personally, I find that it's much easier to edit 'half-way there' code than to write from scratch. It might take a while to get used to the peculiarities and bad habits of the LLM and to figure out the best point to stop prompting and start coding by yourself, but once you do, I find that relying on AI makes me a lot more productive. Not 10x, but definitely at least 3x on a good day (although there are obviously also bad days where it's barely 1x). I find that it's great at data visualization code, complicated refactorings, explaining an existing (not too large) project I'm trying to get started with, and basically speeding up any annoying, slightly complex, tedious process. And it really shines for quick, dirty projects in languages you're unfamiliar with (where you'd need to google how to init an array) but can read just fine once the code's there in front of you, since you can basically just wing it as long as you've got an LLM watching your back.
@@ReiyICN Oh boy, I'd never ever rely on AI for "complicated refactoring". That sounds strikingly similar to shooting yourself in the foot. To be fair, I've only found AI useful for common boilerplate you don't want to write, or, in the case of Copilot, when you're creating a structure: it is quite good at completing the structure, for example switch or else statements.
The issue for a lot of people is PROMPTING. You don't "prompt" LLMs; you don't have to find the correct prompts or keywords. You just talk to them as if they were a human being, a dumb one. It works really well in my experience. It's better to write 2 paragraphs explaining what you want than to try to make it work 10 times while only writing basic prompts and not providing the whole context.
Yes, the problem is that they can get close to the spec you give them, but it's not close *enough* and has to be rewritten. This has been frustrating for me many times where I tell the LLM to change one small detail and it goes round in circles before finally admitting something can't be done without starting from scratch. Huge waste of time in a lot of cases
That's part of you learning. If you learn what tools exist or what libraries can actually do, it should be able to help you code it just fine. It's literally translating your prompt from English into code. You asking it to do something impossible is partly your fault.
@@wforbes87 100%. As someone using it to write basic code, it's a godsend; I don't need to wait a day or submit a ticket or whatever just to talk to an engineer. These guys are vastly underestimating the amount of mundane work that goes on outside of FAANG, lol; most coders and code jobs are not frontier.
I believe that LLMs won't ever replace Software Engineers because, to get quality output, the time and effort you need to detail your problem and how you want it solved is, for the most part, already the job software people are hired to do. I deal with machine learning, and many times I've opened a conversation and realized I already knew the answer just by framing the problem and putting constraints on the solution; no shock, that's called thinking! On the other hand, when you plug in the entire script of your model and ask "Why is the gradient not backpropagating correctly?", the LLM will provide a fancy list of overly smart solutions that totally ignore your specific problem, resulting in a massive waste of time. That said, removing all those time-consuming moments when you are solving low-level problems, like finding the correct function for the job in a cool library, is a massive quality-of-life improvement and lets you focus on the interesting aspects of the job.
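For anyone who hasn't hit the "gradient not backpropagating" question themselves, one of the most common culprits looks like this minimal, hypothetical PyTorch repro; it's the kind of project-specific detail a generic LLM checklist tends to miss:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)

# Broken: .detach() (or .item(), or a numpy round-trip) cuts the autograd
# graph, so nothing upstream of it ever receives a gradient.
y_bad = (x.detach() * 3).sum()  # calling y_bad.backward() would raise an error

# Working: keep the whole computation inside the autograd graph.
y_good = (x * 3).sum()
y_good.backward()
print(x.grad)  # tensor([3.])
```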
Important to keep in mind that a lot of the hype is either manufactured by folks that have invested a lot of money into the current AI boom or folks that have fallen for said marketing.
The only people who can fall for said marketing are those who haven't actually tried the product. The rest, like the guy writing this article, are CLEARLY stakeholders. I bet this guy bought some Anthropic stock beforehand, or is just a paid actor.
I do a lot of work writing quick utility tools and API integrations for enterprise tool marketplaces, and this is extremely useful for making hyper-specific private apps that help a team handle one tiny piece of straightforward automation, plus hooking together a couple of APIs and maybe a super quick interface. LLMs are really powerful for things like this and have probably made me 10x faster at certain easy but tedious tasks.
I'm starting to think that the bots are manufactured by Twitter. What benefit does anyone outside the company get from running bots that respond to posts like that? Not to mention the captcha when registering: I literally could not pass it myself after like 3 tries of having to get 20/20 answers correct, to the point that I gave up. Maybe I'm stupid and AI can solve it better than me, I don't know; it seems fishy. Probably 90% of the posts I see are AI.
It is overhyped, but at the same time it does make my work much faster. It can't build entire systems or even big parts of a system, but it can work on small parts: for example, writing simple functions, components, UI elements, etc. I mainly use it to speed up my work; instead of coding, it's mostly me checking over the generated code and fixing small things. Sometimes it's frustrating and gets things very wrong, but I usually just have to fix the prompt. Overall it's definitely sped up my workflow; maybe not 10x, but 2-3x is reasonable.
And that's enough for it to be a massive change. Now your company can make you do 3x the work instead of hiring one or two more people. They will absolutely do that. And AI will advance more, to the point where eventually you will not be needed. There are no arcane or unknown laws in coding libraries; they are all manmade and documented. The AI will get better.
You're thinking in zero-sum terms. The demand for code will simply increase... the reality is, most companies want to use a lot more software than they currently do, so they will simply create more applications and better tools for users.
LLMs and LMMs are currently effective for generating boilerplate code or providing insights into topics I'm unfamiliar with, without needing to sift through documentation.
I like your style: calm, composed, and very genuine and non-toxic. You know you're seeing bullshit, yet you respond respectfully and give everyone the benefit of the doubt.
I agree with the first tweet after trying to work on a project using Claude 3.5. It's true it doesn't get complex stuff like your entire app, but if you just constantly ask it questions about small parts, it gets those small parts done very fast. For example, my UI was very bad, so I took a screenshot of it, gave it that plus the code for the component, and told it to make the UI better, and it just did it in 1 try. Same with asking for specific small changes one at a time. You don't ask "write an app that does x"; you write "change this function to also do y", and then it does way better if you give it the minimal context that's actually necessary instead of the entire app.
The people that succeed in this industry are the ones that embrace change and figure out how to use new tools. I still know people that use oldschool vi in their coding, and never adopted IDEs.. or said git offered "nothing new". In reality.. these folks simply didn't want to do the work to learn new things.
The thing is, AI is like a fast worker that catapults to an answer quickly, so you have to steer it with the correct type of questions so its output isn't ambiguous. I had to code some features for a task component (add a task, remove tasks with a cross button, add a due date with a calendar, etc.). I had its Figma file and gave Claude 3.5 all the details to remove ambiguity, and it made a surprisingly good boilerplate component, as I knew its training data would have something similar. For run-of-the-mill tasks it is a game changer, but for something requiring a spark of imagination (nil training data) it fails pretty badly.
There are around 10 gazillion implementations of "my first task list" on GitHub; of course it managed to do that. Now ask it to design an async cache that fits the constraints of your existing application...
I think a good analogy for the new AI chatbots is this: quite a while ago, we could have a computer play chess for maybe the first 8 moves by just looking up the correct responses in an opening book; equally, in endgames with 7 or fewer pieces chess is a solved game, so again a computer could play to a perfect level. But asking a computer in the early 90s to play chess in the middle-game was a nonstarter; they were hopeless. Eventually we figured out how to get the computer to do more complex things, and now it can play all the way through.
That's been my experience as well. Even with snippets, it works best when I effectively solve the core logic first and just ask for code, or give it complete code and ask for suggestions. For anything beyond snippets, I've spent more time holding the LLM's hand to not go x or y route, and eventually just figure it out myself. LLMs are definitely far, far away from getting to the point where a lot of people praise them, like 10xing. They definitely are very handy tools, but have a lot of limitations.
@@strigoiu13 I did not. You have the option to remove your data from being part of the training set. Then, for security purposes, I delete conversations as well. Even then, they have plenty of training examples from other sources.
@@strigoiu13 Also, if it actually learned from its users automatically, it would be saying slurs constantly within days of launch. We've seen that happen to chatbots like that repeatedly.
That is absolutely true! I am tired of having to explain this to people over and over again just because some people keep over-exaggerating what current LLMs can actually do.
I think this is the expected behavior of non-technical people, they will be defensive and want to believe that they can do anything a software developer/engineer can do with the help of LLM, it's just human nature
It's not human nature, it's what they've been told by the people they're paying for the service. The error is blindly believing what the salesmen tell you
it's expected behavior of people with no common sense and a thought process of an elementary school kid on a good day.. which describes most of these parasites "working" in management
Sad part is you’re all wrong… LLMs will create a revolution where non-technical founders CAN build a company, one that will rival companies as large as Microsoft and bigger. 💎
Yeah you need to make architectural choices before starting But Claude definitely makes you faster Coding is less about writing code and more about planning out what you’re building beforehand
He just explained why it doesn't "make it faster" and instead "makes it slower". Look up technical debt and, to a lesser extent, code smells. AI solves easy problems, and still only easy problems, and still in a vacuum. I don't need AI to plan for me, because if you have an idea you need a plan anyway. Why would I need all this extra boilerplate generated for me? Lucas, you don't sound like you have ever coded in your life, and you don't understand that 'architectural' choices are made well before you sit in a developer role.
@@minhuang8848 It's a good analogy if you look at it this way: a microwave heats up food. You can heat up leftover pasta; you can heat up microwaveable food. It's alright and it'll fill the stomach, but it's not that great compared to the food you make in an oven or on a stove. LLMs will give you small working code snippets, but they won't solve your complicated application, and they don't come up with novel ideas. In that sense it's a microwave: you give it a prompt and it gives you mediocre code in a short time. Making food yourself is like programming, while pushing a button on a microwave is just like prompting. I just don't see how LLMs are like CNC machines or 3D printers; if anything, they would be helpless and inconsistent CNC machine or 3D printer operators. I don't see them as tools in that sense, assistants at best.
Totally agree. It cannot develop medium or hard projects. The way I use LLMs is to first architect the project and break it down into smaller, manageable chunks. Once done with that, I ask the model to code those pieces, with specific interfaces. With current capabilities, LLMs cannot replace developers.
I've realized that using AI for a small function, or even for an issue where I ask it to build what I want to give me "ideas" (I guess) for another way to do it, has led me to waste a lot of time trying to get the right answer out of it instead of, for example, looking on StackOverflow.
Man, I have been an artist for nearly three decades, and I feel exactly like you when I listen to other artists praising LLMs for art. I have tried many image generators, and they work great... for people who just want a random picture to jump out of them :D The more specific the need, the more problems with generating even a simple picture. You will just waste time trying to describe what you need while getting random pictures that are sometimes not even remotely connected to it. And that's just simple pictures. When it comes to 3D models, the tools are laughably simplistic. I see so many YT videos where people are AMAZED by the results, while showing something absurdly simple that still needs manual fixing. They can't even get good topology, and people keep talking about how this will replace us. More so, some people claim they have already lost a job to a generator... HOW? What the hell were they doing? How simple a thing, that they could be replaced by something so deeply flawed?

I recently started to learn a bit of coding for a simple proof-of-concept game I am making. I didn't even try an LLM, because I don't want to waste time. I'd rather ACTUALLY LEARN and understand how the code works, instead of copy-pasting it and then repeating the cycle 1000 times because something isn't working and I won't know why, while the LLM tells me "oh, I am sorry, let me fix it. Here's an improved solution!" and then spits out something wrong once again :D
The generator doesn't have to actually be good to replace people, see. All it has to do is be shiny enough for the people marketing it to convince people's upper management that it can replace them. Or be a convenient excuse to have mass layoffs and rehire at lower price or overseas.
@@tlilmiztli Hi fellow artist. Fortunately, before the AI hype I had already switched to fullstack dev, and I think being artistic gives something different to what you build.
It's actually quite simple, man: non-technical people don't really understand the complexity of the application. They see it looks the same, so it must be the same! Edge cases?! What are those?
I’ve recently had a very nice experience with Claude. The only downside is that the amount of time you can interact with it is limited, even on the pro plan; every now and then it will tell you to wait a few hours to continue. But aside from that, I’m building an app I could not have built in the time I had without it. I’m an expert JS dev, but there are some things I don’t understand at all, like audio engineering. I’m building a music-based app using JS, so I prompted Claude to teach me Tone.js (not build the app) through a series of small lessons, building up from there until, through the lessons, I had a working prototype of what I’m after. Major game changer.
It's not 10x faster, but it is often around 1.2x to 2x, depending on the level of expertise you have with the programming that needs to be done. Stuff like:
- "I have this code {code paste here} and I want to test it for x, y, and z; write a unit test for it."
- "Rewrite this code to do the same thing but async, or for a different kind of object, etc."
- "Write an algorithm for this class {paste code} which should do: {something boilerplate-y}"
- A lot of graph rendering done with python/matplotlib is, IMO, way faster as a first draft from an LLM, with certain things optimized afterwards, as opposed to reading documentation. (If I last used matplotlib 6 months ago to plot a scatter plot with color-coded disks, I won't remember that the cmap param for the scatter function is called cmap, for example.)
- Porting code between languages (yes, it still makes sense to read and test it).
The list isn't really exhaustive.
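For the matplotlib item in that list, the forgettable detail looks like this (a minimal sketch with made-up data):

```python
import numpy as np
import matplotlib.pyplot as plt

# The detail you won't remember 6 months later: scatter() takes the
# per-point values in `c` and the colormap name in `cmap`.
x, y = np.random.rand(2, 100)
values = np.random.rand(100)
plt.scatter(x, y, c=values, cmap="viridis", s=40)
plt.colorbar(label="value")
plt.show()
```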
Agree on all of these, especially porting code. I'm very familiar with C and Python but my Go is very rusty, but I can have it convert entire parsing pipelines from Python into Go with minimal issue. It's a godsend
Bro I kid you not I thought the same as you, but recently I have been getting so frustrated with it not being able to complete even these simple tasks optimally.
ChatGPT made my work slower yesterday. I tried to use Python to fill product descriptions in a .csv file using the ChatGPT API, but the code it gave errored, and it couldn't find a solution and fix it. I had to read the documentation for the library I was using and found out my .csv file was separated by semicolons, not commas, which had to be explicitly configured in Python's csv tool. I would rate that kind of task as easy, yet the LLM failed.
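For the record, the fix that had to come from the docs is a one-argument change; a minimal sketch (the file and column names here are hypothetical):

```python
import csv

# csv defaults to comma-separated values, so a semicolon-separated file
# needs the delimiter spelled out explicitly.
with open("products.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter=";"):
        print(row["name"], row["description"])
```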
When I was studying computer science, I did a lot of extra teaching of other students for cash on the side. LLMs became a big thing just after I graduated, but I did see the effect they had on a lot of students I still worked with.
1) They can't do complex stuff. LLMs could not solve the homework assignments you'd get by the end of your first semester at my uni.
2) What they can do, they often do badly. When my students had to familiarize themselves with a new language, I'd have them reimplement some common algorithms and data structures in it (sorting, searching, linked lists, etc.). One of my guys did quicksort with ChatGPT, and it was wrong: it sorted, but it allocated twice as much memory as it should have. This type of bug is the worst thing for new coders; you'll look at the output, see it's correct, and move on, while not learning what makes it a wrong solution.
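To make that failure mode concrete, here is a hedged reconstruction of the quicksort bug (not the student's actual code): the first version sorts correctly but allocates fresh lists on every call, which is exactly the kind of wrongness a correct-looking output hides; the second partitions in place.

```python
# Looks right, sorts right, but every call builds brand-new lists,
# so memory use is far higher than quicksort should need.
def quicksort_allocating(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    return (quicksort_allocating([x for x in arr[1:] if x < pivot])
            + [pivot]
            + quicksort_allocating([x for x in arr[1:] if x >= pivot]))

# What the exercise was after: in-place partitioning (Lomuto scheme),
# with only the recursion stack as extra space.
def quicksort_inplace(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot, i = arr[hi], lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    quicksort_inplace(arr, lo, i - 1)
    quicksort_inplace(arr, i + 1, hi)
```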
Well, I am an LLM engineer, and truth be told (as Microsoft also responded after the Copilot outrage), these are just tools to help professionals in their domains. People from non-programming or beginner-level programming backgrounds always get it wrong; they get baffled by the smallest code snippet. If you have no knowledge of the background of the task you are trying to solve, an LLM sure can waste your time; they are specifically designed to assist people with background. An LLM sure can save you time and help you as a tool. It is not intended to replace an engineer, and the number 10x is exaggerated. However, this is the current state of the art; it does not mean LLMs won't be better in the future. As a personal example, I use LLMs all the time to prototype by creating interfaces, but I have a degree in Computer Science, and many times I have to rewrite prompts. Overall, I'd say it saves you 1.5x to 2x time at most, maybe more on some rare occasions, but that cannot be generalized.
This. If you know what you’re doing and looking at, and you understand what LLMs are and are not, they are fantastic, fantastic tools for speed and productivity. They are insanely helpful for documentation. The code and documents aren’t perfect, but I can iterate so fast that ultimately I’ve pushed better code, faster.
IMO it's the exact opposite: if I work on an existing project, know the tech stack well, and duplication is low, the gain from an LLM is really minimal or even negative. Negative, because accepting and modifying a suggested solution often ends up time-wise worse than just doing it from scratch; you can also let through bugs you'd never write yourself but won't notice in generated code. Also, sometimes I make up special cases for Copilot to prove itself, because it's kind of satisfying... lol. It's different when prototyping, working with an unknown tech stack, or where duplication is by design (huge CRUD services) or inherited as bad design, or for e.g. unit testing, where simplicity and duplication are desired. And I love Copilot for PowerShell, exactly because I don't know it well; it's a 10x speed-up in some cases there, and 5% in my core activity.
Dude I'm another software engineer (I'm technically a security engineer with a SE background) and I felt THE EXACT SAME WAY you described - any time there is a problem that is more complex than "show me a basic example of ...", LLMs completely fail and waste your time. I have spent 45 minutes to an hour trying to get something from an LLM that took me 5-10 minutes to do after simply googling or looking at StackOverflow. I had the same feelings when ChatGPT first got big and I still echo the same sentiment now. In fact, as a security engineer, I've seen LLMs introduce critical vulnerabilities in code silently...
ChatGPT couldn't code something I made in Python when I was only halfway through a 100 Days of Code course. I started with zero experience. I could even tell it which libraries to use, and it couldn't do it.
I’ve been an engineer for 20 years, and I’ve been building a new SaaS product with Claude 3.5; my experience lets me ask the exact questions and give it the exact context I need to create what I want. So far it’s helped me build Vue frontend components and a Node.js backend, helped me configure TypeScript, and helped me configure Vercel. It helped me build out authentication and the middleware; the Firebase integration wasn’t smooth, but it helped. It helped me debug CORS issues and also build out the copy. I think the development process has been at least 5-8x faster.
I committed 3 PRs last week that were coded entirely with an LLM: describe the problem and provide similar sample code, review the solution, maybe go a couple of rounds back and forth with the LLM iterating on it, request tests, put everything in the repo, run the tests, and feed errors into the LLM until the code is fixed. I am the person who would have coded this anyway, so I have the needed technical skills; the idea of a non-technical person doing this today (or soon) is risible. However, I did get a huge improvement: days of work condensed into a day. Also, the idea that engineers spend most of their time on “hard” problems is strange, tbh. I spend most of my time finding existing solutions to non-novel issues. Maybe we work on very different problems, idk. Have you considered that maybe people are not lying, but are seeing different time-wasters disappear overnight due to LLMs?
@@gershommaes902 A management script I wrote for a manager who had a last-minute question about data (it took me 30 minutes to create, test, iterate, and submit); a Django query to retrieve the roots of the subforest that remains when you apply RBAC to a forest on the db (mind you, minimizing data access and avoiding unnecessary data fetches); and a pair of mixin classes to decorate models and querysets so they emit a signal any time you change the underlying data on the db, plus a handler to track that in a separate model. None of these really worked out of the box or were perfect, but I had a good sense of what I wanted and of the test cases (which I generated via Claude itself), and I iterated several times over requirements and even design options (I tried several until I settled on the mixins). I got working results in a fraction of the time and with more coverage than I would have otherwise. This is a revolution, and it’s only going to get better. I’m waiting for better Claude-IDE integration with a more agentic workflow. Also, live testing on a dev or stg environment is a time drain I hope to automate soon with some sort of bot that reads the PR and runs some “manual” tests on a local version of the whole site.
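The loop described in these two comments has a simple shape. A rough sketch under stated assumptions: ask_llm is a hypothetical wrapper around whichever model API you use, and writing the generated code out to the repo files is elided.

```python
import subprocess

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM API you use."""
    raise NotImplementedError

def iterate_until_green(task: str, max_rounds: int = 5) -> str:
    """Generate, test, feed failures back, repeat; then hand the result
    to a human for review and a PR. (Writing `code` into the repo files
    between rounds is elided here.)"""
    code = ask_llm(f"Write code for: {task}. Include unit tests.")
    for _ in range(max_rounds):
        result = subprocess.run(["pytest"], capture_output=True, text=True)
        if result.returncode == 0:
            return code  # tests pass; ready for human review
        code = ask_llm(f"Tests failed:\n{result.stdout}\n\nFix this code:\n{code}")
    raise RuntimeError("still red after max_rounds; time to code it by hand")
```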
I agree. Working with an LLM feels similar to working with a junior developer fresh out of college: someone who knows a vast number of algorithms and can implement them incredibly fast. However, much like with junior developers, the time savings aren’t always as significant as one might expect. I also worry that LLMs may not become dramatically more powerful than they are today. That said, just as we adapted to auto-complete and to collaborating with junior developers, learning how to work effectively alongside LLMs is becoming increasingly essential.
I agree, as someone who loves LLMs and has been using them for my work as a junior dev, it saves time from stack overflowing and googling syntax, boilerplate and code snippets. It has saved me from bugging my senior engineers plenty of times as well. But I would be amazed if in 5 years things improve significantly, let alone replace a whole dev team. Things are looking to have some level of diminishing returns already so if we get EVEN 2x more "effectiveness" within the coming years and it can solve medium level complex tasks I would be thrilled.
You’re absolutely right. For small things, I breeze through. When I was trying to implement multi-state logic, though, it not only wasted my time, it literally ruined the code. When you try to guide it, it looks like it's agreeing with you, then it literally disconnects parts of the code it was supposed to be building. What I’ve been working on is not mission-critical; it was possibly a test. But it is clearly a limitation, and we need to figure out how to integrate these tools while understanding how they are limited.
People tend to forget that in essence LLMs are just fancy-shmancy search engines which translate prompt to output in one flyover. As long as you stay within the range of whatever prompt->something translations they were trained on, it can work pretty well. When you leave that area, they break down horribly.
I think Claude really helps speed things up in a few ways. It helps as another pair of eyes for bugfixes. It helps when you have no idea how to even get started in a sphere. It's really good at variable and function naming. And it can type faster than me, so I can often tell it exactly what I want a function to do and it will be done about twice as fast as I could write it. Claude is not going to write your app, but it is a pretty good copilot.
@GoodByeSkyHarborLive Yes, it's better than GPT. GPT is still useful, but Claude seems much more with it and able to correct its mistakes, where GPT gets things wrong a lot more and gets stuck. For example, Claude will start to suggest debugging techniques when you keep getting the same error. It will even ask you to share other classes or methods; it seems to think creatively about the problem. GPT just gets into a fail loop and can't get out.
Your bugs must be extremely trivial. IME, a bugfix where "you have no idea how to even get started" means you start with a bug in a multi-million-line codebase (with no idea which part of the code is even called without spending hours), only to discover the bug is caused by a call to an external site along the way, which returns a warning code that isn't documented, and the only information about that site anywhere is the source code of a DLL written in 2010. (And by IME I mean what happened this morning; at least it wasn't the evening.)
100% agree with you. LLMs are currently OK at scaffolding easy stuff & small chunks of code. Fetch some data, pass it on to the view, generate some basic UI. But 10x coding? Nope.
Ok bro, if you think AI itself is a pump-and-dump scheme, then you're clearly being biased for a reason. AI helps a lot; you're missing out if you don't use it.
I honestly can't wait for the AI bubble to burst. It seriously can't burst soon enough. But only because I'm selfish. I want cheap GPUs. Nvidia been hoarding them VRAM chips for their "AI" shovels. Everyone is in a gold mining rush rn with "AI" and Nvidia is selling the shovels. The pickaxes. It's sickening. And they're completely ignoring the gamers, the people who they actually BUILT their empire off of. 16GB cards should have been standard with the RTX 3000 series. Instead, with the "Ada Lovelace" cards (4000 series) they had the lowest GPU sales in over 20 years. Gee, I wonder why! When the "4070 SUPER" is really a 60-class and the "real" 70-class is now $800. Nvidia can suck it.
AI can't code or solve novel math problems, but it can make trippy videos, songs, and images. Code is as good as useless if there is one major bug or a few minor ones, but the same is not true for videos, because they only have to be played.
Could not agree more. I have been using Cursor AI since the hype. But when it cannot figure out a bug it introduced on its own, even the dev can't figure it out, because of the large amounts of bloated code added.
I've worked with high-performing juniors who couldn't build that, and in the real world seniority has a lot more to do with your ability to communicate, organize, and lead projects than it does with pure coding ability. Keep at it!
What it is 10x for me is understanding. I can ask a question and get feedback, instead of sifting. I'm not asking for code itself, but for understanding behind what I am doing. I ask it more questions about its answers, and sometimes cross reference with other AI. I mostly use Claude, and use Grok as a backup. I'm not in there going, "Make Me an Auth Component". I'm asking, "What are things to keep in mind when looking into auth solutions?"
And the basic reason why an LLM can't code up something as (relatively) complex as the neetcode site is that THEY DON'T UNDERSTAND, THEY REGURGITATE, and more compute or more data (which they seem to have run out of) can't fix that. Until an AI system can somehow reason about what an app like that might need, and then work on it, it won't work. That would require a complete change in architecture; LLMs won't replace even half-decent junior devs. As they are now, it's just glorified autocorrect: helpful for very simple stuff that's been replicated a million times, but it can't do more than that.
To say LLMs don't understand is an oversimplification of a model family that I don't think you quite understand yourself. You would be surprised with the level of intelligence at which LLMs operate.
You are wrong, and you don't understand how they work. LLMs can complete unique tasks; that alone should tell you it's not regurgitation. Look into Geoffrey Hinton.
@@robotron26 Actually, they can complete tasks that fit a template they're given, based off their large corpus of data. See the ARC test. They actually can't solve unique tasks; if they do solve one, it's very likely there's an almost identical, complete template that they're solving.
@@jpfdjsldfji No, you are completely wrong. LLMs are not intelligent, because they just predict the next word. If you truly understand what you're writing, then you're not really PREDICTING anything, are you?
I'm an engineer in my fifties. I've used GPT-4o to help me control our test and measurement equipment from inside Excel. We already use semi-automated Excel templates to produce certification, and I am fairly handy with VBA in Excel. But what I am now doing with automation is something I would never have done without an LLM. I barely have the time to do my job; I most certainly don't have the time to learn the APIs that GPT-4o 'knows'. So bear in mind the transformative nature of this new technology for those of us who use coding as just one of the tools in the box, and not as our main skill base.
@@ryan-skeldon You'd be shocked how many companies are reliant on 20 year old excel files that just do all the data collection. It works and it works well, esp if they have really old equipment that's difficult to interface with.
It's funny. Artists warned about LLMs because people were using LLMs to say they could replace artists, and you see the same problems there that you see here with coding.
You are correct that LLMs have difficulty with more complex projects, but the whole idea of good, clean code in the first place is to separate your complex architecture into simple snippets of code that interoperate but run independently of each other. This is basically what functions are: they don't need to know what the other functions' internals are. And LLMs can definitely help you write simple functions quicker than before. If you are an engineer at heart, you won't notice that much of a difference in speed, but if you are an architect at heart, suddenly you have a bricklayer at your service helping you build cathedrals one brick at a time. What engineers, photographers, novelists, and artists don't seem to grasp is that it's not about the skill behind the individual pieces of art (humans are way better), but about the composition of the whole (80% of the quality at 10x the speed). It's perhaps easier to see if you look outside your own profession, where you aren't hindered by your own standards but merely judge the outcome. Which is 10x more efficient: hiring a photographer, or generating a couple of photos with your favorite AI tool?
I completely agree with the main point of the video, with one caveat. I have seen people polarizing pretty fast on this topic, between people thinking that LLMs can _already_ substitute for junior engineers and people thinking that LLMs will never be an issue for their jobs. You are perfectly right: we can observe that camp A is wrong. But I am as sure that camp B is wrong too. Even believing your claim that "in the next 5 years LLMs will not be able to substitute a JE", 5 years is _very little_ time. I have 30 years of work in front of me, years that 3 years ago I thought I would spend coding. Whether this revolution happens today or in 5 years, my choices are pretty much the same: I have to adapt to the change fast. And honestly, I do not have the same confidence you have in the 5-year claim. Today it looks like a far target, but considering where things were last year, and the continuous tech revolutions of the past two years, I would not rule out that next year an LLM will be able to code neetcode from scratch. Sure, I would be surprised. But I have been surprised many times by the speed of LLM evolution.
@@Dom-zy1qy maybe :) I tried making a program that could generate graphical representations of trees some time ago, but failed because I thought it was too complicated. But now I'm curious again maybe I should take another shot ^^
Agree with every word you said. I've been learning coding constantly over the past 2 years, and while I do use AI, it is a small part of what I do overall. And I'm still relatively a total beginner. I work with a few people who are way less technical than they think they are, and they believe that coding will be dead soon and that they could do what I can do using AI, just taking a little more time. None of them has attempted anything more advanced than setting up a spreadsheet to ingest Google Calendar events.
Bravo! Glad someone's talking about it. Those idiots talking about "10X" have no clue. They don't even understand the "1 + 1" level stuff. But they sure do love hyping things up.
The biggest win I've had with AI was when I was working on a feature to add some telemetry to our software to track which reports our clients are using. All the reports we had were defined in one bigass 3000+ line file. I needed to add a string to each report with an English version of its name, because the actual name would get translated if you switched to French, for example, and I needed to make sure I always sent the same name for each report when sending out the report-click event. I dreaded having to do literal hours of mind-numbing copy-pasting for hundreds of reports, but instead I just pasted that whole file into ChatGPT and got it all done in less than 10 minutes. Now, could I have also done the same with some scripting? Yeah. But it wouldn't have been nearly as fast to develop the script, test it, then handle all the inevitable edge cases. And it was way easier to just explain in English that I wanted this very simple thing done.
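For comparison, the scripting route this commenter skipped might look roughly like the sketch below. Everything here is an assumption for illustration: the real file layout isn't described, so a hypothetical defineReport(...) declaration format stands in for it.

```python
import re

# Hypothetical: each report is declared as defineReport("Quarterly Sales", ...)
# and we want to inject an untranslated english_name field next to the name.
PATTERN = re.compile(r'defineReport\("(?P<name>[^"]+)"')

def add_english_names(source: str) -> str:
    # Duplicate the captured display name into an english_name argument.
    return PATTERN.sub(
        lambda m: f'defineReport("{m.group("name")}", english_name="{m.group("name")}"',
        source,
    )

with open("reports.py") as f:
    patched = add_english_names(f.read())
with open("reports.py", "w") as f:
    f.write(patched)
```

Even this toy version hints at the edge cases (escaped quotes, multi-line declarations) that made pasting the file into a chat window the faster path.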
I saw a CEO of some company bragging that AI created a guest check-in app for some event he was hosting. It was basically a to-do list. Add the person's name and check them off when they arrive. Everyone in the comments was gushing about AI. And tbf, I'm not sure how many of the commenters are actually real and not AI bots because that's where we are on social media these days, but it was still ridiculous. The only cool thing about it was the app he used to prompt the AI also ran the code in a sandbox so you could just prompt and use whatever it created immediately. But that doesn't make up for the fact that anything beyond the most basic of apps is impossible to build with AI.
I don't think he's lying. His experience mirrors mine. You don't ask the LLM to design your app for you. There are a few ways in which they help. 1. When you're trying to do something you're unfamiliar with, ask for guidelines on the task. Give it as much context as possible. This helps you get up to speed quicker with relevant information. You can then either ask follow-up questions or Google specific parts that you need more clarity on. 2. They automate grunt work. Stuff that's not complex but still takes a lot of effort. Pattern-matching stuff. Like converting SQL to query-builder or ORM code and vice versa. 3. They can explain stuff that's hard to Google. Like if you give it a regular expression, it can tell you exactly what it does and break it down into parts for you, so that you can edit it the way you need to. Explaining complex bash commands works well too. You can't easily Google this, but an LLM can explain it very well.
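Point 3 is easy to make concrete. The kind of piece-by-piece breakdown an LLM gives for a regex can be captured in verbose form, as in this sketch (the log-line pattern itself is an illustrative choice, not from the comment):

```python
import re

# The sort of explanation an LLM produces, written as per-part comments:
LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2})"     # ISO date at the start of the line
    r"\s+(?P<level>INFO|WARN|ERROR)"  # log level keyword
    r"\s+(?P<msg>.*)$"                # everything else is the message
)

m = LOG_LINE.match("2024-07-21 ERROR disk full")
assert m is not None and m.group("level") == "ERROR"
```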
I can’t code, but I made a Facebook Marketplace copycat using AI that’s fully functional, with messaging and everything. It would be stupid to build a super complex startup with AI, but I am interested in business, and AI helps me code enough to get started and worry about hiring a coder later.
This is so refreshing to hear as an engineer who was laid off from a company with a greedy lunatic CEO who firmly believed one or two engineers with a bunch of LLMs was all that was needed to do EVERYTHING to release production enterprise consulting software they could charge 10k a month for. Insane. I took my time writing actual software, but that didn't fit his extremely fast timeline; we had no QA, no DevOps, just me and two other folks with days to spin up entire sellable products. On my own time I've tested building apps with LLMs in the driver's seat, and these things are not able to reason around entire systems! I don't think LLM tech alone will get us there.
The best devs I know tend to even delete AI plugins, because they waste time and are a distraction. Personally I go to an LLM only for simple snippets and idea generation; anything more and it's a waste of time.
You're completely right. Even as a Master's student in Aerospace Engineering, LLMs can't help me with my problems beyond the most basic outlines. When you need to get more niche or technical, their answers make zero sense, and you're better off doing your own literature search.
You're definitely using it wrong if it makes you slower, not faster. Here's how to use it properly:
1. Decide yourself what the file should do; consider the design choices, technologies, structure.
2. Write up everything you thought of in step 1 as bullet points.
3. Provide pseudocode for anything non-boilerplate.
4. If you have another file in the project with a structure or code style you want maintained, provide that as context.
5. Use either GPT-4, Claude 3.5 Sonnet, or DeepSeek Coder v2 to generate the code.
6. (not yet readily available) Write test cases and use an AI coding IDE to iteratively debug its code until it passes the test cases (see the sketch below).
As a person who has many years of experience coding in Python, but doesn't know every library under the sun and every syntax perfectly, the LLM's ability to produce bug-free code is amazing. I am at least 2-3x faster with it.
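Step 6 can be approximated by hand today: write the test cases yourself first, then paste failures back to the model until everything passes. A minimal pytest sketch; the text_utils module and its slugify function are hypothetical stand-ins for whatever the LLM is asked to generate:

```python
# test_slugify.py -- written by the human *before* prompting the model.
from text_utils import slugify  # hypothetical module the LLM must write

def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_whitespace():
    assert slugify("a   b") == "a-b"

def test_empty():
    assert slugify("") == ""
```

The tests pin down the spec, so "iterate until green" replaces eyeballing the model's output.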
Thank you for calling this out. Adding to this, I heard another engineer recently call LLMs "fancy autocomplete". That's kind of what it feels like. It's amazing (but I suppose not surprising) that so many non-engineering folks are trying to tell engineers what LLMs are. The irony! Granted, there is complexity to LLMs and how they work, but I don't think most engineers saying that LLMs aren't "all that" is a matter of us trying to "save our jobs"; it's a matter of trying to tell the truth. I guess it just feels like another example of a non-engineer trying to tell us why our job isn't "hard". Well, that and a bunch of marketing nonsense by big tech to cash in on the next big thing.
I think you’re just bad at prompting. I’m a .NET dev and ChatGPT 4o has easily made my work 10x faster. You just have to VERY clearly explain what you want, how you want it to go about performing the task, provide all the necessary context/background, and then iterate on the LLM’s first response over and over until it’s perfect. Tell it what it did wrong, what you don’t like, what you want improved, and keep going. It’s like having an ultra-fast programmer working for me who writes all the code and all I have to do is clearly explain what I want and then review it. I’m sorry you haven’t gotten good results using AI for programming work, but if you’re not getting good results, I tend to think that’s on you, not the LLMs. I think you’re bad at prompting, and probably pretty bad at explaining things interpersonally as well.
That part about explaining things interpersonally is actually interesting, because that is a common problem many of us programmers have. After all, when working at a lower level (not UI design or things like that), we are working with abstractions that are difficult to verbalize. And at some point you just say... let me do it myself. Because if you have to invest time defining the functionality of a piece of code in great detail, then you are not being that efficient. You are just pushing a car through the supermarket aisle because you have become too dependent on that technology.
If LLMs make your work 10x faster than your work is extremely simple to begin with. That's why you're finding success with your prompting and others don't.
@@pelly5742 Such is the life of a full-stack dev. Some of my tasks are insanely complex; most are not. I don’t have a junior programmer working for me who I can give all the grunt work to so that I can just do the fun stuff. I have to do everything myself. GPT-4o has become that junior programmer who does all of the routine stuff, and does it incredibly fast, so that I can work on the more complex aspects that humans are still better at, and that’s how it has 10x’d my workflow. GPT-4o is like having a full-time junior programmer who has come to me right out of school with a master's in computer science, writes code with superhuman speed in any language, and works for me for only $20/month. It’s revolutionary. If you’re not getting good results using the tool, then you’re probably just not very good at using the tool. It takes an especially narrow mind to believe that everyone getting better results with the tool than you is just lying about it.
I'm not a coder, but I work with a lot of scripting and IaC (which I guess makes me a very junior coder in a way). No LLM has been able to whip me up a decent script that I don't have to spend the whole day cleaning up. Best results so far have been to request the code in parts, and piece them together myself afterwards. I think you're right, 5 years and it still won't be able to do what a human can do. But it will eliminate basically all low level data entry/data manipulation jobs.
You are 10000% using it wrong. I set up orchestrated Docker containers, Terraform deployments with beautifully designed reusable components, an open-source vector store in a container with volume claims. All deployed to Azure, pulling from my own private Docker image registry, provisioning an Azure resource group into the Azure Container Apps service, which is managed Kubernetes behind the scenes. Yes, I have working knowledge, but I wrote zero code; I just worked with the chat system while referencing and pasting actual documentation. It helps to use an editor like vim for quickly editing sections and pages of code without always having to use a mouse. Claude is literally a game changer for programmer/founder hybrids like me.
Yeah, for a non-tech guy like me it works as fast prototyping and idea generation, so I can give devs much more info and better starting points than before. I consider myself a power user with some coding knowledge, and even I can spot mistakes and totally bonkers implementations of ideas coming out of LLMs. So I totally agree with the vid.
Hell yeah! I just started learning coding for data science and man, it's scary. All these LLMs coming out looking like they will take over jobs, companies laying off engineers, plus all these people on Twitter showing off what they built using Claude and Cursor without understanding a thing about what it's made of. It's a breath of fresh air having this perspective come from a seasoned and respected programmer. Thank you so much for saying this!
I completely agree with the video. It makes sense that LLMs would be able to reproduce things that are easy for an experienced engineer, because those would exist in their training data. There's no reason to expect that LLMs can reason about the logic in the code they output, so the correctness of the output will be based on either the training data or a complete coincidence. There may be ways to still use this technology to work smarter, not harder, e.g. writing documentation, suggesting names for functions, generating boilerplate, writing HTML snippets of UI components that don't require context (a submit button might be pretty similar to any other submit button). Basically things that are language-based, or copy-paste, and don't require logic. Maybe one day more intelligent AI models that combine logic and language will exist and will be more capable of writing novel code. Anyone familiar with LLM workflows may have a head start. But these don't exist yet.
I mostly agree. I think LLMs are very good at writing skeleton code or simple snippets if you can actually describe correctly what kind of UI/code you want, but anything more complex than a basic CRUD is beyond any AI. Not to mention the hallucinations crapping all over the code; if you don't know your code, you are going to get very weird errors.
As a student currently pursuing a master's degree in data science, with over 10 years of web dev work experience in the past, I agree with you. The sad thing is the CEOs and managers who have never placed their hands on actual operations yapping about LLMs. They have literally threatened people with massive layoffs, and so many layoffs have already been made because of that belief. Months later, the companies that made huge layoffs keep showing recruiting ads constantly. Seems like they are still struggling to fill the gaps.
LLMs actually have something called an effective context window, which is not the maximum context window they can support. There is also a limit on the number of logical steps a model can take, which is proportional to the number of transformer layers in the model. This puts a limit on how much information it can effectively process in the context. This means the right way to use an LLM to code is to shrink the context if you find it cannot solve a task you give it, i.e. by breaking down a complex problem into smaller problems. This is the skill that all architects have, and this is the correct way to use an LLM to code. I have personally found that using Claude has greatly increased my productivity. What used to take me a few days now only takes a few hours. If you don't see this productivity gain, then you have not mastered the skill of using LLMs correctly, i.e. you have not done your part of the thinking properly and broken the problem down into small enough, well-defined chunks.
Hot take: If LLMs make you 10x faster at coding that says more about your coding ability than it does about how good LLMs are.
The tweet: x.com/neetcode1/status/1814919711437508899
This video may change your mind about the AI hype in general: th-cam.com/video/uB9yZenVLzg/w-d-xo.html
can you list some complex apps? i'm still learning i feel underpowered
you just made me feel good for getting shit code from gpt
W Neet
Skill issue. You’re probably shit at prompting. The equivalent of making a StackOverflow post and the people need to ask 50 follow ups to get the context of your issue
AI can seemingly dominate the world, not through a engineer perspective but on a mathematical perspective AI can really code of medium-difficulty tasks. These companies pour billions on transformer models and the best compact brilliance on other side which was not encouraged by those communities on a real scale
The alliance of "CEOs who hate paying salaries" and "students who hate doing homework" both wanting LLMs to code perfectly
It’s a perfect circle, since the latter drops out of college after meeting a potential investor to become the former
It could even be a Venn diagram
trust me, student here, we really dont want LLMs to code perfectly or even close to it. Our futures arent worth the few hours of homework 😭
Years ago I read somewhere on the internet the field was broken, totally broken. But that’s ok, I am from the medical field, I brought my stethoscope. Also, duck tape, and WD40 if that can help.
@@agnescroteau8960medicine wont lose work, but will dying to overwork though. Especially if you have "universal healthcare" which will take your negotiating power and income
Pre LLM => Devs expected to work 45 hours per week.
Post LLM => Devs expected to work 60 hours per week.
Somebody’s gotta fix all of that LLM spaghetti that looked like it could work but just doesn’t 😂
@@hungrybeaverontheleaveri have seen that code in on of the client projects. If I am being honest. That would not pass any code review. The technical debt 📈😅😂😅
You nailed it. Non technical people don’t understand that the last 10% takes 90% of the time… and the problem with developing with LLMs give you the impression you are almost done, but you still have a long way to go. And if you started with an LLM and don’t know what you are doing… good luck with that last 10% 😂
The last 10% also makes 90% of the value. It's the edge that makes an app competitive, not the part that's the same as any other app.
Like me implementing a web server for aws basically in a single day, then spend the rest of the week at least figuring out the deployment and configurations that are missing. Gotta love how "helpful" aws errors can be.
I honestly love when chatgpt makes throwaway python scripts for me when I feel lazy; but man, maintaining that code going forward? I’d have to rewrite most of it!
@@pmlbeirao if that difficult part didn’t exist, likely someone else would’ve done it
Uhhhhh, hasn't that been the case since school or college? :s
“Founder” just means your side project has an LLC and a bank account
😂
And you're probably already looking for investors, lol
@@XHackManiacX actually no, I just want to put my app on the app store
You talk about me i take offense 😂
Bank account with a couple bucks is optional😂
"I made the rookie mistake of opening up Twitter" LOL :)
Yes, a "mistake" he made. And instead of quickly closing it and forgetting like a bad dream, he published a whole video about it.
🤣
@@TheCronix1 - the tweet author, probably.
😂😂😂😂
But I got this video on Twitter 😅
Q: Why would people lie/exaggerate like this?
A: To generate traffic on their channel/feed/blog via hype and/or because they have financial interests which benefit by hyping up the technology.
Basically your typical permutation of the usual social-media-borne scam.
Also, people want to feel like they've found "The Answer", so they lie to themselves as much as others to hype this up in their own mind.
just like this loser youtuber is doing. AI and especially claude has objectively saved me months of work in my business. he's just baiting anti-ai idiots and soon-to-be-replaced programmers who have nothing else going for them
10 x 0 positivity is still 0 productivity
Keep coping…
@@J3R3MI6 Stay useless
Actually, it's 0 positivity. You forgot to double check your math.
@@Icedanon from this equation 10 * 0 (positivity) = 0(productivity)
which is true since 0 = 0, LHS = RHS, so here lets take :
0 = x, so 10 * x(positivity) = x(productivity)
since x productivity is 0, we have divide both sides by 10, we get : x(positivity) = x(productivity) since dividing 0 by any number equals 0 unless the denominator is 0.
so going futher =
x/x = productivity/positivity. which is both 1 and infinity,
1. taking x/x as 1 :
therefore we conclude that productivity and positivity are inversely proportional,
the more productivity u have, the less positivity u get.
and vice versa.
2. taking x/x as infinity, we can also conclude that
productivity/positivity = infinity. Or productivity = infinity/positivity.
if your productivity was 2 units, then your positivity would be infinity/2 which is very large number, so we can take it as infinite.
therefore no matter what is your positivity, your productivity is actually infinite,
if your positivity is infinity, then your productivity would actually suffer.
By the whole answer, we conclude two things :
the more productivity u have, the less positivity u get.
and vice versa.
if your positivity is less than your productivity, then your productivity is actually infinite,
if your positivity is infinity, then your productivity would actually suffer.
so moral of the story :
keep your positivity low, be depressed, take some pills, do drugs, etc to further lower your positivity and increase productivity.
MATH SUCKS, SINCE INFINITY IS NOT A FRICKING NUMBER, ITS UNDEFINED IN MATHS, SO WHATEVER THAT I COOKED IS INVALID.
Terrence Howard: "Hold my abacus whilst I deal with this punk! "
This is not how you use LLMs to aid coding. You use it to write small self-contained functions, regexps, throwaway scripts, prototypes and non-customer facing utility code for stuff like data visualisations etc. It's not for non-technical people, it's for technical people that want to get through non-critical or simple parts of a task quicker.
Basically a replacement for StackOverflow and not much else.
You clearly get it.
Exactly. Been using for some DevOps tasks heavily. Python, bash - don't know and don't care. I have enough developer knowledge to debug it, but not to learn all the syntax and niche libs, frameworks, and language quirks.
It's like having a developer buddy on Discord ALWAYS ready to go. I agree completely with you. Consulting, sharing snippets, etc. is the way.
i.e. a novelty of little consequence to all but grifters and sheep
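In that spirit, the "throwaway script" tier the parent comment describes is about this size. A minimal sketch; the folder, extension, and renaming scheme are all illustrative:

```python
from datetime import datetime
from pathlib import Path

# Throwaway one-off: prefix each .log file name with its modification date.
for path in Path("logs").glob("*.log"):
    stamp = datetime.fromtimestamp(path.stat().st_mtime).strftime("%Y%m%d")
    path.rename(path.with_name(f"{stamp}-{path.name}"))
```

Small, self-contained, and non-customer-facing: exactly the scope where a quick LLM draft beats opening StackOverflow.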
Weren't "no-code" tools hyped up in the exact same way? Or am I misremembering?
They did, now only governments use them.
Yeaaap :) that ship sank so hard no one's even talking about those any more
@@Jia-Tan Can you please explain what those mean? (The “no-code” tools)
@@yashghatti That's just not true. You have Webflow and Elementor, both widely used and battle-tested. They were also marketed as a replacement for all developers, but they eventually found their own place and got accepted by everyone as a good tool for some cases.
We've been through *several cycles* of this. It was CASE tools in the 1980s, and then UML in the 90s/early 2000s. Both of these were supposed to obviate the need for coding: you just had to go from requirements to running program. The problem is, any symbols you use to express a spec in a way that's specific enough to be executable by a computer are isomorphic to constructs in some programming language. They just might not be easily diffable, version-controllable, or viewable and editable except by specialized tools.
"LLMs will replace all developers" said a person who's the most major accomplishment is a hello world app.
🤣
Nearly every YouTube coding influencer whose entire business model is pretending to be an actual professional Software Engineer, while all of their projects are forks of somebody else's public GitHub project, has entered the chat.
a 'Snake' game
@@AnimeGIFfy don't compare your hello world app to mine. Mine is like an amber alert and notifies every phone on planet earth of my presence🤣.
🤣
The reason why they're lying is because of money. I'm a senior engineer and I've been in the industry for 25 plus years, LLMs just waste time for anything that isn't the most trivial app on the planet. The AI hype is based around a bunch of VCs misrepresenting the difficulty of programming/engineering for the sake of selling a product.
I feel like the Twitter guy doesn't understand what 10x even means. If you can implement this stuff 10 times as fast, then you can literally work for one day and take the rest of the week off, and no one will notice a difference. Naturally, I don't think he's a programmer in the first place, which is probably why he sees a 10x improvement. This is just a big old case of the Dunning-Kruger effect.
The funniest part of all of this is that it just doesn't make sense logically for these LLMs to ever become proficient at writing code. The reason I say this is that you need enough data to train the LLM on every use case, but there are plenty of use cases with maybe only one or two examples in open source. These AIs have no ability to create anything new, so there's always going to be a distribution where the majority of problems simply can't be solved by the LLM because it doesn't have enough data to understand them. At the same time, they'll become really proficient at writing todo apps, because there are thousands of those.
Sadly, a lot of non-tech employers have started to underestimate engineers, just like my former boss, who said my $500 salary as a fullstack dev is enough because AI can help. hahaha.
Like I've mentioned elsewhere, it also really depends on the language: the more popular the language, the easier it is and the more options you get. Good luck getting AI to help you write in older languages or custom ones. It's only sort of helpful for problem solving anyway because, like you said, it's based on existing examples, which might even be the wrong problem to solve, leading you down rabbit holes if you don't realize it. The biggest problem with AI, though, is the cost of maintenance, from both technical and environmental viewpoints. It's like how some NFTs are supposed to "solve" climate change; good luck getting "green" AI.
The productivity gains may be as massive as the workflow difference between a maintainer of one specific repo and a full-stack engineer.
I don't think LLMs can help much if you've been doing the same thing for 10+ years
Understand what LLM/AI tech represents: a now inescapable global arms race for technical superiority and upgraded national defence.
I've been doing this stuff for even longer. There have always been people gaslighting us about the difficulty of producing quality software. This is just the same people latching onto a new tool. Before it was Agile, then ISO 9001, and on and on.
Every 4 years silicon valley gets caught being shady and people just Pikachu face through it like it's the first time
Not limited to Silicon Valley. Any area with a great amount of cash flow is full of cheating and lying.
I am old enough to remember when they told us the future of gaming is pikachu hunting with your smartphone camera.
@@betadevb I will never forget the incident in NYC near central park where someone yelled out "Vaporeon is here" and people jumping out of their vehicles to catch this Pokemon. IN NYC / CENTRAL PARK !!! th-cam.com/video/MLdWbwQJWI0/w-d-xo.html Vaporeon Central park Stampede
@@betadevb To be fair, I still play pokemon Go
This is my favorite hot take. Thanks for this! 😂😂
It's an 80-20 thing. LLMs suck ass for the parts that actually take the time, and help with what most people don't need help with.
they're still really useful for "dumb tasks". i can tell gpt 4o to "Look at this project i made, now i need X project with x and x, make it based on my other code" and it will make me a working crud in less than a minute. sure, it might have some issues or be missing features. But it still saved me like half an hour of coding if not more.
i've done that a few times and personally i find it pretty satisfying to be able to generate a basic crud with 5 working endpoints in a few seconds.
@@SoyGriff Very much so :) I love them for learning new spoken languages too, i doubt there's a better tool other than actually practicing with other people. They have many uses, but the message I was trying to reinforce was neetcode's opinion on how they're not as advanced coding wise as they are made out to be.
In your case, the crud part can be found basically anywhere, since so many people have already implemented it. For implementing specific business logic their usefulness basically depends on your ability to modularize the problem. If you can break down your problem in to small enough chunks that you can ask chatgpt how to implement them, you've already done a lot of the "programming" yourself.
They're definitely useful in their own right.
@@Vancha112 the crud part can't be found easily because it's specific to my project and yet it can generate it in seconds based on my instructions, it saves me a lot of time.
i agree i'm doing most of the programming and just telling the AI to implement it, but that's the beauty of it. that's what AI is for. i only have to think and explain while it does all the heavy work. that's why my productivity increased so much since i started using it. i'm building in a month what my old team would build in 6, and i'm alone.
Been coding for 20 years here. The point is, even if you don't "need help" with that part, the LLM will do the job faster than you can, thus your productivity is improved. In my opinion, if you are not figuring out how to include LLMs in your workflow, you are going to be left behind by those who do. Is it a 10x increase? For tasks the LLM can do, it's much more than a 10x increase!
It's not about "needing help with it" it's that it can do a bunch of tedious stuff in a few keystrokes rather than having to type it out
I think 90% of my job is figuring out how to solve the issue I have, 5-6% is bug fixing and testing what I added, and the rest is typing code. Even if I could magically have all the code in my head appear on my computer in a second, it would save me a couple of working hours per week. I think the people who create these tools don't actually understand what programmers need. If, for example, I could have an AI that quickly tests my code, then we could start talking; that would probably save me lots of time.
Yes! Automatic testing would be fantastic. If someone could train an AI to do *just that*, and nothing else, it would be amazing. In general I think AI tries to be too much. It would be more practical to have AI that was really good at something very, very specific and worthless outside of that.
Engagement farming is a real thing on Twitter, and that's what's been going on. People just post anything, and if your post contains the word "AI" and fear-mongers among the general public, it's sure to get reactions from left and right.
I believe X is paying based on your posts' interactions; that's why that thing is full of bots
they only make like a few bucks too. so it's really pathetic when actual humans do it.
now the bots are excused; they are raking in money by annoying others. that's genius-level hustling. the american way.
I'm absolutely sure that if I post anything, even if it contains the word "AI", it will get at most 4 views, because that's what every tweet I've ever posted since Twitter started has got. Unless you pay them or something, there's no way to get views or followers on that thing.
@@rogue_minimaThe only way is to post something really eye catching or just spam everyday.
Enragement farming
Everybody nowadays is a "Founder" or "building X" with no technical background. A few years ago the hype ride was no-code tools; now it's LLMs.
Apple just published a study confirming that LLMs can't reason, they can only replicate information based on the pattern recognition of the data that they were trained on. That's why they can't handle medium difficulty tasks that may require complex problem-solving. Especially if it's nuanced and particular to your project, and therefore maybe not found in an outside project.
You're not crazy, you're just not trying to be an AI consultant or guru, and therefore you aren't lying to yourself constantly about what these LLMs can do.
I work as an AI implementer and have been in IT for over 30 years, and while I agree that yes, LLM reasoning is weak... I also think most people are missing the point. We... the people paid professionally to do this, use it for tasks that it's good at.
We come from a world in which relational databases do all the heavy lifting. Unstructured data was a dumping ground only really navigable by humans.
LLMs are good at mining this dump and following logic. More specifically, you use LLMs for tasks with a narrow focus, unprepared data and a tight definition of outcomes.
That approach yields the optimal results. ML then, in essence, is the same as a weakly coded "on the spot" application. The same as what a human does.
So, these are early days of a new world in which "applications" don't exist and data is joined as needed on the fly.
I wrote an "LLM"-based app that acts as a log-enhanced protocol gateway. Many to one. Normally you would have an ESB do this, right? Nope: shove in the crap, tell it what to do, give it the schema for the destination JSON API, and poof, it generates the cleaned-up stream. TERRIBLE efficiency, but I have no input structure schema to worry about. So, just lob data at it; it worked out how to clean it into a consumable stream.
We're only scratching the surface of applications based on current tech, and the tech is moving faster than our ability to seize all the opportunities it offers.
"Yes... I'm a consultant' But, I don't even find the use cases. I'm just involved in making it happen.
LLM's for coding are IMHO, a TOTAL mixed bag. Right now getting it to do what you really want takes longer than doing it yourself and the hard parts of code... no chance. But, it has had moments of brilliance when I learnt new ways of doing things.
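The gateway pattern described above can be sketched in a few lines. Here `call_llm` is a hypothetical stand-in for whatever model API is actually used, and the schema and field names are illustrative, not from the comment:

```python
import json

TARGET_SCHEMA = {  # illustrative destination shape
    "type": "object",
    "properties": {
        "timestamp": {"type": "string"},
        "source": {"type": "string"},
        "message": {"type": "string"},
    },
    "required": ["timestamp", "source", "message"],
}

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    raise NotImplementedError

def normalize(raw_event: str) -> dict:
    # No input schema at all: lob arbitrary text at the model and ask for
    # output conforming to the destination JSON schema.
    prompt = (
        "Rewrite this log event as JSON matching this schema:\n"
        f"{json.dumps(TARGET_SCHEMA)}\n\nEvent:\n{raw_event}"
    )
    return json.loads(call_llm(prompt))
```

As the commenter says, the efficiency is terrible per event, but the trade is that no parser has to be written for any input format.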
@@TelithsRage I understand your sentiment. I personally use it for game development, and even created a plugin in Godot so I can send prompts to ChatGPT without leaving Godot and have the output populate my scripts directly, so I can quickly see any issues and fix them from there.
BUT - I still have to design the game. ChatGPT can't design a game for me, and there are major companies (like EA) trying to tell people it can. Design and asset creation are not the same thing.
Not only that, but as someone who works at one of the big Silicon Valley tech companies, I have witnessed the lack of breadth that AI creation has when we work with agencies.
The work of six different creative agencies ended up looking like it all came from the same one.
LLMs work really well for all the purposes you mentioned. But there are companies jumping the gun to say AI can be in the driver's seat and not just the assistant, and I haven't seen a single piece of evidence that that's true.
Shock!! LLMs actually do what they are designed to do
@@NickKeighley The point is that there are too many professional consultants selling LLMs as doing more than what they're designed to do.
We wouldn't need Apple to run a whole report if people weren't always being misleading and mystifying the technology.
The biggest issue with these LLMs is that they lose context SO FAST. Three prompts in and you need to tell them again and again to keep in mind what you mentioned in the previous prompts. I was using ChatGPT and Copilot for the leetcode problem "flight assignment", and I accidentally forgot to mention "flights in this coding question" in my 3rd or 4th prompt, and it started giving me airline flight info. Which is completely bonkers, because how could it think I am talking about airlines instead of the coding problem we were working on a few seconds ago!!
You should increase token count.
@@PanicAtProduction it will be crazy. Even LSPs start to struggle on bigger projects.
I find the more I know about a specific task the less useful an LLM is. When I’m new to something it’s a great place to start by talking to chatgpt or something.
I'm not a programmer but I've made a few JS sites and Python apps for fun, and one thing I learnt to do is to start new chats. Once you get too deep it starts going batshit. Granted this is all very basic level, so it probably wouldn't help on anything too big or technical anyway, but basically if you spend some time starting new chats and being very specific and detailed with your prompts it does help. With Claude I'll tell it I've updated the files in the project knowledge section and for it to refer to the newest version. There are ways of getting it to stay on track but it probably is a waste of time for an actual programmer.
lmao 😂 it giving you flight info is wild
I made an app for work completely with AI. It's a 400-line CSV parser, in essence. This is about the max AI is capable of: making something you already could have made, but with the downside of having to verbally explain computer science to a toddler.
They’re lying because they’re trying to get rid of competition. Propaganda, basically. Yes, I know this sounds crazy, but it worked on me. When AI was first released, there were millions of videos and articles floating around about how AI was going to replace humans, and I, who was learning how to code at the time, gave up on coding because AI scared me. I chose a different path. I’m sure there are more people who gave up on coding because of AI propaganda. Fortunately, stopping learning how to code didn’t have a big impact on me, since I was 13 at the time, and even though I wasted almost two years not learning to code, I’m back at it and will not give up no matter what 💪 You shouldn’t either. AI will not replace software engineers. Period.
Understandable, felt that too. I'm somewhat decent at coding and felt totally replaceable when Devin was on the hype train.
NGL I was gonna use AI as an excuse to finally quit capitalism and move to a mountain with my savings (not trolling, this was going to happen). But the only thing that ended up happening is that once again in the industry greed won, and with the layoffs a lot of us devs are being exploited af. We are trapped between the promise of a transition that will take decades, CEOs who just want to keep cutting numbers on one side, and the AI bubble on the other.
In the meantime, tons of really good people cannot even find internships, because interviewers also fell into the AI bubble trap and are now asking freshly graduated kids to code Skynet on a blackboard in 15 minutes.
The industry really sucks rn.
Good on you, I only managed to start learning at age 17
Why does your profile pic look like you're 35
It's not him, that's Robert De Niro from the movie "Taxi Driver" @@anon3118
Totally agree. That is the major difference between a non-tech person's and a tech person's point of view.
From a non-tech person's point of view, they are now able to create a "non-working, working-looking site" (lol), whereas before they would need a UI designer/engineer to create it for them, which cost money and meeting time.
From a tech person's point of view, an LLM is just a snippet machine, so now I don't need to go to StackOverflow. Using it for more than that is just wasting time, as mentioned in your video.
And the most hyped-up people who go around talking shit are the non-tech people who work for a tech company, know nothing about systems but think they do, and so start using these LLM tools thinking they can replace engineers..
The worst part is that they use these tools to create so-called prototypes and then hand them to the engineers to make production-ready, but don't understand why that takes longer than the traditional way (*cough* CEOs/project managers *cough*)
A saying that's always valid: "Stupid people are the loudest." That's how I see all those Twitter "influencers/founders" with their takes on AI, LLMs, careers, etc. They need to get good themselves before talking. Wake me up when Primeagen agrees with their nonsense.
Good take Neetcode!
Except it's the opposite. Most of these takes against AI for dev productivity are from people who haven't progressed beyond senior engineer, including Primeagen.
@@OCamlChad maybe because the AI itself has not progressed beyond the level of an intern.
@@OCamlChad since that's not good enough for you, maybe you should ask John Carmack next time.
You are definitely right... Now, how do we shut Elon up?
@@OCamlChad Lol okay Mr Junior Dev
Agree with what you're saying. I've been doing software for 10+ years and I do think it has made my productivity go up like 10x, but the difference is that I know what I need and I use ChatGPT-4o as a rubber duck, especially when making architecture decisions and weighing tradeoffs. I'll have a vague idea of, say, 3 different ways of building X product, so I just ask for pros/cons, describe my ideas, and so on, and it works. The thing I've noticed is that if I spend 2+ hours discussing/bouncing ideas with an LLM it becomes stale really fast, forgets my previous input and just hallucinates, but for initial technical document writing or small shit like basic components it works VERY well.
This. I agree with this a million times over. I treat it like a rubber duck that has 130 IQ. At the end of the day it's *my* hand that is writing the code. The LLM just provides input and feedback. The claim made by the tweet OP is definitely exaggerated, but if you strip out the hyperbole and "zoom out" a little, it's pretty realistic.
It’s about pain vs complexity.
Like he said, if it can handle snippets it can handle big projects in chunks. That's how I use it. I edit more code than I write, but my jumping-off point is always an AI.
It just physically writes code faster… I can do the thinking and editing, but it writes 500-1000 lines a minute.
The problem with this video is that he starts with his emotional opinion and then finds examples that prove him right
@@rocketPower047 literally autistic
Ah yes, the old AI made me a 10x engineer. It's always cap...chances are that these individuals that claim this are the ones who push absolute dog water to production because they don't actually understand the code or know how to debug. Personally if I'm prompting this LLM to write something and then having to double-check it and if it is wrong prompt it again and do that whole process till it gets it right, It would have been faster if I did it all myself in the first place.
I don't know, man. Personally I find that it's much easier to edit "half-way there" code than to write from scratch. It might take a while to get used to the peculiarities and bad habits of the LLM and figure out the best point to stop prompting and start coding by yourself, but once you figure it out, I do find that relying on AI makes me a lot more productive. Not 10x, but definitely at least 3x on a good day. (Although there are obviously also bad days where it's barely 1x.) I find that it's great at data visualization code, complicated refactorings, explaining an existing (not too large) project I'm trying to get started with, and basically speeding up any annoying, slightly complex, tedious process. And it really shines for quick dirty projects in languages you're unfamiliar with (need to google how to init an array) but can read just fine once the code's there in front of you, since you can basically just wing it, as long as you've got an LLM to watch your back.
@@ReiyICN Oh boy, I'd never ever rely on AI for "complicated refactoring". Sounds strikingly similar to shooting yourself in the foot. To be fair, I've only found AI useful for common boilerplate you don't want to write, or, in the case of Copilot, when you're creating a structure it is quite good at completing it, for example switch or else statements.
@@ReiyICNmore like 1.3x
Even if an LLM can do 90% of the code, the remaining 10% will take 90% of your time. It's the "last mile problem" pattern.
The issue with a lot of people is PROMPTING. You don't "prompt" LLMs; you don't have to find the correct prompts or keywords. You just talk to them as if they were a human being, a dumb one. It works really well in my experience.
It's better to write 2 paragraphs explaining what you want than trying to make it work 10 times while only writing basic prompts and not providing the whole context
@@SoyGriff some projects are too complex to explain the entire context; I find once I've explained it, I already know how to solve it anyway
the engagement-based payout model on social media platforms is proving to be quite the catalyst to the enshittification of the internet
Yeah it’s lizard-brain on steroids, because it’s not just SEO guys anymore, anyone can get a rage bait payout.
Yes, the problem is that they can get close to the spec you give them, but it's not close *enough* and has to be rewritten. This has been frustrating for me many times where I tell the LLM to change one small detail and it goes round in circles before finally admitting something can't be done without starting from scratch. Huge waste of time in a lot of cases
Pretty sure my project manager could say the same about our dev team 😂😂
That's part of your learning.
If you learn what tools exist and what libraries can actually do, it should be able to help you code it just fine.
It's literally translating your prompt from English into code.
You asking it to do something impossible is partly your fault.
@@wforbes87 100%
As someone using it to write basic code, it's a godsend; I don't need to wait a day or submit a ticket or whatever to talk to an engineer.
These guys are vastly underestimating the amount of mundane work that goes on outside of FAANG lol; most coders or code jobs are not frontier.
@@robotron26 Sure, but the LLMs sometimes think something impossible is in fact possible and lead you on.
i dont think you used it properly..
I believe that LLMs won't ever replace Software Engineers because, to get quality outputs, the time and effort you need to detail your problem and how you want it solved is, for the most part, already the job software people are hired to do. I deal with machine learning, and many times I've opened a conversation and realized that I already knew the answer to the problem just by framing it and putting constraints on the solution. No shock: that's called thinking!
On the other hand, when you plug in the entire script of your model and ask "Why is the gradient not backpropagating correctly?", the LLM will provide a fancy list of overly smart solutions totally ignoring your specific problem, resulting in a massive waste of time.
That said, removing all those time-consuming moments when you are solving low-level problems, like finding the correct function for the job in a cool library, is a massive quality-of-life improvement and lets you focus on the interesting aspects of the job.
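For what it's worth, a classic culprit behind "the gradient is not backpropagating" is the autograd graph being silently cut, which is exactly the kind of project-specific detail a generic LLM checklist rarely pinpoints. A minimal PyTorch sketch of the failure mode (illustrative, not the commenter's actual model):

```python
import torch

x = torch.randn(3, requires_grad=True)

# Bug: detach() (or a .numpy() round-trip) silently cuts the autograd graph.
y = (x * 2).detach()
loss = y.sum()
try:
    loss.backward()
except RuntimeError as e:
    print("no gradient path back to x:", e)

# Fix: keep the computation inside the graph.
loss = (x * 2).sum()
loss.backward()
print(x.grad)  # tensor([2., 2., 2.])
```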
Important to keep in mind that a lot of the hype is either manufactured by folks that have invested a lot of money into the current AI boom or folks that have fallen for said marketing.
The only people who can fall for said marketing are those who haven't actually tried the product. The rest, like the guy writing this article, they're CLEARLY stakeholders. I bet this guy bought some Anthropic stock beforehand, or is just a paid actor
I work a lot with writing quick utility tools and API integrations for enterprise tool marketplaces, and this is extremely useful for making hyper-specific private apps that help a team handle one tiny piece of straightforward automation, hooking together a couple of APIs, plus maybe a super quick interface. LLMs are really powerful for things like this and have probably made me 10x faster for certain easy but tedious tasks.
Man those replies and comments are AI themselves. :(
That's what I thought. Bots are hyping things up, deceiving humans into the trend. If you tell a lie too many times...
I'm starting to think that the bots are manufactured by twitter. What benefit does anyone outside the company have to run bots that respond to posts like that. Not to mention the captcha when registering, I literally could not pass it myself after like 3 tries of having to get 20/20 answers correct to the point that I gave up. Maybe I'm stupid and AI can solve that better than me, I don't know, seems fishy. It's probably 90% of posts I see are AI.
@@ltpfdev Well if these are indeed Twitter's own bots, then they'd just bypass the captcha and probably post via API
what even is real anymore ;(
"Feels like I'm living in a different universe than people on Twitter" actually, true for literally every topic of discussion, not just SE
It is overhyped, but at the same time it does make my work much faster. It can't build entire systems or even big parts of a system, but it can work on small parts: writing simple functions, components, UI elements, etc. I mainly use it to speed up my work; instead of coding, it's mostly me checking over the generated code and fixing small things. Sometimes it's frustrating and gets it very wrong, but I usually just have to fix the prompt. Overall it has definitely sped up my workflow, maybe not 10x, but 2-3x is reasonable.
And thats enough for it to be a massive change
Now your company can make you do 3x the work instead of hiring one or two more people
They will absolutely do that
And AI will advance more to the point where eventually, you will not be needed
There are no arcane or unknown laws of coding libraries; they are all manmade and documented. The AI will get better.
You're thinking in zero-sum terms. The demand for code will simply increase... the reality is, most companies want to use a lot more software than they currently do, so they will simply create more applications and better tools for users.
@@robotron26 And your point is? If developers are replaced, it would mean that pretty much all intellectual jobs are done.
LLMs and LMMs are currently effective for generating boilerplate code or providing insights into topics I'm unfamiliar with, without needing to sift through documentation.
yeh, so much better than documentation, because documentation has so much missing knowledge.
I like your style: calm, composed, and very genuine and non-toxic.
You know you're seeing bullshit, yet you respond respectfully and give everyone the benefit of the doubt.
I wouldn't call him calm, rather hysteric. :D
@@Neomadra that's not hysterical at all
I agree with the first tweet after trying to work on a project using Claude 3.5. It's true it doesn't get complex stuff like your entire app, but if you just constantly ask it questions about small parts, it gets those small parts done very fast. For example, my UI was very bad, so I took a screenshot of it, gave it that plus the code for the component, and told it to make the UI better, and it just did it in one try. Same with asking for specific small changes one at a time. You don't ask "write an app that does x" but "change this function to also do y", and it does way better if you give it the minimal context that's actually necessary instead of the entire app.
The people that succeed in this industry are the ones that embrace change and figure out how to use new tools. I still know people that use oldschool vi in their coding, and never adopted IDEs.. or said git offered "nothing new". In reality.. these folks simply didn't want to do the work to learn new things.
yeh that makes more sense. just be specific , how hard is that
The thing is, AI is like a fast worker that catapults to an answer quickly, so you have to steer it with the correct type of questions so it is not ambiguous in its output. I had to code some features for a task component (add task, remove tasks with a cross button, add a due date with a calendar, etc.). I had its Figma file, gave Claude 3.5 all the details to remove ambiguity, and it made a surprisingly good boilerplate component, as I knew its training data would have something similar.
For run of the mill tasks it is a game changer but for something requiring a spark of imagination (nil training data) it fails pretty badly.
There are around 10 gazillion implementations of "my first task list" on GitHub; of course it managed to do that. Now ask it to design an async cache that fits the constraints of your existing application...
I think a good analogy for the new AI chatbots is this: quite a while ago, we could have a computer play chess for maybe the first 8 moves; it would just look up the correct responses in an opening book. Equally, in endgames with 7 or fewer pieces, chess is a solved game, and again a computer could play these to a perfect level. Asking a computer in the early 90s to play chess in the middle-game was a nonstarter; they were hopeless. Eventually we figured out how to get the computer to do more complex things, and now it can play all the way through.
That's been my experience as well. Even with snippets, it works best when I effectively solve the core logic first and just ask for code, or give it complete code and ask for suggestions. For anything beyond snippets, I've spent more time holding the LLM's hand to not go x or y route, and eventually just figure it out myself. LLMs are definitely far, far away from getting to the point where a lot of people praise them, like 10xing. They definitely are very handy tools, but have a lot of limitations.
so, you basically trained it for free...good job! do it more often, please, we love free work!
@@strigoiu13 , I did not. You have the option to remove your data from being part of the training set. Then, for security purposes, I delete conversations as well. Even then, they have plenty of training examples from other sources.
@@strigoiu13 Also, if it actually learned from its users automatically, it would be saying slurs constantly within days of launch. We've seen that happen to chatbots like that repeatedly.
@@strigoiu13So what? It was useful to him, sounds like a fair trade
That is absolutely true! I am tired of having to explain this to people over and over again just because some people keep over-exaggerating what current LLMs can actually do.
I think this is the expected behavior of non-technical people; they will be defensive and want to believe that they can do anything a software developer/engineer can do with the help of an LLM.
It's just human nature.
Lmao it's even worse than that, they believe software engineers and programmers are gatekeeping coding from the common people lmao
@@vishnu2407 Yeah exactly 😂
It's not human nature, it's what they've been told by the people they're paying for the service. The error is blindly believing what the salesmen tell you
it's expected behavior of people with no common sense and a thought process of an elementary school kid on a good day.. which describes most of these parasites "working" in management
Sad part is you’re all wrong… LLMs will create a revolution where non-technical founders CAN build a company, one that will rival companies as large as Microsoft and bigger. 💎
Yeah you need to make architectural choices before starting
But Claude definitely makes you faster
Coding is less about writing code and more about planning out what you’re building beforehand
He just explained why it doesn't "make it faster" and instead "makes it slower". Look up technical debt and, to a lesser extent, code smells. AI solves easy problems, and still only easy problems, and still in a vacuum. I don't need AI to plan for me, because if you have an idea you need a plan anyway. Why would I need all this extra boilerplate generated for me? Lucas, you don't sound like you have ever coded in your life, and you don't understand that "architectural" choices are made well before you sit down in a developer role.
💀💀💀 wait 2 years bro
Totally with you, Neetcode! The AI hype is like calling a microwave a personal chef. Thanks for cutting through the noise!
ooh i like that analogy
@@minhuang8848 It's a good analogy if you look at it this way: a microwave heats up food. You can heat up leftover pasta. You can heat up microwaveable food. It's alright and it'll fill the stomach, but it's not that great compared to the food you make in an oven or on a stove.
LLMs will give you small working code snippets, but they won't solve your complicated application. They don't come up with novel ideas. In that sense it's a microwave: you give it a prompt and it gives you mediocre code in a short time.
Making food yourself is like programming, while pushing a button on a microwave is just like prompting.
I just don't see how LLMs are like CNC machines or 3D printers. If anything, they would be helpless and inconsistent CNC machine or 3D printer operators. I don't see them as tools in that sense; perhaps assistants at best.
Totally agree. It cannot develop medium or hard projects. The way I use LLMs is to first architect the project and break it down into smaller, manageable chunks. Once that's done, I ask the model to code those pieces, with specific interfaces. With current capabilities, LLMs cannot replace developers.
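One way to pin down those "specific interfaces" before handing a chunk to the model is to write them as typing.Protocol stubs (Python 3.10+ syntax below). A minimal sketch; the Storage contract and its methods are illustrative, not from the comment:

```python
from typing import Protocol

class Storage(Protocol):
    """Contract handed to the LLM along with one chunk of work."""
    def get(self, key: str) -> bytes | None: ...
    def put(self, key: str, value: bytes) -> None: ...

# Prompt for one chunk: "Implement Storage backed by a local directory.
# Keys are file names; missing keys return None."
# The human keeps the architecture; the model only fills in this piece.
```

This keeps each prompt inside the effective context the model can actually handle, while the boundaries stay under human control.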
I've realized that using AI for a small function, or even an issue where I ask it to make what I want to get "ideas" (I guess) for another way to do it, has led me to waste a lot of time trying to get the right answer out of it instead of looking on StackOverflow, for example.
It's when I find myself basically yelling at it, asking it if it's stupid, that kind of thing.
Man, I have been an artist for nearly three decades, and I feel exactly like you when I listen to other artists praising LLMs for art. I tried many image generators and they work great... for people who just want a random picture to jump out of them :D The more specific the need, the more problems generating even a simple picture. You will just waste time trying to describe what you need while getting random pictures that are sometimes not even remotely connected to it. And that's just simple pictures. When it comes to 3D models, LLMs are laughably simplistic. I see so many YT videos where people are AMAZED by the results, while showing something absurdly simple that still needs manual fixing. LLMs can't even get good topology, and people keep talking about how they will replace us. More so, some people claim they already lost a job to a generator... HOW? What the hell were they doing? How simple a thing, that they could be replaced by something so deeply flawed? I recently started to learn a bit of coding for a simple proof-of-concept game I am making. I didn't even try an LLM because I don't want to waste time. I'd rather ACTUALLY LEARN and understand how the code works instead of copy-pasting it and then repeating the cycle 1000 times because something isn't working and I won't know why, while the LLM tells me "oh, I am sorry, let me fix it. Here's an improved solution!" and then spits out something wrong once again :D
The generator doesn't have to actually be good to replace people, see. All it has to do is be shiny enough for the people marketing it to convince people's upper management that it can replace them. Or be a convenient excuse to have mass layoffs and rehire at lower price or overseas.
@@tlilmiztli Hi fellow artist. Before the AI hype I had fortunately already switched to full-stack dev, and I think being artistic gives something different to what you build.
It's actually quite simple man, non-technical people don't really understand the complexity of the application. They see it looks the same, so it must be the same!
Edge cases?! What are those?
If I don't know what an edge case is, then I ask the AI? Simple. It's so funny, why are all the comments like this? Calm down.
@@playversetv3877 yeah good luck with that!
You actually think the AI can give you a viable answer to this? At some point, it's time to use your own brain to solve problems.
@@playversetv3877 Use your own brain. Do you think AI can spoon-feed you everything you want?
I've recently had a very nice experience with Claude. The only downside is that the amount of time you can interact with it is limited, even on the Pro plan. Every now and then it will tell you to wait a few hours to continue. But aside from that, I'm building an app I could not have built in the time I had without it. I'm an expert JS dev, but there are some things I don't understand at all, like audio engineering. I'm building a music-based app in JS, so I prompted Claude to teach me tonejs (not build the app) through a series of small lessons, building up from there until, through the lessons, I had a working prototype of what I'm after. Major game changer.
It's not 10x faster, but it is often around 1.2x to 2x, depending on the level of expertise you have with the programming that needs to be done.
Doing stuff like:
- "I have this code {code paste here} and I want to test it for x y and z, write a unit test for it."
- "rewrite this code to do the same thing but async, or for a different kind of object, etc."
- "write an algorithm for this class {paste code} which should do: {something boilerplate-y}"
- A lot of graph rendering done with python/matplotlib is IMO way faster as a first draft with an LLM, followed by optimizing certain things, as opposed to reading documentation. (If I last used matplotlib 6 months ago to plot a scatter plot with color-coded disks, I won't remember that the colormap param of the scatter function is called cmap, for example; see the sketch after this list.)
- Porting code between languages (yes, it still makes sense to read and test it)
The list isn't really exhaustive.
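For the matplotlib case above, a minimal sketch of the kind of first draft being described (the data and labels are made up purely for illustration):

    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical data just for illustration.
    x = np.random.rand(50)
    y = np.random.rand(50)
    values = np.random.rand(50)  # the quantity each disk's color encodes

    # The detail that's easy to forget after six months: colors come from
    # the c argument, and the colormap parameter is called cmap.
    plt.scatter(x, y, c=values, cmap="viridis")
    plt.colorbar(label="value")
    plt.show()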
Agree on all of these, especially porting code. I'm very familiar with C and Python but my Go is very rusty, but I can have it convert entire parsing pipelines from Python into Go with minimal issue. It's a godsend
Bro I kid you not I thought the same as you, but recently I have been getting so frustrated with it not being able to complete even these simple tasks optimally.
ChatGPT made my work slower yesterday. I tried to use Python to fill product descriptions in a .csv file using the ChatGPT API, but the code it gave errored, and it couldn't find a solution and fix it. I had to read the documentation for the library I was using and found out my .csv file was separated by semicolons, not commas, which has to be configured explicitly in Python's csv tooling. I would rate that kind of task as easy, yet the LLM failed.
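For reference, the fix being described is a one-argument change in Python's csv module (the filename and column names below are hypothetical):

    import csv

    # csv assumes comma-separated fields unless told otherwise; a
    # semicolon-separated file needs the delimiter passed explicitly.
    with open("products.csv", newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter=";")
        for row in reader:
            print(row["name"], row["description"])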
@@qbek_san Sometimes working on your own is the most efficient way to go. AI still has huge recognition problems. It's not advanced.
Idk if I'd trust AI unit tests xD And if I have to read through them anyway to make sure they're correct, I might as well write them myself, idk.
When I was studying computer science, I did a lot of extra teaching for other students for cash on the side. LLMs became a big thing just after I graduated, but I did see the effect it had on a lot of students I still worked with.
1) It can't do complex stuff. LLMs could not solve the homework assignments you'd get by the end of your first semester at my uni.
2) What they can do, they often do badly. When my students had to familiarize themselves with a new language, I'd have them reimplement some common algorithms and data structures in it (sorting, searching, linked lists, etc.). One of my guys did quicksort with ChatGPT, and it was wrong. It sorted, but it allocated twice as much memory as it should have. This type of bug is the worst thing for new coders: you'll look at the output, see it's correct, and move on, while not learning what makes it a wrong solution.
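A hedged reconstruction of that kind of bug (not the student's actual code): both versions below sort correctly, but the first builds new lists at every level of recursion, while the second sorts in place.

    # Sorts correctly, but allocates fresh lists on every recursive call,
    # so the output looks right while memory usage quietly balloons.
    def quicksort_wasteful(xs):
        if len(xs) <= 1:
            return xs
        pivot = xs[0]
        left = [x for x in xs[1:] if x < pivot]    # new list
        right = [x for x in xs[1:] if x >= pivot]  # another new list
        return quicksort_wasteful(left) + [pivot] + quicksort_wasteful(right)

    # In-place variant (Lomuto partition): same output, no per-level copies.
    def quicksort_inplace(xs, lo=0, hi=None):
        if hi is None:
            hi = len(xs) - 1
        if lo >= hi:
            return
        pivot = xs[hi]
        i = lo
        for j in range(lo, hi):
            if xs[j] < pivot:
                xs[i], xs[j] = xs[j], xs[i]
                i += 1
        xs[i], xs[hi] = xs[hi], xs[i]
        quicksort_inplace(xs, lo, i - 1)
        quicksort_inplace(xs, i + 1, hi)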
Well, I am an LLM engineer, and truth be told, as Microsoft also said after the Copilot outrage, these are just tools to help professionals in their domains. People from non-programming or beginner-level programming backgrounds always get it wrong; they get baffled by the smallest code snippet. If you have no knowledge of the background of the task you are trying to solve, an LLM sure can waste your time; they are specifically designed to assist people who have that background. An LLM sure can save you time and help you as a tool. It is not intended to replace an engineer, and the number 10x is an exaggeration. However, this is the current state of the art; it does not mean LLMs won't be better in the future. As a personal example, I use LLMs all the time to prototype by creating interfaces, but I have a degree in Computer Science, and many times I have to rewrite prompts. Overall I can say it saves you 1.5x to 2x time at most, maybe more on some rare occasions, but that cannot be generalized.
This. If you know what you're doing and looking at, and you understand what LLMs are and are not, they are fantastic, fantastic tools for speed and productivity. They are insanely helpful for documentation. The code and documents aren't perfect, but I can iterate so fast that ultimately I've pushed better code, faster.
a knife in a chef's hands is not the same as one in a child's hands.
IMO it's the exact opposite: if I work on an existing project, know the tech stack well, and duplication is kept low, the gain from an LLM is really minimal or even negative. Negative because accepting and modifying a suggested solution often ends up worse time-wise than just doing it from scratch, and you can also let through bugs you'd never write yourself but won't notice in generated code. Also, sometimes I make up special cases for Copilot to prove itself, because it's kind of satisfying... lol
It's different when prototyping, working with an unknown tech stack, or where duplication is by design (huge CRUD services) or inherited as bad design, or for e.g. unit testing, where simplicity and duplication are desired. And I love Copilot for PowerShell, precisely because I don't know it well; it's a 10x speedup in some cases there, and 5% in my core activity.
@@kocot. That 100% makes sense. At my job, I'm normally prototyping or building from scratch.
Dude I'm another software engineer (I'm technically a security engineer with a SE background) and I felt THE EXACT SAME WAY you described - any time there is a problem that is more complex than "show me a basic example of ...", LLMs completely fail and waste your time. I have spent 45 minutes to an hour trying to get something from an LLM that took me 5-10 minutes to do after simply googling or looking at StackOverflow. I had the same feelings when ChatGPT first got big and I still echo the same sentiment now. In fact, as a security engineer, I've seen LLMs introduce critical vulnerabilities in code silently...
ChatGPT couldn't code something I made in Python when I was only halfway through a 100 Days of Code course. I started with zero experience. I could even tell it which libraries to use and it couldn't do it.
"Anytime I need to accomplish something of medium difficulty, LLMs waste my time." I can’t agree with you more.
6:13 - Here's the bottom line. That's exactly what I think. Great video, BTW!
I've been an engineer for 20 years and I've been building a new SaaS product with Claude 3.5. My experience lets me ask the exact questions and give it the exact context I need to create what I want. So far it's helped me build Vue frontend components and the Node.js backend, helped me configure TypeScript, and helped me configure Vercel. It helped me build out authentication and the middleware; the Firebase integration wasn't smooth, but it helped. It helped me debug CORS issues and also build out the copy.
I think the development process has been at least 5-8x faster.
I did commit 3 PRs last week that were coded entirely with an LLM.
Describe the problem and provide similar sample code, review the solution, maybe go a couple of rounds back and forth with the LLM iterating on the solution, request tests, put everything in the repo, run the tests, and feed errors into the LLM until the code is fixed. I am the person who would have coded this anyway, so I have the needed technical skills. The idea of a non-technical person doing this today (or soon) is risible; however, I did get a huge improvement, days of work condensed into a day. Also, the idea that engineers spend most of their time on "hard" problems is strange, tbh. I spend most of my time finding existing solutions to non-novel issues. Maybe we work on very different problems, idk.
Have you considered maybe people are not lying but are seeing different time wasters disappear overnight due to LLMs?
I'm curious, what solutions did you implement in your 3 PRs?
LLMs work, especially ones trained on large amounts of code that can handle a large context, when the person using them is good at prompting.
@@gershommaes902 A management script I wrote for a manager who had a last-minute question about data (it took me 30 minutes to create, test, iterate, and submit); a Django query to retrieve the roots of the subforest that remains when you apply RBAC to a forest in the db (mind you, minimizing data access and avoiding unnecessary fetches); and a pair of mixin classes to decorate models and querysets so they emit a signal any time the underlying data changes in the db, plus a handler to track that on a separate model. None of these really worked out of the box or were perfect, but I had a good sense of what I wanted and of the test cases (which I generated via Claude itself), and I iterated several times over requirements and even design options (I tried several until I settled on the mixins). I got working results in a fraction of the time and with more coverage than I would have otherwise.
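A minimal sketch of what that mixin idea could look like, assuming Django; the signal name and details are hypothetical, not the commenter's code. (Queryset-level bulk updates bypass save(), which is presumably why a separate queryset mixin was also needed.)

    from django.db import models
    import django.dispatch

    # Hypothetical custom signal fired whenever a tracked model changes.
    data_changed = django.dispatch.Signal()

    class ChangeTrackingMixin(models.Model):
        """Mix into any model whose writes should be tracked."""

        class Meta:
            abstract = True

        def save(self, *args, **kwargs):
            super().save(*args, **kwargs)
            data_changed.send(sender=self.__class__, instance=self, action="save")

        def delete(self, *args, **kwargs):
            result = super().delete(*args, **kwargs)
            data_changed.send(sender=self.__class__, instance=self, action="delete")
            return result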
This is a revolution and it's only going to get better. I'm waiting for better Claude-IDE integration and a more agentic workflow. Also, live testing on a dev or staging environment is a time drain I hope to automate soon with some sort of bot that reads the PR and runs some "manual" tests on a local version of the whole site.
I agree. Working with an LLM feels similar to working with a junior developer fresh out of college, someone who knows a vast number of algorithms and can implement them incredibly fast. However, much like with junior developers, the time savings aren't always as significant as one might expect. I also worry that LLMs may not become dramatically more powerful than they are today.
That said, just as we adapt to using auto-complete and collaborating with junior developers, learning how to effectively work alongside LLMs is becoming increasingly essential.
I agree. As someone who loves LLMs and has been using them for my work as a junior dev, they save time otherwise spent on Stack Overflow and googling syntax, boilerplate, and code snippets. They have saved me from bugging my senior engineers plenty of times as well. But I would be amazed if in 5 years things improve significantly, let alone replace a whole dev team. Things already look to have some level of diminishing returns, so if we get EVEN 2x more "effectiveness" within the coming years and it can solve medium-complexity tasks, I would be thrilled.
You’re absolutely right.
For small things, I breeze through.
When I was trying to implement multistate logic, not only did it waste my time, it literally ruined the code.
When you try to guide it, it will look like it's agreeing with you, then literally disconnect parts of the code it was supposed to be building.
What I've been working on is not mission critical; it was arguably a test. But this is clearly a limitation, and we need to figure out how to integrate these tools while understanding how they are limited.
People tend to forget that in essence LLMs are just fancy-shmancy search engines which translate prompt to output in one flyover. As long as you stay within the range of whatever prompt->something translations they were trained on, it can work pretty well. When you leave that area, they break down horribly.
I think Claude really helps speed things up in a few ways. It helps as another pair of eyes for bugfixes. It helps when you have no idea how to even get started in a sphere. It's really good at variable and function naming. And it can type faster than me, so I can often tell it exactly what I want a function to do and it will be done about twice as fast as if I wrote it. Claude is not going to write your app, but it is a pretty good copilot.
So it's better than GPT and Copilot? What are those good at?
@GoodByeSkyHarborLive Yes, it's better than GPT. GPT is still useful, but Claude seems much more with it and able to correct its mistakes, where GPT gets things wrong a lot more and gets stuck. For example, Claude will start to suggest debugging techniques when you keep getting the same error. It will even ask you to share other classes or methods. It seems to think creatively about the problem. GPT just gets into a fail loop and can't get out.
Your bugs must be extremely trivial. IME, a bugfix where "you have no idea how to even get started in a sphere" means you start with a bug in a multi-million-line codebase (and you have no idea which part of the code is even called without spending hours), only to discover that the bug is caused by a call to an external site along the way, which returns a warning code that is not even documented, and there is no information about that site anywhere other than the source code of a DLL written in 2010. (And by IME I mean what happened this morning; at least it wasn't the evening.)
100% agree with you. LLMs are currently OK at scaffolding easy stuff & small chunks of code. Fetch some data, pass it on to the view, generate some basic UI. But 10x coding? Nope.
Anyone at this point in tech who still thinks AI is not a stock pump and dump scheme is probably still a toddler.
Wishing all ML Engineers and AI "experts" a merry AI Winter
Ok bro, if you think AI itself is a pump and dump scheme, then you're clearly biased for a reason. AI helps a lot. You're missing out if you don't use it.
@@usernamesrbacknowthx ai development is da best
I honestly can't wait for the AI bubble to burst. It seriously can't burst soon enough.
But only because I'm selfish. I want cheap GPUs.
Nvidia been hoarding them VRAM chips for their "AI" shovels. Everyone is in a gold mining rush rn with "AI" and Nvidia is selling the shovels. The pickaxes. It's sickening. And they're completely ignoring the gamers, the people who they actually BUILT their empire off of. 16GB cards should have been standard with the RTX 3000 series. Instead, with the "Ada Lovelace" cards (4000 series) they had the lowest GPU sales in over 20 years. Gee, I wonder why! When the "4070 SUPER" is really a 60-class and the "real" 70-class is now $800. Nvidia can suck it.
AI can't code or solve novel math problems, but it can make trippy videos, songs, and images. That's because code is as good as useless if there is one major bug or a few minor ones, but the same is not true for videos, which only have to be played.
Could not agree more. I have been using Cursor AI since the hype. But when it cannot figure out a bug it introduced on its own, even the dev can't figure it out, because of the large amount of bloated code it added.
As a field newbie, I would be happy to see less-qualified programmers getting out; when the bubble bursts, the real ones only become more valuable.
You built that as a junior? I'm finished!
I've worked with high-performing juniors that couldn't build that, and in the real world seniority has a lot more to do with your ability to communicate, organize, and lead projects than it does with pure coding ability. Keep at it!
@@dehancedmedia2900 thanks!
@@dehancedmedia2900 Right? Seems tough for a junior.
He is not saying that he coded exactly "that" as a junior. He is saying that this platform started at the hands of a junior developer.
What it is 10x for me is understanding. I can ask a question and get feedback, instead of sifting. I'm not asking for code itself, but for understanding behind what I am doing. I ask it more questions about its answers, and sometimes cross reference with other AI. I mostly use Claude, and use Grok as a backup. I'm not in there going, "Make Me an Auth Component". I'm asking, "What are things to keep in mind when looking into auth solutions?"
And the basic reason why LLMs can't code up something as relatively complex as the neetcode site is because THEY DON'T UNDERSTAND, THEY REGURGITATE, and more compute or more data (which they seem to have run out of) can't fix that.
Until an AI system can somehow reason about what an app like that might need, and then work on it, it won't work. That would require a complete change in architecture. LLMs won't replace even half-decent junior devs.
As they are now, it's just a glorified auto correct. Helpful for very simple stuff that's been replicated a million times, but it can't do more than that.
To say LLMs don't understand is an oversimplification of a model family that I don't think you quite understand yourself. You would be surprised with the level of intelligence at which LLMs operate.
You are wrong and you don't understand how they work.
LLMs can complete unique tasks; that alone should tell you it's not regurgitation.
Look into Geoffrey Hinton.
@@robotron26 Actually, they can complete tasks that fit a template they're given, based off their large corpus of data. See the ARC test. They actually can't solve unique tasks. If they do solve one, it's very likely there's an almost identical template they're completing.
@@jpfdjsldfji No, you are completely wrong. LLMs are not intelligent because they just predict the next word. If you truly understand what you're writing, then you're not really PREDICTING anything, are you?
@@jpfdjsldfji what they do is not intelligence. But their design is absolutely intelligent.
Well, it's progressing... we won't totally dismiss that fact, but totally agree.
I'm an engineer in my fifties. I've used GPT-4o to help me control our test and measurement equipment from inside Excel. We already use semi-automated Excel templates to produce certification.
I am fairly handy with VBA in Excel. But what I am now doing with automation is something I would never have done without an LLM. I barely have the time to do my job; I most certainly don't have the time to learn the APIs that GPT-4o 'knows'.
So bear in mind the transformative nature of this new technology for those of us who use coding as just one of the tools in the box, and not their main skill base.
Sounds like your company should hire a SWE to work on better tools for you so you can focus on your job.
@@ryan-skeldon You'd be shocked how many companies are reliant on 20 year old excel files that just do all the data collection. It works and it works well, esp if they have really old equipment that's difficult to interface with.
@@ryan-skeldon Until the employees protest it cause they're used to Excel and basically just want Excel
It's funny. Artists warned about LLMs because people were using LLMs to claim they could replace artists, and you see the same problems there that you see here with coding.
You are correct that LLMs have difficulty with more complex projects, but the whole idea of good, clean code in the first place is to separate your complex architecture into simple snippets of code that interoperate but run independently of each other. This is basically what functions are: they don't need to know what the other functions' internals are. And LLMs can definitely help you write simple functions quicker than before.
If you are an engineer at heart, you won't notice that much of a difference in speed, but if you are an architect at heart, you suddenly have a bricklayer at your service that helps you build cathedrals one brick at a time. What engineers, photographers, novelists, and artists don't seem to grasp is that it's not about the skill behind the individual pieces (humans are way better), but about the composition of the whole (80% of the quality at 10x the speed).
It's perhaps easier to see if you look outside your own profession, where you aren't hindered by your own standards but merely judge the outcome. What is 10x more efficient, hiring a photographer or generating a couple of photos with your favorite AI tool?
I completely agree with the main point of the video, with one caveat. I have seen people polarizing pretty fast on this topic, between people thinking that LLMs can _already_ substitute for junior engineers and people thinking that they will never be an issue for their jobs. You are perfectly right: we can observe that camp A is wrong. But I am as sure that camp B is wrong too. Even believing your claim that in the next 5 years LLMs will not be able to substitute for a junior engineer, 5 years is _very little_ time. I have 30 years of work in front of me, years that 3 years ago I thought I would spend coding. Whether this revolution happens today or in 5 years, my choices are pretty much the same: I have to adapt to the change fast.
And honestly, I do not have the same confidence you have in the 5-year claim. Today it looks like a far-off target, but considering where it was last year, and the continuous tech revolutions of the past two years, I would not rule out that next year an LLM will be able to code neetcode from scratch. Sure, I would be surprised. But I have been surprised many times by the speed of LLM evolution.
Interesting point, how *did* you make that fancy directed graph? :D
Perhaps he procedurally generates SVGs, was curious myself. Probably gonna try and replicate it.
@@Dom-zy1qy maybe :) I tried making a program that could generate graphical representations of trees some time ago, but failed because I thought it was too complicated. But now I'm curious again maybe I should take another shot ^^
Agree with every word you said. I've been learning coding constantly over the past 2 years, and while I do use AI, it is a small part of what I overall do. And I'm still relatively a total beginner.
I work with a few people who are way less technical than they think they are, and they believe that coding will be dead soon, and that they could do what I could do using AI, but it would take them a little more time. None of them have attempted anything more advanced than setting up a spreadsheet to ingest Google Calendar events.
Yes, you're not doing it right. Breaking those complex tasks into simple tasks and then feeding them to an LLM is a skill too.
Bravo! Glad someone's talking about it.
Those idiots talking about "10X" have no clue.
They don't even understand the "1 + 1" level stuff.
But they sure do love hyping things up.
The biggest win I've had with AI was when I was working on a feature to add some telemetry to our software to track which reports our clients are using.
All our reports were defined in one big 3000+ line file. I needed to add a string to each report holding the English version of its name, because the actual name would get translated if you switched to French, for example, and I needed to make sure I always sent the same name for each report when sending out the report-click event.
I dreaded that I would have to do literal hours of mind-numbing copy-pasting for hundreds of reports, but instead I just pasted the whole file into ChatGPT and got it all done in less than 10 minutes.
Now, could I have also done the same with some scripting? Yeah. But it wouldn't have been nearly as fast to develop the script, test it, then handle all the inevitable edge cases. And it was way easier to just explain in English that I wanted this very simple thing done.
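For contrast, the scripting route would look something like this hedged sketch. It assumes, purely hypothetically, that each report in the big file is declared like Report(name=_("Quarterly Sales"), ...); the real file format wasn't shown, and pinning down details like that is exactly what makes the script route slower.

    import re

    # Hypothetical pattern matching a report declaration and capturing its name.
    PATTERN = re.compile(r'Report\(\s*name=_\("([^"]+)"\)')

    def add_english_names(source: str) -> str:
        # Insert an untranslated english_name field after each matched name.
        def repl(match: re.Match) -> str:
            return f'{match.group(0)}, english_name="{match.group(1)}"'
        return PATTERN.sub(repl, source)

    with open("reports.py") as f:
        updated = add_english_names(f.read())
    with open("reports.py", "w") as f:
        f.write(updated)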
I saw a CEO of some company bragging that AI created a guest check-in app for some event he was hosting. It was basically a to-do list. Add the person's name and check them off when they arrive. Everyone in the comments was gushing about AI. And tbf, I'm not sure how many of the commenters are actually real and not AI bots because that's where we are on social media these days, but it was still ridiculous. The only cool thing about it was the app he used to prompt the AI also ran the code in a sandbox so you could just prompt and use whatever it created immediately. But that doesn't make up for the fact that anything beyond the most basic of apps is impossible to build with AI.
I don't think he's lying. His experience mirrors mine. You don't ask the LLM to design your app for you.
There are a few ways in which they help.
1. When you're trying to do something you're unfamiliar with, ask for guidelines on the task. Give it as much context as possible. This helps you get up to speed quicker with relevant information. You can then either ask follow up questions or Google specific parts that you need more clarity on.
2. They automate grunt work. Stuff that's not complex, but still takes a lot of effort. Pattern matching stuff. Like converting SQL to query builder or ORM code and vice versa.
3. They can explain stuff that's hard to Google. If you give one a regular expression, it can tell you exactly what it does and break it down into parts for you, so that you can edit it the way you need to. Explaining complex bash commands works well too. You can't easily Google these, but an LLM can explain them very well; the sketch below shows the kind of breakdown I mean.
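As an illustration of point 3, here is the sort of part-by-part breakdown an LLM can produce for an opaque regex, written as a Python re.VERBOSE pattern with the explanation inline (the pattern itself is a made-up example):

    import re

    # A simple email matcher, annotated the way an LLM might explain it.
    EMAIL = re.compile(r"""
        ^                    # start of string
        [A-Za-z0-9._%+-]+    # local part: letters, digits, and ._%+-
        @                    # literal @
        [A-Za-z0-9.-]+       # domain name
        \.                   # literal dot
        [A-Za-z]{2,}         # top-level domain: 2 or more letters
        $                    # end of string
    """, re.VERBOSE)

    print(bool(EMAIL.match("user@example.com")))  # True
    print(bool(EMAIL.match("not-an-email")))      # False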
You're spot on here. For easy tasks like a skeleton it works fine, but as soon as the task is multi-layered or somewhat complex, it doesn't work AT ALL.
I can't code, but I made a Facebook Marketplace copycat using AI that's fully functional, with messaging and everything. It would be stupid to build a super complex startup with AI, but I am interested in business, and AI helps me code enough to get started and worry about hiring a coder later.
This is so refreshing to hear as an engineer who was laid off from a company with a greedy lunatic CEO who firmly believed one or two engineers with a bunch of LLMs was all that was needed to do EVERYTHING for releasing production enterprise consulting software they could charge 10k a month for - insane.
I took my time writing actual software, but that didn't fit his extremely fast timeline; we had no QA, no DevOps, just me and two other folks with days to spin up entire sellable products.
On my own time I’ve tested building apps with LLMs in the drivers seat - these things are not able to reason around entire systems! I don’t think LLM tech alone will get us there
10x devs can get another 10x out of the best LLMs (myself included). The smarter you are, the better the results you get from LLMs.
The best devs I know tend to delete AI plugins entirely, because they waste time and are a distraction.
Personally, I go to an LLM only for simple snippets and idea generation; anything more and it's a waste of time.
So you have no idea of how to use LLMs...
@@vitalyl1327 sure, like I need to waste more time prompting to get them to work, instead of just getting it done myself
Me (a 20x dev, don't ask my secrets) using self-reasoning AI (AutoGPT) to get another 50x
You're completely right. Even as a Master's student in Aerospace Engineering, LLMs can't help me with my problems beyond the most basic outlines. When you need to get more niche or technical, their answers make zero sense, and you're better off doing your own literature search.
You're definitely using it wrong if it makes you slower, not faster. Here's how to use it properly (a sketch of such a prompt follows the list):
1. Decide yourself what the file should do; consider the design choices, technologies, and structure.
2. Write up everything you thought of in step 1 as bullet points.
3. Provide pseudocode for anything non-boilerplate.
4. If you have another file in the project with a structure or code style you want maintained, provide that as context.
5. Use GPT-4, Claude 3.5 Sonnet, or DeepSeek Coder v2 to generate the code.
6. (Not yet readily available) Write test cases, and use an AI coding IDE to iteratively debug the generated code until it passes them.
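A minimal sketch of the kind of structured prompt steps 1-4 describe; the task, function names, and bullet points are hypothetical examples, not anything from a real project:

    # Steps 1-2: design decisions written up front, as bullet points.
    # Step 3: pseudocode for the non-boilerplate part.
    # Step 4: an existing file pasted as a style reference.
    PROMPT = """
    Write a Python module that parses web server access logs.

    Design decisions:
    - stream the file line by line, never load it fully into memory
    - expose one public function: parse_log(path) -> iterator of dicts

    Pseudocode for the non-boilerplate part:
        for line in file:
            match line against LOG_LINE_REGEX
            if no match: skip it and count it as malformed

    Match the structure and style of this existing module:
    <paste existing file here>
    """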
As a person who has many years of experience coding in Python, but doesn't know every library under the sun or every bit of syntax perfectly, the LLM's ability to write bug-free code is amazing. I am at least 2-3x faster with it.
Yeah, a lot of people get off talking about what it can't do instead of just utilizing it lol!!!
The point is that for a non-technical person the LLMs are useless even in that limited context. Because to get to step 5 you need to be a programmer.
Thank you for calling this out. Adding to this, I heard another engineer recently call LLMs "fancy autocomplete". That's kind of what it feels like. It's amazing (but I suppose not surprising) that so many non-engineering folks are trying to tell engineers what LLMs are. The irony! Granted, there is complexity to LLMs and how they work, but I don't think most engineers saying that LLMs aren't "all that" is a matter of us trying to "save our jobs"; it's a matter of trying to tell the truth.
I guess it just feels like another example of a non-engineer trying to tell us why our job isn't "hard". Well, that and a bunch of marketing nonsense by big tech to cash in on the next big thing.
I think you’re just bad at prompting. I’m a .NET dev and ChatGPT 4o has easily made my work 10x faster. You just have to VERY clearly explain what you want, how you want it to go about performing the task, provide all the necessary context/background, and then iterate on the LLM’s first response over and over until it’s perfect. Tell it what it did wrong, what you don’t like, what you want improved, and keep going. It’s like having an ultra-fast programmer working for me who writes all the code and all I have to do is clearly explain what I want and then review it. I’m sorry you haven’t gotten good results using AI for programming work, but if you’re not getting good results, I tend to think that’s on you, not the LLMs. I think you’re bad at prompting, and probably pretty bad at explaining things interpersonally as well.
That part about explaining things interpersonally is actually interesting because that is a common problem that many of us programmers have. After all, when working at a lower level (not UI design or things like that) we are working with abstractions that are difficult to verbalize. And at some point you just say... let me do it myself.
Because... if you have to invest time defining the functionality of a piece of code very precisely, then you are not being that efficient. You are just pushing a car through a supermarket aisle because you have become too dependent on that technology.
If LLMs make your work 10x faster, then your work is extremely simple to begin with. That's why you're finding success with your prompting and others don't.
@@pelly5742 Such is the life of a full-stack dev. Some of my tasks are insanely complex; most are not. I don't have a junior programmer working for me who I can give all the grunt work to so that I can just do the fun stuff; I have to do everything myself. GPT-4o has become that junior programmer who does all of the routine stuff, and does it incredibly fast, so that I can work on the more complex aspects that humans are still better at, and that's how it has 10x'd my workflow. GPT-4o is like having a full-time junior programmer who has come to me right out of school with a master's in computer science, writes code with superhuman speed in any language, and works for me for only $20/month. It's revolutionary. If you're not getting good results using the tool, then you're probably just not very good at using the tool. It takes an especially narrow mind to believe that all the people who are getting better results with the tool than you do are all just lying about it.
The art of coding involves breaking things down into simple subtasks. Once that is done, an LLM can work on that extremely simple stuff.
.NET dev, explains it all
I'm not a coder, but I work with a lot of scripting and IaC (which I guess makes me a very junior coder in a way). No LLM has been able to whip me up a decent script that I don't have to spend the whole day cleaning up. Best results so far have been to request the code in parts, and piece them together myself afterwards. I think you're right, 5 years and it still won't be able to do what a human can do. But it will eliminate basically all low level data entry/data manipulation jobs.
You are 10000% using it wrong. I set up orchestrated Docker containers; Terraform deployments with beautifully designed reusable components; an open-source vector store in a container with volume claims. All deployed to Azure, pulling from my own private Docker image registry, provisioning an Azure resource group into the Azure Container Apps service, which is managed Kubernetes behind the scenes.
Yes, I have working knowledge, but I wrote zero code; I just worked with the chat system while referencing and pasting actual documentation.
It helps to use an editor like Vim for quickly editing sections and pages of code without always reaching for the mouse.
Claude is literally a game changer for programmer/founder hybrids like me
none of those things require code to do in the first place lol
Yeah, what I think it does for non-tech guys like me is fast prototyping and idea generation, so I can give devs much more info and better starting points than before. I consider myself a power user with some coding knowledge, and even I can spot mistakes and totally bonkers implementation ideas when using LLMs. So I totally agree with the vid.
Keep coping, AI will take over jobs. Stay in denial, if it helps you sleep at night.
the sad truth
LLMs provide some help, but to think that you can replace a junior developer with an LLM... Well... Just give it a try and see how it goes.
Two words "Skill issue"
@@minhuang8848 False, I use SOTA and it sucks dick. It can generate 1-2 complex functions and that's it
Couldn't agree more, most of the time I turn off copilot when dealing with our legacy codebase
Aww, first it was the artists' turn to be mad and now it's the coders'.
Hell yeah! I just started learning coding for data science and man, it's scary. All these LLMs coming out looking like they'll take over jobs, all these companies laying off engineers, plus all these people showing off what they built using Claude and Cursor on Twitter without understanding a thing about what it's made of. It's a breath of fresh air having this perspective come from a seasoned and respected programmer. Thank you so much for saying this!
I completely agree with the video. It makes sense that LLMs can reproduce things that are easy for an experienced engineer, because those exist in their training data. There's no reason to expect that an LLM can reason about the logic in the code it outputs, so the correctness of its output will be based on either its training data or complete coincidence. There may still be ways to use this technology to work smarter, not harder, e.g. writing documentation, suggesting names for functions, generating boilerplate, or writing HTML snippets of UI components that don't require context (a submit button might be pretty similar to any other submit button). Basically things that are language-based, or copy-paste, and don't require logic.
Maybe one day, more intelligent AI models that combine logic and language will exist and be more capable of writing novel code. Anyone familiar with LLM workflows may have a head start. But these don't exist yet.
I mostly agree. I think LLMs are very good at writing skeleton code or simple snippets if you can actually describe correctly what kind of UI/code you want, but anything more complex than basic CRUD is beyond any AI. Not to mention the hallucinations crapping all over the code; if you don't know your code, you are going to get very weird errors.
As a student currently pursuing a master's degree in data science, with over 10 years of past work experience in web dev... I agree with you. The sad thing is the CEOs, and managers who have never placed their hands on actual 'operations', yapping about LLMs. They have literally threatened people with massive layoffs, and so many layoffs were already made on the strength of that belief. Later, the companies that made those huge layoffs kept showing recruiting ads constantly, over months. Seems like they are still struggling to fill the gaps.
LLMs actually have something called an effective context window, and it is not the maximum context window they can support. There is also a limit on the number of logical steps a model can take, which is proportional to the number of transformer layers in the model. This puts a limit on how much information it can effectively process in the context. It means the right way to use an LLM to code is to shrink the context when you find it cannot solve a task you give it, i.e. by breaking a complex problem down into smaller problems. This is the skill that all architects have, and it is the correct way to use an LLM to code; the sketch below shows the shape of that workflow.
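A toy sketch of that decomposition workflow: instead of one huge prompt, feed the model small, well-defined subtasks and assemble the results yourself. ask_llm is a placeholder for whatever client you use, and the subtasks are hypothetical:

    def ask_llm(prompt: str) -> str:
        # Placeholder: swap in your LLM API of choice. Returns a stub
        # string here so the sketch runs end to end.
        return f"# code generated for: {prompt[:40]}..."

    # Each subtask sits comfortably inside the model's effective context
    # window, and each answer can be reviewed and tested in isolation.
    SUBTASKS = [
        "Write a Python function parse_row(line: str) -> dict for this format: ...",
        "Write a Python function validate(row: dict) -> bool with these rules: ...",
        "Write a Python function summarize(rows: list[dict]) -> str that ...",
    ]

    pieces = [ask_llm(task) for task in SUBTASKS]
    module_source = "\n\n".join(pieces)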
I have personally found that using Claude has greatly increased my productivity. What used to take me a few days now only takes a few hours. If you don't see this productivity gain, then you have not mastered the skill of using an LLM correctly, i.e. you have not done your part of the thinking properly and broken the problem down into small enough, well-defined chunks.