Why is everyone LYING?

  • Published Sep 7, 2024
  • 🚀 neetcode.io/ - A better way to prepare for Coding Interviews
    🧑‍💼 LinkedIn: / navdeep-singh-3aaa14161
    🐦 Twitter: / neetcode1
    ⭐ BLIND-75 PLAYLIST: • Two Sum - Leetcode 1 -...
    #neetcode #leetcode #python

Comments • 1.5K

  • @NeetCodeIO
    @NeetCodeIO months ago +1185

    Hot take: If LLMs make you 10x faster at coding that says more about your coding ability than it does about how good LLMs are.
    The tweet: x.com/neetcode1/status/1814919711437508899
    This video may change your mind about the AI hype in general: th-cam.com/video/uB9yZenVLzg/w-d-xo.html

    • @user-dk8lw1ly3l
      @user-dk8lw1ly3l months ago +2

      can you list some complex apps? i'm still learning i feel underpowered

    • @pastori2672
      @pastori2672 months ago +13

      you just made me feel good for getting shit code from gpt

    • @chrischika7026
      @chrischika7026 months ago +1

      W Neet

    • @MikhailFederov
      @MikhailFederov months ago

      Skill issue. You’re probably shit at prompting. The equivalent of making a StackOverflow post and the people need to ask 50 follow ups to get the context of your issue

    • @godgivespizza238
      @godgivespizza238 months ago +1

      AI can seemingly dominate the world; not from an engineer's perspective, but from a mathematical perspective AI really can code medium-difficulty tasks. These companies pour billions into transformer models, while the best compact brilliance on the other side was not encouraged by those communities at any real scale

  • @jamesarthurkimbell
    @jamesarthurkimbell months ago +3515

    The alliance of "CEOs who hate paying salaries" and "students who hate doing homework" both wanting LLMs to code perfectly

    • @fenkraken
      @fenkraken months ago +155

      It’s a perfect circle, since the latter drops out of college after meeting a potential investor to become the former

    • @stacklysm
      @stacklysm months ago +23

      It could even be a Venn diagram

    • @MahmoodAlSaghiri
      @MahmoodAlSaghiri months ago +53

      trust me, student here, we really dont want LLMs to code perfectly or even close to it. Our futures arent worth the few hours of homework 😭

    • @agnescroteau8960
      @agnescroteau8960 months ago +5

      Years ago I read somewhere on the internet the field was broken, totally broken. But that’s ok, I am from the medical field, I brought my stethoscope. Also, duck tape, and WD40 if that can help.

    • @retardo-qo4uj
      @retardo-qo4uj months ago

      @@agnescroteau8960 medicine won't lose work, but will be dying from overwork though. Especially if you have "universal healthcare", which will take away your negotiating power and income

  • @meowmix4life86
    @meowmix4life86 months ago +1670

    "I made the rookie mistake of opening up Twitter" LOL :)

    • @TheCronix1
      @TheCronix1 months ago +34

      Yes, a "mistake" he made. And instead of quickly closing it and forgetting like a bad dream, he published a whole video about it.

    • @brainites
      @brainites months ago +3

      🤣

    • @datchkh
      @datchkh months ago +12

      @@TheCronix1 - the tweet author, probably.

    • @jamad-y7m
      @jamad-y7m months ago +11

      @@TheCronix1 that's why you never open Twitter it just gives you something to complain about

    • @viber1430
      @viber1430 months ago +4

      😂😂😂😂

  • @yang5843
    @yang5843 months ago +1000

    10 x 0 positivity is still 0 productivity

    • @J3R3MI6
      @J3R3MI6 months ago +9

      Keep coping…

    • @Sacrifice.Online.for.Offline
      @Sacrifice.Online.for.Offline months ago +76

      @@J3R3MI6 Stay useless

    • @Icedanon
      @Icedanon months ago +16

      Actually, it's 0 positivity. You forgot to double check your math.

    • @gamingdude7959
      @gamingdude7959 months ago

      ​@@Icedanon from this equation 10 * 0 (positivity) = 0(productivity)
      which is true since 0 = 0, LHS = RHS, so here lets take :
      0 = x, so 10 * x(positivity) = x(productivity)
      since x productivity is 0, we have divide both sides by 10, we get : x(positivity) = x(productivity) since dividing 0 by any number equals 0 unless the denominator is 0.
      so going futher =
      x/x = productivity/positivity. which is both 1 and infinity,
      1. taking x/x as 1 :
      therefore we conclude that productivity and positivity are inversely proportional,
      the more productivity u have, the less positivity u get.
      and vice versa.
      2. taking x/x as infinity, we can also conclude that
      productivity/positivity = infinity. Or productivity = infinity/positivity.
      if your productivity was 2 units, then your positivity would be infinity/2 which is very large number, so we can take it as infinite.
      therefore no matter what is your positivity, your productivity is actually infinite,
      if your positivity is infinity, then your productivity would actually suffer.
      By the whole answer, we conclude two things :
      the more productivity u have, the less positivity u get.
      and vice versa.
      if your positivity is less than your productivity, then your productivity is actually infinite,
      if your positivity is infinity, then your productivity would actually suffer.
      so moral of the story :
      keep your positivity low, be depressed, take some pills, do drugs, etc to further lower your positivity and increase productivity.
      MATH SUCKS, SINCE INFINITY IS NOT A FRICKING NUMBER, ITS UNDEFINED IN MATHS, SO WHATEVER THAT I COOKED IS INVALID.

    • @laughingalien
      @laughingalien months ago +1

      Terrence Howard: "Hold my abacus whilst I deal with this punk! "

  • @bwhit7919
    @bwhit7919 months ago +737

    “Founder” just means your side project has an LLC and a bank account

    • @ridabrahim7604
      @ridabrahim7604 months ago +8

      😂

    • @XHackManiacX
      @XHackManiacX months ago +8

      And you're probably already looking for investors, lol

    • @bwhit7919
      @bwhit7919 months ago +10

      @@XHackManiacX actually no, I just want to put my app on the app store

    • @malcomgreen4747
      @malcomgreen4747 months ago +6

      You talk about me i take offense 😂

    • @yli8888
      @yli8888 months ago +1

      Bank account with a couple bucks is optional😂

  • @Jia-Tan
    @Jia-Tan months ago +922

    Weren't "no-code" tools hyped up in the exact same way? Or am I misremembering?

    • @wondays654
      @wondays654 months ago

      They did, now only governments use them.

    • @yashghatti
      @yashghatti months ago +167

      Yeaaap :) that ship sank so hard no one's even talking about those any more

    • @datchkh
      @datchkh months ago

      @@Jia-Tan Can you please explain what those mean? (The “no-code” tools)

    • @nikolanedic3381
      @nikolanedic3381 months ago

      @@yashghatti That's just not true. You have Webflow and Elementor, both widely used and battle-tested. They were also marketed as a replacement for all developers, but they eventually found their own place and got accepted by everyone as a good tool in some cases.

    • @bitwize
      @bitwize months ago +137

      We've been through *several cycles* of this. It was CASE tools in the 1980s, and then UML in the 90s/early 2000s. Both of these were supposed to obviate the need for coding, you just had to go from requirements to running program. The problem is, any symbols you use to express a spec in such a way that it's specific enough to be executable by computer are isomorphic to constructs in some programming language. They just might not be easily diffable, version-controllable, or viewable or editable except by specialized tools.

  • @omdaryani3630
    @omdaryani3630 months ago +494

    Engagement farming is a real thing on Twitter and that's what's been going on. People just post anything, and if your post happens to contain the word "AI" and fear-mongers among the general public, it's sure to get reactions from left and right.

    • @EverAfterBreak2
      @EverAfterBreak2 months ago +14

      I believe X is paying based on your posts interactions, that’s why that thing is full of bots

    • @Jupa
      @Jupa months ago

      they only make like a few bucks too. so it's really pathetic when actual humans do it.
      now the bots are excused, they are raking in money through annoying others. that's genius-level hustling. the american way.

    • @ronilevarez901
      @ronilevarez901 months ago +1

      I'm absolutely sure that if I post anything, even if it contains the word "AI:" it will get at most 4 views because that's what every tweet I've ever posted since Twitter started has got. Unless you pay them or something, there's no way to get views or followers in that thing.

    • @poleve5409
      @poleve5409 11 days ago +1

      ​@@ronilevarez901The only way is to post something really eye catching or just spam everyday.

    • @rey82rey82
      @rey82rey82 3 days ago

      Enragement farming

  • @madeye0
    @madeye0 months ago +197

    This is not how you use LLMs to aid coding. You use it to write small self-contained functions, regexps, throwaway scripts, prototypes and non-customer facing utility code for stuff like data visualisations etc. It's not for non-technical people, it's for technical people that want to get through non-critical or simple parts of a task quicker.
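
A minimal sketch (Python, hypothetical log format and names) of the kind of small, self-contained, non-customer-facing utility code described above: small enough to review at a glance before trusting it.

```python
import re
from collections import Counter

# Hypothetical log format: "<LEVEL> <message>" -- the sort of throwaway
# helper a technical person might hand off to an LLM and then review.
LOG_LINE = re.compile(r"^(?P<level>INFO|WARN|ERROR)\s+(?P<msg>.*)$")

def count_log_levels(lines):
    """Count how many lines match each log level; ignore lines that don't match."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line.strip())
        if m:
            counts[m.group("level")] += 1
    return dict(counts)

if __name__ == "__main__":
    sample = ["INFO started", "ERROR boom", "ERROR again", "not a log line"]
    print(count_log_levels(sample))  # {'INFO': 1, 'ERROR': 2}
```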

    • @clray123
      @clray123 months ago +56

      Basically a replacement for StackOverflow and not much else.

    • @leversofpower
      @leversofpower months ago +9

      You clearly get it.

    • @artemv3160
      @artemv3160 months ago +28

      Exactly. Been using for some DevOps tasks heavily. Python, bash - don't know and don't care. I have enough developer knowledge to debug it, but not to learn all the syntax and niche libs, frameworks, and language quirks.

    • @Matt-dk3wl
      @Matt-dk3wl months ago +13

      It's like having a developer buddy on Discord ALWAYS ready to go. I agree completely with you. Consulting, sharing snippets, etc. is the way.

    • @hulkingmass
      @hulkingmass months ago +5

      i.e. a novelty of little consequence to all but grifters and sheep

  • @AnimeGIFfy
    @AnimeGIFfy months ago +244

    "LLMs will replace all developers," said a person whose biggest accomplishment is a hello world app.

    • @youssoufcameroon2565
      @youssoufcameroon2565 15 days ago +3

      🤣

    • @thecollector6746
      @thecollector6746 3 days ago

      Nearly every YouTube coding influencer whose entire business model is pretending they are actual professional Software Engineers, while all of their projects are forks of somebody else's public GitHub project, has entered the chat.

    • @ruslansmirnov9006
      @ruslansmirnov9006 1 day ago

      a 'Snake' game

  • @Vancha112
    @Vancha112 months ago +302

    It's an 80-20 thing. LLMs suck ass for the parts that actually take the time, and help with what most people don't need help with.

    • @SoyGriff
      @SoyGriff months ago +28

      they're still really useful for "dumb tasks". i can tell gpt 4o to "Look at this project i made, now i need X project with x and x, make it based on my other code" and it will make me a working crud in less than a minute. sure, it might have some issues or be missing features. But it still saved me like half an hour of coding if not more.
      i've done that a few times and personally i find it pretty satisfying to be able to generate a basic crud with 5 working endpoints in a few seconds.
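
A minimal sketch of the kind of five-endpoint CRUD boilerplate being described, assuming FastAPI and an in-memory store; the "item" resource and its fields are hypothetical.

```python
# Five-endpoint CRUD sketch (hypothetical "item" resource, in-memory dict store).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
db: dict[int, dict] = {}
next_id = 1

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
def create_item(item: Item):
    global next_id
    db[next_id] = item.model_dump()
    next_id += 1
    return {"id": next_id - 1, **db[next_id - 1]}

@app.get("/items")
def list_items():
    return [{"id": i, **data} for i, data in db.items()]

@app.get("/items/{item_id}")
def get_item(item_id: int):
    if item_id not in db:
        raise HTTPException(status_code=404, detail="not found")
    return {"id": item_id, **db[item_id]}

@app.put("/items/{item_id}")
def update_item(item_id: int, item: Item):
    if item_id not in db:
        raise HTTPException(status_code=404, detail="not found")
    db[item_id] = item.model_dump()
    return {"id": item_id, **db[item_id]}

@app.delete("/items/{item_id}")
def delete_item(item_id: int):
    if item_id not in db:
        raise HTTPException(status_code=404, detail="not found")
    del db[item_id]
    return {"deleted": item_id}
```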

    • @Vancha112
      @Vancha112 months ago +11

      @@SoyGriff Very much so :) I love them for learning new spoken languages too, i doubt there's a better tool other than actually practicing with other people. They have many uses, but the message I was trying to reinforce was neetcode's opinion on how they're not as advanced coding wise as they are made out to be.
      In your case, the crud part can be found basically anywhere, since so many people have already implemented it. For implementing specific business logic their usefulness basically depends on your ability to modularize the problem. If you can break down your problem in to small enough chunks that you can ask chatgpt how to implement them, you've already done a lot of the "programming" yourself.
      They're definitely useful in their own right.

    • @SoyGriff
      @SoyGriff months ago +2

      @@Vancha112 the crud part can't be found easily because it's specific to my project and yet it can generate it in seconds based on my instructions, it saves me a lot of time.
      i agree i'm doing most of the programming and just telling the AI to implement it, but that's the beauty of it. that's what AI is for. i only have to think and explain while it does all the heavy work. that's why my productivity increased so much since i started using it. i'm building in a month what my old team would build in 6, and i'm alone.

    • @technokicksyourass
      @technokicksyourass months ago

      Been coding for 20 years here. The point is.. even if you don't "need help" with that part.. the LLM will do the job faster than you can do it.. thus your productivity is improved. In my opinion, if you are not figuring out how to include LLM's in your workflow, you are going to be left behind by those that do. Is it a 10x increase? For tasks the llm can do.. it's much more than a 10x increase!

    • @themartdog
      @themartdog months ago +2

      It's not about "needing help with it" it's that it can do a bunch of tedious stuff in a few keystrokes rather than having to type it out

  • @draakisback
    @draakisback months ago +29

    The reason why they're lying is because of money. I'm a senior engineer and I've been in the industry for 25 plus years, LLMs just waste time for anything that isn't the most trivial app on the planet. The AI hype is based around a bunch of VCs misrepresenting the difficulty of programming/engineering for the sake of selling a product.
    I feel like the Twitter guy doesn't understand what 10X even means. If you can implement this stuff 10 times as fast then you can literally work for one day and then take the rest of the week off and no one will notice a difference. Naturally, I don't think he's a programmer in the first place which is probably why he sees a 10x improvement. This is just a big old case of the dunning kruger effect.
    The funniest part of all of this stuff is that it just doesn't make sense logically for these LLMs to ever become proficient at writing code. The reason why I say this is because you need to have enough data to train the LLM on every use case. But the problem is that there are plenty of use cases where maybe there's only one or two examples in open source. These AIs have no ability to create anything new and so there's always going to be a distribution where the majority of problems simply can't be solved by the LLM because it doesn't have enough data to understand those problems. At the same time, they'll become really proficient at writing todo apps because there are thousands of those.

    • @megatronreaction
      @megatronreaction 3 days ago

      sadly most non-tech employers have now started to underestimate engineers, just like my former boss, saying that my $500 salary as a fullstack dev is enough because AI can help. hahaha.

    • @megatronreaction
      @megatronreaction 3 days ago

      $500 monthly salary. flat. no bonus and not allowed to have a side dev job
      hahaha

    • @Bytedesigning
      @Bytedesigning 5 hours ago

      Like I've mentioned elsewhere it also really depends on the language, the more popular the language the easier or perhaps more options you can get, the less popular the language.. Well good luck getting AI to help you write in older languages or in custom ones. It's only sort of helpful anyway for problem solving because like you said it's based on examples already existing which might even be the wrong problem to solve leading you down rabbit holes if you don't realize it. The biggest problem with AI though is the cost of maintenance from both technical, and environmental viewpoints. It's like how some NFT's are supposed to "solve" climate change, good luck getting "green" AI.

  • @handleneeds3charactersormore
    @handleneeds3charactersormore months ago +113

    Every 4 years silicon valley gets caught being shady and people just Pikachu face through it like it's the first time

    • @wtfJunk
      @wtfJunk months ago

      Not limited to silicon valley. Any areas with great amount of cash flow are full of cheating and lying.

    • @betadevb
      @betadevb months ago +9

      I am old enough to remember when they told us the future of gaming is pikachu hunting with your smartphone camera.

    • @hammerheadcorvette4
      @hammerheadcorvette4 months ago

      @@betadevb I will never forget the incident in NYC near central park where someone yelled out "Vaporeon is here" and people jumping out of their vehicles to catch this Pokemon. IN NYC / CENTRAL PARK !!! th-cam.com/video/MLdWbwQJWI0/w-d-xo.html Vaporeon Central park Stampede

    • @daveliu8365
      @daveliu8365 22 days ago +3

      @@betadevb To be fair, I still play pokemon Go

    • @JohnCoughlan_JAC
      @JohnCoughlan_JAC 6 days ago

      This is my favorite hot take. Thanks for this! 😂😂

  • @magnetholik3618
    @magnetholik3618 months ago +22

    Q: Why would people lie/exaggerate like this?
    A: To generate traffic on their channel/feed/blog via hype, and/or because they have financial interests that benefit from hyping up the technology.

    • @thecollector6746
      @thecollector6746 3 days ago

      Basically your typical permutation of your typical social-media-borne scam.

  • @chun7798
    @chun7798 months ago +34

    Totally agree. That is the major difference between looking from a non-tech person and a tech person's point of view.
    From a non-tech person's point of view, they are now able to create a "non-working, working-looking site" (lol), whereas before, they would need to have a UI designer/engineer create it for them, which cost them money and meeting time.
    From a tech person's point of view, LLM is just a snippet where now I don't need to go to StackOverflow. Using it more than that is just wasting time as mentioned in your video.
    And the most hyped people that go around talking shit are the non-tech people who work for a tech company that knows nothing about systems but think they do so they start using these LLM tools thinking they can replace engineers..
    The worst part is that they use these tools to create so-called prototypes and then give it to the engineers to make it production-ready but don't know why it takes longer than the traditional way (*Cough CEO/Project managers Cough*)

  • @kedarkulkarni9464
    @kedarkulkarni9464 months ago +360

    The biggest issue with these LLMs is that they lose context SO FAST. 3 prompts in and you need to tell them again and again to keep in mind what you had mentioned in the previous prompts. I was using Chatgpt and Copilot for the leetcode problem "flight assignment" and I accidentally forgot to mention "Flights in this coding question" in my 3rd or 4th prompt and it started giving me Airline flight info. Which is completely bonkers because how could it think I am talking to it about Airlines instead of a coding problem that we were working on a few seconds ago!!

    • @PanicAtProduction
      @PanicAtProduction months ago +6

      You should increase token count.

    • @3_smh_3
      @3_smh_3 months ago +3

      @@PanicAtProduction it will be crazy. Even LSPs start to struggle on bigger projects.

    • @tomh9553
      @tomh9553 months ago +53

      I find the more I know about a specific task the less useful an LLM is. When I’m new to something it’s a great place to start by talking to chatgpt or something.

    • @sa9245
      @sa9245 months ago

      I'm not a programmer but I've made a few JS sites and Python apps for fun, and one thing I learnt to do is to start new chats. Once you get too deep it starts going batshit. Granted this is all very basic level, so it probably wouldn't help on anything too big or technical anyway, but basically if you spend some time starting new chats and being very specific and detailed with your prompts it does help. With Claude I'll tell it I've updated the files in the project knowledge section and for it to refer to the newest version. There are ways of getting it to stay on track but it probably is a waste of time for an actual programmer.

    • @Bobab0y
      @Bobab0y months ago +7

      lmao 😂 it giving you flight info is wild

  • @datchkh
    @datchkh months ago +337

    They’re lying because they’re trying to get rid of competition. Propaganda, basically. Yes, I know this sounds crazy, but it worked on me. When AI was first released, there were millions of videos and articles floating around about how AI was going to replace humans, and me, who was at the time learning how to code, gave up on coding because AI scared me. I chose to go on a different path. I’m sure there are more people who gave up on coding because of AI propaganda. Fortunately, though, stopping learning how to code didn’t have a big impact on me since I was 13 at the time and even though I wasted almost two years not learning how to code, I’m back at it and will not give up no matter what 💪 You shouldn’t either. AI will not replace software engineers. Period.

    • @abhishekraparti5691
      @abhishekraparti5691 months ago +15

      Understandable, felt that too. I'm a little decent at coding and felt totally replaceable when Devin was on the hype train.

    • @Illmare
      @Illmare months ago +19

      NGL I was gonna use AI as an excuse to finally quit capitalism and move to a mountain with my savings (not trolling, this was going to happen). But the only thing that ended up happening is that once again in the industry greed won, and with the layoffs a lot of us devs are being exploited af. We are trapped in the promise of a transition that will take decades, with CEOs who just want to keep on cutting numbers on one side and the AI bubble on the other.
      In the meantime tons of really good people cannot even find internships, because interviewers also fell into the AI bubble trap and are now asking freshly graduated kids to code Skynet on a blackboard in 15 minutes.
      The industry really sucks rn.

    • @diamondkingdiamond6289
      @diamondkingdiamond6289 months ago +1

      Good on you, I only managed to start learning at age 17

    • @anon3118
      @anon3118 months ago +2

      Why does your profile pic look like you're 35

    • @luiggymacias5735
      @luiggymacias5735 months ago

      It's not him, that's Robert De Niro from the "Taxi Driver" movie @@anon3118

  • @business_central
    @business_central months ago +242

    A saying that's always valid: "Stupid people are the loudest." That's how I see all those Twitter "influencers/founders" with their AI takes, LLMs, careers, etc... They need to get good themselves before talking. Wake me up when Primeagen agrees with their nonsense.
    Good take Neetcode!

    • @OCamlChad
      @OCamlChad months ago +5

      Except it's the opposite. Most of these takes against AI for dev productivity are from people who haven't progressed beyond senior engineer, including Primeagen.

    • @cipher01
      @cipher01 months ago +25

      ​​@@OCamlChad maybe because the AI itself has not progressed beyond the level of an intern.

    • @cipher01
      @cipher01 months ago +2

      @@OCamlChad since that's not good enough for you, maybe you should ask John Carmack next time.

    • @NeroX-nh8se
      @NeroX-nh8se months ago +3

      You are definitely right... Now, how do we shut Elon up?

    • @nou4605
      @nou4605 months ago +9

      @@OCamlChad Lol okay Mr Junior Dev

  • @threeone6012
    @threeone6012 months ago +8

    Pre LLM => Devs expected to work 45 hours per week.
    Post LLM => Devs expected to work 60 hours per week.

  • @roelljr
    @roelljr 12 days ago +5

    You nailed it. Non-technical people don't understand that the last 10% takes 90% of the time… and the problem with developing with LLMs is that they give you the impression you are almost done, when you still have a long way to go. And if you started with an LLM and don't know what you are doing… good luck with that last 10% 😂

  • @arbitrandomuser
    @arbitrandomuser months ago +38

    i like your style, calm, composed and very genuine and non-toxic,
    you know you're seeing bullshit yet you respond respectfully and give everyone the benefit of the doubt.

    • @Neomadra
      @Neomadra months ago

      I wouldn't call him calm, rather hysterical. :D

    • @birthdayzrock1426
      @birthdayzrock1426 months ago +3

      @@Neomadra that's not hysterical at all

  • @wondays654
    @wondays654 months ago +143

    Ah yes, the old AI made me a 10x engineer. It's always cap...chances are that these individuals that claim this are the ones who push absolute dog water to production because they don't actually understand the code or know how to debug. Personally if I'm prompting this LLM to write something and then having to double-check it and if it is wrong prompt it again and do that whole process till it gets it right, It would have been faster if I did it all myself in the first place.

    • @ReiyICN
      @ReiyICN months ago +4

      I don't know man. Personally, I personally find that its much easier to edit 'half-way there' code, than to write from scratch. It might take a while to get used to the peculiarities and bad habits of the LLM and figure out what's the best point to stop prompting and start coding by yourself, but once you figure it out, I do find that relying on AI makes me a lot more productive. Not 10x but definitely at least 3x on a good day. (Although there are obviously also bad days where its barely 1x). I find that its great at data visualization code, complicated refactorings, explaining an existing (not too large) project I'm trying to get started with, and basically speeding up any annoying, slightly complex, tedious process. And it really shines for quick dirty projects in languages you're unfamiliar with (need to google how to init array) but can read just fine once the code's there in front of you, since you can basically just wing it, as long as you've a LLM to get your back.

    • @Titere05
      @Titere05 months ago +11

      @@ReiyICN Oh boy I'd never ever rely on AI for "complicated refactoring". Sounds strikingly similar to shooting yourself on the foot. To be fair I've only found AI useful for common boilerplate you don't want to write, or in the case of copilot when you're creating a structure, it is quite good at completing the structure, for example switch or else statements

    • @retardo-qo4uj
      @retardo-qo4uj months ago

      @@ReiyICN more like 1.3x
      Even if an LLM can do 90% of the code, the other 10% will take 90% of your time. It's the "last mile problem" pattern

    • @SoyGriff
      @SoyGriff months ago +2

      The issue with a lot of people is PROMPTING, you don't prompt LLMS. You don't have to find the correct prompts or keywords. You just talk to them as if they were a human being, a dumb one. It works really well in my experience.
      It's better to write 2 paragraphs explaining what you want than trying to make it work 10 times while only writing basic prompts and not providing the whole context

    • @gabrielpauna62
      @gabrielpauna62 months ago +5

      @@SoyGriff some projects are too complex to explain the entire context; I find once I've explained it I already know how to solve it anyway

  • @JimBob1937
    @JimBob1937 months ago +14

    That's been my experience as well. Even with snippets, it works best when I effectively solve the core logic first and just ask for code, or give it complete code and ask for suggestions. For anything beyond snippets, I've spent more time holding the LLM's hand to not go x or y route, and eventually just figure it out myself. LLMs are definitely far, far away from getting to the point where a lot of people praise them, like 10xing. They definitely are very handy tools, but have a lot of limitations.

    • @strigoiu13
      @strigoiu13 months ago

      so, you basically trained it for free...good job! do it more often, please, we love free work!

    • @JimBob1937
      @JimBob1937 months ago

      @@strigoiu13 , I did not. You have the option to remove your data from being part of the training set. Then, for security purposes, I delete conversations as well. Even then, they have plenty of training examples from other sources.

    • @somenameidk5278
      @somenameidk5278 months ago

      @@strigoiu13 Also, if it actually learned from its users automatically, it would be saying slurs constantly within days of launch. We've seen that happen to chatbots like that repeatedly.

    • @Bomberman66Hell
      @Bomberman66Hell months ago

      ​@@strigoiu13So what? It was useful to him, sounds like a fair trade

  • @sansu1947
    @sansu1947 months ago +27

    agree with what you're saying, i've been doing software for +10 years and I do think it has made my productivity go like 10x up, but the difference is that I know what I need and I use chatgpt 4o as a rubber duck, specially when doing architecture decisions and tradeoffs I have a vague idea of lets say 3 different ways of building x product so I just ask for pros/cons, describe my ideas and so on and it works. The thing that i've noticed is that if I spend + 2 hours discussing/bouncing ideas with an LLM it becomes stale really fast, forgets my previous input and just hallucinates, but as an initial technical document writing or small shit like basic components it works VERY good

    • @kell7689
      @kell7689 months ago +5

      This. I agree with this a million times over. I treat it like a rubber duck that has 130 IQ. At the end of the day it's *my* hand that is writing the code. The LLM just provides input and feedback. The claim made by the tweet OP is definitely exaggerated, but if you strip out the hyperbole and 'zoom out' a little, it's pretty realistic.

    • @6lack5ushi
      @6lack5ushi months ago +1

      It’s about pain vs complexity.
      Like he said, if it can handle snippets it can handle big projects in chunks. That's how I use it. I edit more code than I write, but my jumping-off point is always an AI.
      It just physically writes code faster… I can do the thinking and editing but it writes 500-1000 lines a min.

    • @rocketPower047
      @rocketPower047 months ago +2

      The problem with this video is that he starts with his emotional opinion and then finds examples that prove him right

    • @vash47
      @vash47 25 days ago

      @@rocketPower047 literally autistic

  • @dark_lord98
    @dark_lord98 months ago +10

    Everybody nowadays is a "Founder" or "building X" with no technical background; a few years ago it was the hype ride with no-code tools and now it is with LLMs.

  • @philadams9254
    @philadams9254 months ago +83

    Yes, the problem is that they can get close to the spec you give them, but it's not close *enough* and has to be rewritten. This has been frustrating for me many times where I tell the LLM to change one small detail and it goes round in circles before finally admitting something can't be done without starting from scratch. Huge waste of time in a lot of cases

    • @wforbes87
      @wforbes87 months ago +12

      Pretty sure my project manager could say the same about our dev team 😂😂

    • @robotron26
      @robotron26 months ago +2

      That's part of you learning
      If you learn what tools exist or what libraries can actually do, it should be able to help you code it just fine
      It's literally translating your prompt from English into code
      You asking it to do something impossible is partly your fault

    • @robotron26
      @robotron26 months ago

      @@wforbes87 100%
      As someone using it to write basic code, it's a godsend, i don't need to wait a day or submit a ticket or whatever to have to talk to an engineer
      These guys are vastly overestimating the amount of mundane work that goes on outside of faang lol, most coders or code jobs are not frontier

    • @philadams9254
      @philadams9254 months ago +8

      @@robotron26 Sure, but the LLMs sometimes think something impossible is in fact possible and lead you on.

    • @playversetv3877
      @playversetv3877 months ago +4

      i dont think you used it properly..

  • @cjjb
    @cjjb months ago +50

    Important to keep in mind that a lot of the hype is either manufactured by folks that have invested a lot of money into the current AI boom or folks that have fallen for said marketing.

    • @Titere05
      @Titere05 months ago +6

      The only people who can fall for said marketing are those who haven't actually tried the product. The rest, like the guy writing this article, they're CLEARLY stakeholders. I bet this guy bought some Anthropic stock beforehand, or is just a paid actor

  • @Jayveersinh_Raj
    @Jayveersinh_Raj months ago +22

    Well, I am an LLM engineer, and truth be told, as Microsoft too responded after the Copilot outrage, these are just tools to help professionals in their domains. People from non-programming or beginner-level programming backgrounds always get it wrong; they get baffled by the smallest code snippet. If you have no knowledge of the background of the task you are trying to solve, an LLM sure can waste your time; they are specifically designed to assist people with background. An LLM sure can save you time and help you as a tool, but it is not intended to replace an engineer, and the number 10x is exaggerated. It is the current state of the art, but that does not mean LLMs won't be better in the future. As a personal example, I use LLMs all the time to prototype by creating interfaces, but I do have a degree in Computer Science, and many times I have to rewrite prompts. Overall I can say it saves you 1.5x to 2x time at most, maybe more on some rare occasions, but that cannot be generalized.

    • @matthewwoodard9810
      @matthewwoodard9810 months ago

      This. If you know what you're doing and looking at, and you understand what LLMs are and are not, they are fantastic, fantastic tools for speed and productivity. They are insanely helpful for documentation. The code and documents aren't perfect, but I can iterate so fast that ultimately I've pushed better code, and faster

    • @steamerSama
      @steamerSama months ago

      a knife in a chef's hands is not the same as one in a child's hands.

    • @kocot.
      @kocot. months ago +1

      IMO it's the exact opposite, if I work on an existing project, know the tech stack well and the duplication is reduced, the gain from LLM is really minimal or even negative. Negative, cause accepting and modifying suggested solution often ends up time-wise worse than just doing it from scratch, you can also pass bugs that you'd not do, but you won't notice in generated code. Also sometimes I make up a special cases for copilot to prove itself, cause it's kind of satisfying.... lol
      It's different when prototyping, working with unknown tech stack or where duplication is by design (huge CRUD services), or inherited as bad design, or for e.g. unit testing where simplicity and duplication is desired. And I love copilot for Powershell, exactly because I don't know it well, it's 10x speed up in some cases there, and 5% in my core activity.

    • @matthewwoodard9810
      @matthewwoodard9810 months ago

      @@kocot. that 100% makes sense. At my job, I'm normally prototyping or building from scratch

  • @MightyMoud
    @MightyMoud months ago +17

    It's actually quite simple man, non-technical people don't really understand the complexity of the application. They see it looks the same, so it must be the same!
    Edge cases?! what is that

    • @playversetv3877
      @playversetv3877 months ago

      if i dont know what edge case is, then i ask the ai ? simple. its so funny why are all the comments like this? calm down

    • @MightyMoud
      @MightyMoud months ago

      @@playversetv3877 yeah good luck with that!

    • @scythazz
      @scythazz months ago +1

      You actually think the ai can give you a viable answer to this? At some point, its time to use ur own brain to solve problems.

  • @theonewhobullies
    @theonewhobullies months ago +11

    The thing is, AI is like a fast worker that catapults to the answer quickly, so you have to steer it with the correct type of questions so it is not ambiguous in its output. I had to code a task component with some features (add a task, remove them with a cross button, add a due date with a calendar, etc.). I had its Figma file and gave Claude 3.5 all the details to remove ambiguity, and it made a surprisingly good boilerplate component, as I knew its training data would have something similar.
    For run of the mill tasks it is a game changer but for something requiring a spark of imagination (nil training data) it fails pretty badly.

    • @minhuang8848
      @minhuang8848 months ago +3

      Meh, even now it can do pretty hefty work requiring extrapolation from underlying training data. And I feel like the naysayers are really oblivious how insanely easy it suddenly is to really lay out boilerplate code - where 10x speed-up alone is a fierce underestimation on account of how much stupid stuff you have to type again and again. Don't even get me started with customization and style templates, there are sites that would take an entire afternoon to properly put together that, with some finagling, Claude 3.5 gets done in 15 minutes.
      I mean, sure there are tasks that are not going to see great performance, but what does it matter when the core hypothesis is clearly correct? People who know how to code aren't getting speed-ups? What are people even talking about when the very people who already know how to structure and code their projects are the ones with the biggest upside here? Just because you're an elite "10xer" doesn't mean you're awfully quick with the mundane, boring type-work. Just a matter of physical limitations, you bet your ass they already have a fairly vivid idea of how to implement a given stack... and having a model take that rough abstraction and spit out code you can just read and QC is definitely providing massive improvements.
      And for non-coders? Same deal, it's not like they ever freaking put out anything. Their prompt-behemoths might be clunky and awful as far as idiomatic code is concerned, but it's not like they ever got close to putting together any arbitrary game or site or database - whatever. Same for down-the-middle-of-the-road kind of devs who just aren't that great, i.e. 95 % of all worker in almost any domain. They barely do routine maintenance well, but at least they have heuristic and kind of manage in their everyday job. Most of them do not code in their free time or put together hobby projects, mostly because they're slightly sick of the job they never really got great at. Now they at least do something, and doing something, regardless of whether some magical entity produces the source for you, will always cause you to learn... at least more than they did before.
      Also: again with all these moving goalposts. It was kind of astonishing to look at three years ago - i.e. mostly incoherent code that at least looked like it does something. People dismissed it as never being a thing in a decade (with the estimates becoming more ridiculous the more you're dealing with programmers), none that produces function code. Then it produced functioning code. Then it produced functioning code while keeping pretty nice consistency and context over the course of 20-30 macro prompts, asking the LLM to change or add some sort of functionality.
      Maybe we're not there yet, but honestly, it's also kind of a moot discussion. There absolutely are 10x efficiency improvements for certain coders, and arguably much larger ones for the significant majority of software engineers. Barely even debatable - and that is ignoring all the benefits of having a model at your fingertips that might explain architectures to you or help you understand why some code is doing one thing over the other.

    • @TheMrKeksLp
      @TheMrKeksLp 9 days ago

      There are around 10 gazillion implementations of "my first task list" on GitHub, of course it managed to do that. Now ask it to design an async cache that fits into the constraints of your existing application...

  • @majesticwizardcat
    @majesticwizardcat months ago +2

    I think 90% of my job is figuring out how to solve the issue I have, 5% - 6% is bug fixing and testing what I added and the rest is typing code. I think, even if I could magically have all the code in my head on my computer in like a second, it would save me a couple of working hours per week. I think the people who create these tools don't actually understand what programmers actually need. If for example I could have an AI that quickly tests my code, then we could start talking, probably would save me lots of time.

  • @briangreenberg5986
    @briangreenberg5986 months ago +6

    I work a lot on writing quick utility tools and API integrations for enterprise tool marketplaces, and this is extremely useful for making hyper-specific private apps to help a team handle one tiny piece of straightforward automation + hooking together a couple of APIs + maybe a super quick interface. LLMs are really powerful for things like this and have prob made me 10x faster for certain easy but tedious tasks.
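
A minimal sketch of the kind of tiny glue automation described above: pull from one API, filter, and push a summary to another. Both endpoints are hypothetical placeholders, and a real version would need auth and error handling.

```python
import requests

SOURCE_URL = "https://example.com/api/tickets"       # hypothetical source API
WEBHOOK_URL = "https://example.com/api/team-webhook"  # hypothetical destination

def sync_open_tickets() -> int:
    """Fetch tickets, keep the open ones, and post a summary to a team webhook."""
    resp = requests.get(SOURCE_URL, timeout=10)
    resp.raise_for_status()
    open_tickets = [t for t in resp.json() if t.get("status") == "open"]
    summary = {"text": f"{len(open_tickets)} open tickets", "tickets": open_tickets}
    requests.post(WEBHOOK_URL, json=summary, timeout=10).raise_for_status()
    return len(open_tickets)

if __name__ == "__main__":
    print(sync_open_tickets())
```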

  • @mountains4000
    @mountains4000 months ago +53

    Man those replies and comments are AI themselves. :(

    • @anamuslim6700
      @anamuslim6700 months ago +11

      That's what I thought. Bots are hyping things up deceiving humans into the trend. If you tell a lie too many times...

    • @ltpfdev
      @ltpfdev months ago +9

      I'm starting to think that the bots are manufactured by twitter. What benefit does anyone outside the company have to run bots that respond to posts like that. Not to mention the captcha when registering, I literally could not pass it myself after like 3 tries of having to get 20/20 answers correct to the point that I gave up. Maybe I'm stupid and AI can solve that better than me, I don't know, seems fishy. It's probably 90% of posts I see are AI.

    • @Titere05
      @Titere05 months ago

      @@ltpfdev Well if these are indeed Twitter's own bots, then they'd just bypass the captcha and probably post via API

    • @SItgix
      @SItgix months ago +2

      what even is real anymore ;(

  • @a_name_a
    @a_name_a months ago +25

    It is overhyped but at the same time it does do my work much faster. It can't build entire systems or even big parts of a system, but It can work on small parts. For example, writing simple functions, components, UI elements etc. I mainly use it to speed up my work, instead of coding, its mostly me checking over the code generated and fixing small things. Sometimes its frustrating and gets it very wrong, but I usually just have to fix the prompt. Overall its definitely sped up my work flow, maybe not 10x, but 2-3x is reasonable.

    • @robotron26
      @robotron26 months ago +6

      And that's enough for it to be a massive change
      Now your company can make you do 3x the work instead of hiring one or two more people
      They will absolutely do that
      And AI will advance more to the point where eventually, you will not be needed
      There are no arcane or unknown laws of coding libraries, they are all manmade and documented, the AI will get better

    • @technokicksyourass
      @technokicksyourass months ago

      You're thinking in zero-sum terms. The demand for code will simply increase... the reality is.. most companies want to use a lot more software than they currently do.. so they will simply create more applications and better tools for users.

    • @artemv3160
      @artemv3160 months ago

      @@robotron26 And your point is? If developers get replaced, it would mean that pretty much all intellectual jobs are done.

  • @tlilmiztli
    @tlilmiztli months ago +10

    Man, I have been an artist for nearly the last three decades and I feel exactly like you when I listen to other artists praising using LLMs for art. I tried many image generators and they work great... for people that just want a random picture to jump out of them :D The more specific the need is, the more problems with generating even a simple picture. You will just waste time trying to describe what you need and getting random pictures that are sometimes not even remotely connected to what you need. And that's just simple pictures. When it comes to 3D models, LLMs are laughably simplistic. I see so many YT videos where people are AMAZED by the results, while showing something absurdly simple and still in need of manual fixing. They can't even get good topology, and people keep on talking about how this will replace us. More so, some people are claiming that they already lost a job to a generator... HOW? What the hell were they doing? How simple was the thing, that they could be replaced by something so deeply flawed? I recently started to learn a bit of coding for a simple proof-of-concept game I am making. I didn't even try an LLM because I don't want to waste time. I'd rather ACTUALLY LEARN and understand how code works instead of trying to copy-paste it and then repeat it 1000 times because something isn't working and I won't know why, while the LLM tells me "ow, I am sorry, let me fix it. Here's an improved solution!" and then spits out something wrong once again :D

    • @manoflead643
      @manoflead643 28 days ago +1

      The generator doesn't have to actually be good to replace people, see. All it has to do is be shiny enough for the people marketing it to convince people's upper management that it can replace them. Or be a convenient excuse to have mass layoffs and rehire at lower price or overseas.

    • @megatronreaction
      @megatronreaction 4 hours ago

      @@tlilmiztli hi fellow artist, before the AI hype I fortunately had already switched to fullstack dev, and I think being artistic gives something different to what you build.

  • @user-j5ja95
    @user-j5ja95 months ago +45

    I wish i had a roommate/friend like Neet, i would hang out with him all the time. Imagine all the intellectual conversations and debates you'll be having. Being around someone like him I feel like would make me smarter :x

    • @MS-rw3rh
      @MS-rw3rh months ago +2

      stop it, smart mouth!

    • @NeetCodeIO
      @NeetCodeIO months ago +45

      I appreciate that but I think you would get tired of all my dick jokes 😂

    • @box-mt3xv
      @box-mt3xv months ago +1

      ​@@NeetCodeIOlmao

    • @Antnix732
      @Antnix732 months ago +7

      Wow just put it in your mouth already

    • @chrischika7026
      @chrischika7026 months ago +6

      thats insane glaze

  • @kimbariri
    @kimbariri 4 hours ago

    As a student currently pursuing a master's degree in data science - who had over 10 years of web dev work experience in the past... I agree with you. The sad thing is, CEOs and managers who have never placed their hands on actual 'operations' are yapping about LLMs. They literally have threatened people with massive layoffs. So many layoffs were made already due to their beliefs. Later, the companies that made huge layoffs keep showing recruiting ads constantly over months. Seems like they are still struggling to fill the gaps.

  • @chrisreed5463
    @chrisreed5463 months ago +6

    I'm an engineer in my fifties. I've used GPT4O to help me control our test and measurement equipment from inside Excel. We already use semi-automated Excel templates to produce certification.
    I am fairly handy with VBA in Excel. But what I am now doing with automation is something I would never do without an LLM. I barely have the time to do my job. I most certainly don't have the time to learn to use the APIs that GPT4O 'knows'.
    So bear in mind the transformative nature of this new technology for those of us who use coding as just one of the tools in the box, and not their main skill base.

    • @ryan-skeldon
      @ryan-skeldon months ago +2

      Sounds like your company should hire a SWE to work on better tools for you so you can focus on your job.

    • @isaac10231
      @isaac10231 18 days ago +2

      @@ryan-skeldon You'd be shocked how many companies are reliant on 20 year old excel files that just do all the data collection. It works and it works well, esp if they have really old equipment that's difficult to interface with.

  • 1 day ago

    Could not agree more. i was working as a consultant for this mega insurance company and the leadership wants us to use LLMs everywhere. They (from what i understood of the conversation) want to fire the whole data science team they have (except for us data engineers) and use an LLM to train, test and deploy using AI. i was like, do you even understand what an LLM is? People nowadays, when they hear the word AI, think it's a genie with unlimited wishes.
    this senior manager literally said in the meeting "all you guys have to do is tell the AI to clean the data, extract the features and train a good model" - these business grads really think it's that easy.

  • @7mada89
    @7mada89 months ago +33

    I think this is the expected behavior of non-technical people, they will be defensive and want to believe that they can do anything a software developer/engineer can do with the help of LLM,
    it's just human nature

    • @vishnu2407
      @vishnu2407 months ago +23

      Lmao it's even worse than that, they believe software engineers and programmers are gatekeeping coding from the common people lmao

    • @7mada89
      @7mada89 months ago

      ​@@vishnu2407 Yeah exactly 😂

    • @Titere05
      @Titere05 months ago +5

      It's not human nature, it's what they've been told by the people they're paying for the service. The error is blindly believing what the salesmen tell you

    • @StSava-zm8tf
      @StSava-zm8tf months ago

      it's expected behavior of people with no common sense and a thought process of an elementary school kid on a good day.. which describes most of these parasites "working" in management

    • @J3R3MI6
      @J3R3MI6 months ago +3

      Sad part is you’re all wrong… LLMs will create a Revolution where non-technical founders CAN build a company, one that will rival companies as large as Microsoft and bigger. 💎

  • @EpicVideos2
    @EpicVideos2 months ago +5

    You're definitely using it wrong if it makes you slower, not faster. Here's how to use it properly:
    1. Decide yourself what the file should do; consider the design choices, technologies, structure.
    2. Write up everything you thought of in step 1, as bullet points.
    3. Provide pseudocode for anything non-boilerplate.
    4. If you have another file in the project with a structure or code style you want maintained, provide that as context.
    5. Use either GPT-4, Claude 3.5 Sonnet, or DeepSeek Coder v2 to generate the code.
    6. (not yet readily available) Write test cases, and use an AI coding IDE to iteratively debug its code until it passes the test cases.
    As a person who has many years of experience coding in Python, but doesn't know every library under the sun and every bit of syntax perfectly, the LLM's ability to write bug-free code is amazing. I am at least 2-3x faster with it.
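
A minimal sketch of steps 1-4 from the list above: collect your own design decisions, pseudocode, and an existing file as context, then assemble one structured prompt to hand to the model in step 5. The file name and example requirements are hypothetical.

```python
from pathlib import Path

def build_prompt(goal: str, design_points: list[str], pseudocode: str, style_file: str | None = None) -> str:
    """Assemble design decisions, pseudocode, and an optional style reference into one prompt."""
    sections = [
        f"Goal: {goal}",
        "Design decisions (follow these exactly):",
        *[f"- {point}" for point in design_points],
        "Pseudocode for the non-boilerplate parts:",
        pseudocode,
    ]
    if style_file and Path(style_file).exists():
        # Step 4: include an existing file so structure and code style are maintained.
        sections += ["Match the structure and style of this existing file:", Path(style_file).read_text()]
    sections.append("Return only the complete code for the new file.")
    return "\n\n".join(sections)

if __name__ == "__main__":
    prompt = build_prompt(
        goal="A module that validates and normalizes user-supplied URLs",
        design_points=["Pure functions only", "Standard library only", "Raise ValueError on invalid input"],
        pseudocode="def normalize(url): parse -> lowercase host -> strip default ports -> rebuild",
        style_file="existing_module.py",  # hypothetical path
    )
    print(prompt)  # paste into GPT-4, Claude 3.5 Sonnet, or DeepSeek Coder v2 (step 5)
```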

    • @wpdfreak
      @wpdfreak months ago +1

      Yeah a lot of people get off talking about what it can't do instead of just utilize it lol!!!

    • @inmydelorean6025
      @inmydelorean6025 1 day ago

      The point is that for a non-technical person the LLMs are useless even in that limited context. Because to get to step 5 you need to be a programmer.

  • @hansu7474
    @hansu7474 months ago +20

    I can't agree with you more on this. Even in my company, there are engineers heavily depending on LLMs. And I can see that their coding skills stopped developing. I told them it's good to use LLMs as a source of information and to generate snippets of code (which is often faster than using Google, but not always better), but to try to master the basics for themselves (basic here means that mastery of often used language syntax, commands, tools, and algorithms skills and so on). But they won't listen.
    And the same thing I see on X. But what's shocking to me is that some people who should know better, like Garry, are promoting it.

    • @usernamesrbacknowthx
      @usernamesrbacknowthx months ago +6

      Just don't use LLMs for anything period, an inherent part of learning is being able to determine what sources of information are accurate and time efficient and which ones are inaccurate and waste your time.

    • @DankMemes-xq2xm
      @DankMemes-xq2xm months ago +4

      @@usernamesrbacknowthx Ehh, I wouldn't go so far as to say never use them. I've had some success copy pasting a broken snippet of code into Gemini and asking "why doesn't this do X thing", and occasionally getting a response that either solves the problem or points me in the direction of solving it.

    • @QuantumVoid-ro3hi
      @QuantumVoid-ro3hi months ago

      Your comment doesn't really have anything to do with the video.

    • @GoblinUrNuts
      @GoblinUrNuts months ago

      Intricacies of languages in coding will be less and less relevant each year.
      Focus on architecture

    • @minhuang8848
      @minhuang8848 months ago +3

      That's utter nonsense. First off:
      >Even in my company, there are engineers heavily depending on LLMs. And I can see that their coding skills stopped developing
      You frame it like you have close-to-perfect recall of your coworkers, who does what with LLMs and, most of all, you track their performance and, allegedly, can "see that their coding skills stopped." That's like saying your writing skills stagnate because you're reading too much, never mind the fact that no dev just completely stops coding despite having LLMs available to them.
      The very fact that they are still working with you should clue you in that most of them perform just as well as before, likely while either doing much less actual work themselves or while being way more productive - regardless, all you said is just the opposite type of wild conjecture biased folks would lap up immediately because it affirms their (vaguely frightened) intuition about how bad this thing is. You have no metric for how much LLM devs' abilities degrade, for how much this allows them to even learn in the first place, nor do you know whether they should be listening to you anyway.

  • @AbishaiSingh
    @AbishaiSingh months ago +1

    Totally agree. It cannot develop medium or hard projects. The way I use LLMs is first to architect the project and break it down into smaller manageable chunks. Once done with that, ask the model to code those pieces, with specific interfaces. With current capability, LLMs cannot replace developers.

  • @nadavvvv
    @nadavvvv months ago +16

    i agree with the first tweet after trying to work on some project using claude 3.5. it is true it doesn't get complex stuff like your entire app, but if you just constantly ask it questions about small parts it gets those small parts done very fast. for example my UI was very bad so i took a screenshot of it and gave it that and the code for the component and told it to make the UI better and it just did it in 1 try. same with asking for specific small changes one at a time. you don't ask "write an app that does x" but write "change this function to also do y", and then it does it way better if you give it the minimal context that's actually necessary instead of the entire app
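
A minimal sketch of that "small parts, minimal context" approach: send the model only the one function that needs changing, not the whole app. The function and the requested change are hypothetical examples.

```python
import inspect

def total_price(items):
    """Sum the price field of each item dict."""
    return sum(item["price"] for item in items)

def minimal_context_prompt(func, change_request: str) -> str:
    # Only this function's source goes to the model -- not the entire codebase.
    return (
        "Change this function as described, and return only the updated function.\n\n"
        f"{inspect.getsource(func)}\n"
        f"Requested change: {change_request}"
    )

if __name__ == "__main__":
    print(minimal_context_prompt(total_price, "also apply an optional per-item discount dict keyed by item id"))
```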

    • @technokicksyourass
      @technokicksyourass months ago

      The people that succeed in this industry are the ones that embrace change and figure out how to use new tools. I still know people that use oldschool vi in their coding, and never adopted IDEs.. or said git offered "nothing new". In reality.. these folks simply didn't want to do the work to learn new things.

    • @playversetv3877
      @playversetv3877 หลายเดือนก่อน

      Yeah, that makes more sense. Just be specific; how hard is that?

  • @sand-barry
    @sand-barry หลายเดือนก่อน +1

    the engagement-based payout model on social media platforms is proving to be quite the catalyst to the enshittification of the internet

  • @hemanthsavasere934
    @hemanthsavasere934 หลายเดือนก่อน +4

    LLMs and LMMs are currently effective for generating boilerplate code or providing insights into topics I'm unfamiliar with, without needing to sift through documentation.

    • @playversetv3877
      @playversetv3877 หลายเดือนก่อน

      Yeah, so much better than documentation, because documentation has so many gaps in its knowledge.

  • @marcoelhodev
    @marcoelhodev 2 วันที่ผ่านมา

    I mostly agree. I think LLMs are very good at writing skeleton code or simple snippets if you can actually describe correctly what kind of UI/code you want, but anything more complex than a basic CRUD is beyond any AI. Not to mention the hallucinations crapping all over the code; if you don't know your code, you are going to get very weird errors.

  • @AliHamza-lt2xu
    @AliHamza-lt2xu หลายเดือนก่อน +25

    Totally with you, Neetcode! The AI hype is like calling a microwave a personal chef. Thanks for cutting through the noise!

    • @ardnys35
      @ardnys35 หลายเดือนก่อน +1

      ooh i like that analogy

    • @minhuang8848
      @minhuang8848 หลายเดือนก่อน +2

      @@ardnys35 that's... an awful analogy, considering LLMs execute more than a binary function on your "input," as it were. LLMs are like CNC machines or 3D printers for arbitrarily abstract concepts, but you're probably not going to acknowledge that fact if you legitimately compared them to microwaves in the first place. Literally the only commonalities are that you need electricity and press a button at some point. That's where the analogy breaks apart, pretty much immediately. The irony, of course, being that any modern LLM is way better at coming up with suitable analogies.

    • @ardnys35
      @ardnys35 หลายเดือนก่อน +2

      @@minhuang8848 It's a good analogy if you look at it this way: a microwave heats up food. You can heat up leftover pasta, you can heat up microwaveable food. It's alright and it'll fill the stomach, but it's not that great compared to the food you make in an oven or on a stove.
      LLMs will give you small working code snippets, but they won't solve your complicated application, and they don't come up with novel ideas. In that sense it's a microwave: you give it a prompt and it gives you mediocre code in a short time.
      Making food yourself is like programming, while pushing a button on a microwave is just like prompting.
      I just don't see how LLMs are like CNC machines or 3D printers. If anything, they would be helpless and inconsistent CNC machine or 3D printer operators. I don't see them as tools in that sense, perhaps assistants at best.

  • @angelazhang3393
    @angelazhang3393 26 วันที่ผ่านมา +1

    I found your rage very entertaining. I would definitely not consider myself near a solid programmer, but have enough qualifications to "have my own thought every once in a while." LLMs have sped up my work tremendously, but it's because I prompt specifically for use cases and I break those use cases down into single components. I also read the code to gauge an understanding if it's optimal or not. A lot of times, it's great in bridging the gap between what I know and coding syntax. I do agree that if I absolutely knew next to nothing about what I'm coding, it's borderline useless.
    Also, I definitely would not waste an entire day just prompting an LLM. Especially if I had my own company to build.

  • @marksmod
    @marksmod หลายเดือนก่อน +18

    It's not 10x faster, but it is often around 1.2x to 2x, depending on the level of expertise you have with the programming that needs to be done.
    Doing stuff like:
    - "I have this code {code paste here} and I want to test it for x, y, and z; write a unit test for it."
    - "Rewrite this code to do the same thing but async, or for a different kind of object, etc."
    - "Write an algorithm for this class {paste code} which should do: {something boilerplate-y}."
    - A lot of graph rendering done with python/matplotlib is, IMO, way faster doing a first draft with an LLM and then optimizing certain things, as opposed to reading documentation. If I last used matplotlib 6 months ago to plot a scatter plot with color-coded disks, I won't remember that the colormap param for the scatter function is called cmap, for example (a quick sketch of this case follows below).
    - Porting code between languages (yes, it still makes sense to read and test it).
    The list isn't really exhaustive.
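    A minimal sketch of the matplotlib case mentioned above (the data is made up; the point is only that the color-mapping argument to scatter really is called cmap):

    ```python
    import matplotlib.pyplot as plt
    import numpy as np

    # Fifty random points with one value per disk, mapped to a color scale.
    x = np.random.rand(50)
    y = np.random.rand(50)
    values = np.random.rand(50)

    plt.scatter(x, y, c=values, cmap="viridis", s=80)  # color-coded disks
    plt.colorbar(label="value")
    plt.show()
    ```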

    • @Terszel
      @Terszel หลายเดือนก่อน +2

      Agree on all of these, especially porting code. I'm very familiar with C and Python but my Go is very rusty, but I can have it convert entire parsing pipelines from Python into Go with minimal issue. It's a godsend

    • @3pleFly
      @3pleFly หลายเดือนก่อน

      Bro I kid you not I thought the same as you, but recently I have been getting so frustrated with it not being able to complete even these simple tasks optimally.

    • @qbek_san
      @qbek_san หลายเดือนก่อน +4

      ChatGPT made my work slower yesterday. I tried to use Python to fill product descriptions in a .csv file using the ChatGPT API, but the code it gave errored, and it couldn't find a solution and fix it. I had to read the documentation for the library I was using and found out my .csv file was separated by semicolons, not commas, which had to be properly configured in Python's csv tool. I would rate that kind of task as easy, yet the LLM failed.
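      That gotcha is a one-line fix in Python's csv module once you know about it. A minimal sketch (the file name and column names here are made up for illustration):

      ```python
      import csv

      # A "CSV" exported with semicolons needs an explicit delimiter,
      # otherwise every row comes back as one un-split field.
      with open("products.csv", newline="", encoding="utf-8") as f:
          reader = csv.DictReader(f, delimiter=";")
          for row in reader:
              print(row["sku"], row["description"])  # hypothetical columns
      ```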

    • @GeometryEX-hp9zs
      @GeometryEX-hp9zs 12 วันที่ผ่านมา

      @@qbek_san Sometimes working on your own is the most efficient way to go. AI still has huge recognition problems. It's not advanced.

  • @opstube
    @opstube 8 วันที่ผ่านมา

    I think you nailed it pretty much. The people who claim such things, are doing trivial to easy stuff. The best use of AI currently is to selectively toggle it on when you are about to write some boilerplate that exists already in the code a couple of times.

  • @almightyzentaco
    @almightyzentaco หลายเดือนก่อน +9

    I think Claude really helps speed things up in a few ways. It helps as another pair of eyes for bugfixes. It helps when you have no idea how to even get started in a sphere. It's really good at variable and function naming. And it can type faster than me, so I can often tell it exactly what I want a function to do and it will be done about twice as fast as I could write it. Claude is not going to write your app, but it is a pretty good copilot.

    • @GoodByeSkyHarborLive
      @GoodByeSkyHarborLive หลายเดือนก่อน +1

      So it's better than GPT and Copilot? What are those good at?

    • @almightyzentaco
      @almightyzentaco หลายเดือนก่อน +1

      @GoodByeSkyHarborLive Yes, it's better than GPT. GPT is still useful, but Claude seems much more with it and able to correct its mistakes where GPT gets things wrong a lot more and gets stuck. For example, Claude will start to suggest debug techniques when you keep getting the same error. It will even ask for you to share other classes or methods. It seems to creatively think about the problem. Gpt just gets into a fail loop and can't get out.

    • @AM-yk5yd
      @AM-yk5yd 22 วันที่ผ่านมา +2

      Your bugs must be extremely trivial. In my experience, when it comes to bugfixes, "you have no idea how to even get started in a sphere" means you start with a bug in a multi-million-line codebase (and you have no idea which part of the code is even called without spending hours), only to discover that the bug is caused by a call to an external site along the way, which returns a warning code that isn't even documented, and there is no information about that site anywhere other than the source code of a DLL written in 2010. (And by "in my experience" I mean what happened this morning, at least not evening.)

  • @Max_Moura
    @Max_Moura หลายเดือนก่อน +2

    6:13 - Here's the bottom line. That's exactly what I think. Great video, BTW!

  • @drmonkeys852
    @drmonkeys852 หลายเดือนก่อน +3

    The biggest win I've had with AI was when I was working on a feature to add some telemetry to our software to track which reports our clients are using.
    All the reports we had were defined in one bigass 3000+ line file. I needed to add a string to each report with an English version of its name, because the actual name would get translated if you switched to French, for example, and I needed to make sure I always sent the same name for each report when sending out the report-click event.
    I dreaded having to do literal hours of mind-numbing copy-pasting for hundreds of reports, but instead I just pasted that whole file into ChatGPT and got it all done in less than 10 minutes.
    Now, could I have also done the same with some scripting? Yeah. But it wouldn't have been nearly as fast to develop the script, test it, and then handle all the inevitable edge cases. And it was way easier to just explain in English that I wanted this very simple thing done.

  • @Aurelian1123
    @Aurelian1123 3 วันที่ผ่านมา

    Hell yeah! I just started learning coding for data science, and man, it's scary. All these LLMs coming out looking like they will take over jobs, all these companies laying off engineers, plus all these people showing off what they built using Claude and Cursor on Twitter without understanding a thing about what it's made of. It's a breath of fresh air having this perspective come from a seasoned and respected programmer. Thank you so much for saying this!

  • @cuentadeyoutube5903
    @cuentadeyoutube5903 หลายเดือนก่อน +8

    I did commit 3 PRs last week that were coded entirely with an LLM.
    Describe the problem and provide similar sample code, review the solution, maybe go a couple of rounds back and forth with the LLM iterating on the solution, request tests, put everything in the repo, run the tests, and feed errors into the LLM until the code is fixed. I am the person who would have coded this anyway, so I have the needed technical skills. The idea of a non-technical person doing this today (or soon) is risible; however, I did get a huge improvement, with days of work condensed into a day. Also, the idea that engineers spend most of their time on "hard" problems is strange, tbh. I spend most of my time finding existing solutions to non-novel issues. Maybe we work on very different problems, idk.
    Have you considered maybe people are not lying but are seeing different time wasters disappear overnight due to LLMs?

    • @gershommaes902
      @gershommaes902 หลายเดือนก่อน

      I'm curious about the solution implemented in your 3 PRs?

    • @poshsims4016
      @poshsims4016 หลายเดือนก่อน

      LLMs work. Especially ones trained on large amounts of code, can handle large context, and the person using them is good at prompting.

    • @cuentadeyoutube5903
      @cuentadeyoutube5903 หลายเดือนก่อน

      @@gershommaes902 A management script I wrote for a manager who had a last-minute question about data (it took me 30 minutes between create, test, iterate, submit), a Django query to retrieve the roots of the subforest that remains when you apply RBAC to a forest in the db (mind you, minimizing data access and avoiding unnecessary data fetches), and a pair of mixin classes to decorate models and querysets so they emit a signal any time you make any change to the underlying data in the db, plus a handler to track that in a separate model. None of these really worked out of the box or were perfect, but I had a good sense of what I wanted and of the test cases (which I generated via Claude itself), and I iterated several times over requirements and even over design options (I tried several approaches until I settled on the mixins). I got working results in a fraction of the time and with more coverage than I would have otherwise.
      This is a revolution and it's only going to get better. I'm waiting for better Claude-IDE integration and a more agentic workflow. Also, live testing on a dev or stg environment is a time drain I wish I could automate soon with some sort of bot that reads the PR and runs some "manual" tests on a local version of the whole site.
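      For the mixin idea described above, a minimal sketch of one way it could look (the signal and class names are made up for illustration; this is not the actual code from those PRs, and it covers only the model side, not querysets):

      ```python
      from django.db import models
      import django.dispatch

      # Hypothetical signal fired whenever tracked model data changes.
      data_changed = django.dispatch.Signal()

      class ChangeSignalMixin(models.Model):
          """Emit data_changed on every save or delete of an instance."""

          class Meta:
              abstract = True

          def save(self, *args, **kwargs):
              result = super().save(*args, **kwargs)
              data_changed.send(sender=self.__class__, instance=self, action="save")
              return result

          def delete(self, *args, **kwargs):
              result = super().delete(*args, **kwargs)
              data_changed.send(sender=self.__class__, instance=self, action="delete")
              return result
      ```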

  • @noduslabs
    @noduslabs 10 วันที่ผ่านมา

    Let's also not forget about the joy of writing code yourself instead of checking bad code written by an LLM. There are times when I prefer to do things twice as slowly but enjoy them twice as much :)

  • @warrenhenning8064
    @warrenhenning8064 หลายเดือนก่อน +27

    I wanted to learn how to make apps that can make Bluetooth connections. I had no idea how Bluetooth worked a few weeks ago. Claude helped point me in the right direction. It generated some useful skeleton code that was helpful for discovering how other people write Bluetooth code (specifically Bluetooth LE, which has a specific way of structuring how an app advertises its capabilities and transmits data using GATT, which was confusing when I first saw it). It answered important questions when I was confused and Google searches were not giving anything helpful (I tried Google first). But I still have to put everything together, because when you refine it and point out edge cases, the code it emits in response won't work with the code it emitted before, etc.

    • @lululucaschae
      @lululucaschae หลายเดือนก่อน

      This. It's more of an 'educational tool', and replaces having to troubleshoot and deep searching StackOverflow yourself - in that sense, it does boost my productivity A LOT.

    • @leonardosoteldo9542
      @leonardosoteldo9542 หลายเดือนก่อน

      Google is dog shit nowadays. Use a better search engine.

    • @MasterXelpud
      @MasterXelpud หลายเดือนก่อน +1

      Yeah--I wouldn't dare put anything Claude wrote into production as is, but it's been great for getting maybe 60-70% of the way to conceptual understanding when it comes to learning a new topic, and it's been genuinely helpful as a wall to bounce things off of when debugging

    • @darekmistrz4364
      @darekmistrz4364 หลายเดือนก่อน +1

      This is hurtful to the SWE market because it will basically cut all juniors out of the market (which was already a problem before AI/LLMs). And it will cause all sorts of problems like "I created this app/website using AI, but I'm stuck on these stupid bugs which it cannot fix"; ergo, the market will only want very good architects who know the language/framework's edge cases on the spot.
      It's also becoming noticeable that many engineers who use LLMs at work have stopped developing their skills and have plateaued at medium-prompt developer.

    • @upobir
      @upobir หลายเดือนก่อน

      Exactly. LLMs are basically Google on steroids. I've had to visit Stack Overflow far less since LLM tools appeared. But with bigger decisions, an LLM can rarely help.

  • @CottidaeSEA
    @CottidaeSEA 27 วันที่ผ่านมา +1

    LLMs can absolutely do some complex stuff, but on a large scale, no. Besides, even if the LLM *can* do that complex stuff, you'll have to tell it exactly what to do and at that point you're probably better off just writing the code yourself. *That* is the problem I have with LLMs. They at most help me write boilerplate code which in some cases can be a lot of the actual code depending on what you're doing. So in that sense it can save a lot of time, but at that point it's just solving a repetitive task, which they *have to* be good at.

  • @aldayel98
    @aldayel98 หลายเดือนก่อน +9

    Actually, I agree with the OG post: it does make you considerably faster, and it will only get better. When GPT-4 came out, I was writing a PHP API and designing an SQL database. I asked myself: assuming all I have is GPT-4, can I complete this task? It took me a little over 4 hours to do the whole thing. Yes, if I knew exactly what I wanted I would've coded it myself way faster, but this idea of brainstorming the framework and steps with LLMs, creating a plan, executing the tasks with ample context, attaching the parts together, and debugging actually worked. I ended up with very functional code at the end of the day using only natural language.
    This is a new direction in application development which involves LLMs throughout the whole development journey. Especially since the first post said "technical founder", which implies you need to use a pretty varied, wide-ranging stack where you're not very familiar with some technologies but still need to work with them. 10x seems like a lot, but as I've seen in many cases, it's actually true. What would take 20 minutes of tweaking and coding takes a 1-minute prompt and a 1-minute Ctrl+C, Ctrl+V.

  • @TehGlowStick
    @TehGlowStick หลายเดือนก่อน

    It's not just about coding. On a day-to-day basis at an office it can cover a lot. I personally use it to learn a lot faster rather than having to sift through 10 different pages of documentation, Stack Overflow, and GitHub links for an answer. The pace I'm moving at work has improved a lot because of it. The 10x number is hyperbole, but it definitely does feel like that.

  • @AlbertCloete
    @AlbertCloete หลายเดือนก่อน +11

    I don't think he's lying. His experience mirrors mine. You don't ask the LLM to design your app for you.
    There are a few ways in which they help.
    1. When you're trying to do something you're unfamiliar with, ask for guidelines on the task. Give it as much context as possible. This helps you get up to speed quicker with relevant information. You can then either ask follow up questions or Google specific parts that you need more clarity on.
    2. They automate grunt work. Stuff that's not complex, but still takes a lot of effort. Pattern matching stuff. Like converting SQL to query builder or ORM code and vice versa.
    3. They can explain stuff that's hard to Google. If you give it a regular expression, it can tell you exactly what it does and break it down into parts for you, so that you can edit it the way you need to. Explaining complex bash commands works well too. You can't easily Google this, but an LLM can explain it very well.
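    As an illustration of that third point, here is the kind of breakdown you might ask for. The pattern is just an example, written with re.VERBOSE so each piece carries its explanation:

    ```python
    import re

    # Example: a simple ISO-8601 date such as 2024-09-07.
    iso_date = re.compile(
        r"""
        ^(?P<year>\d{4})    # four-digit year
        -(?P<month>\d{2})   # two-digit month
        -(?P<day>\d{2})$    # two-digit day
        """,
        re.VERBOSE,
    )

    match = iso_date.match("2024-09-07")
    print(match.groupdict())  # {'year': '2024', 'month': '09', 'day': '07'}
    ```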

  • @fadocodecamp
    @fadocodecamp 6 วันที่ผ่านมา

    100% agree with you. LLMs are currently OK at scaffolding easy stuff & small chunks of code. Fetch some data, pass it on to the view, generate some basic UI. But 10x coding? Nope.

  • @ThisIsntmyrealnameGoogle
    @ThisIsntmyrealnameGoogle หลายเดือนก่อน +4

    I agree. As someone who loves LLMs and has been using them for my work as a junior dev, they save time over Stack Overflow and googling for syntax, boilerplate, and code snippets. They have saved me from bugging my senior engineers plenty of times as well. But I would be amazed if things improve significantly in 5 years, let alone enough to replace a whole dev team. Things already look to have some level of diminishing returns, so if we get EVEN 2x more "effectiveness" within the coming years and it can solve medium-complexity tasks, I would be thrilled.

  • @tubopedia
    @tubopedia หลายเดือนก่อน

    Dude, thank you for making this. As a dev, I have been so self-conscious about not using AI to turbocharge my dev work. I thought I was missing something.

  • @dera_ng
    @dera_ng หลายเดือนก่อน +13

    Anyone at this point in tech who still thinks AI is not a stock pump and dump scheme is probably still a toddler.

    • @usernamesrbacknowthx
      @usernamesrbacknowthx หลายเดือนก่อน +2

      Wishing all ML Engineers and AI "experts" a merry AI Winter

    • @playversetv3877
      @playversetv3877 หลายเดือนก่อน +2

      ok bro if you think ai itself is a pump and dump scheme, then you're clearly being biased for a reason. ai helps a lot. you're missing out if you dont use it

    • @playversetv3877
      @playversetv3877 หลายเดือนก่อน

      @@usernamesrbacknowthx ai development is da best

    • @zdspider6778
      @zdspider6778 หลายเดือนก่อน +1

      I honestly can't wait for the AI bubble to burst. It seriously can't burst soon enough.
      But only because I'm selfish. I want cheap GPUs.
      Nvidia been hoarding them VRAM chips for their "AI" shovels. Everyone is in a gold mining rush rn with "AI" and Nvidia is selling the shovels. The pickaxes. It's sickening. And they're completely ignoring the gamers, the people who they actually BUILT their empire off of. 16GB cards should have been standard with the RTX 3000 series. Instead, with the "Ada Lovelace" cards (4000 series) they had the lowest GPU sales in over 20 years. Gee, I wonder why! When the "4070 SUPER" is really a 60-class and the "real" 70-class is now $800. Nvidia can suck it.

    • @himanshusingh5214
      @himanshusingh5214 หลายเดือนก่อน +1

      AI can't code or solve novel math problems, but it can make trippy videos, songs, and images. That's because code is as good as useless if there is one major bug or a few minor ones, but the same is not true for videos, because they only have to be played.

  • @BinaryAdventure
    @BinaryAdventure หลายเดือนก่อน

    Dude I'm another software engineer (I'm technically a security engineer with a SE background) and I felt THE EXACT SAME WAY you described - any time there is a problem that is more complex than "show me a basic example of ...", LLMs completely fail and waste your time. I have spent 45 minutes to an hour trying to get something from an LLM that took me 5-10 minutes to do after simply googling or looking at StackOverflow. I had the same feelings when ChatGPT first got big and I still echo the same sentiment now. In fact, as a security engineer, I've seen LLMs introduce critical vulnerabilities in code silently...

  • @takeuchi5760
    @takeuchi5760 หลายเดือนก่อน +16

    And the basic reason why an LLM can't code up something as relatively complex as the NeetCode site is because THEY DON'T UNDERSTAND, THEY REGURGITATE, and more compute or more data (which they seem to have run out of) can't fix that.
    Until the AI system can somehow reason about what an app like that might need, and then work on it, it won't work. That would require a complete change in architecture; LLMs won't replace even half-decent junior devs.
    As they are now, it's just glorified autocorrect. Helpful for very simple stuff that's been replicated a million times, but it can't do more than that.

    • @jpfdjsldfji
      @jpfdjsldfji หลายเดือนก่อน +5

      To say LLMs don't understand is an oversimplification of a model family that I don't think you quite understand yourself. You would be surprised with the level of intelligence at which LLMs operate.

    • @robotron26
      @robotron26 หลายเดือนก่อน +1

      You are wrong and you don't understand how they work.
      LLMs can complete unique tasks; that alone should tell you it's not regurgitation.
      Look into Geoffrey Hinton.

    • @Ivcota
      @Ivcota หลายเดือนก่อน +7

      @@robotron26 Actually, they can complete tasks that fit a template they're given, based off their large corpus of data. See the ARC test. They actually can't solve unique tasks; if they do solve one, it's very likely there's an almost identical template that they're solving.

    • @Titere05
      @Titere05 หลายเดือนก่อน

      @@jpfdjsldfji No, you are completely wrong. LLMs are not intelligent because they just predict the next word. If you truly understand what you're writing, then you're not really PREDICTING anything, are you?

    • @takeuchi5760
      @takeuchi5760 หลายเดือนก่อน +1

      @@jpfdjsldfji what they do is not intelligence. But their design is absolutely intelligent.

  • @selinovaldes
    @selinovaldes วันที่ผ่านมา

    LLMs have almost doubled my throughput, but I'm a product designer. I use LLMs to write slide decks, do lots of grunt work on data sets, or parse user interview text. I use them to reformat chunks of JSON or correct a bunch of Tailwind. I consider myself a senior designer and a junior-level front-end dev. That said, after many attempts to use LLMs to write "finished" React within an enterprise app… dude… it's a joke. 70% of the time, it shows me a pattern that I can use.
    Front-end dev has always been a patchwork of legacy scripting & Elmer's Glue, IMO. A specialized LLM with very narrow capabilities could build usable component sets within a highly established application if trained on all possible patterns within a static-ish part of the app. Would it be flexible enough to pivot when a core dependency changes, or it finds that kludge that shouldn't even be working? Probably not. It would have to relearn, right? So what's the point?
    The thing is that America runs on irrational optimism. "Fake it 'til you make it" is propping up 75% of the startups out there.

  • @kalilinux8682
    @kalilinux8682 หลายเดือนก่อน +4

    Two words "Skill issue"

    • @minhuang8848
      @minhuang8848 หลายเดือนก่อน +1

      the skill issue begins with people even selecting proper models, honestly
      I keep seeing so many folks trying to disprove models' usefulness by trying it themselves... with hilariously outdated LLMs from years ago, literally being offered for free. It's wild seeing tech-adjacent people struggle with the notion of not using shoddy language models, but it happens all the time.

    • @axelramirezludewig306
      @axelramirezludewig306 หลายเดือนก่อน

      @@minhuang8848 False, I use SOTA and it sucks dick. It can generate 1-2 complex functions and that's it

  • @ashleycanning1450
    @ashleycanning1450 หลายเดือนก่อน

    I've been an engineer for 20 years and I've been building a new SaaS product with Claude 3.5. My experience lets me ask the exact questions and give it the exact context I need to create what I want. So far it's helped me build Vue frontend components and a Node.js backend, helped me configure TypeScript, and helped me configure Vercel. It helped me build out authentication and the middleware; the Firebase integration wasn't smooth, but it helped. It helped me debug CORS issues and also build out the copy.
    I think the development process has been at least 5-8x faster.

  • @SanusiAdewale
    @SanusiAdewale หลายเดือนก่อน +5

    You built that as a junior? I'm finished!

    • @dehancedmedia2900
      @dehancedmedia2900 หลายเดือนก่อน +6

      I've worked with high-performing juniors who couldn't build that, and in the real world seniority has a lot more to do with your ability to communicate, organize, and lead projects than it does with pure coding ability. Keep at it!

    • @SanusiAdewale
      @SanusiAdewale หลายเดือนก่อน

      @@dehancedmedia2900 thanks!

    • @kyatt_
      @kyatt_ หลายเดือนก่อน

      @@dehancedmedia2900 Right? seems tough for a junior

    • @darekmistrz4364
      @darekmistrz4364 หลายเดือนก่อน +1

      He is not saying that he coded exactly "that" as a junior. He is saying that this platform started at the hands of a junior developer.

  • @kenzorman
    @kenzorman 10 วันที่ผ่านมา

    Weak foundations mean the house falls down. Being 50% of the way there does not mean anything unless you can build on it.

  • @merdanethubar-sarum9031
    @merdanethubar-sarum9031 หลายเดือนก่อน +4

    You are correct that LLMs have difficulty with more complex projects, but the whole idea of good, clean code in the first place is to separate your complex architecture into simple snippets of code that interoperate but run independently of each other. This is basically what functions are: they don't need to know what the other functions' internals are. And LLMs can definitely help you write simple functions quicker than before.
    If you are an engineer at heart, you won't notice that much of a difference in speed, but if you are an architect at heart, suddenly you have a bricklayer at your service who helps you build cathedrals one brick at a time. Engineers, photographers, novelists, and artists don't seem to grasp that it's not about the skill behind the individual pieces of art (humans are way better), but about the composition of the whole (80% of the quality at 10x the speed).
    It's perhaps easier to see if you look outside your own profession, where you aren't hindered by your own standards but merely judge the outcome. What is 10x more efficient: hiring a photographer, or generating a couple of photos from your favorite AI tool?

  • @ltpfdev
    @ltpfdev หลายเดือนก่อน +2

    I've realized that using AI for a small function, or even for an issue where I ask it for "ideas" (I guess) of another way to do what I want, has led me to waste a lot of time trying to get the right answer out of it instead of looking on Stack Overflow, for example.

    • @ltpfdev
      @ltpfdev หลายเดือนก่อน +1

      It's when I find myself basically yelling at it, asking it if it's stupid, that kind of thing.

  • @helgeh
    @helgeh หลายเดือนก่อน +5

    10x devs can get another 10x using the best LLMs (myself included). The smarter you are, the better results you get from LLMs.

    • @dszmaj
      @dszmaj หลายเดือนก่อน +2

      The best devs I know tend to delete AI plugins, because they waste time and are a distraction.
      Personally I go to an LLM only for simple snippets and idea generation; anything more and it's a waste of time.

    • @vitalyl1327
      @vitalyl1327 หลายเดือนก่อน

      So you have no idea of how to use LLMs...

    • @dszmaj
      @dszmaj หลายเดือนก่อน

      @@vitalyl1327 sure, like I need to waste more time prompting to get them to work, instead of just getting it done myself

  • @hidalgobc
    @hidalgobc 27 วันที่ผ่านมา

    "Feels like I'm living in a different universe than people on Twitter" actually, true for literally every topic of discussion, not just SE

  • @bottleflips3617
    @bottleflips3617 หลายเดือนก่อน +3

    I can't code, but I made a Facebook Marketplace copycat using AI that's fully functional, with messaging and everything. It would be stupid to make a super complex startup with AI, but I am interested in business, and AI helps me code enough that I can get started and worry about hiring a coder later.

  • @zshn
    @zshn 6 วันที่ผ่านมา

    Social Media runs on hype. It doesn't care if the marketing is false. All that matters is hype. It makes them feel 'ahead of the curve', and anytime you question them back to reality, they call you a doomer.

  • @Vancha112
    @Vancha112 หลายเดือนก่อน +4

    Interesting point, how *did* you make that fancy directed graph? :D

    • @Dom-zy1qy
      @Dom-zy1qy หลายเดือนก่อน +2

      Perhaps he procedurally generates SVGs, was curious myself. Probably gonna try and replicate it.

    • @Vancha112
      @Vancha112 หลายเดือนก่อน

      @@Dom-zy1qy maybe :) I tried making a program that could generate graphical representations of trees some time ago, but failed because I thought it was too complicated. But now I'm curious again maybe I should take another shot ^^

  • @pruthvikumarmore7546
    @pruthvikumarmore7546 หลายเดือนก่อน

    I totally agree with this; most of these companies are just lying to all of us. Instead of lying, just focus on making the product better, and when people use it, they will become your marketers.

  • @JohnnysaidWhat
    @JohnnysaidWhat 29 วันที่ผ่านมา +5

    You are 10000% using it wrong. I set up orchestrated Docker containers, Terraform deployments with beautifully designed reusable components, and an open-source vector store in a container with volume claims. All deployed to Azure, pulling from my own private Docker image container registry, provisioning an Azure resource group into the Azure Container Apps service, which is managed Kubernetes behind the scenes.
    Yes, I have working knowledge, but I wrote zero code; I just worked with the chat system while referencing and pasting actual documentation.
    It helps to use an editor like vim for quickly editing sections and pages of code without always having to use a mouse.
    Claude is literally a game changer for programmer/founder hybrids like me.

  • @feliperamos3322
    @feliperamos3322 5 วันที่ผ่านมา

    I see this atmosphere on LinkedIn too. It looks like they are in another dimension where everything is beautiful, we have flying cars, and AI has already taken all the programming jobs.

  • @user-qm2zg6rj9m
    @user-qm2zg6rj9m หลายเดือนก่อน +5

    To indulge them in the conversation is already too much respect; insults would just make you look like a "raging traditional programmer" in their opinion. Programmers who think of LLMs as their savior, or think LLMs are going to replace them, are inadequate in talent or mental capacity.
    They also happen to forget that they are setting a bad precedent for their journey to becoming better, and let's not get started on the actual risk of addiction. As hyperbolic as this might seem, LLMs are a drug for programmers: I could really get lazy and have Claude prep a load of snippets for me, but what if I get used to not putting in effort while still achieving acceptable quality with LLMs? That's it...
    We've been given most of the resources, if not all, to learn so much, yet some choose the worst ways and always lean on shortcuts.
    Thank you for your videos, btw. Cool stuff.

    • @minhuang8848
      @minhuang8848 หลายเดือนก่อน

      "people who keep an eye open for useful abstractions for my job are raging, that's why I just ignore all the outside voices" definitely is one way to frame this debate, rather than just do what a good software engineer would do and break it down to quantifiable metrics and prove your intuition about why LLMs are wrong or bad for any sort of project.
      The fact that you talk about the risk of addiction alone is kind of bonkers and definitely works against your own argument here.

  • @esmekeats
    @esmekeats หลายเดือนก่อน

    This is so refreshing to hear as an engineer who was laid off from a company with a greedy, lunatic CEO who firmly believed one or two engineers with a bunch of LLMs was all that was needed to do EVERYTHING for releasing production enterprise consulting software they could charge 10k a month for. Insane.
    I took my time writing actual software, but that didn't fit his extremely fast timeline; we had no QA, no DevOps, just me and two other folks with days to spin up entire sellable products.
    On my own time I've tested building apps with LLMs in the driver's seat, and these things are not able to reason about entire systems! I don't think LLM tech alone will get us there.

  • @Niiwastaken
    @Niiwastaken หลายเดือนก่อน +3

    Aww, first it was the artists' turn to be mad, and now it's the coders'.

  • @orrinjonesjr
    @orrinjonesjr หลายเดือนก่อน

    I 100% agree. For complex problems it's a waste of time. I only use it to generate commonly rewritten code. But I don't use it for logic. I rely on my brain for that

  • @HenryBloggit
    @HenryBloggit หลายเดือนก่อน +11

    I think you’re just bad at prompting. I’m a .NET dev and ChatGPT 4o has easily made my work 10x faster. You just have to VERY clearly explain what you want, how you want it to go about performing the task, provide all the necessary context/background, and then iterate on the LLM’s first response over and over until it’s perfect. Tell it what it did wrong, what you don’t like, what you want improved, and keep going. It’s like having an ultra-fast programmer working for me who writes all the code and all I have to do is clearly explain what I want and then review it. I’m sorry you haven’t gotten good results using AI for programming work, but if you’re not getting good results, I tend to think that’s on you, not the LLMs. I think you’re bad at prompting, and probably pretty bad at explaining things interpersonally as well.

    • @SystemBD
      @SystemBD หลายเดือนก่อน +2

      That part about explaining things interpersonally is actually interesting, because that is a common problem many of us programmers have. After all, when working at a lower level (not UI design or things like that) we are working with abstractions that are difficult to verbalize. And at some point you just say... let me do it myself.
      Because if you have to invest time defining the functionality of a piece of code in great detail, then you are not being that efficient. You are just pushing a car through a supermarket aisle because you have become too dependent on that technology.

    • @pelly5742
      @pelly5742 หลายเดือนก่อน +8

      If LLMs make your work 10x faster, then your work is extremely simple to begin with. That's why you're finding success with your prompting and others don't.

    • @HenryBloggit
      @HenryBloggit หลายเดือนก่อน

      @@pelly5742 Such is the life of a full-stack dev. Some of my tasks are insanely complex, most are not. I don’t have a junior programmer working for me who I can give all the grunt work to so that I can just do the fun stuff. I have to do everything myself. GPT 4o has become that junior programmer who does all of the routine stuff, and does it incredibly fast, so that I can work on the more complex aspects that humans are still better at, and that’s how it has 10x’d my workflow. GPT 4o is like having a full time junior programmer who has come to me right out of school with a masters in computer science, writes code with superhuman speed in any language, and works for me for only $20/month. It’s revolutionary. If you’re not getting good results using the tool then you’re probably just not very good at using the tool. It takes an especially narrow mind to believe that all the people who are getting better results with the tool than you are are all just lying about it.

    • @reinerheiner1148
      @reinerheiner1148 หลายเดือนก่อน +2

      The art of coding involves breaking things down into simple subtasks. Once that is done, an LLM can work on that extremely simple stuff.

    • @Mmuitd
      @Mmuitd 14 วันที่ผ่านมา

      .NET dev, explains it all

  • @dirtrockground4543
    @dirtrockground4543 2 วันที่ผ่านมา

    I’m just good at prompting it and setting a context that’s sufficiently detailed for the LLM to nail a response. It also finds stuff that I don’t know about. YMMV but I’m only a few years into my career lol so they’re very helpful

  • @jenjerx
    @jenjerx หลายเดือนก่อน

    Well, it's progressing... we won't totally dismiss that fact, but totally agree.

  • @tj2375
    @tj2375 10 วันที่ผ่านมา

    I agree with you, and the worst thing is that the people overhyping LLMs are making our jobs ten times worse, because now companies are putting a lot of pressure on their engineering teams to deliver the kind of results that are being promised.

  • @____2080_____
    @____2080_____ หลายเดือนก่อน

    You’re absolutely right.
    For small things, I breeze through.
    When I was trying to have multistate logic, not only did it waste my time, but it literally ruined the code.
    When you try to guide it, it will look like it is agreeing with you, and then it will literally disconnect parts of the code it was supposed to be building.
    What I've been working on is not mission critical; it was possibly a test. But it is clearly a limitation, and we need to figure out how to integrate these tools while understanding how they are limited.

    • @clray123
      @clray123 หลายเดือนก่อน

      People tend to forget that in essence LLMs are just fancy-shmancy search engines which translate prompt to output in one flyover. As long as you stay within the range of whatever prompt->something translations they were trained on, it can work pretty well. When you leave that area, they break down horribly.

  • @ChapperorYapper
    @ChapperorYapper หลายเดือนก่อน +2

    Yes, you're not doing it right. Breaking those complex tasks into simple tasks and then feeding them to the LLM is a skill too.

  • @dpscloud3324
    @dpscloud3324 หลายเดือนก่อน

    Another big issue with the mindset of "it got you 50% of the way there" is that, in my experience, the last 10% of a project is what takes the longest. The polish, edge cases, and development necessary at the end to deliver a good-quality project take the longest. AI as of now can't understand that properly or help maintain a project like a software dev would. The progression of a software app, or really almost anything you have to build from scratch, is not linear. And I think most non-technical people who don't actually spend their day creating stuff don't understand that. Managers and C-suite people who get frustrated when a project takes longer than they expect also don't factor in the roadblocks and congestion in the creative process that AI will simply never be able to resolve as effectively as a human with actual experience would.

    • @doc8527
      @doc8527 หลายเดือนก่อน +2

      Society rewards speed instead of reliability. The funny thing is that if nothing breaks, they all act like your effort on stability is dragging others down. Once something goes wrong, they start looking for someone to blame.
      That's why I sometimes hate "faster" teammates, as they often create more bugs. Fast and reliable tend to work against each other in practice. People who work slightly slower with higher quality get blamed for being slow, and the whole team gets blamed for the extra bugs mostly generated by those "work hard" people.
      I also don't believe in the so-called "no blame" culture. There is only a culture of "the issue is relatively small, so we move on", and "relatively small" depends on the situation. Everyone acts as if true no-blame exists.
      The reality is that everyone defends themselves hard when an issue is caused by them.
      LLMs can intensify this problem further.

  • @thecrowbarofirony3703
    @thecrowbarofirony3703 หลายเดือนก่อน

    Claude is good at rapid prototyping, but eventually it will start collapsing on itself. It can generate good starter React code, but it can't handle code spread across multiple components. Even with Cursor, it will revert a bug fix while implementing a new feature. What these tools are good for is the tedious busywork, like generating reducer templates, function-based components for views, etc.
    There are ways to get an LLM to do what you want, but you would need to know programming in the first place. That also applies to debugging.
    Claude also has issues when trying to integrate two or three features that have never been combined together. To be fair, though, I have learned some new techniques from the generated code.

  • @AngeloXification
    @AngeloXification หลายเดือนก่อน

    I wouldn't say 10x, but it has 100% enabled me to build stuff that would have taken MUCH longer before. I don't know if I can say this here, but I've copied the "Jamie" app, a meeting-minute taker that gives insights, action items, tasks, etc. Over time it meta-analyses meetings and creates daily checklists and follow-ups for the tasks, as well as initializing some of the tasks I need to finish, like compiling metrics reports.
    LLMs are fantastic advanced language parsers. I think most people misunderstand what kind of tool they are.
    Let me add, I am NOT a coder. The fact that me, a non-coder, can now build these things is the important part.

  • @ollicron7397
    @ollicron7397 หลายเดือนก่อน +1

    LLMs are alright. I don't rely on them; they just allow me to pick functions faster.

  • @gpsx
    @gpsx หลายเดือนก่อน

    If you give a good LLM a very vague coding spec, like "an e-commerce web site", and you put in the right words to get production-level code, it can make a good e-commerce site. I think some generalists see this and infer great capabilities. The problem is, once you ask the LLM to do something specific, it falls apart. It just can't take many instructions. But I think there is hope this can improve eventually.

  • @shayrow9480
    @shayrow9480 หลายเดือนก่อน

    The thing is, without having a clear design it will probably take longer to clean up the mess the AI made than to make a design and build it up from there as shown with the Devin AI skeleton.

  • @MadsterV
    @MadsterV หลายเดือนก่อน

    In my experience, LLMs speed up the part that managers think is the 90% but coders know it's actually the 10%.
    Like a mockup you'd throw up in a day and they'll be impressed and want to go to production with it. The mockup. To production.

  • @boontawatkumpiroj5595
    @boontawatkumpiroj5595 23 วันที่ผ่านมา

    In the end, coding is all about specifically commanding silicon to do what we want. A programmer or software engineer is needed to make that happen. AI will never fully grasp the specific details of what we want.