Torvalds Speaks: Impact of Artificial Intelligence on Programming

  • Published Nov 14, 2024

Comments • 1.6K

  • @modernrice
    @modernrice 10 months ago +6728

    These are the true Linus tech tips

    • @ginogarcia8730
      @ginogarcia8730 10 months ago +33

      hahaha

    • @rooot_
      @rooot_ 9 months ago +23

      so true lmao

    • @denisblack9897
      @denisblack9897 9 months ago +156

      This!
      Hate that lame wannabe dude pretending to know stuff

    • @authentic_101
      @authentic_101 9 months ago +5

      😅

    • @viktorsincic8039
      @viktorsincic8039 9 months ago +165

      @@denisblack9897 don't hate anyone man, the guy is responsible for countless kids getting into tech, people tend to sort out the educational "bugs" on the way up :)

  • @the.elsewhere
    @the.elsewhere 10 months ago +1810

    "Sometimes you have to be a bit too optimistic to make a difference"

    • @bartonfarnsworth7690
      @bartonfarnsworth7690 10 months ago +17

      -Stockton Rush

    • @harmez7
      @harmez7 10 months ago

      it's actually originally from William Paul Young, The Shack @@bartonfarnsworth7690

    • @Martinit0
      @Martinit0 9 months ago +5

      Understatement of the day, LOL.

    • @MommysGoodPuppy
      @MommysGoodPuppy 9 months ago +10

      hell of a motivational quote

    • @harmez7
      @harmez7 9 months ago +4

      that is also what a scammer wants from you.
      don't put everything that looks fancy in your mind, kiddo.

  • @alakani
    @alakani 9 months ago +1884

    Man Linus is always such a refreshing glimpse of sanity

    • @JosiahWarren
      @JosiahWarren 9 months ago +5

      His argumet was bugs are shallow .we have compilers for shallow bugs llm can find not so shallow .he is not the brightest

    • @rickgray
      @rickgray 9 months ago +174

      ​@@JosiahWarren Try that again with proper grammar chief.

    • @Ryochan7
      @Ryochan7 9 months ago +5

      He let his own kernel and dev community get destroyed. Screw him. RIP Linux

    • @alakani
      @alakani 9 months ago +82

      @@Ryochan7 Fun fact, fMRI studies show trolling has the same neural activation patterns as psychopaths thinking about torturing puppies; it's very specific, right down to the part where they vacillate between thinking it's their universal right, and that they're helping someone somehow

    • @Phirebirdphoenix
      @Phirebirdphoenix 9 months ago +2

      ​@@alakani and some people who troll do not think about it at all. they're easier to deal with if we aren't ascribing beneficial qualities to them.

  • @lexsongtw
    @lexsongtw 10 months ago +2160

    LLMs write way better commit messages than I do and I appreciate that.

    • @SaintNath
      @SaintNath 10 months ago +218

      And they actually comment their code 😂

    • @Sindoku
      @Sindoku 10 months ago +89

      @@SaintNath comments are usually bad though, but good if you're learning I suppose; they can also be out of date and thus misleading

    • @steffanstelzer3071
      @steffanstelzer3071 10 months ago +313

      i hope your comment gets out of date quickly because it's already misleading

    • @NetherFX
      @NetherFX 10 months ago +154

      @@Sindoku While I get your point, comments are definitely a good thing.
      Yes code should be self-explanatory, and if it isn't you try your best to fix this. But there's definitely cases where it's best to add a short comment explaining why you've done something. It shouldn't describe *what* but *why*

    • @JonathanFraser-i7h
      @JonathanFraser-i7h 10 months ago +58

      @@NetherFX That's the point, a comment is worthless unless it touches on the why. A comment that just discusses the what is absolutely garbage because the code documents the what.
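
A tiny hypothetical illustration of the what-versus-why distinction this thread describes (the code and comments are invented for the example, not taken from the video):

```python
retries = 0

# "What" comment - worthless, the code already says this:
# increment retries by one
retries += 1

# "Why" comment - records intent the code cannot express:
# The vendor API sporadically returns 503 on cold starts, so we allow
# a few retries before treating the failure as real.
retries += 1
```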

  • @Hobbitstomper
    @Hobbitstomper 9 months ago +450

    Full interview video is called "Keynote: Linus Torvalds, Creator of Linux & Git, in Conversation with Dirk Hohndel" by the Linux Foundation channel.

    • @mercster
      @mercster 9 months ago +3

      Where was this talk held?

    • @DavidHnilica
      @DavidHnilica 9 months ago +19

      thanks "so" much! It's pretty appalling that these folks don't even quote the source

    • @kurshadqaya1684
      @kurshadqaya1684 8 months ago +3

      Thank you a ton!

    • @KaiCarver
      @KaiCarver 8 months ago

      Thank you th-cam.com/video/OvuEYtkOH88/w-d-xo.html

    • @captaincaption
      @captaincaption 8 months ago +2

      Thank you so much!

  • @porky1118
    @porky1118 9 months ago +704

    1:06 "Now we're moving on from C to Rust" This is much more interesting than the title. I always thought Torvalds viewed Rust as an experiment.

    • @feignit
      @feignit 9 months ago +111

      Rust just isn't his expertise. It's going in the kernel, he's just letting others oversee it.

    • @SecretAgentBartFargo
      @SecretAgentBartFargo 9 months ago +67

      @@feignit It's already been in the mainline kernel for a while. It's very stable and Rust just works really well now.

    • @yifeiren8004
      @yifeiren8004 9 months ago +7

      I actually think go is better than Rust

    • @speedytruck
      @speedytruck 9 months ago +242

      @@yifeiren8004 You want a garbage collector running in the kernel?

    • @catmanmovie8759
      @catmanmovie8759 9 months ago +12

      @@SecretAgentBartFargo Rust isn't even close to stable.

  • @MethodOverRide
    @MethodOverRide 9 months ago +430

    I am a senior software engineer and I sometimes use ChatGPT at work to write PowerShell scripts. They usually provide a good enough start for me to modify to do what I want. That saves me time and allows me to create more scripts to automate more. It's not my main programming task, but it definitely saves me time when I need to do it.

    • @falkensmaize
      @falkensmaize 9 months ago +51

      Same. ChatGPT is great for throwing together a quick shell or python script to do boring data tasks that would otherwise take much longer.

    • @alakani
      @alakani 9 months ago +20

      Yep, saves me so much time with data preprocessing, and adds nice little features that I wouldn't normally bother with for a one-time throwaway script

    • @jsrjsr
      @jsrjsr 9 months ago +11

      Quit your job.

    • @alakani
      @alakani 9 months ago +33

      @@jsrjsr And light a fart?

    • @jsrjsr
      @jsrjsr 9 months ago +2

      @@alakani he should do worse than that.

  • @mikicerise6250
    @mikicerise6250 9 months ago +679

    If you let the LLM author code without checking it, then inevitably you will just get broken code. If you don't use LLMs you will take twice as long. If you use LLMs and review and verify what it says and proposes, and use it as Linus rightly suggests as a code reviewer who will actually read your code and can guess at your intent, you get more reliable code much faster. At least that is the state of things as of today.

    • @keyser456
      @keyser456 9 months ago +35

      Perhaps anecdotal, but it (AI Assistant in my case, I'm using JB Rider, pretty sure that's tied to ChatGPT) seems to get better with time. After finishing a method, I have another method already in mind. I move the cursor and put a blank line or two in under the method I just created in prep for the new method. If I let it sit for just a second or two before any keystrokes, oftentimes it will predict what method I'm about to create all on its own, without me even starting the method signature. Yes, sometimes it gets it very wrong and I'll just hit escape to clear it, but sometimes it gets it right... and I mean really scary right. Like every line down to the keystroke, and even naming is spot on, consistent w/ naming throughout the rest of the project. Yes, agreed, you still need to review the generated code, but I suspect that will only continually get better with every iteration. Rather than autocompleting methods, eventually entire files, then entire projects, then entire solutions. It's probably best for developers to learn to work with it in harmony as it evolves, or they will fall behind their peers who are embracing it. Scary and exciting times ahead.

    • @pvanukoff
      @pvanukoff 9 months ago +20

      @@keyser456 Same experience for me. It predicts what I was about to write next about 80% of the time, and when it gets it right, it's pretty much spot on. Insane progress just over the past year. Imagine where it will be in another year. Or five years. Coding is going to be a thing of the past, and it's going to happen very quickly.

    • @rayyanabdulwajid7681
      @rayyanabdulwajid7681 9 months ago +6

      If it is intelligent enough to write code, it will eventually become intelligent enough to debug complex code, as long as you tell it what issue arises

    • @CausallyExplained
      @CausallyExplained 9 months ago +11

      You are training the llm for the inevitable.

    • @derAtze
      @derAtze 9 months ago +2

      Oh man, now I really want to get into coding just to get that same transformative experience of a tool thinking ahead of you. I am a designer, and to be frank, the experience with AI in my field is much less exciting; it's just stock footage on steroids, and all the handiwork of editing and putting it together is sadly the same. But the models are evolving rapidly, and stuff like AI object select and masking, vector generation in Adobe Illustrator, transformative AI (e.g. making a summer valley into a snow valley) and motion graphics AI are on the horizon to be good or are already there. Indeed, what a time to be alive :D might get into coding soon tho

  • @Kaelygon
    @Kaelygon 10 months ago +737

    While AI lowers the bar to start programming, I'm afraid it also makes writing bad code easier. But as with any other tool, more power brings more responsibility, and manual review should remain just as important.

    • @footballuniverse6522
      @footballuniverse6522 10 months ago +57

      as a cloud engineer I gotta say chatgpt with gpt 4 really turbocharges me for most tasks, my productivity shot up 100-200% and i'm not kidding. You gotta know how to make it work for you and it's amazing :)

    • @alexhguerra
      @alexhguerra 10 months ago +16

      There will be more than one AI, one for each task, to create code and to validate code. Make no mistake, AGI is the final target, but the intermediate ones are good enough to speed up the whole ordeal/effort

    • @musiqtee
      @musiqtee 10 months ago +93

      Ok, speed, efficiency, productivity… All true, but to what effect? Isn’t it so that every time we’ve had a serious paradigm shift, we thought we could “save time”.
      Sadly, since corporations are not ‘human’, we’ve ended up working *more* not less, raising the almighty GDP - having less free time and not making significantly more money.
      Unless… you own shares, IP, patents and other *derivatives* of AI as capital.
      AI is a tool. A sharp knife is also one. This “debate” should ask “who is holding the tool, and for what purpose?”. That question reveals very different answers to a corporation, a government, a community or a single person.
      It’s not what AI is or can do. It’s more about what we are, and what we do with AI… 👍

    • @westongpt
      @westongpt 10 months ago +15

      Couldn't the same be said of Stack Overflow? I am not disagreeing with you, just adding an example to show it's not a new phenomenon.

    • @pledger6197
      @pledger6197 10 months ago +18

      It reminds me of a talk on some podcast before LLMs, where the speaker said that they tried to use AI as an assistant for medical reports and they faced the following problem:
      sometimes people see that the AI gets the right answers, and then when they disagree with it, they still choose the AI's conclusion, because "the system can't be wrong".
      So to fight it, they programmed the system to sometimes give wrong results and ask the person to agree or disagree, to force people to choose the "right" answer and not just agree with anything the system says.
      And this, I believe, is the weak point of LLMs.
      While they're helpful in some scenarios, in others they can give SO deceiving answers, which look exactly how they should, but in fact describe something that doesn't even exist.
      E.g. I asked one about the best way to get an achievement in a game, and it came up with things that really exist in the game and sound like they should be related to the achievement, but in fact they're not.
      And a friend of mine tried to google Windows error codes, and it came up with problems and their descriptions that don't really exist either.

  • @ficolas2
    @ficolas2 10 months ago +781

    I've had Copilot suggest an if statement that fixed an edge case I didn't contemplate enough times to see it could really shine at fixing obvious bugs like that. (A hypothetical sketch of that kind of guard follows this thread.)

    • @doodlebroSH
      @doodlebroSH 10 months ago +126

      Skill issue

    • @antesajjas3371
      @antesajjas3371 10 months ago +318

      @@doodlebroSH if you always think of every edge case in all of the code you write, you are not programming that much

    • @ficolas2
      @ficolas2 10 months ago

      @@doodlebroSH I can tell you are new to programming and talking out of your ass just by that comment.

    • @markoates9057
      @markoates9057 10 months ago +25

      @@doodlebroSH :D yikes

    • @turolretar
      @turolretar 10 months ago +3

      @@antesajjas3371 I think you misspelled edge
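
A hypothetical sketch of the kind of edge-case guard an assistant tends to suggest (invented example, not from the thread):

```python
def average(values: list[float]) -> float:
    # The suggested fix: guard the empty-list edge case before dividing,
    # instead of letting the call crash with ZeroDivisionError.
    if not values:
        return 0.0
    return sum(values) / len(values)

assert average([1.0, 2.0, 3.0]) == 2.0
assert average([]) == 0.0  # the edge case the guard covers
```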

  • @PauloJorgeMonteiro
    @PauloJorgeMonteiro 10 months ago +399

    Linus..... My man!!!
    I would probably hate working with him, because I am not a very good software engineer and he would be going nuts with my time-complexity solutions... but boy has he inspired me.
    Thank you!

    • @MrFallout92
      @MrFallout92 10 months ago +25

      bro do you even O(n^2)?

    • @PauloJorgeMonteiro
      @PauloJorgeMonteiro 10 months ago +55

      @@MrFallout92 I wish!!!
      These days I have a deep love for factorials!

    • @TestTest12332
      @TestTest12332 10 months ago +41

      I don't think he would. His famous rants on LKML, before he changed his tone, were at people who SHOULD HAVE KNOWN BETTER. I don't remember him going nuts at newbies for being newbies. He did go nuts at experts who tried to submit sub-par/lazy/incomplete/etc. work and should have known it was sub-par and needed fixing, and didn't bother doing that. He was quite accurate and fair in that.

    • @Saitanen
      @Saitanen 9 months ago +4

      @@TestTest12332 Has this ever happened? Do you have any specific examples?

    • @uis246
      @uis246 9 months ago +8

      @@Saitanen that time when an fd-based syscall returned a file-not-found error code. Linus went nuts.

  • @alcedob.5850
    @alcedob.5850 9 months ago +322

    Wow, finally someone who acknowledges the possibilities LLMs offer without overhyping them or calling them an existential threat

    • @darklittlepeople
      @darklittlepeople 9 months ago +5

      yes, i find him very refreshing indeed

    • @MikehMike01
      @MikehMike01 9 months ago

      LLMs are total crap, there’s no reason to be optimistic

    • @deeplife9654
      @deeplife9654 9 months ago +34

      Yes. Because he is not a marketing guy or the CEO of a company.

    • @genekisayan6564
      @genekisayan6564 9 months ago +2

      Man, they can't even count additions. Of course they are not a threat. At least not yet

    • @curious_banda
      @curious_banda 9 months ago

      ​@@genekisayan6564 never used gpt4 and other later models?

  • @vlasquez53
    @vlasquez53 9 months ago +190

    Linus sounds so calm and relaxed until you see his comments on others' PRs

    • @thewhitefalcon8539
      @thewhitefalcon8539 9 months ago +22

      That was a terrible PR though

    • @Alguem387
      @Alguem387 8 months ago +3

      I think he does it for fun tbh

    • @gruberu
      @gruberu 8 months ago +13

      let whoever amongst us hasn't had a bad day because of a bad PR cast the first stone

    • @MechMK1
      @MechMK1 7 months ago +2

      You gotta let off steam somehow

    • @__Henry__
      @__Henry__ 7 months ago +1

      Yeah :/

  • @ZeroPlayerGame
    @ZeroPlayerGame 10 months ago +143

    Man, Linus looks noticeably older and wiser than in the older talks I've seen him in. More respect for the guy.

    • @RyanMartinRAM
      @RyanMartinRAM 9 months ago +12

      Great people often age like wine.

    • @ZeroPlayerGame
      @ZeroPlayerGame 9 months ago +18

      @@RyanMartinRAM I have another adage - with age comes wisdom, but sometimes age comes alone. Not this time though!

    • @DielsonSales
      @DielsonSales 5 months ago

      I think age makes anyone more humble, but sometimes less open minded. It’s good to see Linus recognize that LLMs have their uses, while some projects like Gentoo have stood completely against LLMs. Nothing is black and white, and when the hype is over, I think LLMs will still be used as assistants to pay attention to small stuff we sometimes neglect.

  • @Pantong
    @Pantong 10 months ago +170

    It's another tool, like static and dynamic analysis. No programmer will follow these tools blindly, but they can use them to get suggestions or improve a feature. There have been times I've been stuck on picking a good data structure, and GPT has given more insightful ideas or edge cases I was not considering. That's its most useful role right now: a rubber ducky.

    • @AM-yk5yd
      @AM-yk5yd 10 months ago

      >No programmer will follow these tools blindly
      My sweet summer child. The CURL authors already have to deal with "security reports" because some [REDACTED]s used Bard to find "vulnerabilities" to get a bug bounty. Wait for the next jam in the style of "submit N PRs and you get our merch", and instead of PRs that fix a typo, you'll get even worse - code that doesn't compile.

    • @conrad42
      @conrad42 10 months ago +16

      I agree that it can help in these scenarios. People should be made aware of this, as the current discussion is way over the top and scares people into fearing for their jobs (and therefore their mental health). Another thing is, as sustainability was a topic, I'm not sure the energy consumed by this technology justifies these trivial tasks. Talking with a colleague seems more energy efficient.

    • @LordChen
      @LordChen 10 months ago

      aha. until it writes a Go GTK phone app (Linux phone) zero to hero, with no code review and only UI design discussions.
      6 months ago. just chatgpt4.
      programming is dying and you people are dreaming.
      in 2023 there were 30% fewer new hires across all programming languages.
      for 2024, out of 950 tech companies, over 40% plan layoffs due to AI.
      a bit tired to link the source

    • @larryjonn1973
      @larryjonn1973 10 months ago +23

      You underestimate the stupidity of people

    • @Gokuguy1243
      @Gokuguy1243 10 months ago +17

      Absolutely, I'm convinced the other commenters claiming LLMs will make programming obsolete in 3 years or whatever are either not programmers or bad programmers lol

  • @heshercharacter5555
    @heshercharacter5555 9 months ago +23

    I find LLMs extremely useful for generating small code snippets very quickly, for example advanced regular expressions. Saved me tons of hours. (A hypothetical example, with quick tests, follows this thread.)

    • @purdysanchez
      @purdysanchez 4 months ago

      As long as you understand regular expressions, and review and write extensive test cases for what the regular expressions should do, ChatGPT is pretty useful
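
As a concrete illustration of that workflow, here is a minimal sketch (a hypothetical LLM-drafted pattern, plus the quick checks the reply above recommends):

```python
import re

# Hypothetical assistant-drafted pattern: an ISO-8601 calendar date
# (YYYY-MM-DD), anchored so partial matches don't slip through.
ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

# Never trust a generated regex without testing what it should and
# should not match.
assert ISO_DATE.match("2024-11-14")
assert not ISO_DATE.match("2024-13-01")  # month out of range
assert not ISO_DATE.match("24-11-14")    # two-digit year
```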

  • @elliott8596
    @elliott8596 10 months ago +216

    Linus has really mellowed out as he has gotten older.

    • @duffy666
      @duffy666 10 months ago +44

      In a good way.

    • @Munchkin303
      @Munchkin303 9 months ago +62

      He became hopeful and humble

    • @mikicerise6250
      @mikicerise6250 9 months ago +30

      The therapy worked. 😉

    • @Rajmanov
      @Rajmanov 9 months ago

      no therapy at all, just wisdom @@mikicerise6250

    • @darxoonwasser
      @darxoonwasser 9 months ago +24

      @@Munchkin303 Linus hopeful and humble Torvalds

  • @illyam689
    @illyam689 10 months ago +460

    I think that Linus, in 2024, should run his own podcast

    • @TalsBadKidney
      @TalsBadKidney 10 months ago +43

      and his first guest should be joe rogan

    • @SergioGomez-qe3kn
      @SergioGomez-qe3kn 10 months ago +76

      @@TalsBadKidney
      Linus: - "What language do you think should be taught first at elementary school, Joe?"
      Joe: - "Jujitsu"

    • @turolretar
      @turolretar 10 months ago +4

      @@TalsBadKidney this is such a great idea

    • @ton4eg1
      @ton4eg1 10 months ago +1

      And do stand-up.

    • @madisonhanberry6019
      @madisonhanberry6019 10 months ago +5

      He's such a great speaker, but I doubt he would have much time between managing Linux, family life, and whatever else

  • @shroomer3867
    @shroomer3867 9 months ago +134

    At 1:10 you can see Linus locating the Apple user and considering killing him on the spot, but he decides against it and continues his thought

    • @Yamagatabr
      @Yamagatabr 9 months ago +3

      😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂

    • @stephanste1n
      @stephanste1n 9 months ago +2

      lmao

    • @khalifarmili1256
      @khalifarmili1256 9 months ago +2

      😂😂

    • @王甯-h2x
      @王甯-h2x 7 months ago +1

      HAHAHA

    • @cromulence
      @cromulence 6 months ago

      Linus literally uses a MacBook…

  • @ChrisM541
    @ChrisM541 10 months ago +74

    For experienced programmers, most of the mistakes they make can be categorised as 'stupid', i.e. a simple oversight, where the fix is equally, stupidly trivial. It's exactly the same with building a PC - you might have done it 'millions' of times, but forgetting something stupid in the build is always stupidly easy to do, and though you might not do it often, you will inevitably still do it at some point. Unfortunately, the fixes seem to always take forever to find.

    • @Jonas-Seiler
      @Jonas-Seiler 10 months ago +16

      That’s the only good take on ai in the video, and maybe the only truly helpful thing ai might ever be used for, finding the obvious mistakes humans make because they’re thinking about more important shit.

    • @autohmae
      @autohmae 9 months ago +5

      That's the problem with computers, you need to do it all 100% correct or it won't work.

    • @hallrules
      @hallrules 9 months ago +6

      @@autohmae That also doubles as the good thing about computers, because it will never do something that you didn't tell it to do

    • @chunkyMunky329
      @chunkyMunky329 9 months ago +4

      I disagree with this. Simple bugs are easier to find, so we find more of them. The other bugs are more complex which makes them harder to find, so we find less of them. For example, not realising that the HTTP protocol has certain ramifications that become a serious problem when you structure your web app a certain way.

    • @ChrisM541
      @ChrisM541 9 months ago

      @@chunkyMunky329 It's definitely true that there are always exceptions, though I'd politely suggest "not realising" is primarily a result of inexperience.
      A badly written and/or badly translated URS can lead to significant issues when the inevitable subsequent change requests flood in, especially if there's poor documentation in the code.
      Any organisation is only as good as its QA. We see this more and more in the games industry, where we increasingly, and deliberately, offload the testing aspect onto the end consumer.
      Simple bugs should be easy to find, you'd think, but they're also very, very easy to hide, unfortunately.

  • @ginebro1930
    @ginebro1930 10 months ago +58

    Smart answer from Linus.

  • @nathanmccarthy6209
    @nathanmccarthy6209 9 months ago +10

    There is absolutely no doubt in my mind that things like co-pilot are already part of pull requests that have been merged into the Linux kernel.

  • @piotrek7633
    @piotrek7633 8 months ago +4

    You people don't understand: it was never about whether AI would replace programmers, it was always about whether AI will reduce job positions by a critical amount, so that it's hard to get hired

  • @joemiller8409
    @joemiller8409 9 months ago +57

    the deafening silence when that phone alarm dared to go off mid torvalds dialogue 😆

  • @AlbertCloete
    @AlbertCloete 10 months ago +93

    Those subtle bugs are what LLMs produce copious amounts of, and they take very long to debug. To the degree where you probably would have been better off just writing the code by hand yourself.

    • @xSyn08
      @xSyn08 10 months ago +17

      @@AvacadoJuice-q9b What, like a "Prompt Engineer"? It's ridiculous that this became a thing given how LLMs work.
      It's all about intuition that most people can figure out if they spend a day messing around with it.

    • @joshmogil8562
      @joshmogil8562 10 months ago +4

      Honestly this has not been my experience using GPT4

    • @tbunreall
      @tbunreall 10 months ago +2

      Disagree. Humans constantly create bugs when coding themselves, even subtle ones, even the best of the best. LLMs are amazing. I realized my Python code needed to be multithreaded; I fed it my code, and it multithreaded everything. They are incredible, and this is just the beginning - 5 years will blow people's minds, completely. People who don't see how amazing LLMs are just aren't that bright, in my opinion. (A minimal sketch of that kind of refactor follows this thread.)

    • @asterinycht5438
      @asterinycht5438 10 months ago +2

      that's why you must give the LLM pseudocode, to control the output and be more precise about what you want.

    • @gabrielkdc17
      @gabrielkdc17 10 months ago +7

      It's amusing how we, as programmers, often tell users that if they input poor quality data into the system, they should expect poor quality results. In this case, the fault lies with the user, not the system. However, now we find ourselves complaining about a system when we input low-quality data and receive unsatisfactory results. This time, though, we blame the system instead of ourselves
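
The serial-to-concurrent refactor described a few replies up usually amounts to something like this minimal sketch (hypothetical names; note that for CPU-bound Python work you would reach for processes instead, because of the GIL):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_and_parse(url: str) -> int:
    # Hypothetical I/O-bound unit of work, one call per input.
    return len(url)

urls = ["https://example.com/a", "https://example.com/b"]

# Before: results = [fetch_and_parse(u) for u in urls]
# After: the same per-item work, fanned out across a thread pool.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch_and_parse, urls))
```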

  • @TheNimaid
    @TheNimaid 9 months ago +147

    As someone with a degree in machine learning, hearing him call LLMs "autocorrect on steroids" gave me catharsis. The way people talk and think about the field of AI is totally absurd and grounded only in sci-fi. I want to vomit every time someone tells me to "just use AI to write the code for that" or similar.
    AI, as it exists now, is the perfect tool to aid humans (think pair programming, code auto-completion for stuff like simple loops, rough prototypes that can inspire new ideas, etc.). Don't let it trick you into thinking it can do anyone's job though. It's just a digital sycophant, never forget that.

    • @vuralmecbur9958
      @vuralmecbur9958 8 months ago +12

      Do you have any valid arguments that make you think that it cannot do anyone's job or is it just your emotions?

    • @legendarymortarplayer9453
      @legendarymortarplayer9453 8 months ago

      @@vuralmecbur9958 if your job relies on not thinking and copy-pasting code, then yes, it can replace you. But if it doesn't, if you understand code and can modify it properly to your needs and specifications, it cannot replace you. I work on AI as well

    • @user-zf4nq1dy2n
      @user-zf4nq1dy2n 7 months ago

      @@vuralmecbur9958 it's not about AI not being an "autocorrect on steroids". It's about there being a lot of jobs out there that could be done by autocorrect on steroids

    • @DDracee
      @DDracee 7 months ago

      @@vuralmecbur9958 do you have any valid arguments as to why people will get laid off instead of companies scaling up their projects? A 200-300% increase in productivity simply means a 200-300% increase in future project sizes; the field you're working in is already dying anyway if scaling up isn't possible, and you're barking up the wrong tree
      where I'm working, we're constantly turning down projects because there's too much to do and no skilled labour to hire (avionics/defense)

    • @jeromemoutou9744
      @jeromemoutou9744 7 months ago

      ​@@vuralmecbur9958 go prompt it to make you a simple application and you'll see it's not taking anyone's job anytime soon.
      If anything, it's an amazing learning tool. You can study code and anything you don't understand, it will explain in depth. You don't quite grasp a concept? Prompt it to explain it further.

  • @bergonius
    @bergonius 9 months ago +10

    "You have to kinda be a bit too optimistic at times to make a difference" -This is profound

  • @roylxp
    @roylxp 9 months ago +4

    no one commenting on the moderator? He is doing a great job driving the conversation

  • @duffy666
    @duffy666 10 months ago +147

    "we are all autocorrects on steroids to some degree" - agree 100%

    • @alang.2054
      @alang.2054 10 months ago +9

      Could you elaborate on why you agree? Your comment adds no value right now

    • @RFC3514
      @RFC3514 10 months ago +13

      I think he really meant to say "autocomplete", because it basically takes your prompt and looks for the answer most likely to follow it, based on material it has read.
      Which _is_ indeed kind of how humans work... if you remove creativity and the ability to _interact_ with the world, and only allow them to read books and answer written questions.
      And by "creativity" I'm including the ability to spot gaps in our own knowledge and do experiments to acquire _new_ information that wasn't part of our training.

    • @sbqp3
      @sbqp3 10 months ago +19

      The thing people with the interviewer's mindset miss is what it takes to predict correctly. The language model has to have an implicit understanding of the data in order to predict. ChatGPT uses a large language model to produce text, but you could just as well use one to produce something else, like actions in a robot. Which is kind of what humans do; they see and hear things, and act accordingly. People who dismiss the brilliance of large language models on the basis that they're "just predicting text" are really missing the point.

    • @RFC3514
      @RFC3514 10 months ago +1

      @@sbqp3 - No, you couldn't really use it to "produce actions in a robot", because what makes ChatGPT (and LLMs in general) reasonably competent is the huge amount of material it was trained on, and there isn't anywhere near the same amount of material (certainly not in a standardised, easily digestible form) of robot control files and outcomes.
      The recent "leap" in generative AI came from the volume of training data (and ability to process it), not from any revolutionary new algorithms. Just more memory + more CPU power + easy access to documents on the internet = more connections & better weigh(t)ing = better output.
      And in any application where you just don't have that volume of easily accessible, easily processable data, LLMs are going to give you poor results.
      We're still waiting for remotely competent self-driving vehicles, and there are billions of hours of dashcam footage and hundreds of companies investing millions in it. Now imagine trying to use a similar machine-learning model to train a mobile industrial robot, that has to deal with things like "finger" pressure, spatial clearance, humans moving around it, etc.. Explicitly coded logic (possibly aided by some generic AI for object recognition, etc. - which is already used) is still going to be the norm for the foreseeable future.

    • @duffy666
      @duffy666 10 months ago +4

      @@alang.2054 I like his comment because most of the thinking humans do is in fact System 1 thinking - which is reflex-like and on a similar level to what LLMs do.

  • @alextrebek5237
    @alextrebek5237 10 months ago +61

    (Average typing speed × number of working days a year) / 6 words per line of code ≈ 1M LOC/year. But we don't write that much. Why? Most coding is just sitting and thinking, then writing little.
    LLMs are great for getting started with a new language or library, or for writing repetitive data structures and algorithms, but bad for production logic (design patterns such as the Strategy pattern), because they don't logically understand the problem domain - which, as the napkin math just showed, is the largest part, the part coding assistants aren't improving. (A worked version of this napkin math follows this thread.)

    • @antman7673
      @antman7673 10 months ago +2

      I wouldn't even agree.
      Imagine yourself just getting the job to code project X.
      In that case, you can rely on a very limited amount of information.
      Within the right context, there are very few ways in which LLMs fail.

    • @coryc9040
      @coryc9040 10 months ago +1

      Maybe if many programmers sit down and explain their thought process on multiple different problems it can learn to abstract the problem solving method programmers use. While the auto correct on steroids might be technically accurate for what it's doing, the models it builds to predict the next token are extremely sophisticated and for all we know may have some similarity to our logical understanding of problem domains. Also LLMs are still in their infancy. There are probably controls or additional complexity that could be added to address current shortcomings. I'm skeptical of some of the AI hype, I'm equally skeptical of the naysayers. I tend to think the naysayers are wrong based on what LLMs have already accomplished. Plenty of people just 2-3 years ago would've said some of the things they are doing now are impossible.

    • @SimGunther
      @SimGunther 10 months ago +5

      Read the original documentation and if there's something you don't understand, Google it and be social. Only let the LLM regurgitate that part of the docs in terms you understand as a last resort.
      I'm surprised at the creativity LLMs have in their own context, but don't replace reading the docs and writing code with LLMs. You must understand why the algo/struct is important and what problems each algorithm solves.
      If you think LLMs replace experience, you're surely mistaken and you'll be trapped in learned helplessness for eternity.

    • @mobbs8229
      @mobbs8229 10 months ago +4

      I literally asked ChatGPT today to explain the MVCC pattern (which I could've sworn is called the MVVC pattern, but it corrected me to that) and its explanation got worse with every attempt at telling it it was not doing a good job.

    • @RobFisherUK
      @RobFisherUK 10 months ago +3

      @@SimGunther reading the docs only works if you know what you're looking for. LLMs are great at understanding your badly written question.
      I once proposed a solution to a problem I had to ChatGPT and it said: that sounds similar to the technique in statistics called bootstrapping. Opened up a whole new box of tricks previously unknown to me.
      I could have spent months cultivating social relationships with statisticians, but it would have been a lot more work and I'm not sure they'd have the patience.
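
A worked version of the napkin math from the top comment, with the assumptions spelled out (round numbers chosen for the sketch, not measured figures):

```python
# Assumed round numbers for the estimate.
words_per_minute = 40            # modest typing speed
typing_minutes_per_day = 6 * 60  # six hours at the keyboard
working_days_per_year = 220
words_per_line_of_code = 6

loc_per_year = (words_per_minute * typing_minutes_per_day
                * working_days_per_year) / words_per_line_of_code
print(f"{loc_per_year:,.0f}")  # 528,000 - the ~1M LOC/year order of magnitude
```

Against the few thousand lines a typical programmer actually ships in a year, the gap is the "sitting and thinking" the comment points at.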

  • @samson_77
    @samson_77 10 months ago +44

    Good interview, but I disagree with the introduction, where it is said that LLMs are "auto-correction on steroids". Yes, LLMs do next-token prediction, but that's just one part. The engine of an LLM is a giant neural network that has learned a (more or less sophisticated) model of the world. During inference, input information is matched against that model and, based on those correlations, new output information is created, which leads, in an iterative process, to a series of next tokens. So the magic happens when input information is matched against the learned world model, which leads to new output information.

    • @thedave0004
      @thedave0004 10 months ago +19

      Agreed! This is the type of thing people say somewhat arrogantly when they've only had a limited play with the modern LLMs. My mind was blown when I wrote a parser of what I would call medium complexity in Python for a particular proprietary protocol. It worked great, but it was taking 45 mins to process a day's worth of data, and I was using it every day to hunt down a weird edge case that only happened every few days. So out of interest I copied and pasted the entire thing into GPT-4 and said "This is too slow, please re-write it in C and make it faster", and it did. Multiple files, including headers, all perfect. It compiled first time, and did in about 30s (I forget how long exactly, but that ballpark) what my hand-written Python program was doing in 45 mins. I don't think I've EVER written even a simple program that compiled first time, let alone something medium complicated.
      To call this autocomplete doesn't give it the respect it deserves. GPT-4 did in a few seconds what would have taken me a couple of days (if I even managed it at all; I'm not an expert in C by a long stretch).

    • @davidparker5530
      @davidparker5530 10 months ago +9

      I agree, the reductionist argument trivializes the power of LLMs. We could say the same thing about humans, we "just predict the next word in a series of sentences". That doesn't capture the power and magic of human ingenuity.

    • @thegoncaloalves
      @thegoncaloalves 10 months ago +4

      Even Linus says that. Some of the things that LLMs produce are almost black magic.

    • @mitchhudson3972
      @mitchhudson3972 10 months ago +9

      So... Autocorrect

    • @mitchhudson3972
      @mitchhudson3972 10 months ago +6

      @@davidparker5530 humans don't just predict the next word though. LLMs do. Neural networks don't think; all they do is guess based on some inputs. Humans think about problems and work through them; LLMs by nature don't think about anything more than what they've seen before.

  • @naldorayn
    @naldorayn 7 months ago +1

    My only gripe with AI-generated code currently is when it writes or suggests code that contains security vulnerabilities, or worse, leaks credentials and secrets. AI may accelerate human productivity, but on the other side, it may also accelerate human stupidity.

  • @7rich79
    @7rich79 10 months ago +22

    Personally I think that while it will be extremely useful, there will also, over time, be this belief that the "computer is always right". In that sense we will surely end up with a scandal like Horizon in the future, but this time it will be much harder to prove that there was a fault in the system.

    • @arentyr
      @arentyr 9 months ago

      Precisely this. With Horizon it took years of them being incredulous that there were any bugs at all, that it must be perfect and that instead thousands of postmasters were simply thieves. Eventually the bugs/errors became so glaring (and finally maybe someone competent actually looked at the code) that it was then known that the software was in fact broken. What then followed were many many more years of cover ups and lies, with people mainly concerned with protecting their own status/reputation/business revenue rather than do what was right and just.
      Given all this, the AI scenario is going to be far worse: the AI system that “hallucinates” faulty code will also “hallucinate” spurious but very plausible explanations.
      99.99% won’t have the requisite technical knowledge to determine that it is in fact wrong. The 0.01% won’t be believed or listened to.
      The terrifying prospect of AI is in fact very mundane (not Terminator nonsense): its ability to be completely wrong or fabricate entirely incorrect information, and then proceed to explain/defend it with seemingly absolute authority and clarity.
      It is only a matter of time before people naturally entrust them far too much, under the illusion that they are never incorrect, in the same way that one assumes something must be correct if 99/100 people believe it to be so. Probability/mathematics is a good example of where 99/100 might think something is correct, but in fact they’re all wrong - sometimes facts can be deeply counterintuitive, and go against our natural intelligence heuristics.

    • @MrWizardGG
      @MrWizardGG 8 months ago

      Maybe. But it depends what we allow ai to be in charge of. Remember, if we vote out the gop we can like pass laws again to do things for the benefit of the people including ai regulations if needed.

  • @TjPhysicist
    @TjPhysicist 8 months ago +2

    I love this little short. I think what both of them said is true. LLMs are definitely "autocorrect on steroids", as it were. But honestly, a lot of programming, and really a lot of jobs in general, don't require a higher level of intelligence; as Linus said, we are all autocorrect on steroids to some degree, because for the most part, for a lot of things we do, that's all you need. The problem is knowing the limitations of such a tool and not attempting to subvert human creativity with it.

  • @vaibhawc
    @vaibhawc 9 months ago +33

    Always love to hear Sir Linus Hopeful Humble Torvalds

    • @latt.qcd9221
      @latt.qcd9221 9 months ago +1

      Sir Linus Hopeful *_And_* Humble Torvalds

  • @Spencer-r6r2l
    @Spencer-r6r2l 10 months ago +29

    A responsible programmer might use AI to generate code, but they would never submit it without understanding it and testing it first.

    • @traveller23e
      @traveller23e 10 months ago +14

      Although by the time you read and fully understand the code, you may as well have written it.

    • @Spencer-r6r2l
      @Spencer-r6r2l 10 months ago +2

      @@traveller23e if the code fails for some reason, I'll be glad I took the time to understand it.

    • @knufyeinundzwanzig2004
      @knufyeinundzwanzig2004 9 months ago +4

      @@traveller23e actually true. if you understand every aspect of the code, why wouldn't you just have written it yourself? at some point when using llms these people will become used to the answers being mostly correct, so they'll stop checking. productivity 200% bla bla, yeah sure dude. man, llms will ruin modern software even more; today's releases are already full of bugs

    • @MrHaggyy
      @MrHaggyy 9 months ago

      @@traveller23e Well, the same goes for the compiler. If you "fully understand" the code, there should never be a warning or error. Most tools like GitHub Copilot require you to write anyway, but they give you the option of writing a few dozen chars with a single keystroke. This is pretty nice if most of your work is assembling different algorithms or data structures, not creating new ones.

    • @Mpanagiotopoulos
      @Mpanagiotopoulos 9 months ago

      I submit code I don't understand all the time; I simply ask the LLM in English to explain it to me. I have written a whole app in JavaScript without learning JS in my entire life

  • @lmamakos
    @lmamakos 9 months ago +18

    Is cut-and-paste from StackOverflow that far from asking the LLM for the answer?

    • @derekhettinger451
      @derekhettinger451 8 months ago +21

      ive never been insulted by gpt

    • @David-gu8hv
      @David-gu8hv 8 months ago +1

      @@derekhettinger451 Ha Ha!!!!!

    • @VoyivodaFTW1
      @VoyivodaFTW1 8 months ago

      Lmao. Well, a senior dev is likely on the other end of a stack overflow answer, so basically yea

    • @pauldraper1736
      @pauldraper1736 6 months ago

      @@VoyivodaFTW1 optimistic I see

    • @mongoosae
      @mongoosae 5 months ago

      Any help forum is just a distributed neural net when you think about it

  • @timothybruce9366
    @timothybruce9366 7 months ago +1

    My last company started using AI over a year ago. We write the docblock and the AI writes the function, and it's largely correct. This is production code in smartphones and home appliances worldwide.
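
That docblock-first workflow looks roughly like the sketch below (a hypothetical function; the human writes the signature and docstring, the body stands in for what the assistant would propose and a reviewer would then check):

```python
def clamp(value: float, low: float, high: float) -> float:
    """Return value limited to the inclusive range [low, high].

    Human-written docblock: this is the part the engineer writes first,
    as the spec the assistant generates the body from.
    """
    # Assistant-proposed body, kept only after review.
    return max(low, min(value, high))

assert clamp(5.0, 0.0, 3.0) == 3.0
```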

  • @nox5282
    @nox5282 9 months ago +6

    I use AI as a learning tool; if I get stuck I bounce ideas off it like I would with a person, then use that as a basis to keep going. I discover things I didn't consider and continue reading other sources. Right now AI is not good at teaching you, but it's great for getting directions to explore, or names of things and concepts to look up.
    That being said, the next generation will be unable to form thoughts without AI; how many people still know how to do long division by hand?

  • @vishnurajbhar007
    @vishnurajbhar007 9 months ago +2

    In such a short video, one can easily witness the brilliance of the man!!!

  • @Standbackforscience
    @Standbackforscience 9 months ago +9

    There's a world of difference between using AI to find bugs in your code, vs using AI to generate novel code from a prompt. Linus is talking about the former, AI Bros mean the latter.

  • @mikezooper
    @mikezooper 5 months ago +1

    Amazing that Linus accepts AI. Some techies are disparaging of AI. A truly smart person looks at the pros and cons, rather than just being dogmatically for or against.

  • @CausallyExplained
    @CausallyExplained 9 months ago +8

    Linus is definitely not a sheep; you can tell just how different he is from the general crowd.

    • @chunkyMunky329
      @chunkyMunky329 9 months ago +2

      He is different, but something I've noticed is that smart people are great at understanding things the rest of us struggle with, but they can be kinda dumb when it comes to simple common sense. Like, for him not to understand the downside of an AI writing bad code for you is just kinda silly. It should be obvious that a more reliable tool would be better than a less reliable tool.

    • @justsomerandomnesss604
      @justsomerandomnesss604 9 months ago +3

      @@chunkyMunky329 There is no "more reliable tool" though.
      It is about the tools in your toolbox in general.
      Just because your hammer is really good at hammering in a nail, you're not gonna use it to saw a plank.
      Same with programming. You use the tools that get the job done.

    • @pauldraper1736
      @pauldraper1736 6 months ago

      @@chunkyMunky329 You have an implicit assumption that people are more reliable tools than LLMs. I think that is up for debate.

    • @chunkyMunky329
      @chunkyMunky329 6 months ago

      @@pauldraper1736 "people" is a vague term. Also, I never said it was a battle between manual effort and LLMs. It should be a battle between an S-tier human invention such as a compiler and an LLM. Great human-built software will make ChatGPT want to delete itself

    • @pauldraper1736
      @pauldraper1736 6 months ago

      @@chunkyMunky329 a linter is only one possible use of AI

  • @bryn494
    @bryn494 4 months ago +1

    It's not the Artificial Intelligence that people should be worried about. It's the Natural Intelligence we need to watch the most...

  • @taymossninjapriest
    @taymossninjapriest 10 months ago +4

    This feels like it's lagging behind the state of things right now. I don't think it's a serious question whether LLMs will be useful for coding. They already are.

  • @brentlidstone1982
    @brentlidstone1982 3 months ago

    This was surprisingly not what I was expecting him to say, and yet I simultaneously respect him even more for saying it.

  • @br3nto
    @br3nto 10 months ago +15

    LLMs are interesting. They can be super helpful for writing out a ton of code from a short description, letting you formulate an idea really quickly, but often the finer details are wrong. That is, using an LLM to write unique code is problematic. You may want the basic structure of idiomatic code, but then introduce subtle differences. When doing this, the LLM seems to struggle, often suggesting methods that don't exist, or used to exist, or mixing methodologies from multiple versions of the library in use. E.g. trying to use WebApplicationFactory in C#, but introducing some new reusable interfaces to configure the services and WebApplication that can be overridden in tests: it couldn't find/suggest a solution. It's a reminder that it can only write code it's seen before. It can't write something new. At least not yet.

    • @elle305
      @elle305 10 months ago +9

      you'll spend more time making sure it didn't add confident errors than it would take to write the code in the first place. complete gimmick only attractive to weak programmers

    • @br3nto
      @br3nto 10 months ago +3

      @@elle305 I don't think that's accurate. Sure, you need the expertise to spot errors. Sure, you need the expertise to know what to ask for. But I don't agree with the idea that you'll take more time with LLMs than without. It's boosted my productivity significantly. It's boosted my ability to try new ideas quickly and iterate quickly. It's boosted my ability to debug problems in existing code. It's been incredibly useful. It's a soundboard. It's like doing pair programming but you get instant code. I want more of it, not less.

    • @elle305
      @elle305 10 months ago +3

      @@br3nto i have no way to validate your personal experience because i have no idea of your background. but I'm a full time developer and have been for decades, and I'm telling you that reviewing llm output is harder and more error prone than programming. there are no shortcuts to this discipline and people who look for them tend to fail

    • @Jonas-Seiler
      @Jonas-Seiler 10 months ago

      @@elle305 it's no different for any other discipline. but sometimes doing it the hard way (fucking around trying to make the AI output work somehow) is more efficient than doing it the right way, especially for one-off things, like trying to cobble together an assignment. and unfortunately, more often than not, weak programmers (writers, artists, ...) are perfectly sufficient for the purposes of most companies.

    • @elle305
      @elle305 10 months ago

      @@Jonas-Seiler i disagree

  • @programmingwithyunusemrevu7222
    @programmingwithyunusemrevu7222 9 months ago +1

    For those commenting that there won't be coding in a couple of years, I'd like to remind you of scientific calculators and the software for them. We didn't stop doing math by hand. We just made some tasks faster and more accurate. You will always need to learn the 'boring' parts even if there is a 'calculator'. Your brain needs the boring stuff to create more complex results.

  • @sfacets
    @sfacets 10 months ago +21

    If programmers aren't debugging their own work, then they will gradually lose the ability to do so. Just like when a child learns to multiply with a calculator and not in their mind - they lose the ability to multiply and become reliant on the machine.
    Programmers learn as they program. It is mind-expanding work. Look at Torvalds and you see a person who is highly intelligent, because he has put the work in over many years.
    We can become more efficient programmers using AI tools - but it will come at a cost.
    "Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to do homage, makes us utterly blind to the essence of technology." - Martin Heidegger
    When a programmer, for example, is asked to check a solution given by AI, and lacks the competency to do so (because, like the child, they never learned the process), then this is a dangerous position we as humans are placing ourselves in - caged in inscrutable logic that will nonetheless come to govern our lives.

    • @JhoferGamer
      @JhoferGamer 9 months ago

      yep

    • @dan-cj1rr
      @dan-cj1rr 9 months ago +2

      yep, but companies don't care on the spot; they want the feature as fast as possible and the cheapest way

    • @knufyeinundzwanzig2004
      @knufyeinundzwanzig2004 9 months ago

      nicely put

  • @Bapuji42
    @Bapuji42 8 months ago +2

    It can write code, but I'm not sure it can design software. It can't really reason. The parameters still have to be defined by a reasoning being.

  • @kibiz0r
    @kibiz0r 9 months ago +24

    As a central figure in the FOSS movement, I'm surprised he doesn't have any scathing remarks about OpenAI and Microsoft hijacking the entire body of open source work to wrap it in an opaque for-profit subscription service.

    • @nothingtoseehere93
      @nothingtoseehere93 7 months ago

      He has to be careful now that the SJWs neutered him and sent him to tolerance camp. Thank the people who wrote absolute garbage like the contributor covenant code of conduct

    • @haroldcruz8550
      @haroldcruz8550 7 months ago +1

      Then you're not in the loop. Linus was never the central figure of the FOSS movement. While his contribution to the Linux Kernel is appreciated he's not really considered one of the leaders when it comes to the FOSS movement.

    • @jasperdevries1726
      @jasperdevries1726 5 months ago +1

      @@haroldcruz8550 Well said. I'd expect stronger opinions from Richard Stallman for instance.

  • @UnixPerdunix
    @UnixPerdunix 4 months ago +1

    I like Linus' calming voice, it's soothing

  • @DAG_42
    @DAG_42 9 months ago +21

    I'm glad he corrected the host. We are indeed all basically autocorrect to the extent LLMs are. LLMs are also creative and clever, at times. I get the feeling the host hasn't used them much, or perhaps at all

    • @kralg
      @kralg 9 months ago +13

      It _seems_ to be creative and it _seems_ to be clever, especially to those who are not. The host was fully correct in stating that it has nothing to do with "intelligence"; it only _seems_ to be intelligent.

    • @doomsdayrule
      @doomsdayrule 9 months ago +3

      @@kralg If we made a future LLM that is indistinguishable from a human being, that answers questions correctly, that can solve novel problems, that "seems" creative... what is it that distinguishes our intelligence from the model's?
      It's just picking one token after the next, but isn't that what I'm also doing while writing this comment? In my view, there can certainly be intelligence involved in those simple choices.

    • @kralg
      @kralg 9 months ago +2

      @@doomsdayrule Intelligence is much more than just writing text. Our decisions are based not only on lexical facts, but on our personal experiences, personal interests, emotions, etc. I cannot and am not going to go much deeper into that, but it must be way more complex than a simple algorithm based on a bunch of data.
      I am saying nothing less than that you will never ever be able to make a future LLM that is indistinguishable from a human being. Of course, when you are presented with just a text written by "somebody", you may not be able to figure it out, but if you start living with a person controlled by an LLM, you will notice much sooner than later. It is all because the bunch of data these LLMs are using is missing one important thing: personality. And this word is highly related to intelligence.

    • @KoflerDavid
      @KoflerDavid 8 months ago +5

      @@doomsdayrule As I am writing this comment, I'm not starting with a random word like "As" and then trying to figure out what to write next. (Actually, the first draft started with "When".)
      I have a thought in mind, and then somehow pick a sentence pattern suitable for expressing it. Then I read it over (usually while still typing) and revise. At some point, my desire to fiddle with the comment is defeated by the need to do something else with my day, and I submit the reply. And then I notice obvious possibilities for improvement and edit what I just submitted.

    • @kralg
      @kralg 8 months ago

      @@MarcusHilarius One aspect of this is that we are living in an overhyped world. Just in recent years we have heard so many promises like the ones you made. Think of the promises made by Elon Musk and other questionable people. The marketing around these technologies is way "in front" of the reality. If there is just a theoretical possibility of something, the marketing jumps on it; they create thousands of believers with the obvious aim of gathering support for further development. I think it is just smart to be cautious.
      The other aspect is that many believers do not know the real details of the technologies they believe in. The examples you mentioned are not in the future; to some extent they are available now. We call it automation, and it does not require AI at all. Instead it relies on sensor technology and simple logic. Put an AI sticker on it and sell a lot more.
      Sure, machine learning will be a great tool in the future, but not much more. We are in the phase of admiration now, but soon we will face the challenges and disadvantages of it, and we will just live with them as we did with many other technologies from the past.

  • @mingzhu8093
    @mingzhu8093 9 months ago +2

    Program-generated code goes back decades; if you've ever used an ORM, almost all of them generate tables from classes and SQL, and vice versa. But I don't think anybody just takes it as-is without reviewing.
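
    (To make that concrete: a toy sketch, in Rust, of the kind of class-to-SQL generation an ORM performs. The table and column names are invented for illustration; real ORMs derive this from model annotations.)

        // Toy schema generation: map a field description to a CREATE TABLE statement.
        fn create_table_sql(table: &str, columns: &[(&str, &str)]) -> String {
            let cols: Vec<String> = columns
                .iter()
                .map(|(name, ty)| format!("{name} {ty}"))
                .collect();
            format!("CREATE TABLE {table} ({});", cols.join(", "))
        }

        fn main() {
            let sql = create_table_sql(
                "users",
                &[("id", "INTEGER PRIMARY KEY"), ("email", "TEXT")],
            );
            // Prints: CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
            println!("{sql}");
        }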

    • @caLLLendar
      @caLLLendar 9 months ago

      Reviewing can be automated.

  • @avananana
    @avananana 10 months ago +27

    I personally believe, much like many others, that AI/ML will only speed up the rate at which bad programmers become even worse programmers. Part of the art of writing software is writing it efficiently, and you can't do that if you always use tools to solve your problems for you. You need to experience the failures and downsides in order to fully understand how it works. There is a line where it turns from an efficient tool into a tool used to avoid actually thinking about solutions. I fully believe there is a place for AI/ML in making software, but if people blindly use it to write software for them, it'll just lead to hard-to-find bugs and code that nobody understands because nobody actually wrote it.

    • @cookie_space
      @cookie_space 9 months ago +7

      You don't always have to reinvent the wheel when it comes to learning how to code.
      Everyone starts by copying code from Stack Overflow, and many still do that for novel concepts they want to understand.
      It can be pretty helpful to ask AI for specific things instead of spending hours trying to search for something fitting...
      Sure thing, if you just stop at copying, you don't learn anything.

    • @conchitacaparroz
      @conchitacaparroz 9 months ago

      @@cookie_space But I think that's the thing: the risk of "just copying" will be higher, because all the AI tools and AI features in our IDEs will make it a lot easier and more likely that the code comes ready-made for you.

    • @Markus-iq4sm
      @Markus-iq4sm 9 months ago +1

      @@cookie_space Everyone? Man, don't throw everyone into the same bucket. Are you the guy who can't even write a bubble sort from memory and needs to google every single solution? Well, that is sad.

    • @cookie_space
      @cookie_space 9 months ago +4

      @@Markus-iq4sm I wasn't aware that your highness was born with the knowledge of every programming language and concept already imprinted in your brain. It might be hard to fathom for you, but some of us actually have to learn programming at some point.

    • @Markus-iq4sm
      @Markus-iq4sm 9 months ago +1

      @@cookie_space You learn nothing by copy-pasting; actually, it will even make you worse, especially as a beginner.

  • @johncompassion9054
    @johncompassion9054 5 months ago

    This is why Linus is Linus. Just look at his intelligence, attitude to life and optimism. No negativity, rivalry or hate. My respect.

  • @MrVampify
    @MrVampify 10 months ago +24

    I think LLM technology will make bad programmers faster at being bad programmers, and hopefully push them to become better programmers faster as well.
    LLMs, I think, will make good programmers more efficient at writing the good code they would probably already write.

    • @melvin6228
      @melvin6228 10 months ago +7

      LLMs mean you don't need to remember exactly how to write things. You still have to be able to read the result and have good judgement about where the code is subpar.

    • @ougonce
      @ougonce 10 months ago +7

      @@melvin6228 This is nonsense. How can you audit code that you yourself don't remember how to write?

    • @yjlom
      @yjlom 10 months ago +2

      @@ougonce Is that function you use twice a year called "empty_foo_bar" or "clear_foo_bar"? Or maybe "foo_bar_clear"? Those kinds of questions are very important and annoying to answer when writing, and useless when reading.

    • @unkarsthug4429
      @unkarsthug4429 10 months ago +5

      @@yjlom Or even just something as simple as the question of how you get the length of an array in the particular language you are using. After using enough languages, they kind of all blend together, and I can't remember if this one is x.length, x.length(), size(x), or len instead of length somewhere. I'm used to flipping between a lot of languages quickly, and it's really easy to forget the specifics of a particular one sometimes, even if I understand the flow I would like the program to follow. Essentially, having an AI that can act as a sort of active documentation can really help.

    • @RobFisherUK
      @RobFisherUK 10 months ago

      I was using ChatGPT to help me write code just today. I'm making a Python module in Rust and I'm new to Rust.
      I wanted to improve my error handling. I asked how to do something and ChatGPT explained that I could put Results in my iterator and just collect at the end to get a vector if all the results are ok or an error if there was a problem. I didn't understand how that worked and asked a bunch of follow-up questions about various edge cases. ChatGPT explained it all.
      Several things happened at once: I got an immediate, working solution to my specific problem. I didn't have to look up the functions and other names. And I got tutored in a new technique that I'll remember next time I have a similar situation.
      And it's not just the output. It's that your badly explained question, where you don't know the correct terminology, gets turned into a useful answer.
      On a separate occasion I learned about the statistical technique of bootstrapping by coming up with a similar idea myself and asking ChatGPT for prior art. I wouldn't have been able to search for it without already knowing the term.
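
      (The collect-on-Results idiom described above is standard Rust and worth seeing in code. A minimal sketch, with a made-up parsing task: collecting into Result<Vec<_>, _> yields the whole vector if every item is Ok, or the first error otherwise.)

          // Each parse returns a Result; `collect` short-circuits on the first Err.
          fn parse_all(inputs: &[&str]) -> Result<Vec<i32>, std::num::ParseIntError> {
              inputs
                  .iter()
                  .map(|s| s.parse::<i32>())
                  .collect() // Ok(vec![...]) if all parse, else the first error
          }

          fn main() {
              assert_eq!(parse_all(&["1", "2", "3"]), Ok(vec![1, 2, 3]));
              assert!(parse_all(&["1", "oops", "3"]).is_err());
          }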

  • @calmhorizons
    @calmhorizons 9 months ago +3

    There is a fundamental philosophical difference between the type of wrong humans do and the type AI does (in its present form). I think programmers are in danger of seriously devaluing the difference between incidental errors and constitutive errors - that is, humans are wrong accidentally, while LLMs are wrong by design - and while we know we can train people to reduce the former, it remains to be seen whether the latter is inherent in the implementation realities of LLMs, i.e. in relying on statistical inference as a substitute for reason.

    • @caLLLendar
      @caLLLendar 9 months ago

      You got stuck in your own word salad. Start over; think like a programmer. Break the problem down. How would you go about proving the LLM's code is correct using today's technology?

    • @calmhorizons
      @calmhorizons 9 months ago +1

      @@caLLLendar
      First, I don't appreciate your tone. I know this is YouTube and standards of discourse here are notoriously low, but there is no need to be rude.
      I wasn't making a point about engineering.
      The issue is not the code; code can of course be unit tested etc. for validity.
      The issue is that the method of producing the code is fundamentally statistical, and not arrived at through any form of reason. This means there is a ceiling of trust that we must impose if we are to avoid the obvious pitfalls of such an approach.
      As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data - and you, as the developer, if you do not privilege your own problem-solving skills, are increasingly relegated to the role of code babysitter. This is not something to be treated casually.
      Early research is now starting to validate this concern: visualstudiomagazine.com/Articles/2024/01/25/copilot-research.aspx
      These models have their undeniable uses, but I find it depressing how many developers are rushing to proclaim their own obsolescence in the face of a provably flawed (though powerful) tool.

    • @caLLLendar
      @caLLLendar 9 months ago

      @@calmhorizons Have one developer draft pseudocode that is transformed into whatever scripting language is preferred, and then use a boatload of QA tools. The output from the QA tools prompts the LLM. Look at Python Wolverine to see automated debugging. Google the loooooonnnnng list of free open-source QA tools that can be wrapped around the LLMs. The LLMs can take care of most of the code (like writing unit tests, type hinting, documentation, etc.).
      The first thing you'd have to do is get some hands-on experience writing the pseudocode in a style that LLMs and non-programmers can understand.
      From there, you will get better at it and ultimately SEE it with your own eyes. I admit that there are times when I have to delete a conversation (because the LLM seems to become stubborn). However, that too can be automated.
      The result?
      19 out of 20 developers fired. LOL. I definitely wouldn't hire a developer who couldn't come up with a solution for the problems you posed (even if the LLM and tools are doing most of the work).
      Some devs pose the problem and cannot solve it. Other devs think the LLM should be able to do everything (i.e. "Write me a software program that will make me a million dollars next week").
      Both perceptions are provably wrong. As programmers, it is our job to break the problem down and solve it.
      Finally, there are ALREADY companies doing this work (and they are very easy to find).
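
      (A rough sketch of that test-and-retry loop, for concreteness only: ask_llm below is a placeholder stub, not a real API, and a real tool would also parse and apply the model's suggested patch before re-running the tests.)

          use std::process::Command;

          // Placeholder standing in for whatever model API is actually used.
          fn ask_llm(prompt: &str) -> String {
              format!("-- proposed fix for --\n{prompt}")
          }

          fn main() {
              for attempt in 1..=3 {
                  // The test suite is the objective check wrapped around the LLM.
                  let out = Command::new("cargo").arg("test").output().expect("run tests");
                  if out.status.success() {
                      println!("tests green on attempt {attempt}");
                      return;
                  }
                  // Feed the failure output back to the model; a real tool would
                  // apply its suggested patch here before retrying.
                  let failures = String::from_utf8_lossy(&out.stderr);
                  let _suggestion = ask_llm(&failures);
              }
              println!("giving up: a human should look at this");
          }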

    • @vibovitold
      @vibovitold 5 months ago

      @@calmhorizons Exactly. Agreed, and very well put. Respect for taking the time to reply to a rather shallow and asinine comment.
      "As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data"
      I would add that this will likely be exacerbated once more and more AI-generated code makes its way into the training datasets (and good luck filtering it out).
      We already know that this degrades quality (it has already been demonstrated for image generation), because the flaws inherent to the method get amplified as a result.

  • @draoi99
    @draoi99 9 months ago +2

    Linus is always chill about new things.

  • @sidharthv
    @sidharthv 9 months ago +19

    I learned Python on my own from YouTube and online tutorials. And recently I started learning Go the same way, but this time also with the help of Bard. The learning experience has been nothing short of incredible.

    • @Spacemonkeymojo
      @Spacemonkeymojo 9 months ago +3

      You should pat yourself on the back for not asking ChatGPT to write code for you.

    • @incremental_failure
      @incremental_failure 9 months ago

      @@Spacemonkeymojo Only my YouTube comments are written by ChatGPT, not my code.

    • @etziowingeler3173
      @etziowingeler3173 9 months ago

      Bard and code, only for simple stuff

  • @gmxmatei
    @gmxmatei 2 months ago +1

    The future is Subject Oriented Programming!!

  • @lindhe
    @lindhe 10 months ago +3

    "Hopeful and humble" sounds like a good name for a Linux release. Just saying…

  • @aaronstathatos6195
    @aaronstathatos6195 2 months ago +1

    4:31 - It’s crucial to remember that the current state of LLMs is the worst they’ll ever be. They’re continually improving, though I suspect we’ll eventually hit a point of diminishing returns.

    • @Insideoutcest
      @Insideoutcest 2 months ago +1

      How do you make this statement? The worst they'll ever be? Really? How do you come up with this? You don't even program, so what is your opinion worth?

    • @aaronstathatos6195
      @aaronstathatos6195 2 months ago

      @@Insideoutcest I actually just received my first offer doing R&D for a software development company. I specifically specialize in AI product software development (writing code). The statement I made is 100% factual: the current capabilities of models are the worst they will ever be... They will only improve; how much remains to be seen. Could be just 2%, could be 20%. I personally believe there is room for considerable improvement before we hit the frontier of diminishing returns.
      Edit: you know nothing about me, so why tell me I don't program? As if that had any bearing on my previously stated opinion about the improvement of the technology...

    • @Insideoutcest
      @Insideoutcest 2 months ago

      @@aaronstathatos6195 Cope. You're a layman.

  • @msromike123
    @msromike123 4 months ago +3

    Well, my first Arduino project went very well: a medium-complexity differential temperature project with 3 operating modes, hysteresis, etc. I know BASIC and the 4NT batch language. Microsoft Copilot helped me produce tight, memory-efficient, buffer-safe, and well-documented code. So, AI for the win!
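
    (For readers unfamiliar with hysteresis, a minimal sketch of the idea - in Rust rather than Arduino C++, and with made-up thresholds: two set points keep the output from chattering on and off around a single one.)

        // Hysteresis: turn on below one threshold, off above a higher one;
        // between the two, keep the previous state.
        struct Thermostat {
            on_below: f32,
            off_above: f32,
            heating: bool,
        }

        impl Thermostat {
            fn update(&mut self, temp: f32) -> bool {
                if temp < self.on_below {
                    self.heating = true;
                } else if temp > self.off_above {
                    self.heating = false;
                }
                self.heating
            }
        }

        fn main() {
            let mut t = Thermostat { on_below: 18.0, off_above: 21.0, heating: false };
            for temp in [19.0, 17.5, 19.0, 20.5, 21.5, 20.0] {
                println!("{temp} C -> heating: {}", t.update(temp));
            }
        }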

  • @cesarlapa
    @cesarlapa 9 months ago +1

    That Canadian guy was lucky enough to be given the name of a true tech genius

  • @flokar6197
    @flokar6197 9 months ago +14

    I have never programmed before in my life, and with GPT-4 I have written several little programs in Python, from code that helps me rename large numbers of files to more advanced stuff. LLMs give me the opportunity to play around. Only thing I need to learn is how to prompt better.
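
    (A flavor of that kind of one-off bulk-rename utility, sketched here in Rust - the commenter used Python, and the "IMG_" prefix is a made-up example.)

        use std::fs;

        // Strip a hypothetical "IMG_" prefix from every file in the current directory.
        fn main() -> std::io::Result<()> {
            for entry in fs::read_dir(".")? {
                let path = entry?.path();
                if let Some(name) = path.file_name().and_then(|n| n.to_str()) {
                    if let Some(stripped) = name.strip_prefix("IMG_") {
                        fs::rename(&path, path.with_file_name(stripped))?;
                    }
                }
            }
            Ok(())
        }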

    • @kevinmcq7968
      @kevinmcq7968 9 months ago +2

      You're a programmer in my eyes!

    • @twigsagan3857
      @twigsagan3857 9 months ago +4

      "Only thing I need to learn is how to prompt better."
      This is exactly the problem, especially when you scale. You can't prompt your way to a change in an already complex system. It then becomes easier to just code or refactor yourself.

    • @chunkyMunky329
      @chunkyMunky329 9 months ago

      The fact that anybody needs to "prompt better" suggests that LLMs are not very good yet.

    • @flokar6197
      @flokar6197 9 months ago

      @@twigsagan3857 The only problem is when the code exceeds the token limit. Otherwise I can still have the LLM correct my code. Takes a while to get there, but it works. And no, I am not at all a programmer xD

    • @flokar6197
      @flokar6197 9 months ago +1

      @@chunkyMunky329 Huh? LLMs predict the most likely answer, so the way you describe the task is the most important thing in dealing with them.

  • @vijaysulakhe5605
    @vijaysulakhe5605 15 days ago

    I am using AI to learn Arduino coding. It helps me a lot to understand the code and do fault-finding, but when I ask it to make the corresponding circuit diagram, even for a simple problem, it struggles. It explains circuit diagrams very well, though. Needs improvement. Many PDF books are available; just feed them to the AI and improve it?

  • @aniellodimeglio8369
    @aniellodimeglio8369 9 months ago +3

    LLMs are certainly useful and can very much assist in many areas. The future really is open-source models which are explainable and share their training data.

  • @srinivaschillara4023
    @srinivaschillara4023 6 months ago +1

    So nice, and also the quality of the comments on this video... there is hope for humanity.

  • @Kersich86
    @Kersich86 9 months ago +4

    My main fear is that this is something we will start relying on too much, especially when people are starting out: even autocompletion can become a crutch, so much so that a developer becomes useless without it. Imagine that, but for thinking about code. We are looking at a future where all software will be as bad as modern web development.

    • @kevinmcq7968
      @kevinmcq7968 9 months ago

      Technology as an idea is reliable - a hammer will always be a hard thing + leverage. We have relied on technology since the dawn of mankind, so I'm not sure what you're saying here.

    • @knufyeinundzwanzig2004
      @knufyeinundzwanzig2004 9 months ago

      @@kevinmcq7968 LLMs are reliable? How so? Can you name a technology that we have relied on in the past that is as random as LLMs? I am genuinely curious.

    • @diadetediotedio6918
      @diadetediotedio6918 8 months ago

      @@kevinmcq7968
      I think you are just intentionally misunderstanding what he is saying. He is not saying tools are not useful; he is saying that if the tool starts to replace the use of your own mind, it can make you dependent to the point that it impairs your own reasoning skills (and we have some evidence that this is happening - that's why some schools are going back to handwriting, for example; Miguel Nicolelis also has some takes on this matter).

  • @RayDusso
    @RayDusso 8 months ago

    I like how he didn't fall into the trap of AI bashing the host was trying to lead him into. That's how you can differentiate a trend follower from a visionary.

  • @roaringdragon2628
    @roaringdragon2628 10 months ago +9

    I find that in their current state, these models tend to make more work for me deleting and fixing bad code and poor comments than the work they save. It's usually faster for me to write something and prune it than to prune the AI code. This may be partially because it's easier for me to understand and prune my own code than to do the same with the generated stuff, but there is usually a lot less pruning to do without AI.

    • @voltydequa845
      @voltydequa845 9 months ago

      No. Your comment was for me like a breath of fresh air in the middle of all this pseudo-cognitive farting about the so-called AI. No, it is not only you. Those who say otherwise are just posers, actors, mystifying parrots repeating the instilled marketing hype.

    • @rainharlock7616
      @rainharlock7616 1 day ago

      Maybe that's just in the beginning? Eventually, it might become easier to spot someone else's mistake than your own. Also, AI might more easily find your mistakes.

  • @giannirosato4341
    @giannirosato4341 9 months ago +2

    From personal experience, I think LLMs *writing* your code are terrible when you're learning. They will produce bugs that you don't understand as a beginner (speaking from experience). As for explaining stuff, I think they're a bit more useful with that.

    • @kevinmcq7968
      @kevinmcq7968 9 months ago +2

      If you didn't like what the tool produced, then the problem isn't the tool, friend.

    • @caLLLendar
      @caLLLendar 9 months ago

      Keep practicing your prompts (and programming). If you like, I'll help you train (for free).

  • @DemPilafian
    @DemPilafian 10 months ago +4

    Auto-correct can cause bugs like tricking developers into importing unintended packages. I've seen production code that should fail miserably, but pure happenstance results in the code miraculously not blowing up. AI is a powerful tool, but it will amp up these problems.

    • @caLLLendar
      @caLLLendar 9 months ago

      No. Thinking like a programmer, are you able to come up with a solution?

  • @nwic
    @nwic 19 hours ago

    Yes, I accept suggestions, and I read them too. Never accept something you don't know about.

  • @hyperthreaded
    @hyperthreaded 9 months ago +3

    I love how Hohndel disses AI as "not very intelligent" / "just predicts the next word" and Linus retorts that it's actually pretty great lol

    • @hyperthreaded
      @hyperthreaded 9 months ago

      @@EdwardBlair Yeah, I also found it curious that, as he was about to ask Linus about AI in kernel development, he apparently felt an overwhelming need to first vent his own opinion on AI in general, even though that wasn't the topic at hand and he wasn't the person being interviewed.

    • @GSBarlev
      @GSBarlev 9 months ago +3

      I'm an expert in the field and I _still_ think it's "autocorrect on steroids." It's just that I think that autocorrect was a revolutionary tool, even when it was just Markov chains.

  • @kawingchan
    @kawingchan 9 months ago +2

    I'm afraid he didn't get something: going from assembly -> C -> Rust -> (yet higher-level languages) is a whole universe apart from understanding messy human natural language and then translating that into code. There are humans who understand compilers, but no human (yet) understands how a transformer does its "mapping". Linus wasn't trained in machine learning, so in this respect one should discount his opinion.

  • @mdimransarkar1103
    @mdimransarkar1103 10 months ago +3

    Could be a great tool for static analysis.

    • @chunkyMunky329
      @chunkyMunky329 9 months ago +3

      If it were great at static analysis, people would probably already be using it for static analysis.

  • @alleged_STINK
    @alleged_STINK 4 months ago

    "This pattern doesn't look like the usual pattern, are you sure?" awesome

  • @nati7728
    @nati7728 10 months ago +3

    I already feel helpless without IntelliSense. I can imagine how future developers will feel, banging their heads against their keyboards because their LLM won't load with the right context for their environment.

    • @willsamadi
      @willsamadi 10 months ago +1

      I use IntelliSense daily, but I know people who code in raw vim and get more done in a day than I do. AI is going to make typical things easier, and it is going to have limitations for a long time; to do anything outside those limitations, we'll need actual programmers.

  • @ajaypatro1554
    @ajaypatro1554 9 months ago +1

    An LLM in the hands of a junior dev is like a bug-building tool 😂; an LLM in the hands of an experienced senior dev is like a sharpening tool.

  • @HonoredMule
    @HonoredMule 10 months ago +8

    This is the first time I've seen a public figure push back on the human-centric narrative that LLMs are insufficient because (description of LLMs with a false implicit assumption that it contains a distinction from human intelligence). He's also one of the last people in tech I'd expect to find checking human-exceptionalism bias, but that's where assumptions get you.
    Then again, his role as kernel code gatekeeper probably gives him pretty unique insights into the limits of _other_ humans' intelligence, if not also his own. 😉
    Anyway, I hope to see more people calling out this bias, or fewer people relying on it in their arguments. If accepted, it tends to render any following discussion moot.

    • @Jonas-Seiler
      @Jonas-Seiler 10 months ago

      You shouldn't conclude that LLMs aren't dumb as fuck just because they happen to be smarter than you.

  • @LuicMarin
    @LuicMarin 7 months ago

    It is already helping review code. Just look at Million Lint: it's not all AI, but it has aspects where it uses LLMs to help you find performance issues in React code. A similar thing could be applied to code reviews in general.

  • @hyphenpointhyphen
    @hyphenpointhyphen 10 months ago +9

    I think some humans would be glad if they still had the time to hallucinate, dream or imagine things from time to time.

    • @asainpopiu6033
      @asainpopiu6033 9 months ago +1

      good point xD

    • @verdiss7487
      @verdiss7487 9 months ago

      I think most project leads would not be glad if one of their devs submitted a PR for code they hallucinated

    • @hyphenpointhyphen
      @hyphenpointhyphen 9 months ago

      @@verdiss7487 Not what I am talking about.

    • @pueraeternus.
      @pueraeternus. 9 months ago

      late stage ca-

    • @asainpopiu6033
      @asainpopiu6033 9 months ago

      @@pueraeternus. cannibalism?

  • @sj6986
    @sj6986 4 months ago

    It has been a long time since I have seen such a hard argument - both are very right. They will have to master the equivalent of unit testing to ensure that LLM-driven decision-making doesn't become a runaway train. Even if you put a human in to actually "pull the trigger", if the choices are provided by the LLM, then they could be false choices. On the other hand, there is likely a ton of low-hanging fruit that an LLM could mop up in no time. There could be enormous efficiencies in entire stacks, and in all the associated compute, in terms of performance and stability, if code is consistent.

  • @flink1231
    @flink1231 10 months ago +3

    The difference between a hallucination and an idea is the quality of the reasoning behind it. The issue is not that LLMs hallucinate - that may even be a feature in the future - the issue is that they are unable to figure out when a question is objective and whether they know the answer... Not easy to fix, for sure, but I have no doubt it will be fixed one way or another.

    • @alexxx4434
      @alexxx4434 10 months ago +1

      Not until it gains some sort of consciousness.

    • @flink1231
      @flink1231 10 months ago

      @alexxx4434 I think when it does, it will take a while for all to agree that it does... Consciousness is very definition-dependent and looks to me like a moving target (or rather a rising bar to clear).

  • @travismaxwell9805
    @travismaxwell9805 9 months ago +1

    This is my experience with AI coding, and it's probably a telling indication that programmers will always be needed. I script in a CAD environment using a LISP that is not 100% compatible with AutoCAD's LISP. It was fairly compatible up until Visual LISP came out, but not after. Every script it writes fails. It reads well, but never works.

  • @LarsLarsen77
    @LarsLarsen77 9 months ago +4

    This host underestimates how hard a task autocorrect is. You have to understand human sentiment to predict the next word, which is really hard.

  • @breebw
    @breebw 9 months ago

    Someone recently said something like, "It isn't really about what AI can do, but what the public believes it can do."

  • @cruz1ale
    @cruz1ale 9 months ago +5

    Saying LLMs are just autocorrect on steroids is like saying human experts are just autocorrect on steroids. Obviously there's more to being an expert than that, and it is that expert role we are now, bit by bit, transferring over to machines.

    • @alakani
      @alakani 9 months ago

      Please let them keep saying it; it's a super convenient way for me to tell when somebody has no idea what's going on, without having to interview them.

    • @ABa-os6wm
      @ABa-os6wm 9 months ago

      True. LLMs are more of a pattern generator on steroids.

    • @GSBarlev
      @GSBarlev 9 months ago +1

      It's literally true, though. It's all about probabilities and choosing the most appropriate response. What differentiates Transformer models from previous Markov chains and Naïve Bayes algorithms is that Transformers encode the input into a more useful vector space before applying the predictions.
      You may find that the "on steroids" shorthand somewhat short-sells the importance of that shift, but the alternative is that we talk about artificial neural network models as if they have intelligence or agency (using terms like "attention," "understanding," "learn" and "hallucinate") which, while useful shorthand, is preposterous.
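
      (For concreteness, a toy bigram Markov chain of the kind mentioned above - a sketch, not any production system. It counts which word follows which in a corpus and predicts the most frequent continuation; Transformers replace this raw lookup with learned vector encodings, but the output is still a probability distribution over next tokens.)

          use std::collections::HashMap;

          fn main() {
              let corpus = "the cat sat on the mat the cat ate";
              let words: Vec<&str> = corpus.split_whitespace().collect();

              // counts[w][next] = how often `next` followed `w` in the corpus
              let mut counts: HashMap<&str, HashMap<&str, u32>> = HashMap::new();
              for pair in words.windows(2) {
                  *counts.entry(pair[0]).or_default().entry(pair[1]).or_insert(0) += 1;
              }

              // Predict the most likely word after "the" ("cat", which follows twice).
              if let Some(followers) = counts.get("the") {
                  let best = followers.iter().max_by_key(|&(_, n)| n).map(|(w, _)| w);
                  println!("after 'the': {:?}", best);
              }
          }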

    • @alakani
      @alakani 9 months ago

      @@GSBarlev Sure, but you can tell when people are just repeating that phrase because they heard it somewhere, in an attempt to rationalize metaphysical concepts like souls. The only difference between the hyperdimensional vector space that modern AIs operate in and the Hilbert space that you operate in is the number of dimensions and support for entanglement and superposition - which are not exclusive to biology, and which many would argue are not even relevant to biology (they are, but AIs can have qubit-based mirror neurons too).

    • @GSBarlev
      @GSBarlev 9 months ago

      @@alakani Going to ignore your wider point and just give you an FYI: you can pretty safely disregard quantum effects when it comes to ideas of "consciousness." Yes, the probability that a K+ ion will quantum-tunnel through a cell membrane is nonzero, but it's _infinitesimal,_ especially compared to the _real_ quantum tunneling that poses a major limitation on how tightly we can pack transistors on a CPU die.

  • @noneatallatanytime
    @noneatallatanytime 9 months ago +1

    It is almost unbelievable to hear someone being reasonable when talking about "AI". I hope this becomes the mainstream attitude soon and that CEOs and tech bros drop the marketing speak. It is actually an interesting area of automation, and calling it AI, I think, does the field a disservice in the long run, even though it helps sell products right now.

    • @crptc5707
      @crptc5707 9 months ago +2

      They hype it to pump the stock price

    • @cortster12
      @cortster12 9 months ago +2

      Calling it AI is accurate; the issue is that people have the wrong impression of AI in their minds. People think AGI when they hear AI, when in reality what we have right now is narrow AI. It's still AI, objectively, but people are uninformed and think it means more than it does.

  • @breezystormatic827
    @breezystormatic827 9 months ago +3

    There is a lot more to software engineering than just writing code.

  • @MorsDengse
    @MorsDengse 5 months ago

    We already have huge problems with OSS quality: more than 80% of all OSS is either poorly maintained or not maintained at all. On top of that, OSS is on the rise, and it is the single biggest cause of the increasing technical debt.
    LLMs have the potential to greatly increase the amount of OSS generated, meaning that unless we actively address OSS quality, LLMs will most likely make it worse.

  • @skejeton
    @skejeton 10 months ago +4

    I think the hallucinations make it less scary; the fact that it needs human involvement means that jobs will stay.

  • @은하수-p2t
    @은하수-p2t 8 months ago

    You're right. But we're also hearing some negative stories in terms of teamwork. For example, there are situations where a junior developer sits and waits for AI code that keeps giving different answers instead of writing the code themselves, or where it takes more time to analyze why the code was written the way it was than it would have taken to write it. On the other hand, it still helps to gain insight or a new approach, even if it gives a completely different answer.

    • @tapetwo7115
      @tapetwo7115 8 months ago

      That junior coder needs more GitHubs so we can bring them on as a lead dev to work with AI. Middle management and entry level are over in the future.

  • @TheClonerx
    @TheClonerx 9 months ago +3

    I'm still very worried about the copyright implications, and about the hidden immorality involved in classifying training data.

    • @rithikgandhi3685
      @rithikgandhi3685 9 months ago +1

      Yes, and no one is talking about the effect LLMs will have on creativity.

    • @fabsi154
      @fabsi154 9 months ago

      @@rithikgandhi3685 Yeah.

  • @DrugzMunny
    @DrugzMunny 9 months ago +1

    "Here's the code for the new program. It's created by the same technology that constantly changes my correct typing to a bunch of wrong and completely ridiculous strings, like changing 'if' to 'uff' or changing 'wrong' to 'wht'."

  • @wabbajocky8235
    @wabbajocky8235 10 months ago +5

    Linus with the hot takes. Love to see it.

  • @mixenne
    @mixenne 7 months ago +2

    Yeah, it's autocomplete/autocorrect, except the autocomplete is owned and controlled by huge companies like OpenAI, who train it on data no one gave them consent to use. Implementing machine-learning autocomplete/autocorrect that runs on the user's machine vs. a huge HPC monster controlled by a huge company are two very different things, IMO. I'm not endorsing machine learning that can't be run offline and on the user's machine. Even better if it allows the user to train their own models on their own data. No ridiculous power and water use, no company using copyrighted material, no companies using your data on their servers for training AI. I think the trend of offloading more and more of our lives to an API or a huge for-profit company, and thus further deepening the fact that users are products themselves, is not the right way.

  • @stevecastle1730
    @stevecastle1730 10 months ago +2

    Linus has a much more correct perspective than the interviewer. Our brains ARE pattern predictors; our brains also dream and hallucinate. The author is trying to make it sound like those properties should diminish the credentials of "LLMs", when really they make them more interesting. He's also ignoring that it's really Transformers we're talking about. Transformers are also being easily applied to visual, language, and audio data, and they work multimodally to transform between them. There is no correct reading of the situation other than that something profound and core to the way intelligence probably works has been discovered.