I'm a senior software engineer and I sometimes use ChatGPT at work to write PowerShell scripts. They usually provide a good enough start for me to modify into what I want. That saves me time and lets me create more scripts to automate more. It's not my main programming task, but it definitely saves me time when I need to do it.
Yep, saves me so much time with data preprocessing, and adds nice little features that I wouldn't normally bother with for a 1 time use throwaway script
If you let the LLM author code without checking it, then inevitably you will just get broken code. If you don't use LLMs you will take twice as long. If you use LLMs and review and verify what it says and proposes, and use it as Linus rightly suggests as a code reviewer who will actually read your code and can guess at your intent, you get more reliable code much faster. At least that is the state of things as of today.
Perhaps anecdotal, but it (AI Assistant in my case, I'm using JB Rider, pretty sure that's tied to ChatGPT) seems to get better with time. After finishing a method, I have another method already in mind. I move the cursor and put a blank line or two in under the method I just created in prep for the new method. If I let it sit for just a second or two before any keystrokes, often it will predict what method I'm about to create all on its own, without me even starting the method signature. Yes, sometimes it gets it very wrong and I'll just hit escape to clear it, but sometimes it gets it right... and I mean really scary right. Like every line down to the keystroke, and even naming is spot on, consistent w/ naming throughout the rest of the project. Yes, agreed, you still need to review the generated code, but I suspect that will only keep getting better with every iteration. Rather than autocompleting methods, eventually it'll be entire files, then entire projects, then entire solutions. It's probably best for developers to try to learn to work with it in harmony as it evolves, or they will fall behind their peers who are embracing it. Scary and exciting times ahead.
@@keyser456 Same experience for me. It predicts what I was about to write next about 80% of the time, and when it gets it right, it's pretty much spot on. Insane progress just over the past year. Imagine where it will be in another year. Or five years. Coding is going to be a thing of the past, and it's going to happen very quickly.
If it is intelligent enough to write code, it will eventually become intelligent enough to debug complex code, as long as you tell it what the issue is.
Oh man, now I really want to get into coding just to get that same transformative experience of a tool thinking ahead of you. I am a designer, and to be frank, the experience with AI in my field is much less exciting; it's just stock footage on steroids, and all the handwork of editing and putting it together is sadly the same. But the models are evolving rapidly, and stuff like AI object select and masking, vector generation in Adobe Illustrator, transformative AI (making a summer valley into a snow valley, e.g.) and motion graphics AI are on the horizon or already there. Indeed, what a time to be alive :D might get into coding soon tho
While AI lowers the bar to start programming, I'm afraid it also makes it easier to write bad code. But as with any other tool, more power brings more responsibility, and manual review should still be just as important.
as a cloud engineer I gotta say chatgpt with gpt 4 really turbocharges me for most tasks, my productivity shot up 100-200% and i'm not kidding. You gotta know how to make it work for you and it's amazing :)
There will be more than one AI, one for each task: one to create code and one to validate it. Make no mistake, AGI is the final target, but the intermediate ones are good enough to speed up the whole effort.
Ok, speed, efficiency, productivity… All true, but to what effect? Isn’t it so that every time we’ve had a serious paradigm shift, we thought we could “save time”. Sadly, since corporations are not ‘human’, we’ve ended up working *more* not less, raising the almighty GDP - having less free time and not making significantly more money. Unless… you own shares, IP, patents and other *derivatives* of AI as capital. AI is a tool. A sharp knife is also one. This “debate” should ask “who is holding the tool, and for what purpose?”. That question reveals very different answers to a corporation, a government, a community or a single person. It’s not what AI is or can do. It’s more about what we are, and what we do with AI… 👍
It reminds me of a talk on some podcast before LLMs, where the speaker said they tried to use AI as an assistant for medical reports and faced the following problem: sometimes people see that the AI gets the right answers, and then when they disagree with it, they still choose the AI's conclusion, because "the system can't be wrong". So to fight this, they programmed the system to sometimes give wrong results and ask the person to agree or disagree, to force people to choose the "right" answer and not just agree with whatever the system says. And this is what I believe is the weak point of LLMs. While they're helpful in some scenarios, in others they can give answers SO deceiving that they look exactly how they should, but in fact describe something that doesn't even exist. E.g. I asked one about the best way to get an achievement in a game, and it came up with things that really exist in the game and sound like they should be related to the achievement, but in fact they're not. Or my friend tried to google Windows error codes, and it came up with problems and descriptions that don't really exist either.
I have had copilot suggest an if statement that fixed an edge case I didn't contemplate, enough times to see it could really shine in fixing obvious bugs like that.
Linus..... My man!!! I would probably hate working with him, because I am not a very good software engineer and he would be going nuts with my time-complexity solutions... but boy has he inspired me. Thank you!
I don't think he would. His famous rants on LKML before he changed his tone were at people who SHOULD HAVE KNOWN BETTER. I don't remember him going nuts at newbies for being newbies. He did go nuts at experts who tried to submit sub-par/lazy/incomplete/etc work and should have known it was sub-par and needed fixing and didn't bother doing that. He was quite accurate and fair in that.
I think age makes anyone more humble, but sometimes less open minded. It’s good to see Linus recognize that LLMs have their uses, while some projects like Gentoo have stood completely against LLMs. Nothing is black and white, and when the hype is over, I think LLMs will still be used as assistants to pay attention to small stuff we sometimes neglect.
It's another tool, like static and dynamic analysis. No programmer will follow these tools blindly, but they can use them to get suggestions or improve a feature. There have been times I've been stuck on picking a good data structure, and GPT has given more insightful ideas or edge cases I was not considering. That's its most useful role right now: a rubber ducky.
>No programmer will follow these tools blindly
My sweet summer child. The curl authors already have to deal with "security reports" because some [REDACTED]s used Bard to find "vulnerabilities" to get a bug bounty. Wait for the next jam in the style of "submit N PRs and you get our merch", and instead of PRs that fix a typo, you'll get something even worse: code that doesn't compile.
I agree that it can help in these scenarios. People should be made aware of this, as the current discussion is way over the top and scares people into fearing for their jobs (and therefore their mental health). Another thing: since sustainability was a topic, I'm not sure the energy consumed by this technology justifies these trivial tasks. Talking with a colleague seems more energy efficient.
aha. until it writes a Go GTK phone app (Linux phone) from zero to hero with no code review and only UI design discussions. 6 months ago. just ChatGPT-4. programming is dying and you people are dreaming. in 2023 there were 30% fewer new hires across all programming languages. for 2024, out of 950 tech companies, over 40% plan layoffs due to AI. a bit tired of linking the source
Absolutely, im convinced the other commenters claiming LLMs will make programming obsolete in 3 years or whatever are either not programmers or bad programmers lol
As long as you understand regular expressions, and you review the output and write extensive test cases for what the regular expressions should do, ChatGPT is pretty useful.
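For example, a minimal sketch of that workflow (the date regex and helper name here are made up for illustration, not anything ChatGPT actually produced); the tests encode what you actually want, so a wrong suggestion fails loudly:

```python
import re

# Hypothetical regex an LLM might hand you for ISO dates (YYYY-MM-DD).
ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

def is_iso_date(text: str) -> bool:
    """Return True if text looks like an ISO calendar date."""
    return ISO_DATE.match(text) is not None

# The review step: test cases you write yourself, covering the edge cases you care about.
assert is_iso_date("2024-01-31")
assert not is_iso_date("2024-13-01")   # month out of range
assert not is_iso_date("2024-1-31")    # missing zero padding
assert not is_iso_date("2024-01-31x")  # trailing junk
```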
For experienced programmers, most of the mistakes they make can be categorised as 'stupid', i.e. a simple oversight, where the fix is equally, stupidly trivial. Exactly the same with building a PC: you might have done it 'millions' of times, but forgetting something stupid in the build is always stupidly easy to do, and though you might not do it often, you will inevitably still do it. At some point. Unfortunately, the fixes seem to always take forever to find.
That’s the only good take on ai in the video, and maybe the only truly helpful thing ai might ever be used for, finding the obvious mistakes humans make because they’re thinking about more important shit.
I disagree with this. Simple bugs are easier to find, so we find more of them. The other bugs are more complex which makes them harder to find, so we find less of them. For example, not realising that the HTTP protocol has certain ramifications that become a serious problem when you structure your web app a certain way.
@@chunkyMunky329 It's definitely true that there are always exceptions, though I'd politely suggest "not realising" is primarily a result of inexperience. A badly written and/or badly translated URS can lead to significant issues when the inevitable subsequent change requests flood in, especially if there's poor documentation in the code. Any organisation is only as good as its QA. We see this more and more in the games industry, where we increasingly, and deliberately, offload the testing aspect onto the end consumer. Simple bugs should be easy to find, you'd think, but they're also very, very easy to hide, unfortunately.
You people don't understand. It was never about whether AI would replace programmers; it was always about whether AI will reduce job positions by a critical amount, so that it's hard to get hired.
Those subtle bugs are what LLMs produce copious amounts of. And it takes very long to debug. To the degree where you probably would have been better off if you just wrote the code by hand yourself.
@@AvacadoJuice-q9b What, like a "Prompt Engineer"? It's ridiculous that this became a thing given how LLMs work. It's all about intuition that most people can figure out if they spend a day messing around with it.
Disagree. Humans constantly create bugs when coding themselves, even subtle ones, even the best of the best. LLMs are amazing. I realized my Python code needed to be multithreaded, so I fed it my code and it multithreaded everything. They are incredible, and this is just the beginning. Five years from now will completely blow people's minds. People who don't see how amazing LLMs are just aren't that bright, in my opinion.
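For anyone curious what that kind of change looks like, here is a minimal before/after sketch (the fetch_one function and URLs are made up for illustration, not the actual code I fed it):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch_one(url: str) -> bytes:
    # Stand-in for whatever I/O-bound work each loop iteration did.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

urls = ["https://example.com"] * 8

# Before: results = [fetch_one(u) for u in urls]
# After: the same calls fanned out over a small thread pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch_one, urls))
```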
It's amusing how we, as programmers, often tell users that if they input poor quality data into the system, they should expect poor quality results. In this case, the fault lies with the user, not the system. However, now we find ourselves complaining about a system when we input low-quality data and receive unsatisfactory results. This time, though, we blame the system instead of ourselves
As someone with a degree in Machine Learning, hearing him call LLMs "autocorrect on steroids" gave me catharsis. The way people talk and think about the field of AI is totally absurd and grounded in SciFi only. I want to vomit every time someone tells me to "just use AI to write the code for that" or similar. AI, as it exists now, is the perfect tool to aid humans (think pair programming, code auto-completion for stuff like simple loops, rough prototypes that can inspire new ideas, etc.) Don't let it trick you into thinking it can do anyone's job though. It's just a digital sycophant, never forget that.
@@vuralmecbur9958 If your job relies on not thinking and copy-pasting code, then yes, it can replace you. But if it doesn't, if you understand code and can modify it properly to your needs and specifications, it cannot replace you. I work on AI as well.
@@vuralmecbur9958 It's not about AI not being an "autocorrect on steroids". It's about "there are a lot of jobs out there that could be done by autocorrect on steroids".
@@vuralmecbur9958 Do you have any valid arguments as to why people will get laid off instead of companies scaling up their projects? A 200-300% increase in productivity simply means a 200-300% increase in future project sizes. The field you're working in is already dying anyway if scaling up isn't possible, and you're barking up the wrong tree: where I'm working we're constantly turning down projects because there's too much to do and no skilled labour to hire (avionics/defense).
@@vuralmecbur9958 go prompt it to make you a simple application and you'll see it's not taking anyone's job anytime soon. If anything, it's an amazing learning tool. You can study code and anything you don't understand, it will explain in depth. You don't quite grasp a concept? Prompt it to explain it further.
I think he really meant to say "autocomplete". Because it basically takes your prompt and looks for what answer is mostly likely to follow it, based on material it has read. Which _is_ indeed kind of how humans work... if you remove creativity and the ability to _interact_ with the world, and only allow them to read books and answer written questions. And by "creativity" I'm including the ability to spot gaps in our own knowledge and do experiments to acquire _new_ information that wasn't part of our training.
The thing people with the interviewer's mindset miss is what it takes to predict correctly. The language model has to have an implicit understanding of the data in order to predict. ChatGPT is using a large language model to produce text, but you could just as well use it to produce something else, like actions in a robot. Which is kind of what humans do: they see and hear things, and act accordingly. People who dismiss the brilliance of large language models on the basis that they're "just predicting text" are really missing the point.
@@sbqp3 - No, you couldn't really use it to "produce actions in a robot", because what makes ChatGPT (and LLMs in general) reasonably competent is the huge amount of material it was trained on, and there isn't anywhere near the same amount of material (certainly not in a standardised, easily digestible form) of robot control files and outcomes. The recent "leap" in generative AI came from the volume of training data (and ability to process it), not from any revolutionary new algorithms. Just more memory + more CPU power + easy access to documents on the internet = more connections & better weigh(t)ing = better output. And in any application where you just don't have that volume of easily accessible, easily processable data, LLMs are going to give you poor results. We're still waiting for remotely competent self-driving vehicles, and there are billions of hours of dashcam footage and hundreds of companies investing millions in it. Now imagine trying to use a similar machine-learning model to train a mobile industrial robot, that has to deal with things like "finger" pressure, spatial clearance, humans moving around it, etc.. Explicitly coded logic (possibly aided by some generic AI for object recognition, etc. - which is already used) is still going to be the norm for the foreseeable future.
@@alang.2054 I like his comment because most thinking humans do is in fact system 1 thinking - which is reflex-like and on a similar level as what LLMs do.
(Average typing speed * number of working days a year) / 6 words per line of code ≈ 1 million LOC/year. But we don't write anywhere near that much. Why? Most coding is just sitting and thinking, then writing very little. LLMs are great for getting started with a new language or library, or for writing repetitive data structures and algorithms, but bad for production logic and design (patterns such as the Strategy pattern) because they don't logically understand the problem domain, which our napkin math just showed is the largest part of coding, and the part assistants aren't improving.
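Here is that napkin math spelled out (every number below is an assumption, just to show the order of magnitude):

```python
# Napkin math: how much code could we type per year if typing were the bottleneck?
words_per_minute = 40        # assumed average typing speed
minutes_per_day = 8 * 60     # assumed full day of typing
working_days = 250           # assumed working days per year
words_per_loc = 6            # the 6-words-per-line figure from above

possible_loc = words_per_minute * minutes_per_day * working_days // words_per_loc
print(f"~{possible_loc:,} LOC/year")  # ~800,000

# Real output is a few thousand LOC/year, so typing speed is clearly not the constraint;
# thinking is, and that is the part coding assistants don't improve.
```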
I wouldn't even agree. Imagine yourself just getting the job to code project X. In that case, you can only rely on a very limited amount of information. Within the right constraints, there are very few ways in which LLMs fail.
Maybe if many programmers sit down and explain their thought process on multiple different problems it can learn to abstract the problem solving method programmers use. While the auto correct on steroids might be technically accurate for what it's doing, the models it builds to predict the next token are extremely sophisticated and for all we know may have some similarity to our logical understanding of problem domains. Also LLMs are still in their infancy. There are probably controls or additional complexity that could be added to address current shortcomings. I'm skeptical of some of the AI hype, I'm equally skeptical of the naysayers. I tend to think the naysayers are wrong based on what LLMs have already accomplished. Plenty of people just 2-3 years ago would've said some of the things they are doing now are impossible.
Read the original documentation and if there's something you don't understand, Google it and be social. Only let the LLM regurgitate that part of the docs in terms you understand as a last resort. I'm surprised at the creativity LLMs have in their own context, but don't replace reading the docs and writing code with LLMs. You must understand why the algo/struct is important and what problems each algorithm solves. If you think LLMs replace experience, you're surely mistaken and you'll be trapped in learned helplessness for eternity.
I literally asked ChatGPT today to explain the MVCC pattern (which I could've sworn was called the MVVC pattern, but it corrected me on that), and its explanation got worse with every attempt after I told it it was not doing a good job.
@@SimGunther Reading the docs only works if you know what you're looking for. LLMs are great at understanding your badly written question. I once proposed a solution to a problem I had to ChatGPT and it said: that sounds similar to a technique in statistics called bootstrapping. That opened up a whole new box of tricks previously unknown to me. I could have spent months cultivating social relationships with statisticians, but it would have been a lot more work and I'm not sure they'd have had the patience.
Good interview, but I disagree with the introduction, where it is said that LLMs are "auto-correction on steroids". Yes, LLMs do next-token prediction, but that's just one part. The engine of an LLM is a giant neural network that has learned a (more or less sophisticated) model of the world. During inference, input information is matched against that model and, based on those correlations, new output information is created, which leads, in an iterative process, to a series of next tokens. So the magic happens when input information is matched against the learned world model, producing new output information.
Agreed! This is the type of thing people say somewhat arrogantly when they've only had a limited play with the modern LLMs. My mind was blown when I wrote a parser of what I would call medium complexity in python for a particular proprietary protocol. It worked great but it was taking 45 mins to process a days worth of data, and I was using it every day to hunt down a weird edge case that only happened every few days. So out of interest I copied and pasted the entire thing into GPT4 and said "This is too slow, please re-write it in C and make it faster" and it did. Multiple files, including headers, all perfect. It compiled first time, and did in about 30s (I forget how long exactly but that ballpark) what my hand written python program was doing in 45 mins. I don't think I've EVER written even a simple program that's compiled first time, let alone something medium complicated. To call this auto complete doesn't give it the respect it deserves. GPT4 did in a few seconds what would have taken me a couple of days (if I even managed it at all, I'm not an expert in C by a long stretch).
I agree, the reductionist argument trivializes the power of LLMs. We could say the same thing about humans, we "just predict the next word in a series of sentences". That doesn't capture the power and magic of human ingenuity.
@@davidparker5530 Humans don't just predict the next word though. LLMs do. Neural networks don't think; all they do is guess based on some inputs. Humans think about problems and work through them; LLMs by nature don't think about anything more than what they've seen before.
My only gripe with AI generated code currently is when they write or suggest code that contains security vulnerabilities, or worse, leak credentials, secrets. AI may accelerate human productivity, but on the other side, it may also accelerate human stupidity.
Personally I think that while it will be extremely useful, there will also be this belief over time that the "computer is always right". In this sense we will surely end up with a scandal like Horizon in the future, but this time it will be much harder to prove that there was a fault in the system.
Precisely this. With Horizon it took years of them being incredulous that there were any bugs at all, that it must be perfect and that instead thousands of postmasters were simply thieves. Eventually the bugs/errors became so glaring (and finally maybe someone competent actually looked at the code) that it was then known that the software was in fact broken. What then followed were many many more years of cover ups and lies, with people mainly concerned with protecting their own status/reputation/business revenue rather than do what was right and just. Given all this, the AI scenario is going to be far worse: the AI system that “hallucinates” faulty code will also “hallucinate” spurious but very plausible explanations. 99.99% won’t have the requisite technical knowledge to determine that it is in fact wrong. The 0.01% won’t be believed or listened to. The terrifying prospect of AI is in fact very mundane (not Terminator nonsense): its ability to be completely wrong or fabricate entirely incorrect information, and then proceed to explain/defend it with seemingly absolute authority and clarity. It is only a matter of time before people naturally entrust them far too much, under the illusion that they are never incorrect, in the same way that one assumes something must be correct if 99/100 people believe it to be so. Probability/mathematics is a good example of where 99/100 might think something is correct, but in fact they’re all wrong - sometimes facts can be deeply counterintuitive, and go against our natural intelligence heuristics.
Maybe. But it depends what we allow ai to be in charge of. Remember, if we vote out the gop we can like pass laws again to do things for the benefit of the people including ai regulations if needed.
I love this little short. I think what both of them said is true. LLM is definitely "autocorrect on steroids", as it were. But honestly, a lot of programming or really a lot of jobs in general don't really require higher level of intelligence, as Linus said - we all are autocorrect on steroids to some degree, because for the most part a lot of things we do, that's all you need. The problem is knowing the limitations of such a tool and not attempting to subvert human creativity with it.
@@traveller23e actually true. if you understand every aspect of the code, why wouldn't you just have written it yourself? at some point when using llms these people will become used to the answers being mostly correct so they'll stop checking. productivity 200% bla bla, yeah sure dude. man llms will ruin modern software even more, todays releases are already full of bugs
@@traveller23e Well, the same goes for the compiler. If you "fully understand" the code there should never be a warning or error. Most tools like GitHub Copilot require you to write anyway, but they give you the option of writing a few dozen chars with a single keystroke. This is pretty nice if most of your work is assembling different algorithms or data structures, not creating new ones.
All the times I submit code I don't understand, I simply ask the LLM in English to explain it to me. I have written a whole app in JavaScript without having learned JS in my entire life.
My last company started using AI over a year ago. We write the docblock and the AI writes the function. And it's largely correct. This is production code in smartphones and home appliances world-wide.
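For those asking what that workflow looks like in practice, roughly this (a toy Python sketch, not our actual production code): you write the signature and docblock, the assistant proposes the body, and you review it like any other diff.

```python
def clamp(value: float, lo: float, hi: float) -> float:
    """Return value limited to the inclusive range [lo, hi].

    Signature and docstring written by hand; the body below is the kind of
    completion the assistant proposes, which still gets a normal code review.
    """
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    return max(lo, min(value, hi))
```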
I use AI as a learning tool. If I get stuck, I bounce ideas off it like I would with a person, then use that as a basis to keep going. I discover things I didn't consider and continue reading other sources. Right now AI is not good at teaching you, but it's great for getting directions to explore or things and concepts to look up. That being said, the next generation will be unable to form thoughts without AI. How many people still know how to do long division by hand?
There's a world of difference between using AI to find bugs in your code, vs using AI to generate novel code from a prompt. Linus is talking about the former, AI Bros mean the latter.
Amazing that Linus accepts AI. Some techies are disparaging of AI. A truly smart person looks at the pros and cons, rather than just being dogmatically for or against.
He is different, but something I've noticed is that smart people are great at understanding things the rest of us struggle with, yet kinda dumb when it comes to things of simple common sense. Like, for him not to understand the downside of an AI writing bad code for you is just kinda silly. It should be obvious that a more reliable tool would be better than a less reliable tool.
@@chunkyMunky329 There is no "more reliable tool" though. It's about the tools in your toolbox in general. Just because your hammer is really good at hammering in a nail, you're not gonna use it to saw a plank. Same with programming: you use the tools that get the job done.
@@pauldraper1736 "people" is a vague term. Also, I never said that it was a battle between manual effort vs LLMs. It should be a battle between an S-Tier human invention such as a compiler vs an LLM. Great human-built software will cause chat GPT to want to delete itself
This feels like it's lagging behind the state of things right now. I don't think it's a serious question whether LLM's will be useful for coding. They already are.
LLMs are interesting. They can be super helpful to write out a ton of code from a short description, allowing you to formulate an idea really quickly, but often the finer details are wrong. That is using an LLM to write unique code is problematic. You may want the basic structure of idiomatic code, but then introduce subtle differences. When doing this, the LLM seems to struggle, often suggesting methods that don’t exist, or used to exist, or starts mixing methodologies from multiple versions of the library in use. E.g trying to use WebApplicationFactory in C#, but introducing some new reusable interfaces to configure the services and WebApplication that can be overridden in tests. It couldn’t find/suggest a solution. It’s a reminder that it can only write code it’s seen before. It can’t write something new. At least not yet.
you'll spend more time making sure it didn't add confident errors than it would take to write the code in the first place. complete gimmick only attractive to weak programmers
@@elle305 I don't think that's accurate. Sure, you need the expertise to spot errors. Sure, you need the expertise to know what to ask for. But I don't agree with the idea that you'll take more time with LLMs than without. It's boosted my productivity significantly. It's boosted my ability to try new ideas quickly and iterate quickly. It's boosted my ability to debug problems in existing code. It's been incredibly useful. It's a soundboard. It's like doing pair programming but you get instant code. I want more of it, not less.
@@br3nto i have no way to validate your personal experience because i have no idea of your background. but I'm a full time developer and have been for decades, and I'm telling you that reviewing llm output is harder and more error prone than programming. there are no shortcuts to this discipline and people who look for them tend to fail
@@elle305 it’s no different for any other discipline. but sometimes doing it the hard way (fucking around trying to make the ai output work somehow) is more efficient than doing it the right way, especially for one-of things, like trying to cobble together an assignment. and unfortunately more often than not, weak programmers (writers, artists, …) are perfectly sufficient for the purposes of most companies.
For those commenting that there won't be coding in a couple of years, I'd like to remind you of scientific calculators and the software for them. We didn't stop doing math by hand. We just made some tasks faster and more accurate. You will always need to learn the 'boring' parts even if there is a 'calculator'. Your brain needs the boring stuff to create more complex results.
If programmers aren't debugging their own work, then they will gradually lose the ability to do so. Just like when a child learns to multiply with a calculator and not in their mind - they lose the ability to multiply and become reliant on the machine. Programmers learn as they program. It is mind-expanding work. Look at Torvalds and you see a person who is highly intelligent, because he has put the work in over many years. We can become more efficient programmers using AI tools - but it will come at a cost. "Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to do homage, makes us utterly blind to the essence of technology." - Martin Heidegger When a programmer, for example, is asked to check a solution given by AI, and lacks the competency to do so (because, like the child, they never learned the process), then this is a dangerous position we as humans are placing ourselves in - caged in inscrutable logic that will nonetheless come to govern our lives.
As a central figure in the FOSS movement, I'm surprised he doesn't have any scathing remarks about OpenAI and Microsoft hijacking the entire body of open source work to wrap it in an opaque for-profit subscription service.
He has to be careful now that the SJWs neutered him and sent him to tolerance camp. Thank the people who wrote absolute garbage like the contributor covenant code of conduct
Then you're not in the loop. Linus was never the central figure of the FOSS movement. While his contribution to the Linux Kernel is appreciated he's not really considered one of the leaders when it comes to the FOSS movement.
I'm glad he corrected the host. We are indeed all basically autocorrect to the extent LLMs are. LLMs are also creative and clever, at times. I get the feeling the host hasn't used them much, or perhaps at all
It _seems_ to be creative and it _seems_ to be clever especially to those who are not. The host was fully correct stating that it has nothing to do with "intelligence", it only _seems_ to be intelligent.
@@kralg If we made a future LLM that is indistinguishable from a human being, that answers questions correctly, that can solve novel problems, that "seems" creative... what is it that distinguishes our intelligence than the model's? It's just picking one token before the next, but isn't that what I'm also doing while writing this comment? In my view, there can certainly be intelligence involved in those simple choices.
@@doomsdayrule Intelligence is much more than just writing text. Our decisions are based not only on lexical facts, but on our personal experiences, personal interests, emotions etc. I can't and won't go much deeper into that, but it must be way more complex than a simple algorithm based on a bunch of data. I am saying nothing less than that you will never ever be able to make a future LLM that is indistinguishable from a human being. Of course, when you are presented with just a text written by "somebody" you may not be able to figure it out, but if you start living with a person controlled by an LLM, you will notice much sooner than later. It is all because the bunch of data these LLMs are using is missing one important thing: personality. And this word is highly related to intelligence.
@@doomsdayrule As I am writing this comment, I'm not starting with a random word like "As" and then try to figure out what to write next. (Actually, the first draft started with "When") I have a thought in mind, and then somehow pick a sentence pattern suitable for expressing it. Then I read over (usually while still typing) and revise. At some point, my desire to fiddle with the comment is defeated by the need to do something else with my day, and I submit the reply. And then I notice obvious possibilities for improvements and edit what I just submitted.
@@MarcusHilarius One aspect of this is that we are living in an overhyped world. Just in recent years we have heard so many promises like the ones you made. Just think about the promises made by Elon Musk and other questionable people. The marketing around these technologies is way "in front" of the reality. If there is just a theoretical possibility of something, the marketing jumps on it; they create thousands of believers with the obvious aim of gathering support for further development. I think it is just smart to be cautious. The other aspect is that many believers do not know the real details of the technologies they believe in. The examples you mentioned are not in the future; to some extent they are available now. We call it automation, and it does not require AI at all. Instead it relies on sensor technology and simple logic. Put an AI sticker on it and sell a lot more. Sure, machine learning will be a great tool in the future, but not much more. We are in the phase of admiration now, but soon we will face the challenges and disadvantages of it, and we will just live with them as we did with many other technologies from the past.
Program-generated code goes back decades; if you've ever used an ORM, almost all of them generate tables and SQL from classes and vice versa. But I don't think anybody just takes it as-is without reviewing.
I personally believe, much like many others, that AI/ML will only speedup the rate at which bad programmers become even worse programmers. Part of the art of writing software is writing it efficiently, and you can't do that if you always use tools to solve your problems for you. You need to experience the failures and downsides in order to fully understand how it works. There is a line when it turns from an efficient tool to a tool used to avoid actually thinking about solutions. I fully believe that there is a place for AI/ML in making software, but if people blindly use them to write software for them it'll just lead to hard-to-find bugs and code that nobody knows how it works because nobody actually wrote it.
You don't always have to reinvent the wheel when it comes to learning how to code. Everyone starts by copying code from Stack Overflow and many still do that for novel concepts they want to understand. It can be pretty helpful to ask AI for specific things instead of spending hours trying to search for something fitting... Sure thing, if you just stop at copying you don't learn anything
@@cookie_space but i think that's the thing, the risk of "just copying" will be higher because all the AI tools and AI features in our IDEs will make it a lot easier and more probable to get the code ready for you
@@cookie_space Everyone? Man, don't throw everyone into the same bucket. Are you the guy who can't even write a bubble sort out of his head and needs to google every single solution? Well, that is sad.
@@Markus-iq4sm I wasn't aware that your highness was born with the knowledge of every programming language and concept imprinted in your brain already. It might be hard to fathom for you, but some of us actually have to learn programming at some point
I think LLM technology will make bad programmers faster at being bad programmers, and hopefully push them to become better programmers faster as well. LLMs, I think, will make good programmers more efficient at writing the good code they would probably already write.
@@ougonce is that function you use twice a year called "empty_foo_bar" or "clear_foo_bar"? Or maybe "foo_bar_clear"? Those kinds of questions are very important and annoying to answer when writing, useless when reading.
@@yjlom Or even just something as simple like the question of how you get the length of an array in the particular language you are using. After using enough languages, they kind of all blend together, and I can't remember if this one is x.length, x.length(), size(x), or len instead of length somewhere. I'm used to flipping between a lot of languages quickly, and it's really easy to forget the specifics of a particular one sometimes, even if I understand the flow I would like the program to follow. Essentially, having an AI that can act as a sort of active documentation can really help.
I was using ChatGPT to help me write code just today. I'm making a Python module in Rust and I'm new to Rust. I wanted to improve my error handling. I asked how to do something and ChatGPT explained that I could put Results in my iterator and just collect at the end to get a vector if all the results are ok or an error if there was a problem. I didn't understand how that worked and asked a bunch of follow-up questions about various edge cases. ChatGPT explained it all. Several things happened at once: I got an immediate, working solution to my specific problem. I didn't have to look up the functions and other names. And I got tutored in a new technique that I'll remember next time I have a similar situation. And it's not just the output. It's that your badly explained question, where you don't know the correct terminology, gets turned into a useful answer. On a separate occasion I learned about the statistical technique of bootstrapping by coming up with a similar idea myself and asking ChatGPT for prior art. I wouldn't have been able to search for it without already knowing the term.
There is a fundamental philosophical difference between the type of wrong humans do, and the type AI does (in its present form). I think programmers are in danger of seriously devaluing the relative difference between incidental errors and constitutive errors - that is, humans are wrong accidentally, LLMs are wrong by design - and while we know we can train people better to reduce the former, it remains to be seen if the latter will remain inherent in the implementation realities of the latter - i.e. relying on statistical inference as a substitute for reason.
You got stuck in your own word salad. Start over; Think like a programmer. Break the problem down. How would you go about proving the LLM's code is correct using today's technology?
@@caLLLendar First, I don't appreciate your tone. I know this is YouTube and standards of discourse here are notoriously low, but there is no need to be rude. I wasn't making a point about engineering. The issue is not the code, code can of course be Unit Tested etc. for validity. The issue is that the method of producing the code is fundamentally statistical, and not arrived at through any form of reason. This means there is a ceiling of trust that we must impose if we are to avoid the obvious pitfalls of such an approach. As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data - and you, as the developer, if you do not preference your own problem solving skills, are increasingly relegated to the role of code babysitter. This is not something to be treated casually. Early research is now starting to validate this concern: visualstudiomagazine.com/Articles/2024/01/25/copilot-research.aspx These models have their undeniable uses, but I find it depressing how many developers are rushing to proclaim their own obsolescence in the face of a provably flawed (though powerful) tool.
@@calmhorizons Have one developer draft pseudocode that is transformed into whatever scripting language is preferred, and then use a boatload of QA tools. The output from the QA tools prompts the LLM. Look at Python Wolverine to see automated debugging. Google the loooooonnnnng list of free open source QA tools that can be wrapped around the LLMs. The LLMs can take care of most of the code (like writing unit tests, type hinting, documentation, etc). The first thing you'd have to do is get some hands-on experience in writing the pseudocode in a style that LLMs and non-programmers can understand. From there, you will get better at it and ultimately SEE it with your own eyes. I admit that there are times that I have to delete a conversation (because the LLM seems to become stubborn). However, that too can be automated. The result? 19 out of 20 developers fired. LOL I definitely wouldn't hire a developer who wouldn't be able to come up with a solution for the problems you posed (even if the LLM and tools are doing most of the work). Some devs pose the problem and cannot solve it. Other devs think that the LLM should be able to do everything (i.e. "Write me a software program that will make me a million dollars next week"). Both perceptions are provably wrong. As programmers it is our job to break the problem down and solve it. Finally, there are ALREADY companies doing this work (and they are very easy to find).
@@calmhorizons exactly. Agreed, and very well put. Respect for taking time to reply to a rather shallow and asinine comment. "As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data " I would add that this will likely be exacerbated once more and more AI-generated code makes its way into the training datasets (and good luck filtering it out). We already know that it has a very deteriorating effect on the quality (already proven for the case of image generation), because all flaws inherent to the method get amplified as a result.
I learned Python on my own from YouTube and online tutorials. And recently I started learning Go the same way, but this time also with the help of Bard. The learning experience has been nothing short of incredible.
4:31 - It’s crucial to remember that the current state of LLMs is the worst they’ll ever be. They’re continually improving, though I suspect we’ll eventually hit a point of diminishing returns.
@@Insideoutcest I actually just received my first offer doing R&D for a software development company. I specifically specialize in AI product software development (writing code) . The statement I made is 100% factual, the current capabilities of models are the worst they will ever be…. They will only improve, now how much remains to be seen. Could be just 2% could be 20%. I personally believe there is room for considerable improvement before we hit the frontier of diminishing returns. Edit: you know nothing about me, why tell me I don’t program? As if that would certify my previously stated opinion on the improvement of the technology….
Well, my first Arduino project went very well: a medium-complexity differential temperature project with 3 operating modes, hysteresis, etc. I know BASIC and the 4NT batch language. Microsoft Copilot helped me produce tight, memory-efficient, buffer-safe, and well-documented code. So, AI for the win!
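For anyone wondering what the hysteresis part means, the core logic is tiny. A rough sketch of the idea in Python (the setpoint and dead-band values are made up, and the real thing is Arduino C++):

```python
ON_DELTA = 5.0    # assumed: switch on when the collector is 5 degrees above the tank
HYSTERESIS = 2.0  # assumed dead band so the relay doesn't chatter near the setpoint

pump_on = False

def update(delta_t: float) -> bool:
    """Differential thermostat with hysteresis: state only flips outside the dead band."""
    global pump_on
    if delta_t >= ON_DELTA:
        pump_on = True
    elif delta_t <= ON_DELTA - HYSTERESIS:
        pump_on = False
    return pump_on

# Example: turns on above 5.0 and stays on until the difference drops below 3.0.
print([update(d) for d in (4.0, 5.5, 4.0, 3.5, 2.9)])  # [False, True, True, True, False]
```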
I have never programmed before in my life, and with GPT-4 I have written several little programs in Python. From code that helps me rename large amounts of files to more advanced stuff. LLMs give me the opportunity to play around. Only thing I need to learn is how to prompt better.
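For anyone wondering, the renaming kind of script really is just a few lines, something like this sketch (the folder path and naming pattern are made up):

```python
from pathlib import Path

folder = Path("~/photos").expanduser()  # hypothetical folder

# Rename IMG_1234.jpg, IMG_1240.jpg, ... to holiday_001.jpg, holiday_002.jpg, ...
for i, picture in enumerate(sorted(folder.glob("*.jpg")), start=1):
    picture.rename(picture.with_name(f"holiday_{i:03}.jpg"))
```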
"Only thing I need to learn is how to prompt better." This is exactly the problem. Especially when you scale. You can't prompt to make a change to an already complex system. It then becomes easier to just code or refactor yourself.
@@twigsagan3857 Only problem is when the code exceeds the Token Limit. Otherwise I can still let the LLM correct my code. Takes a while to get there but It works.. And no I am not at all a programmer xD
I am using AI to learn Arduino coding. It helps me a lot to understand the code and do fault-finding, but when I ask it to make the corresponding circuit diagram, even for a simple problem, it struggles. It explains circuit diagrams very well, though. Needs improvement. There are many PDF books available; just feed them to the AI and improve it?
LLMs are certainly useful and can very much assist in many areas. The future really is open-source models that are explainable and share their training data.
My main fear is that this is something we will start relying on too much, especially when people are starting out. Even autocompletion can become a crutch, so much so that a developer becomes useless without it. Imagine that, but for thinking about code. We are looking at a future where all software will be as bad as modern web development.
technology as an idea is reliable - a hammer will always be a hard thing + leverage. We have relied on technology since the dawn of mankind, so I'm not sure what you're saying here.
@@kevinmcq7968 llms are reliable? how so? can you name a technology that we have relied on in the past that is as random as llms? I am genuinely curious
@@kevinmcq7968 I think you are just intentionally misunderstanding what he is saying. He is not saying tools are not useful; he is saying that if the tool starts to replace the use of your own mind, it can make you dependent to the point that it harms your own reasoning skills (and we have some evidence that this is happening; that's why some schools are turning back to handwriting, for example / Miguel Nicolelis also has some takes on this matter).
I like how he didn't fall into the trap of AI bashing that the host was trying to lead him into. That's how you can differentiate a trend follower from a visionary.
I find that in their current state, these models tend to make more work for me deleting and fixing bad code and poor comments than the work they save. It's usually faster for me to write something and prune it than to prune the ai code. This may be partially because it's easier for me to understand and prune my own code than to do the same with the generated stuff, but there is usually a lot less pruning to do without ai.
No. Your comment was, for me, like a breath of fresh air in the middle of all this pseudo-cognitive farting about so-called AI. No, it is not only you. Those who say otherwise are just posers, actors, mystifying parrots repeating the instilled marketing hype.
Maybe that's just in the beginning? Eventually, it might become easier to spot someone else's mistake than your own. Also, AI might more easily find your mistakes.
From personal experience, I think LLMs *writing* your code are terrible when learning. They will produce bugs that you don't understand as a beginner (speaking from experience). As for explaining stuff, I think they're a bit more useful with that.
Auto-correct can cause bugs like tricking developers into importing unintended packages. I've seen production code that should fail miserably, but pure happenstance results in the code miraculously not blowing up. AI is a powerful tool, but it will amp up these problems.
@@EdwardBlair yeah I also found it curious that as he was about to ask Linus about AI in kernel development, he apparently felt an overwhelming need to first vent his own opinion on AI in general even though that wasn't even the topic at hand and he wasn't the person that was being interviewed.
I'm an expert in the field and I _still_ think it's "autocorrect on steroids." It's just that I think that autocorrect was a revolutionary tool, even when it was just Markov chains.
I'm afraid he didn't get something: going from assembly -> C -> Rust -> (some yet-higher-level language) is a whole universe apart from understanding messy human natural language and then translating that into code. There are humans who understand compilers, but no human (yet) understands how a transformer does its "mapping". Linus wasn't trained in machine learning, so in this respect one should discount his opinion.
I already feel helpless without intellisense. I can imagine how future developers will feel banging their head against their keyboard because their LLM won't load with the right context for their environment.
I use IntelliSense daily, but I know people who code in raw Vim and get more done in a day than I do. AI is going to make typical things easier and is going to have limitations for a long time, and to do anything outside those limitations we'll need actual programmers.
This is the first time I've seen a public figure push back on the humancentric narrative that LLMs are insufficient because (description of LLMs with the false implicit assumption that it contains a distinction from human intelligence). He's also one of the last people in tech I'd expect to find checking human exceptionalism bias, but that's where assumptions get you. Then again, his role as kernel code gatekeeper probably gives him pretty unique insights into the limits of _other_ humans' intelligence, if not also his own. 😉 Anyway I hope to see more people calling out this bias, or fewer people relying on it in their arguments. If accepted, it tends to render any following discussion moot.
It is already helping review code, just look at million lint, it's not all AI but it has aspects where it uses LLMs to help you find performance issues in react code. A similar thing could be applied to code reviews in general
It has been a long time since I have seen such a hard argument - both are very right. They will have to master the equivalent of unit testing to ensure that LLM-driven decision-making doesn't become a runaway train. Even if you put a human in to actually "pull the trigger", if the choices are provided by an LLM then they could be false choices. On the other hand, there is likely a ton of low-hanging fruit that LLMs could mop up in no time. There could be enormous efficiencies in entire stacks and all the associated compute, in terms of performance and stability, if code is consistent.
The difference between a hallucination and an idea is the quality of the reasoning behind it. The issue is not that LLMs hallucinate - that may well be a feature in the future - the issue is that they are unable to figure out when the question is objective and whether they know the answer... not easy to fix, for sure, but I have no doubt it will be fixed one way or another.
@alexxx4434 I think when it does, it will take a while for all to agree it does... consciousness is very definition dependent and looks to me like a moving target (or rather a raising bar to clear).
This is my experience with AI coding, and it is probably a telling indication that programmers will always be needed. I script in a CAD environment using a LISP which is not 100% compatible with AutoCAD's LISP. It was fairly compatible up until Visual LISP came out, but not after. Every script it writes fails. It reads well, but never works.
Saying LLM's are just autocorrects on steroids is like saying human experts are just autocorrects on steroids. Obviously there's more to being an expert than that, and it is that expert role we are now bit by bit transferring over to machines
It's literally true, though. It's all about probabilities and choosing the most appropriate response. What differentiates Transformer models from previous Markov chains and Naïve Bayes algorithms is that Transformers encode the input into a more useful vector space before applying the predictions. You may find the "on steroids" shorthand as somewhat short-selling the importance of that shift, but the alternative is that we talk about artificial neural network models as if they have intelligence or agency (using terms like "attention," "understanding," "learn" and "hallucinate") which, while useful shorthand, is preposterous.
@@GSBarlev Sure but you can tell when people are just repeating that phrase because they heard it somewhere, in an attempt to rationalize metaphysical concepts like souls. The only difference between the hyperdimensional vectorspace that modern AI's operate in and the Hilbert space that you operate in is number of dimensions and support for entanglement and superposition- which are not exclusive to biology, and which many would argue are not even relevant to biology (they are, but AIs can have qubit-based mirror neurons too)
@@alakani Going to ignore your wider point and just give you an FYI-you can pretty safely disregard quantum effects when it comes to ideas of "consciousness." Yes, the probability that a K+ ion will quantum tunnel through a cell membrane is nonzero, but it's _infinitesimal,_ especially compared to the _real_ quantum tunneling that poses a major limitation to how tightly we can pack transistors on a CPU die.
It is almost unbelievable to hear someone being reasonable when talking about "AI". I hope this becomes the main stream attitude soon and that CEO's and tech bros drop the marketing speak. It is actually an interesting area of automation and calling it AI, I think, does the field a disservice in the long run even though it helps to sell products right now.
Calling it AI is accurate, the issue is people have a wrong impression in their minds of AI. People think AGI when they hear AI, when in reality what we have right now is narrow AI. It's still AI, objectively, but people are uninformed and think that means more than it does.
We already have huge problems with OSS quality, where more than 80% of all OSS is either poorly maintained or not maintained at all. On top of that, OSS use is on the rise, being the single biggest cause of increasing technical debt. LLMs have the potential to greatly increase the amount of OSS generated, meaning that unless we actively address OSS quality, LLMs will most likely make it worse.
You're right. But we're also hearing some negative stories in terms of teamwork. For example, there are situations where a junior developer sits and waits for AI code that keeps giving different answers instead of writing the code themselves, or where it takes more time to analyze why the code was written the way it was than it would have taken to write it. But it still helps to gain insight or a new approach, even if it's a completely different answer.
That junior coder needs more GitHubs so we can bring them on as a lead dev to work with AI. The middle management and entry level is over in the future.
"Here's the code for the new program. It's created by the same technology that constantly changes my correct typing to a bunch of wrong and completely ridiculous strings, like changing 'if' to 'uff' or changing 'wrong' to 'wht'."
Yeah, it's autocomplete/correct, except the autocomplete is owned and controlled by huge companies like OpenAI who train it on data no one gave them consent to use. Implementing machine-learning autocomplete/correct that runs on the user's machine vs a huge HPC monster that is controlled by a huge company are two very different things IMO. I'm not endorsing machine learning that can't be run offline and on the user's machine. Even better if it allows the user to train their own models on their own data. No ridiculous power and water used, no company using copyrighted material, no companies using your data on their servers for training AI. I think the trend of offloading more and more of our lives to an API or a huge for-profit company, and thus further deepening the fact that users are products themselves, is not the right way.
Linus has a much more correct perspective than the interviewer. Our brains ARE pattern predictors, our brains also dream and hallucinate. The author is trying to make it sound like those properties should diminish the credentials of 'LLMs' when really they make them more interesting. He's also ignoring that it's really Transformers we're talking about. Transformers are also being easily applied to visual, language, and audio data, and they're working easily multimodally to transform between them. There is no correct reading of the situation other than that something profound and core to the way intelligence probably works has been discovered.
These are the true Linus tech tips
hahaha
so true lmao
This!
Hate that lame wannabe dude pretending to know stuff
😅
@@denisblack9897don't hate anyone man, the guy is responsible for countless kids getting into tech, people tend to sort out the educational "bugs" on the way up :)
"Sometimes you have to be a bit too optimistic to make a difference"
-Stockton Rush
it's actually originally from William Paul Young, The Shack @@bartonfarnsworth7690
Understatement of the day, LOL.
hell of a motivational quote
that is also what a scammer wants from you.
dont put everything that looks fancy in you mind kiddo.
Man Linus is always such a refreshing glimpse of sanity
His argumet was bugs are shallow .we have compliers for shallow bugs llm can gind not so shallow .he is not the brightest
@@JosiahWarren Try that again with proper grammar chief.
He let his own kernel and dev community get destroyed. Screw him. RIP Linux
@@Ryochan7 Fun fact, fMRI studies show trolling has the same neural activation patterns as psychopaths thinking about torturing puppies; it's very specific, right down to the part where they vacillate between thinking it's their universal right, and that they're helping someone somehow
@@alakani and some people who troll do not think about it at all. they're easier to deal with if we aren't ascribing beneficial qualities to them.
LLMs write way better commit messages than I do and I appreciate that.
And they actually comment their code 😂
@@SaintNathcomments are usually bad though but are good if you’re learning I suppose but they can be out of date and thus misleading
@@Sindoku i hope your comment gets out of date quickly, because it's already misleading
@@Sindoku While I get your point, comments are definitely a good thing.
Yes code should be self-explanatory, and if it isn't you try your best to fix this. But there's definitely cases where it's best to add a short comment explaining why you've done something. It shouldn't describe *what* but *why*
@@NetherFX That's the point, a comment is worthless unless it touches on the why. A comment that just discusses the what is absolutely garbage because the code documents the what.
Full interview video is called "Keynote: Linus Torvalds, Creator of Linux & Git, in Conversation with Dirk Hohndel" by the Linux Foundation channel.
Where was this talk held?
thanks "so" much! It's pretty appalling that these folks don't even quote the source
Thank you a ton!
Thank you th-cam.com/video/OvuEYtkOH88/w-d-xo.html
Thank you so much!
1:06 "Now we're moving on from C to Rust" This is much more interesting than the title. I always thought Torvalds viewed Rust as an experiment.
Rust just isn't his expertise. It's going in the kernel, he's just letting others oversee it.
@@feignit It's already been in the mainline kernel for a while. It's very stable, and Rust just works really well now.
I actually think Go is better than Rust
@@yifeiren8004 You want a garbage collector running in the kernel?
@@SecretAgentBartFargo Rust isn't even close to being that stable.
I am senior software engineer and I use chat gpt sometimes at work to write powershell scripts. They usually provide a good enough start for me to modify to do what i want. That saves me time and allows me to create more scripts to automate more. Its not my main programming task, but it definitely saves me time when I need to do it.
Same. ChatGPT is great for throwing together a quick shell or python script to do boring data tasks that would otherwise take much longer.
Yep, saves me so much time with data preprocessing, and adds nice little features that I wouldn't normally bother with for a 1 time use throwaway script
Quit your job.
@@jsrjsr And light a fart?
@@alakani he should do worse than that.
If you let the LLM author code without checking it, then inevitably you will just get broken code. If you don't use LLMs you will take twice as long. If you use LLMs and review and verify what it says and proposes, and use it as Linus rightly suggests as a code reviewer who will actually read your code and can guess at your intent, you get more reliable code much faster. At least that is the state of things as of today.
Perhaps anecdotal, but it (AI Assistant in my case, I'm using JB Rider, pretty sure that's tied to ChatGPT) seems to get better with time. After finishing a method, I have another method already in mind. I move the cursor and put a blank line or two in under the method I just created in prep for the new method. If I let it sit for just a second or two before any keystrokes, often times it will predict what method I'm about to create all on its own, without me even starting the method signature. Yes, sometimes it gets it very wrong and I'll just hit escape to clear it, but sometimes it gets it right... and I mean really scary right. Like every line down to the keystroke and even naming is spot on, consistent w/ naming throughout the rest of the project. Yes, agreed, you still need to review the generated code, but I suspect that will only continually get better with every iteration. Rather then autocompleting methods, eventually entire files, then entire projects, then entire solutions. It's probably best for developers to try to learn to work with it in harmony as it evolves, or they will fall behind their peers that are embracing it. Scary and exciting times ahead.
@@keyser456 Same experience for me. It predicts what I was about to write next about 80% of the time, and when it gets it right, it's pretty much spot on. Insane progress just over the past year. Imagine where it will be in another year. Or five years. Coding is going to be a thing of the past, and it's going to happen very quickly.
If it is intelligent enough to write code, it will eventually become intelligent enough to debug complex code, as long as you tell it what is the issue that arises
You are training the LLM for the inevitable.
Oh man, now i really want to get into coding just to get that same transformative experience of a tool thinking ahead of you. I am a Designer, and to be frank, the experience with AI in my field is much less exciting, its just stockfootage on steroids, all the handywork of editing and putting it together is sadly the same. But the models are evolving rapidly and stuff like AI object select and masking, vector generation in Adobe Illustrator, transformative AI (making a summer valley into a snow valley e.g.) and motion graphics AI are on the horizon to be good or are already there. Indeed, what a time to be alive :D might get into coding soon tho
While AI lowers the bar to start programming, I'm afraid it also makes it easier to write bad code. But as with any other tool, more power brings more responsibility, and manual review should remain just as important.
as a cloud engineer I gotta say ChatGPT with GPT-4 really turbocharges me for most tasks, my productivity shot up 100-200% and I'm not kidding. You gotta know how to make it work for you and it's amazing :)
There will be more than one AI, one for each task: to create code and to validate code. Make no mistake, AGI is the final target, but the intermediate ones are good enough to speed up the whole effort.
Ok, speed, efficiency, productivity… All true, but to what effect? Isn’t it so that every time we’ve had a serious paradigm shift, we thought we could “save time”.
Sadly, since corporations are not ‘human’, we’ve ended up working *more* not less, raising the almighty GDP - having less free time and not making significantly more money.
Unless… you own shares, IP, patents and other *derivatives* of AI as capital.
AI is a tool. A sharp knife is also one. This “debate” should ask “who is holding the tool, and for what purpose?”. That question reveals very different answers to a corporation, a government, a community or a single person.
It’s not what AI is or can do. It’s more about what we are, and what we do with AI… 👍
Couldn't the same be said of Stack Overflow? I am not disagreeing with you, just adding an example to show it's not a new phenomenon.
It reminds me of a talk on some podcast from before LLMs, where the speaker said they tried to use AI as an assistant for medical reports and faced the following problem:
sometimes people see that the AI gets the right answers, and then when they disagree with it, they still go with the AI's conclusion, because "the system can't be wrong".
So to fight this, they programmed the system to sometimes give wrong results and ask the person to agree or disagree, to force people to choose the "right" answer and not just agree with anything the system says.
And this, I believe, is the weak point of LLMs.
While they're helpful in some scenarios, in others they can give very deceiving answers that look exactly how they should, but in fact describe something that doesn't even exist.
E.g. I asked one about the best way to get an achievement in a game, and it came up with things that really exist in the game and sound like they should be related to the achievement, but in fact they're not.
Or my friend tried to google Windows error codes, and it came up with problems and their descriptions that don't really exist either.
I have had copilot suggest an if statement that fixed an edge case I didn't contemplate, enough times to see it could really shine in fixing obvious bugs like that.
Skill issue
@@doodlebroSH if you always think of every edge case in all of the code you write, you are not programming that much
@@doodlebroSH I can tell you are new to programming and talking out of your ass just by that comment.
@@doodlebroSH :D yikes
@@antesajjas3371 I think you misspelled edge
Linus..... My man!!!
I would probably hate working with him, because I am not a very good software engineer and he would be going nuts with my time-complexity solutions... but boy has he inspired me.
Thank you!
bro do you even O(n^2)?
@@MrFallout92 I wish!!!
These days I have a deep love for factorials!
I don't think he would. His famous rants on LKML, before he changed his tone, were at people who SHOULD HAVE KNOWN BETTER. I don't remember him going nuts at newbies for being newbies. He did go nuts at experts who tried to submit sub-par/lazy/incomplete/etc work and should have known it was sub-par and needed fixing and didn't bother doing that. He was quite accurate and fair in that.
@@TestTest12332 Has this ever happened? Do you have any specific examples?
@@Saitanen That time when an fd-based syscall returned a "file not found" error code. Linus went nuts.
Wow, finally someone who acknowledges the opportunities LLMs offer without being overhyped or calling them an existential threat
yes, i find him very refreshing indeed
LLMs are total crap, there’s no reason to be optimistic
Yes. Because he is not a marketing guy or the CEO of a company.
Man, they can't even do simple addition. Of course they are not a threat. At least not yet
@@genekisayan6564 never used GPT-4 and other later models?
Linus sounds so calm and relaxed until you see his comments on other people's PRs
That was a terrible PR though
I think he does it for fun tbh
let whoever amongst us hasn't had a bad day because of a bad PR cast the first stone
You gotta let off steam somehow
Yeah :/
Man, Linus looks noticeably older and wiser than in the older talks I've seen of him. More respect for the guy.
Great people often age like wine.
@@RyanMartinRAMI have another adage - with age comes wisdom, but sometimes age comes alone. Not this time though!
I think age makes anyone more humble, but sometimes less open minded. It’s good to see Linus recognize that LLMs have their uses, while some projects like Gentoo have stood completely against LLMs. Nothing is black and white, and when the hype is over, I think LLMs will still be used as assistants to pay attention to small stuff we sometimes neglect.
It's another tool, like static and dynamic analysis. No programmer will follow these tools blindly, but they can use them to get suggestions or improve a feature. There have been times I've been stuck on picking a good data structure, and GPT has given more insightful ideas or edge cases I was not considering. That's its most useful role right now: a rubber ducky.
>No programmer will follow these tools blindly
My sweet summer child. The curl authors already have to deal with "security reports" because some [REDACTED]s used Bard to find "vulnerabilities" to get a bug bounty. Wait for the next jam in the style of "submit N PRs and you get our merch", and instead of PRs that fix a typo, you'll get even worse: code that doesn't compile.
I agree that it can help in these scenarios. People should be made aware of this, as the current discussion is way over the top and scares people into fearing for their jobs (and therefore their mental health). Another thing is, since sustainability was a topic, I'm not sure the energy consumed by this technology justifies these trivial tasks. Talking with a colleague seems more energy efficient.
aha. until it writes a Go GTK phone app (Linux phone) zero to hero with no code review and only UI design discussions.
6 months ago. just chatgpt4.
programming is dying and you people are dreaming.
in 2023 there were 30% fewer new hires across all programming languages.
for 2024, out of 950 tech companies, over 40% plan layoffs due to AI.
a bit too tired to link the source
You underestimate the stupidity of people
Absolutely, im convinced the other commenters claiming LLMs will make programming obsolete in 3 years or whatever are either not programmers or bad programmers lol
I find LLMs extremely useful for generating small code snippets very quickly, for example advanced regular expressions. Saved me tons of hours.
As long as you understand regular expressions, and review them and write extensive test cases for what the regular expressions should do, ChatGPT is pretty useful.
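To make that concrete, a minimal sketch of the workflow described above: treat an LLM-suggested regex as untrusted until it passes explicit test cases. The pattern (ISO-style dates) and the test strings are invented for illustration.

```python
import re

# Hypothetical LLM-suggested pattern: ISO-style dates like 2024-01-31.
# The pattern and the test cases below are made up for illustration.
DATE_RE = re.compile(r"^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

def is_iso_date(s: str) -> bool:
    """Return True if s looks like YYYY-MM-DD (no calendar validation)."""
    return DATE_RE.match(s) is not None

# Explicit test cases: write down what the regex *should* accept and reject
# before relying on it anywhere.
assert is_iso_date("2024-01-31")
assert is_iso_date("1999-12-01")
assert not is_iso_date("2024-13-01")   # month out of range
assert not is_iso_date("2024-1-31")    # missing zero padding
assert not is_iso_date("31-01-2024")   # wrong field order
print("all regex test cases passed")
```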
Linus has really mellowed out as he has gotten older.
In a good way.
He became hopeful and humble
The therapy worked. 😉
no therapy at all, just wisdom @@mikicerise6250
@@Munchkin303Linus hopeful and humble Torvalds
I think that Linus, in 2024, should run his own podcast
and his first guest should be joe rogan
@@TalsBadKidney
Linus: - "What language do you think should be taught first at elementary school, Joe?"
Joe: - "Jujitsu"
@@TalsBadKidneythis is such a great idea
And has a stand-up.
He's such a great speaker, but I doubt he would have much time between managing Linux, family life, and whatever else
At 1:10 you can see how Linus locates the Apple user and considers killing him on the spot but decides against it and continues his thought
😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂
lmao
😂😂
HAHAHA
Linus literally uses a MacBook…
For experienced programmers, most of the mistakes they make can be categorised as 'stupid', i.e. a simple oversight, where the fix is equally, stupidly trivial. Exactly the same with building a PC: you might have done it 'millions' of times, but forgetting something stupid in the build is always stupidly easy to do, and though you might not do it often, you will inevitably still do it. At some point. Unfortunately, the fixes seem to always take forever to find.
That’s the only good take on ai in the video, and maybe the only truly helpful thing ai might ever be used for, finding the obvious mistakes humans make because they’re thinking about more important shit.
That's the problem with computers, you need to do it all 100% correct or it won't work.
@@autohmae That also doubles as the good thing about computers, because it will never do something that you didn't tell it to do
I disagree with this. Simple bugs are easier to find, so we find more of them. The other bugs are more complex, which makes them harder to find, so we find fewer of them. For example, not realising that the HTTP protocol has certain ramifications that become a serious problem when you structure your web app a certain way.
@@chunkyMunky329 It's definitely true that there are always exceptions, though I'd politely suggest "not realising" is primarily a result of inexperience.
A badly written and/or badly translated URS (user requirements spec) can lead to significant issues when the inevitable subsequent change requests flood in, especially if there's poor documentation in the code.
Any organisation is only as good as its QA. We see this more and more in the games industry, where we increasingly, and deliberately, offload the testing aspect of that onto the end consumer.
Simple bugs should be easy to find, you'd think, but they're also very, very easy to hide, unfortunately.
Smart answer from Linus.
There is absolutely no doubt in my mind that things like co-pilot are already part of pull requests that have been merged into the Linux kernel.
You people don't understand. It was never about whether AI would replace programmers; it was always about whether AI will reduce job positions by a critical amount, so that it's hard to get hired
the deafening silence when that phone alarm dared to go off mid torvalds dialogue 😆
Those subtle bugs are what LLMs produce copious amounts of. And it takes very long to debug. To the degree where you probably would have been better off if you just wrote the code by hand yourself.
@@AvacadoJuice-q9b What, like a "Prompt Engineer"? It's ridiculous that this became a thing given how LLMs work.
It's all about intuition that most people can figure out if they spend a day messing around with it.
Honestly this has not been my experience using GPT4
Disagree. Humans constantly create bugs when coding themselves, even subtle ones, even the best of the best. LLMs are amazing. I realized my Python code needed to be multi-threaded. I fed it my code, and it multi-threaded everything. They are incredible, and this is just the beginning. Five years from now will blow people's minds completely. People who don't see how amazing LLMs are just aren't that bright, in my opinion.
that's why you must give the LLM pseudocode as input, to control the output and be more precise about what you want.
It's amusing how we, as programmers, often tell users that if they input poor quality data into the system, they should expect poor quality results. In this case, the fault lies with the user, not the system. However, now we find ourselves complaining about a system when we input low-quality data and receive unsatisfactory results. This time, though, we blame the system instead of ourselves
As someone with a degree in Machine Learning, hearing him call it LLMs "Autocorrect on steroids" gave me catharsis. The way people talk and think about the field of AI is totally absurd and grounded in SciFi only. I want to vomit every time someone tells me to "just use AI to write the code for that" or similar.
AI, as it exists now, is the perfect tool to aid humans (think pair programming, code auto-completion for stuff like simple loops, rough prototypes that can inspire new ideas, etc.) Don't let it trick you into thinking it can do anyone's job though. It's just a digital sycophant, never forget that.
Do you have any valid arguments that make you think that it cannot do anyone's job or is it just your emotions?
@@vuralmecbur9958 If your job relies on not thinking and copy-pasting code, then yes, it can replace you. But if it doesn't, if you understand code and can modify it properly to your needs and specifications, it cannot replace you. I work on AI as well.
@@vuralmecbur9958 It's not about AI not being an "autocorrect on steroids". It's about there being a lot of jobs out there that could be done by autocorrect on steroids.
@@vuralmecbur9958 Do you have any valid arguments as to why people will get laid off instead of companies scaling up their projects? A 200-300% increase in productivity simply means a 200-300% increase in future project sizes. The field you're working in is already dying anyway if scaling up isn't possible, and you're barking up the wrong tree.
Where I'm working we're constantly turning down projects because there's too much to do and no skilled labour to hire (avionics/defense).
@@vuralmecbur9958 Go prompt it to make you a simple application and you'll see it's not taking anyone's job anytime soon.
If anything, it's an amazing learning tool. You can study code and anything you don't understand, it will explain in depth. You don't quite grasp a concept? Prompt it to explain it further.
"You have to kinda be a bit too optimistic at times to make a difference" -This is profound
no one commenting on the moderator? He is doing a great job driving the conversation
"we are all autocorrects on steroids to some degree" - agree 100%
Could you elaborate why do you agree? Your comment adds no value right now
I think he really meant to say "autocomplete". Because it basically takes your prompt and looks for what answer is most likely to follow it, based on material it has read.
Which _is_ indeed kind of how humans work... if you remove creativity and the ability to _interact_ with the world, and only allow them to read books and answer written questions.
And by "creativity" I'm including the ability to spot gaps in our own knowledge and do experiments to acquire _new_ information that wasn't part of our training.
The thing people with the interviewers mindset misses is what it takes to predict correctly. The language model has to have an implicit understanding of the data in order to predict. And ChatGPT is using a large language model to produce text, but you could just as well use it to produce something else, like actions in a robot. Which is kind of what humans do; they see and hear things, and act accordingly. People who dismiss the brilliance of large language models on the basis that they're "just predicting text" are really missing the point.
@@sbqp3 - No, you couldn't really use it to "produce actions in a robot", because what makes ChatGPT (and LLMs in general) reasonably competent is the huge amount of material it was trained on, and there isn't anywhere near the same amount of material (certainly not in a standardised, easily digestible form) of robot control files and outcomes.
The recent "leap" in generative AI came from the volume of training data (and ability to process it), not from any revolutionary new algorithms. Just more memory + more CPU power + easy access to documents on the internet = more connections & better weigh(t)ing = better output.
And in any application where you just don't have that volume of easily accessible, easily processable data, LLMs are going to give you poor results.
We're still waiting for remotely competent self-driving vehicles, and there are billions of hours of dashcam footage and hundreds of companies investing millions in it. Now imagine trying to use a similar machine-learning model to train a mobile industrial robot, that has to deal with things like "finger" pressure, spatial clearance, humans moving around it, etc.. Explicitly coded logic (possibly aided by some generic AI for object recognition, etc. - which is already used) is still going to be the norm for the foreseeable future.
@@alang.2054 I like his comment because most of the thinking humans do is in fact System 1 thinking, which is reflex-like and on a similar level to what LLMs do.
(Average typing speed × number of working days a year) / 6 words per line of code ≈ 1 million LOC/year. But we don't write that much. Why? Most coding is just sitting and thinking, then writing a little.
LLMs are great for getting started with a new language or library, or for writing repetitive data structures or algorithms, but bad for production code or logic (design patterns such as the Strategy pattern), because they don't logically understand the problem domain, which, as the napkin math above suggests, is the largest part of coding and the part coding assistants aren't improving.
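Spelling out that napkin math with explicitly assumed numbers (the typing speed, hours, and working days below are guesses, not figures from the comment):

```python
# Napkin math from the comment above, with assumed numbers:
# these are rough guesses, not measured values.
words_per_minute = 40          # assumed average typing speed
hours_per_day = 8              # assumed time at the keyboard
working_days_per_year = 230    # assumed working days
words_per_loc = 6              # from the comment: ~6 words per line of code

words_per_year = words_per_minute * 60 * hours_per_day * working_days_per_year
loc_per_year = words_per_year / words_per_loc

print(f"{loc_per_year:,.0f} lines of code per year if we only ever typed")
# Roughly 736,000 LOC/year, i.e. on the order of a million, yet real output is
# a tiny fraction of that because most of the time goes into thinking.
```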
I wouldn't even agree.
Imagine yourself just getting the job to code some project X.
In that case, you can rely on a very limited amount of information.
Within the right constraints, there are very few ways in which LLMs fail.
Maybe if many programmers sit down and explain their thought process on multiple different problems, it can learn to abstract the problem-solving method programmers use. While "autocorrect on steroids" might be technically accurate for what it's doing, the models it builds to predict the next token are extremely sophisticated and, for all we know, may have some similarity to our logical understanding of problem domains. Also, LLMs are still in their infancy. There are probably controls or additional complexity that could be added to address current shortcomings. I'm skeptical of some of the AI hype; I'm equally skeptical of the naysayers. I tend to think the naysayers are wrong based on what LLMs have already accomplished. Plenty of people just 2-3 years ago would've said some of the things they are doing now are impossible.
Read the original documentation and if there's something you don't understand, Google it and be social. Only let the LLM regurgitate that part of the docs in terms you understand as a last resort.
I'm surprised at the creativity LLMs have in their own context, but don't replace reading the docs and writing code with LLMs. You must understand why the algo/struct is important and what problems each algorithm solves.
If you think LLMs replace experience, you're surely mistaken and you'll be trapped in learned helplessness for eternity.
I literally asked ChatGPT today to explain the MVCC pattern (which I could've sworn was called the MVVC pattern, but it corrected me on that), and its explanation got worse with every attempt after I told it it was not doing a good job.
@@SimGunther Reading the docs only works if you know what you're looking for. LLMs are great at understanding your badly written question.
I once proposed a solution to a problem I had to ChatGPT and it said: that sounds similar to the technique in statistics called bootstrapping. Opened up a whole new box of tricks previously unknown to me.
I could have spent months cultivating social relationships with statisticians but it would have been a lot more work and I'm not sure they'd have the patience.
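For anyone curious, a bare-bones sketch of the bootstrapping idea mentioned above: resample the data with replacement many times to estimate the uncertainty of a statistic. The data here is randomly generated toy data, not anything from the comment.

```python
import random
import statistics

random.seed(0)

# Toy data standing in for a real sample.
sample = [random.gauss(100, 15) for _ in range(50)]

def bootstrap_mean_ci(data, n_resamples=10_000, alpha=0.05):
    """Estimate a confidence interval for the mean by resampling with replacement."""
    means = []
    for _ in range(n_resamples):
        resample = random.choices(data, k=len(data))  # sample with replacement
        means.append(statistics.fmean(resample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

low, high = bootstrap_mean_ci(sample)
print(f"sample mean: {statistics.fmean(sample):.1f}, 95% CI: ({low:.1f}, {high:.1f})")
```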
Good interview, but I disagree with the introduction, where it is said that LLMs are "auto-correction on steroids". Yes, LLMs do next-token prediction. But that's just one part. The engine of an LLM is a giant neural network that has learned a (more or less sophisticated) model of the world. During inference it is used to match input information against that model and, based on those correlations, create new output information, which leads, in an iterative process, to a series of next tokens. So the magic happens when input information is matched against the learned world model, and that leads to new output information.
Agreed! This is the type of thing people say somewhat arrogantly when they've only had a limited play with the modern LLMs. My mind was blown when I wrote a parser of what I would call medium complexity in Python for a particular proprietary protocol. It worked great, but it was taking 45 minutes to process a day's worth of data, and I was using it every day to hunt down a weird edge case that only happened every few days. So out of interest I copied and pasted the entire thing into GPT-4 and said "This is too slow, please re-write it in C and make it faster", and it did. Multiple files, including headers, all perfect. It compiled first time, and did in about 30 seconds (I forget how long exactly, but that ballpark) what my hand-written Python program was doing in 45 minutes. I don't think I've EVER written even a simple program that's compiled first time, let alone something medium complicated.
To call this autocomplete doesn't give it the respect it deserves. GPT-4 did in a few seconds what would have taken me a couple of days (if I even managed it at all; I'm not an expert in C by a long stretch).
I agree, the reductionist argument trivializes the power of LLMs. We could say the same thing about humans, we "just predict the next word in a series of sentences". That doesn't capture the power and magic of human ingenuity.
Even Linus says that. Some of the things that LLMs produce are almost black magic.
So... Autocorrect
@@davidparker5530 Humans don't just predict the next word though. LLMs do. Neural networks don't think; all they do is guess based on some inputs. Humans think about problems and work through them; LLMs by nature don't think about anything more than what they've seen before.
My only gripe with AI generated code currently is when they write or suggest code that contains security vulnerabilities, or worse, leak credentials, secrets. AI may accelerate human productivity, but on the other side, it may also accelerate human stupidity.
Personally I think that while it will be extremely useful, there will also be this belief over time that the "computer is always right". In this sense we will surely end up with a scandal like Horizon in the future, but this time it will be much harder to prove that there was a fault in the system.
Precisely this. With Horizon it took years of them being incredulous that there were any bugs at all, that it must be perfect and that instead thousands of postmasters were simply thieves. Eventually the bugs/errors became so glaring (and finally maybe someone competent actually looked at the code) that it was then known that the software was in fact broken. What then followed were many many more years of cover ups and lies, with people mainly concerned with protecting their own status/reputation/business revenue rather than do what was right and just.
Given all this, the AI scenario is going to be far worse: the AI system that “hallucinates” faulty code will also “hallucinate” spurious but very plausible explanations.
99.99% won’t have the requisite technical knowledge to determine that it is in fact wrong. The 0.01% won’t be believed or listened to.
The terrifying prospect of AI is in fact very mundane (not Terminator nonsense): its ability to be completely wrong or fabricate entirely incorrect information, and then proceed to explain/defend it with seemingly absolute authority and clarity.
It is only a matter of time before people naturally entrust them far too much, under the illusion that they are never incorrect, in the same way that one assumes something must be correct if 99/100 people believe it to be so. Probability/mathematics is a good example of where 99/100 might think something is correct, but in fact they’re all wrong - sometimes facts can be deeply counterintuitive, and go against our natural intelligence heuristics.
Maybe. But it depends what we allow ai to be in charge of. Remember, if we vote out the gop we can like pass laws again to do things for the benefit of the people including ai regulations if needed.
I love this little short. I think what both of them said is true. LLM is definitely "autocorrect on steroids", as it were. But honestly, a lot of programming or really a lot of jobs in general don't really require higher level of intelligence, as Linus said - we all are autocorrect on steroids to some degree, because for the most part a lot of things we do, that's all you need. The problem is knowing the limitations of such a tool and not attempting to subvert human creativity with it.
Always love to hear Sir Linus Hopeful Humble Torvalds
Sir Linus Hopeful *_And_* Humble Torvalds
A responsible programmer might use AI to generate code, but they would never submit it without understanding it and testing it first.
Although by the time you read and fully understand the code, you may as well have written it.
@@traveller23e if the code fails for some reason, I'll be glad I took the time to understand it.
@@traveller23e actually true. if you understand every aspect of the code, why wouldn't you just have written it yourself? at some point when using LLMs these people will become used to the answers being mostly correct, so they'll stop checking. productivity 200% bla bla, yeah sure dude. man, LLMs will ruin modern software even more, today's releases are already full of bugs
@@traveller23e Well, the same goes for the compiler. If you "fully understand" the code, there should never be a warning or error. Most tools like GitHub Copilot require you to write anyway, but they give you the option of writing a few dozen chars with a single keystroke. This is pretty nice if most of your work is assembling different algorithms or data structures, not creating new ones.
I submit code I don't understand all the time; I simply ask the LLM in English to explain it to me. I have written a whole app in JavaScript without ever learning JS in my life
Is cut-and-paste from StackOverflow that far from asking the LLM for the answer?
ive never been insulted by gpt
@@derekhettinger451 Ha Ha!!!!!
Lmao. Well, a senior dev is likely on the other end of a stack overflow answer, so basically yea
@@VoyivodaFTW1 optimistic I see
Any help forum is just a distributed neural net when you think about it
My last company started using AI over a year ago. We write the docblock and the AI writes the function. And it's largely correct. This is production code in smartphones and home appliances worldwide.
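A rough illustration of that docblock-first workflow, in Python rather than whatever that company actually uses; the function, its spec, and the filled-in body are all invented here, and the body still needs human review like any other generated code.

```python
def debounce_ready(last_change_ms: int, now_ms: int, hold_ms: int = 50) -> bool:
    """
    Return True once an input has been stable for at least `hold_ms`.

    last_change_ms: timestamp (ms) of the most recent state change
    now_ms:         current timestamp (ms)
    hold_ms:        how long the input must stay unchanged
    """
    # The kind of body an assistant might fill in from the docblock above;
    # it still needs review (e.g. for timestamp wrap-around on embedded targets).
    return (now_ms - last_change_ms) >= hold_ms

assert debounce_ready(1000, 1060) is True
assert debounce_ready(1000, 1020) is False
```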
I use AI as a learning tool. If I get stuck I bounce ideas off it like I would with a person, then use that as a basis to keep going. I discover things I didn't consider and continue reading other sources. Right now AI is not good at teaching you, but it's great for getting directions to explore, or a map of things and concepts to look up.
That being said, the next generation will be unable to form thoughts without AI. How many people still know how to do long division by hand?
In such a short video, one can easily witness the brilliance of the man!!!
There's a world of difference between using AI to find bugs in your code, vs using AI to generate novel code from a prompt. Linus is talking about the former, AI Bros mean the latter.
Amazing that Linus accepts AI. Some techies are disparaging of AI. A truly smart person looks at the pros and cons, rather than just being dogmatically for or against.
Linus is definitely not a sheep; you can tell just how different he is from the general crowd.
He is different, but something I've noticed is that smart people are great at understanding things that the rest of us struggle with, yet they can be kind of dumb when it comes to simple common sense. Like, for him not to understand the downside of an AI writing bad code for you is just kind of silly. It should be obvious that a more reliable tool would be better than a less reliable tool.
@@chunkyMunky329 There is no "more reliable tool" though
It is about tools in your toolbox in general
Just because your hammer is really good at hammering in a nail, you're not gonna use it to saw a plank.
Same with programming. You use the tools that get the job done.
@@chunkyMunky329 You have an implicit assumption that people are more reliable tools than LLMs. I think that is up for debate.
@@pauldraper1736 "people" is a vague term. Also, I never said that it was a battle between manual effort vs LLMs. It should be a battle between an S-tier human invention such as a compiler vs an LLM. Great human-built software will cause ChatGPT to want to delete itself
@@chunkyMunky329 linter is only one possible use of ai
It's not the Artificial Intelligence that people should be worried about. It's the Natural Intelligence we need to watch the most...
This feels like it's lagging behind the state of things right now. I don't think it's a serious question whether LLMs will be useful for coding. They already are.
This was surprisingly not what I was expecting him to say, and yet I simultaneously respect him even more for saying it.
LLMs are interesting. They can be super helpful to write out a ton of code from a short description, allowing you to formulate an idea really quickly, but often the finer details are wrong. That is, using an LLM to write unique code is problematic. You may want the basic structure of idiomatic code, but then introduce subtle differences. When doing this, the LLM seems to struggle, often suggesting methods that don't exist, or used to exist, or mixing methodologies from multiple versions of the library in use. E.g. trying to use WebApplicationFactory in C#, but introducing some new reusable interfaces to configure the services and WebApplication that can be overridden in tests. It couldn't find/suggest a solution. It's a reminder that it can only write code it's seen before. It can't write something new. At least not yet.
you'll spend more time making sure it didn't add confident errors than it would take to write the code in the first place. complete gimmick only attractive to weak programmers
@@elle305 I don't think that's accurate. Sure, you need the expertise to spot errors. Sure, you need the expertise to know what to ask for. But I don't agree with the idea that you'll take more time with LLMs than without. It's boosted my productivity significantly. It's boosted my ability to try new ideas quickly and iterate quickly. It's boosted my ability to debug problems in existing code. It's been incredibly useful. It's a soundboard. It's like doing pair programming but you get instant code. I want more of it, not less.
@@br3nto i have no way to validate your personal experience because i have no idea of your background. but I'm a full-time developer and have been for decades, and I'm telling you that reviewing LLM output is harder and more error-prone than programming. there are no shortcuts to this discipline, and people who look for them tend to fail
@@elle305 it's no different for any other discipline. but sometimes doing it the hard way (fucking around trying to make the AI output work somehow) is more efficient than doing it the right way, especially for one-off things, like trying to cobble together an assignment. and unfortunately, more often than not, weak programmers (writers, artists, …) are perfectly sufficient for the purposes of most companies.
@@Jonas-Seiler i disagree
For those commenting that there won't be coding in a couple of years, I'd like to point to scientific calculators and math software. We didn't stop doing math by hand. We just made some tasks faster and more accurate. You will always need to learn the 'boring' parts even if there is a 'calculator'. Your brain needs the boring stuff to create more complex results.
If programmers aren't debugging their own work, then they will gradually lose the ability to do so. Just like when a child learns to multiply with a calculator and not in their mind: they lose the ability to multiply and become reliant on the machine.
Programmers learn as they program. It is mind-expanding work. Look at Torvalds and you see a person who is highly intelligent, because he has put the work in over many years.
We can become more efficient programers using AI tools - but it will come at a cost.
"Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to do homage, makes us utterly blind to the essence of technology." - Martin Heidegger
When a programmer, for example, is asked to check on a solution given by AI and lacks the competency to do so (because, like the child, they never learned the process), this is a dangerous position we as humans are placing ourselves in: caged in inscrutable logic that will nonetheless come to govern our lives.
yep
yep but companies dont care on the spot, they want the feature as fast as possible and the cheapest way
nicely put
It can write code but I'm not sure it can design software. It can't really reason. The parameters still have to be defined by a reasoning being.
As a central figure in the FOSS movement, I'm surprised he doesn't have any scathing remarks about OpenAI and Microsoft hijacking the entire body of open source work to wrap it in an opaque for-profit subscription service.
He has to be careful now that the SJWs neutered him and sent him to tolerance camp. Thank the people who wrote absolute garbage like the contributor covenant code of conduct
Then you're not in the loop. Linus was never the central figure of the FOSS movement. While his contribution to the Linux Kernel is appreciated he's not really considered one of the leaders when it comes to the FOSS movement.
@@haroldcruz8550 Well said. I'd expect stronger opinions from Richard Stallman for instance.
I like Linus' calming voice, it's soothing
I'm glad he corrected the host. We are indeed all basically autocorrect to the extent LLMs are. LLMs are also creative and clever, at times. I get the feeling the host hasn't used them much, or perhaps at all
It _seems_ to be creative and it _seems_ to be clever especially to those who are not. The host was fully correct stating that it has nothing to do with "intelligence", it only _seems_ to be intelligent.
@@kralg If we made a future LLM that is indistinguishable from a human being, that answers questions correctly, that can solve novel problems, that "seems" creative... what is it that distinguishes our intelligence than the model's?
It's just picking one token before the next, but isn't that what I'm also doing while writing this comment? In my view, there can certainly be intelligence involved in those simple choices.
@@doomsdayrule Intelligence is much more than just writing text. Our decisions are based not only on lexical facts, but on our personal experiences, personal interests, emotions, etc. I cannot and am not going to go much deeper into that, but it must be way more complex than a simple algorithm based on a bunch of data.
I am saying nothing less than that you will never be able to make a future LLM that is indistinguishable from a human being. Of course, when you are presented with just a text written by "somebody", you may not be able to figure it out, but if you start living with a person controlled by an LLM you will notice much sooner than later. That is because the bunch of data these LLMs use is missing one important thing: personality. And that word is highly related to intelligence.
@@doomsdayrule As I am writing this comment, I'm not starting with a random word like "As" and then try to figure out what to write next. (Actually, the first draft started with "When")
I have a thought in mind, and then somehow pick a sentence pattern suitable for expressing it. Then I read over (usually while still typing) and revise. At some point, my desire to fiddle with the comment is defeated by the need to do something else with my day, and I submit the reply. And then I notice obvious possibilities for improvements and edit what I just submitted.
@@MarcusHilarius One aspect of this is that we are living in an overhyped world. Just in recent years we have heard so many promises like the ones you're making. Just think about the promises made by Elon Musk and other questionable people. The marketing around these technologies is way "in front" of the reality. If there is just a theoretical possibility of something, the marketing jumps on it and creates thousands of believers, with the obvious aim of gathering support for further development. I think it is just smart to be cautious.
The other aspect is that many believers do not know the real details of the technologies they believe in. The examples you mentioned are not in the future; to some extent they are available now. We call it automation, and it does not require AI at all. Instead it relies on sensor technology and simple logic. Put an AI sticker on it and sell a lot more.
Sure machine learning will be a great tool in the future, but not much more. We are in the phase of admiration now, but soon we will face the challenges and disadvantages of it and we will just live with them as we did so with many other technologies from the past.
Program-generated code goes back decades; if you've ever used an ORM, almost all of them generate tables and SQL from classes and vice versa. But I don't think anybody just takes it as is without reviewing.
Reviewing can be automated.
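As a concrete example of the kind of program-generated code mentioned above, here is a sketch using one common ORM, SQLAlchemy (chosen only for illustration; the model is made up): the class is hand-written, the CREATE TABLE statement is generated from it, and the generated SQL is still worth reviewing in the log.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    """Hand-written class; the SQL below is machine-generated from it."""
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(80), nullable=False)
    email = Column(String(120), unique=True)

engine = create_engine("sqlite:///:memory:", echo=True)  # echo prints the generated SQL
Base.metadata.create_all(engine)  # emits CREATE TABLE users (...) for review
```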
I personally believe, much like many others, that AI/ML will only speed up the rate at which bad programmers become even worse programmers. Part of the art of writing software is writing it efficiently, and you can't do that if you always use tools to solve your problems for you. You need to experience the failures and downsides in order to fully understand how it works. There is a line where it turns from an efficient tool into a tool used to avoid actually thinking about solutions. I fully believe that there is a place for AI/ML in making software, but if people blindly use it to write software for them, it'll just lead to hard-to-find bugs and code that nobody knows how it works, because nobody actually wrote it.
You don't always have to reinvent the wheel when it comes to learning how to code.
Everyone starts by copying code from Stack Overflow and many still do that for novel concepts they want to understand.
It can be pretty helpful to ask AI for specific things instead of spending hours trying to search for something fitting...
Sure thing, if you just stop at copying you don't learn anything
@@cookie_space but i think that's the thing, the risk of "just copying" will be higher because all the AI tools and AI features in our IDEs will make it a lot easier and more probable to get the code ready for you
@@cookie_space Everyone? Man, don't throw everyone into the same bucket. Are you the guy who cannot even write a bubble sort from memory and needs to google every single solution? Well, that is sad
@@Markus-iq4sm I wasn't aware that your highness was born with the knowledge of every programming language and concept imprinted in your brain already. It might be hard to fathom for you, but some of us actually have to learn programming at some point
@@cookie_space you learn nothing by copy-pasting, actually it will even make you worse especially for beginners
This is why Linus is Linus. Just look at his intelligence, attitude to life and optimism. No negativity, rivalry or hate. My respect.
I think LLM technology will make bad programmers faster at being bad programmers, and hopefully also push them to become better programmers faster.
LLMs I think will make good programmers more efficient at writing good code they probably would already write.
LLMs solve not needing to remember how you write things. You still have to be able to read it and have good judgement on where the code is subpar.
@@melvin6228 This is nonsense. How can you audit code that you yourself don't remember how to write?
@@ougonce is that function you use twice a year called "empty_foo_bar" or "clear_foo_bar"? Or maybe "foo_bar_clear"? Those kinds of questions are very important and annoying to answer when writing, useless when reading.
@@yjlom Or even just something as simple like the question of how you get the length of an array in the particular language you are using. After using enough languages, they kind of all blend together, and I can't remember if this one is x.length, x.length(), size(x), or len instead of length somewhere. I'm used to flipping between a lot of languages quickly, and it's really easy to forget the specifics of a particular one sometimes, even if I understand the flow I would like the program to follow. Essentially, having an AI that can act as a sort of active documentation can really help.
I was using ChatGPT to help me write code just today. I'm making a Python module in Rust and I'm new to Rust.
I wanted to improve my error handling. I asked how to do something and ChatGPT explained that I could put Results in my iterator and just collect at the end to get a vector if all the results are ok or an error if there was a problem. I didn't understand how that worked and asked a bunch of follow-up questions about various edge cases. ChatGPT explained it all.
Several things happened at once: I got an immediate, working solution to my specific problem. I didn't have to look up the functions and other names. And I got tutored in a new technique that I'll remember next time I have a similar situation.
And it's not just the output. It's that your badly explained question, where you don't know the correct terminology, gets turned into a useful answer.
On a separate occasion I learned about the statistical technique of bootstrapping by coming up with a similar idea myself and asking ChatGPT for prior art. I wouldn't have been able to search for it without already knowing the term.
There is a fundamental philosophical difference between the type of wrong humans do and the type AI does (in its present form). I think programmers are in danger of seriously devaluing the relative difference between incidental errors and constitutive errors: humans are wrong accidentally, while LLMs are wrong by design. And while we know we can train people better to reduce the former, it remains to be seen whether the latter will remain inherent in the implementation realities of LLMs, i.e. relying on statistical inference as a substitute for reason.
You got stuck in your own word salad. Start over; Think like a programmer. Break the problem down. How would you go about proving the LLM's code is correct using today's technology?
@@caLLLendar
First, I don't appreciate your tone. I know this is TH-cam and standards of discourse here are notoriously low, but there is no need to be rude.
I wasn't making a point about engineering.
The issue is not the code, code can of course be Unit Tested etc. for validity.
The issue is that the method of producing the code is fundamentally statistical, and not arrived at through any form of reason. This means there is a ceiling of trust that we must impose if we are to avoid the obvious pitfalls of such an approach.
As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data - and you, as the developer, if you do not preference your own problem solving skills are increasingly relegated to the role of code babysitter. This is not something to be treated casually.
Early research is now starting to validate this concern: visualstudiomagazine.com/Articles/2024/01/25/copilot-research.aspx
These models have their undeniable uses, but I find it depressing how many developers are rushing to proclaim their own obsolescence in the face of a provably flawed (though powerful) tool.
@@calmhorizons Have one developer draft pseudocode that is transformed into whatever scripting language is preferred, and then use a boatload of QA tools. The output from the QA tools prompts the LLM. Look at Python Wolverine to see automated debugging. Google the loooooonnnnng list of free open-source QA tools that can be wrapped around LLMs. The LLMs can take care of most of the code (like writing unit tests, type hinting, documentation, etc.).
The first thing you'd have to do is get some hands on experience in writing the pseudocode in a style that LLMs and non-programmers can understand.
From there, you will get better at it and ultimately SEE it with your own eyes. I admit that there are times that I have to delete a conversation (because the LLM seems to become stubborn). However, that too can be automated.
The result?
19 out of 20 developers fired. LOL I definitely wouldn't hire a developer who wouldn't be able to come up with a solution for the problems you posed (even if the LLM and tools are doing most of the work).
Some devs pose the problem and cannot solve it. Other devs think that the LLM should be able to do everything (i.e. "Write me a software program that will make me a million dollars next week).
Both perceptions are provably wrong. As programmers it is our job to break the problem down and solve it.
Finally, there are ALREADY companies doing this work (and they are very easy to find).
@@calmhorizons exactly. Agreed, and very well put. Respect for taking time to reply to a rather shallow and asinine comment.
"As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data "
I would add that this will likely be exacerbated once more and more AI-generated code makes its way into the training datasets (and good luck filtering it out).
We already know that it has a very deteriorating effect on the quality (already proven for the case of image generation), because all flaws inherent to the method get amplified as a result.
Linus is always chill about new things.
I learned Python on my own from TH-cam and online tutorials. And recently I started learning Go the same way, but this time also with the help of Bard. The learning experience has been nothing short of incredible.
You should pat yourself on the back for not asking ChatGPT to write code for you.
@@Spacemonkeymojo Only my TH-cam comments are written by ChatGPT, not my code.
Bard and code, only for simple stuff
The future is Subject Oriented Programming!!
"Hopeful and humble" sounds like a good name for a Linux release. Just saying…
4:31 - It’s crucial to remember that the current state of LLMs is the worst they’ll ever be. They’re continually improving, though I suspect we’ll eventually hit a point of diminishing returns.
how do you make this statement? the worst they'll ever be? really? how do you come up with this? you don't even program, so what is your opinion worth
@@Insideoutcest I actually just received my first offer doing R&D for a software development company. I specifically specialize in AI product software development (writing code) . The statement I made is 100% factual, the current capabilities of models are the worst they will ever be…. They will only improve, now how much remains to be seen. Could be just 2% could be 20%. I personally believe there is room for considerable improvement before we hit the frontier of diminishing returns.
Edit: you know nothing about me, why tell me I don’t program? As if that would certify my previously stated opinion on the improvement of the technology….
@@aaronstathatos6195 cope, you're a layman
Well, my first Arduino project went very well: a medium-complexity differential temperature project with 3 operating modes, hysteresis, etc. I know BASIC and the 4NT batch language. Microsoft Copilot helped me produce tight, memory-efficient, buffer-safe, and well-documented code. So, AI for the win!
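For readers unfamiliar with the hysteresis part of such a project, a tiny sketch of the idea, in Python rather than Arduino C++; the thresholds and readings are invented for illustration.

```python
# Hysteresis: turn the pump ON above one threshold and only back OFF below a
# lower one, so the output doesn't chatter when the reading hovers near a
# single set point. Thresholds are made up for illustration.
ON_DELTA_C = 5.0    # turn on when the collector is 5 degrees warmer than the tank
OFF_DELTA_C = 2.0   # turn off only once the difference drops below 2 degrees

def update_pump(collector_c: float, tank_c: float, pump_on: bool) -> bool:
    """Return the new pump state for a differential temperature controller."""
    delta = collector_c - tank_c
    if not pump_on and delta >= ON_DELTA_C:
        return True
    if pump_on and delta <= OFF_DELTA_C:
        return False
    return pump_on  # inside the hysteresis band: keep the previous state

state = False
for collector, tank in [(55, 52), (58, 52), (56, 53), (54, 53), (53, 52)]:
    state = update_pump(collector, tank, state)
    print(collector, tank, "pump on" if state else "pump off")
```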
That Canadian guy was lucky enough to be given the name of a true tech genius
I have never programmed before in my life, and with GPT-4 I have written several little programs in Python, from code that helps me rename large numbers of files to more advanced stuff. LLMs give me the opportunity to play around. The only thing I need to learn is how to prompt better.
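A rough sketch of the kind of file-renaming helper described above, under an assumed goal (prefixing .jpg files with a zero-padded index); the folder name and naming scheme are made up, and a dry run prints the plan before anything is actually renamed.

```python
from pathlib import Path

# Illustrative only: prefix every .jpg in a folder with a zero-padded index.
def rename_photos(folder: str, dry_run: bool = True) -> None:
    files = sorted(Path(folder).glob("*.jpg"))
    for index, path in enumerate(files, start=1):
        target = path.with_name(f"{index:03d}_{path.name}")
        print(f"{path.name} -> {target.name}")
        if not dry_run:
            path.rename(target)

rename_photos("vacation_photos")                     # preview first
# rename_photos("vacation_photos", dry_run=False)    # then actually rename
```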
you're a programmer in my eyes!
"Only thing I need to learn is how to prompt better."
This is exactly the problem. Especially when you scale. You can't prompt to make a change to an already complex system. It then becomes easier to just code or refactor yourself.
The fact that anybody needs to "prompt better" suggest that LLMs are not very good yet
@@twigsagan3857 The only problem is when the code exceeds the token limit. Otherwise I can still let the LLM correct my code. Takes a while to get there, but it works. And no, I am not at all a programmer xD
@@chunkyMunky329 huh? LLMs predict the most likely answer. So the way you describe the task is the most important thing in dealing with it.
I am using AI to learn Arduino coding. It helps me a lot to understand the code and do fault finding, but when I ask it to make the corresponding circuit diagram, even for a simple problem, it struggles. But it explains circuit diagrams very well. Needs improvement. Many PDF books are available; just feed them to the AI and improve it?
LLMs are certainly useful and can very much assist in many areas. The future really is open-source models that are explainable and share their training data.
so nice, and also the quality of the comments for this video... there is hope for humanity.
my main fear is that this is something we will start relying on too much, especially when people are starting out. even autocompletion can become a crutch, so much so that a developer becomes useless without it. imagine that, but for thinking about code. we are looking at a future where all software will be as bad as modern web development.
technology as an idea is reliable - a hammer will always be a hard thing + leverage. We have relied on technology since the dawn of mankind, so I'm not sure what you're saying here.
@@kevinmcq7968 LLMs are reliable? How so? Can you name a technology that we have relied on in the past that is as random as LLMs? I am genuinely curious.
@@kevinmcq7968 I think you are just intentionally misunderstanding what he is saying. He is not saying tools are not useful; he is saying that if a tool starts to replace the use of your own mind, it can make you dependent to the point that it harms your own reasoning skills (and we have some evidence that this is happening; that's why some schools are going back to handwriting, for example / Miguel Nicolelis also has some takes on this matter).
I like how he didn't fall into the trap of AI bashing that the host was trying to lead him into. That's how you can differentiate a trend follower from a visionary.
I find that in their current state, these models tend to make more work for me deleting and fixing bad code and poor comments than the work they save. It's usually faster for me to write something and prune it than to prune the ai code. This may be partially because it's easier for me to understand and prune my own code than to do the same with the generated stuff, but there is usually a lot less pruning to do without ai.
No. Your comment was, for me, like a breath of fresh air in the middle of all this pseudo-cognitive farting about so-called AI. And no, it is not only you. Those who say otherwise are just posers, actors, mystifying parrots repeating the instilled marketing hype.
Maybe that's just in the beginning? Eventually, it might become easier to spot someone else's mistake than your own. Also, AI might more easily find your mistakes.
From personal experience, I think LLMs *writing* your code are terrible when learning. They will produce bugs that you don't understand as a beginner (speaking from experience). As for explaining stuff, I think they're a bit more useful with that.
If you didn't like what the tool produced, then the problem isn't the tool, friend.
Keep practicing your prompts (and programming). If you like, I'll help you train (for free).
Auto-complete can cause bugs, like tricking developers into importing unintended packages. I've seen production code that should fail miserably, but pure happenstance results in the code miraculously not blowing up. AI is a powerful tool, but it will amp up these problems.
No. Thinking like a programmer, are you able to come up with a solution?
Yes, I accept suggestions, and I read them too; never accept something you don't know about.
I love how Hohndel disses AI as "not very intelligent" / "just predicts the next word" and Linus retorts that it's actually pretty great lol
@@EdwardBlair yeah I also found it curious that as he was about to ask Linus about AI in kernel development, he apparently felt an overwhelming need to first vent his own opinion on AI in general even though that wasn't even the topic at hand and he wasn't the person that was being interviewed.
I'm an expert in the field and I _still_ think it's "autocorrect on steroids." It's just that I think that autocorrect was a revolutionary tool, even when it was just Markov chains.
I'm afraid he didn't get something: going from assembly -> C -> Rust -> (some yet-higher-level language) is a whole universe apart from understanding messy human natural language and then translating that into code. There are humans who understand compilers, but no human (yet) understands how a transformer does its "mapping". Linus wasn't trained in machine learning, so on this point one should discount his opinion.
could be a great tool for static analysis.
If it was great at static analysis then people would probably already be using it for static analysis
"This pattern doesn't look like the usual pattern, are you sure?" awesome
I already feel helpless without intellisense. I can imagine how future developers will feel banging their head against their keyboard because their LLM won't load with the right context for their environment.
I use intellisense daily, but I know people who code in raw vim and get more done in a day than I do. AI is going to make typical things easier, and it is going to have limitations for a long time; to do anything outside those limitations, we'll need actual programmers.
LLM in the hands of Jr Dev is like a bug building tool 😂, LLM in the hands of experienced Sr. Dev is like a sharpening tool.
This is the first time I've seen a public figure push back on the humancentric narrative that LLMs are insufficient because (description of LLMs with the false implicit assumption that it contains a distinction from human intelligence). He's also one of the last people in tech I'd expect to find checking human-exceptionalism bias, but that's where assumptions get you.
Then his role as kernel code gatekeeper probably gives him pretty unique insights into the limits of _other_ humans' intelligence, if not also his own. 😉
Anyway I hope to see more people calling out this bias, or fewer people relying on it in their arguments. If accepted, it tends to render any following discussion moot.
You shouldn't conclude that LLMs aren't dumb as fuck just because they happen to be smarter than you.
It is already helping review code; just look at Million Lint. It's not all AI, but it has aspects where it uses LLMs to help you find performance issues in React code. A similar thing could be applied to code reviews in general.
I think some humans would be glad if they still had the time to hallucinate, dream or imagine things from time to time.
good point xD
I think most project leads would not be glad if one of their devs submitted a PR for code they hallucinated
@@verdiss7487 not what i am talking about
late stage ca-
@@pueraeternus. cannibalism?
It has been a long time since I have seen such a hard argument; both are very right. They will have to master the equivalent of unit testing to ensure that LLM-driven decision-making doesn't become a runaway train. Even if you put a human in place to actually "pull the trigger", if the choices are provided by an LLM then they could be false choices. On the other hand, there is likely a ton of low-hanging fruit that an LLM could mop up in no time. There could be enormous efficiencies in entire stacks and all the associated compute, in terms of performance and stability, if code is consistent.
The difference between a hallucination and an idea is the quality of the reasoning behind it. The issue is not that LLMs hallucinate; that may eventually be a feature. The issue is that they are unable to figure out when a question is objective and whether they actually know the answer. Not easy to fix, for sure, but I have no doubt it will be fixed one way or another.
Not until it gains some sort of consciousness.
@alexxx4434 I think when it does, it will take a while for everyone to agree that it does... consciousness is very definition-dependent and looks to me like a moving target (or rather a rising bar to clear).
This is my experience with AI coding, and it is probably a telling indication that programmers will always be needed. I script in a CAD environment using a LISP that is not 100% compatible with AutoCAD's LISP; it was fairly compatible up until Visual LISP came out, but not after. Every script it writes fails. It reads well, but it never works.
This host underestimates how hard a task autocorrect is. You have to understand human sentiment to predict the next word, which is really hard.
Someone recently said something like "It isn't really about what AI can do, but what the public believes it can do."
Saying LLMs are just autocorrect on steroids is like saying human experts are just autocorrect on steroids. Obviously there's more to being an expert than that, and it is that expert role we are now, bit by bit, transferring over to machines.
Please let them keep saying it, it's a super convenient way for me to tell when somebody has no idea what's going on, without having to interview them
True. LLMs are more of a pattern generator on steroids.
It's literally true, though. It's all about probabilities and choosing the most appropriate response. What differentiates Transformer models from previous Markov chains and Naïve Bayes algorithms is that Transformers encode the input into a more useful vector space before applying the predictions (a toy sketch of the plain Markov-chain approach follows this thread).
You may find the "on steroids" shorthand as somewhat short-selling the importance of that shift, but the alternative is that we talk about artificial neural network models as if they have intelligence or agency (using terms like "attention," "understanding," "learn" and "hallucinate") which, while useful shorthand, is preposterous.
@@GSBarlev Sure but you can tell when people are just repeating that phrase because they heard it somewhere, in an attempt to rationalize metaphysical concepts like souls. The only difference between the hyperdimensional vectorspace that modern AI's operate in and the Hilbert space that you operate in is number of dimensions and support for entanglement and superposition- which are not exclusive to biology, and which many would argue are not even relevant to biology (they are, but AIs can have qubit-based mirror neurons too)
@@alakani Going to ignore your wider point and just give you an FYI: you can pretty safely disregard quantum effects when it comes to ideas of "consciousness." Yes, the probability that a K+ ion will quantum-tunnel through a cell membrane is nonzero, but it's _infinitesimal,_ especially compared to the _real_ quantum tunneling that poses a major limitation on how tightly we can pack transistors on a CPU die.
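To make the comparison in the thread above concrete, here is a toy Markov-chain next-word predictor, the pre-transformer baseline the comment mentions, as a minimal Python sketch. The tiny training text is invented for illustration; real autocorrect models are far larger, and a transformer would additionally map words into a learned vector space before predicting.

```python
# Toy Markov-chain "autocorrect": predict the next word purely from bigram counts.
# Unlike a transformer, there is no learned embedding space; the model only knows
# which word has followed which word in its (tiny, invented) training text.
from collections import Counter, defaultdict

training_text = (
    "the kernel maintainers review the patch "
    "the kernel maintainers reject the patch "
    "the kernel builds cleanly"
)

bigram_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    bigram_counts[current_word][next_word] += 1

def predict_next(word: str) -> str | None:
    """Return the word most often seen after `word`, or None if `word` is unseen."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))      # 'kernel' (seen 3 times, beating 'patch' at 2)
print(predict_next("kernel"))   # 'maintainers'
print(predict_next("llama"))    # None: no vector space, no notion of similar words
```

Everything this model "knows" lives in those raw counts, which is why it cannot generalize to words it has never seen; that gap is roughly what the encoding step in a transformer addresses.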
It is almost unbelievable to hear someone being reasonable when talking about "AI". I hope this becomes the mainstream attitude soon and that CEOs and tech bros drop the marketing speak. It is actually an interesting area of automation, and calling it AI, I think, does the field a disservice in the long run, even though it helps sell products right now.
They hype it to pump the stock price
Calling it AI is accurate; the issue is that people have the wrong impression of what AI means. People think AGI when they hear AI, when in reality what we have right now is narrow AI. It's still AI, objectively, but people are uninformed and think the term means more than it does.
there is a lot more to software engineering than just writing code
We already have huge problems with OSS quality: more than 80% of all OSS is either poorly maintained or not maintained at all. On top of that, OSS use is on the rise, making it the single biggest cause of increasing technical debt.
LLMs have the potential to greatly increase the amount of OSS generated, meaning that unless we actively address OSS quality, LLMs will most likely make it worse.
I think the hallucinations make it less scary, the fact that it needs human involvement means that jobs will stay.
You're right. But we're also hearing some negative stories in terms of teamwork. For example, there are situations where a junior developer sits and waits on AI code that keeps giving different answers instead of writing the code themselves, or where it takes more time to analyze why the code was written the way it was than it would have taken to write it; still, it helps to gain insight or a new approach, even if it's a completely different answer.
That junior coder needs more GitHubs so we can bring them on as a lead dev to work with AI. The middle management and entry level is over in the future.
I'm still very worried about the copyright implications, and the hidden immoralities behind classifying training data.
Yes, and no one is talking about the effect LLMs will have on creativity.
@@rithikgandhi3685 yeah
"Here's the code for the new program. It's created by the same technology that constantly changes my correct typing to a bunch of wrong and completely ridiculous strings, like changing 'if' to 'uff' or changing 'wrong' to 'wht'."
linus with the hot takes. love to see it
Yeah, it's autocomplete/autocorrect, except the autocomplete is owned and controlled by huge companies like OpenAI, who train it on data no one gave them consent to use. Implementing machine-learning autocomplete/autocorrect that runs on the user's machine vs. a huge HPC monster controlled by a huge company are two very different things IMO. I'm not endorsing machine learning that can't be run offline on the user's machine. Even better if it allows users to train their own models on their own data: no ridiculous power and water use, no company using copyrighted material, no companies using your data on their servers for training AI. I think the trend of offloading more and more of our lives to an API or a huge for-profit company, and thus further deepening the fact that users are products themselves, is not the right way.
Linus has a much more accurate perspective than the interviewer. Our brains ARE pattern predictors; our brains also dream and hallucinate. The interviewer is trying to make it sound like those properties should diminish the credentials of LLMs, when really they make them more interesting. He's also ignoring that it's really Transformers we're talking about. Transformers are easily applied to visual, language, and audio data, and they work multimodally to transform between them. There is no correct reading of the situation other than that something profound and core to the way intelligence probably works has been discovered.