For sure. In education, documenting the reasoning for how you approach one kid's way of learning versus another's is the goal of individualized education. Scientific discovery. We'll need agents that can act on the reasoning and thinking. And ways to determine if the agent properly acted on the reasoning both in the digital space but in the physical real world. Exciting times.
I like to imagine that we'll use with computers like we've seen in scifi movies/shows. A simple example is like in star trek.When there's a problem, "computer, what is the ...." or similar then the guys in the shirts go do things. I agree with Gary Vaynerchuk when he talks about the technology (it's development and adoption of use), it is undefeated. Things will change. Yes, some jobs will go away, and some new ones will come. I think Gary said once when electricity was finally available at a citizens home some people didn't want it because they thought demons were electricity.
Yes, because not every application lends itself to being a conversation. I want to play a game of chess, browse through my personal photos, or design a house. I think rather the paradigm of application development shifts to where it's no longer an advanced skill learned and practiced by a few, but that basically everyone that uses software and has any ideas for how to customize or improve it becomes a software developer, or rather the AI develops all or most software just based on the ideas and customizations submitted by users of that software. I imagine the process for customizing your applications by having AI modify it and then also submitting and ranking these changes and ideas to open repositories becomes streamlined and ubiquitous to the point that there is very little distinction left between software user and software developer.
It will continue to exist, but it will coexist with AI, just as radio coexisted with TV, and both with the internet. I think there's a huge conceptual error when talking about work and AI, and this error becomes apparent when the word "replace" is used (e.g., "AI will replace your job"). AI won't replace many jobs, but it will make a large number of them obsolete. Many jobs exist only because we are human, and as such, we are limited in our capabilities. But when a tool appears that doesn't have our limitations, those jobs become unnecessary. I always say that a model will never be trained to be a film lighting technician, but a video generator makes the demand for lighting technicians drop to the point where many of them will have to abandon their jobs. It's worth noting that there are people (like the guy that is interviewing Linus) who clearly show a kind of rejection, in my opinion, driven by fear. You can see they're constantly on the defensive. If these people were in positions like the lighting technician in the previous example, they would be saying, "AI will never replace me," because the fear of discovering that their job might disappear blinds them.
As a comparison, here is ChatGPT's attempt: 0:00:00 - Introduction and Overview of Linus Torvalds 0:01:32 - Linus Torvalds on AI Written Code 0:03:17 - The Evolution of Abstraction in Programming 0:04:40 - The Future of AI-Generated Coding Languages 0:05:49 - AI's Potential in Code Review and Bug Detection 0:07:25 - Human Error vs. AI Error in Coding 0:08:30 - Linus Torvalds' Optimism and Skepticism About AI 0:10:28 - Concerns About AI Hallucinations in Coding 0:11:04 - AI Hype and Linus Torvalds' Perspective on Its Impact 0:12:59 - AI's Positive Impact on the Kernel Community 0:14:59 - The Potential Decline of Application Developers 0:16:10 - The Evolution of Development Tools with AI 0:18:01 - The Importance of Open Data vs. Open Algorithms 0:19:21 - Closing Remarks and Audience Engagement
And FYI, the prompt `This is a transcript of a youtube video. Can you pick out the key sections of the video, a one line title and the timestamp in the format: hours:minutes:seconds - section title`
Hi @MikePreston-darkflib, just a quick note: your prompt for the transcript extraction is clear and precise. I'll make sure to provide the key sections, one-line title, and timestamps as requested.
Great overview! Could you provide more details on the specific points covered, especially any highlights from Linus Torvalds' thoughts on AI in coding? It would be interesting to know which of his insights you found most compelling.
no it's not... there's a huge complex metabolism keeping the thing in equilibrium, which requires tons of thought, theory and experimentation.. but, to go with your analogy, we humans are just some some collection of atoms typing on youtube. and these atoms are just some deterministic state of the wave function of universe.
@@ai_serf It's not a 'huge complex metabolism' you just get fuel rods (condensed uranium ore) close enough you get a chain reaction and that makes a lot of heat, which boils water and spins turbines just like any other power plant. A kid built one in is back yard.
and to me, you are giving it the lif, the human touch, and the debuggers, that AI didnt see. Coders will be the money makers, people who think you dont have to code, will be consumers. Thank you for your contribution ( token) Who will write the next AI algorithms? AI? Nahhh.. high paid people like us
@@AjarnSpencer90% of programmers are not creating new algorithms, they are taking already existing algorithms and fitting them together. You’re right that there will still be programmers that are pushing the field forward, but the point is it’ll be way less than it used to be. Even those last few who were able to keep their job due to being on the cutting edge of discovery may be replaced someday as well. There’s already AI that can be used to discover new mathematical proofs that can compete on a global scale.
You are right. I'm so tired of devs pretending they are inventors or developing new libraries. Almost all dev work is maintaining and building upon current business apps. Very small percentage of devs are building new apps libraries or languages. @@TerraGlide
Well…that *is* what you’re doing from the point of view of the *outcome*, although perhaps not the process. But as programmers we should understand that it’s the outcome that matters right?
LLMs are not just predicting the next token, they actually map the representational space and this can be seen across different human languages where the models activate the same patterns. They are actually learning something besides the next token.
What does this even mean. Of course, the purpose of embedding and representation learning is to determine the correct distribution that best approximates human language distribution. There is nothing really... More than that to it. That is not what we generally call "reasoning", though. Rather, reasoning might be considered as the cause to such a distribution: LLMs learn only the distribution, not its cause, though.
I think you're completely wrong on two specific points: 1) It's possible simplistic apps like a "to-do" app will be accomplishable via an LLM/AI directly, but enterprise grade software at scale is not even on the horizon for something AI will be able to do natively, so this idea that applications are going away is ridiculous. Maybe simple consumer apps on your smartphone, but not one meaningful and scalable application is even close to being replicated natively by AI. 2.) You've spoken before about this idea that one day LLMs might just write code in a completely esoteric and proprietary language only other AI systems can understand, and I find that to be the most horrible and dystopian idea ive ever heard. Having software and systems out there running and interfacing with humans lives whereby we have zero people on the planet who can read the underlying code and/or validate how these systems actually work under the hood is absolutely HORRIBLE. I cannot think of one singular system we have created thats out there running today, whereby there are zero humans on the planet that can validate for us how it works. Having software written in completely unreadable code should legitimately be outlawed at the governmental level; this idea is just that toxic and profoundly dangerous to humanity.
Maybe we don't have systems we created, but there are systems we don't know how they work on earth, they even kill us, and nobody care. But some people want to understand them, we call them scientists.
This guy probably never wrote anything more than a 20 line python script, even just basic apps are gonna be incredibly difficult to just embed into LLMs, he should remember, most AI today just output text, in order for that idea to become reality you really need to rewrite your operating system entirely to rely on LLM, so it can by itself access every low level info in real time
You will be surprised by what you can do with LLM/AI. I felt the same way you did until I started writing. There are many issues when using LLM; its memory is one thing, but the potential is there. I wrote tens of thousands of lines of code using LLM, but I had to break down everything to a manageable size because of the limits they had put on it. And yes, its logic is good but not good enough; many times, I had to teach it to take a different solution. A new developer can't manage it, but in the hands of an experienced developer, LLM/AI is a huge help. It sure helps in debugging, but it can't debug itself; it still needs an experienced developer. An experienced developer can use it to program in any environment, in any language, and accomplish any programming goal. I love it.
my son uses llms to write his projects, buts its a matter of getting it to do 20 line functions and then fixing them and pasting them together. As he's gotten better at a language, he's finds using the LLM less and less is more productive because you don't have to deal with the constant hallucinations, or that you can never design a prompt to perfectly match what you really want.
And now LLMs are training us to write in the language called "Prompt" which marginally resembles English. That is if you actually want the code to work.
happy to say i have finally reached that but feel stupid and self judgy that it took me this long, in my early 40s. Oh well. Yes Torvalds holds his core values to heart and is humble (unlike most, including me trying)
I think Linus is really spot on with a insightful and pragmatic view of AI's potential. I also think this could be very disruptive to the software development industry, but in a way that could be very good for consumers and end users as the millions of mediocre and bad developers who are currently producing the bug riddled garbage we currently suffer with will be rendered redundant.
LLMs will likely transform how we interact with the application layer by simplifying access and creating more natural, conversational interfaces. However, they won’t completely eliminate the need for dedicated applications, especially those with complex functionalities, specialized workflows, or stringent security requirements. Instead, LLMs will act as a bridge between users and applications, enhancing the user experience by providing a more intuitive way to interact with software. In short, LLMs will change the role of the application layer rather than do away with it altogether.
So after like 3-4 months of using AI tools to code, I’ve realized something. The whole "NO MORE CODING" thing in the future might actually be kinda true, but "NO MORE SOFTWARE ENGINEERS" nah, that’s not happening. AI is cool and all, but it gets stuck in these dumb infinite loops sometimes-like it keeps trying to fix itself but just ends up making the same mistake over and over. That's where we humans come in. We can spot those mistakes and come up with new solutions AI just can't think of. Plus, there’s literally endless stuff to innovate in tech. We’re always gonna need engineers to come up with new ideas, like inventing algorithms AI can’t even imagine. For example, a human might invent a compression algorithm that’s never existed before, something AI isn’t gonna do on its own. So yeah, AI might cut down on the actual coding we do, but we’re still gonna need engineers to keep pushing the limits and creating new things. Things like creating landing pages , simple web or mobile apps for small bussinesses would become more simple with ai. but creating new things and new technology i think that is what engineers would be doing in the future.
same what I said, engineering is thinking and planning and that is not going away, but "doing" maybe, coding will be automated and coders will not be needed, but engineers will know how to code
You do realize that you are moving goalpost. 1-2 years ago it could not do working code most of the time. You are claiming that it will never be able to do large scale planing, and maping from that level to lower level actions it can do now.
@@EduardsRuzga Every time I see someone post that AI "will never be able" to do X they only look at the state of AI at the time. If you never factor in AI development then you're always going to end up with this moving goalposts scenario. Sometimes I even see people claim AI won't ever be able to do something that it can already do at the time because the state of AI that they're basing their claim on is outdated.
Problem with people like you is you do not look in the future. You are describing this point of time and still thinks AI not be the same in the future.
Good job brother. I agree with your insight. Started programming in 1985 with a Commodore 64 and BASIC. AI is a tool and a very usable tool when you know what it actually does vs. magic.
I lead teams of developers and have always told my team, 'Our job in writing code is to automate ourselves out of existence!' I didn't think it would happen so soon. Developers love shiny new tools!
at a former job, a decade ago, I did a lot of that stuff, automate the obstacles or the tedious stuff everyone was doing for years (mainly because I hated it). On my last day, a colleague teased me by putting a post-it on my screen "Tom automated all the things, now he's gone (and added a a big smiley)" Which was quite funny tbh. Now it's on another level and scale.
Recently discovered Linux. Installed it on my server PCs where I use it with ollama and LMStudio. Even brought back to life a 9 year old laptop that was literal garbage with windows. Now it's like new. I'm so happy with it that I'm seriously considering switching my main to linux.
Except it won't be like new. It's 9 years old and will have the performance of a 9 year old laptop. Yeah, the OS will be more responsive becaause its more basic and cut down, but apps will run as they do on a 9 year old laptop. I have an old i3 laptop with 4Gb of RAM. It won't run Windows 10. I could do what you did. I'm not sure it's worth it. There'll be driver issues, the battery probably needs replacing, the drive is old... etc
@@toby9999 def worth it, try installing linux mint xfce edition, only takes 10 min to install... and put firefox instead of chrome as only 4gb ram... old laptops in linux never have driver issues unlike in windows
It’s the last step for human interaction with machines. The last LAST step will be machine just using binary to communicate with itself… until it learns to reverse entropy. Anyways, it’s four dollars a pound.
Last step will be machines realizing what is needed and creating it, most of us will be so pampered that we will not even care of what's going on, we will expect it to be ubiquitous and fall into entitlement. We will fuse and be absorbed until extiction of carbon entities replaced by silicon etc, the first wave will be from medical issues (blindness, disabilities) and access to information, and the expectation to live forever, also to allow space exploration (more resistant). My greatest concern is the loss of soul.
13:17 "What is the point of building an application when you can get what you want from an LLM?" That's like saying "What's the point of manufacturing hand-held cooking utensils when you can have an automated factory make your meals?" What you want is sometimes not a goal but a process. If I enjoy cooking, there's a point in having hand-held cooking utensils for the process of cooking. Some apps are for artistic activities. I personally use apps like Ableton Live to make music for my own enjoyment. Sure, we're beginning to be able to completely automate music creation, and I welcome it as a great thing, but I'm going to still enjoy the process of manually creating melodies, harmonies, chord progressions, etc in a non-linear dedicated software environment, which can't be replaced with linear prompt interactions with an AI.
I’m with Linus on this. AI coding is getting better and better but you still have to be careful with what it produces. You still need to know the language and know general purpose coding concepts to have it work effectively for you. So at least for now it’s a tool developers can use to speed up development. You should not use it as an alternative to learning to code and expect to get a job. I don’t really agree with Matthew here because LLMs aren’t writing code “better” than humans, they are writing code faster. The models are predicting the next token based on context and also on the data they were trained on. So it’s very easy for LLMs to introduce mistakes that are made by humans because they are trained off data that humans created. Not to mention the possibility of the model using outdated APIs or libraries. I think an AI specific coding language would be a mistake at least with the current technology. Maybe in the future.
Linus is great for many reasons. One of the most interesting things for me is often his openess to adoption and adaptation. Often when you are as great as Linus you have to become "set in your ways". After doing something that has worked or made the world a little better for 30 yrs changing it might not make sense or be tough. I could see him saying something like, "When we did it, we did great!" But "now we do it this new way" and it is better!
The AI is a tool, but it's a tool in the hands of the final user. A calculator is no longer a tool in the hands of the middlemen whose job was performing calculations by hand for a living. Those middlemen are gone and the calculator is now a tool in the hands of the final user. Same goes for AI.
Esta en manos del usuario final si, pero ahora mismo sin conocimientos precisos no puedes hacerlo solo. Puedes intentarlo y conseguirlo pero a cambio de mucho tiempo. Sigue requiriendo una persona capacitada y entrenada en eso. Esto es aplicable a cualquier disciplina.
@@melski9205 You're right 👍 I meant no human labour as middleman but one can consider the owners of AI+Robots as the last and ultimate middlemen which is a serious matter in itself. Anyway I don't expect "jobs" to outlast ASI since no-one would be hiring humans and -if things go well, no-one will be willing to "work for a living"
Dude, thanks for this video and all of the others. It is a very good channel to be in touch with important AI and programming stuff. English is not my firt language but your english is so clear that I can undestand everything you say. Cheers!
Holding up a "todo" app as an example is pretty close to saying: no one's going to need a "hello world" app anymore. There will always be a demand for precision 3D modeling software, video editing software, etc Maybe not by noobs who would be content to ask an LLM, but certainly by those who can use them to their greatest potential. Likewise for the software itself. There will be less need for programmers sure, but the need will never completely eliminated.
We are not even close to be there yet.. and I’m saying this as someone who works with LLMs on a daily basis as a software developer. Most of the hype is just that.. hype.
@@ey00000 I never said we won't get there one day, on the contrary I believe just as you said, it is a matter of time. What I am saying is we are not there yet. I also mentioned that I use these tools everyday.. so I'm pretty sure there won't be an "unexpected bye bye" for me lol.
AI Hater: "AI is just autocorrect on steroids" Counter: "Humans are just biological autocorrect on caffeine" - Kyle Hill had a great video about human intelligence, and the key thing was that it's buffered from realtime input - that is, our mind just predicts its responses, and is recalibrated by the incoming sensory data. AI and natural intelligence are kin, in that way. AI haters use it as a slur, without being self aware enough to realize they are not special, just a highly evolved biological entity that has been shaped over a billion years to optimize those prediction machines in our skulls.
> Counter: "Humans are just biological autocorrect on caffeine" This is not a counter LOL, this is factually wrong (and does not even make any sense at all).
@@alst4817 It is literally obvious, you have conscious experiences, you have independent thoughts, you have volitive and intentional action, you have physical direct interactions with the world, there's nothing "autocomplete" in what you are, and more than that, there is nothing to complete.
@@diadetediotedio6918you’re referring to the ‘Chinese room’ argument, but it’s circular. You have to first assume that humans and computers are different before the argument makes sense. You’re begging the question.
"Doesn't matter if the AI replaces programming, at the end the problem is to teach people to think. Believe not many people have the gift. On the other hand, it is to understand the situation and explain it to the computer. It will be the same to use logic, math, code, or simple words."
Would like to take some exception with Gregg. I suspect many may use AI as a substitute for thinking but that does not have to be. As for myself I find AI to be a great stimulus to thinking. Just have to recognize what makes sense, make queries from different angles.
Hey thanks for this, is a great video, I agree with several points, and I love how they aren't afraid to give common opinions about those tools, as well as they challange how we think about it. Cheers up
It sounds premature to assume that application-level programmers will no longer be needed. It rests on two key assumptions: 1) There will be no further development of new high-level programming languages, like when GO or Java emerged, or Swift six years ago. 2) There will be no issues with copyright or conflicts arising from using coding logic across companies. It also seems to oversimplify things, as if every company’s coding process is the same. That’s why I tend to agree with Linus-it's better to take a 'wait and see' approach. Who knows? We might even see a new programming language specifically for AI, or a convergence between iOS and Android, with a single common language for both platforms and so on.
We called it constructive learning rather than representation learning current systems are just mimicking representation rather than constructive reasoning
totally agree and for a reason, human capacity is exceeded very quickly and the views of the world have something to do with that. The ai can guid various perspective on reality. (Non coder -prompter guy)
I had literally never even written a single line of code in my life until about 3 months ago. Using ChatGPT and Claude, I have now managed to code a very simple "to-do list" program (as a test) and I'm currently putting the finishing touches to a database's GUI that can fetch images from my google drive. And speaking of debugging, it was Claude who pointed out that my GUI script couldn't fetch the images from my drive because I needed to use the Google API to do so. I'm also about to launch an even more ambitious and complicated project soon. AI is a game changer.
"At a certain point LLMs will write a code in a language which does not look familiar to us at all, because they don't need to." 100% agree. I was trying to make the same point in different programming forums to spark a discussion, but this did not attract much interest. Really, everybody asks LLMs to write a snake game in python as a benchmark. Why python? Just give me a file which I can run on my device. Or give me a document which I can browse on my desktop. I don't care which language stack is used inside. All programming languages and development tools are optimized to make a job of human coders easier, but do we still need them if 90% of the code is generated by AI anyway?
Like I said, when "Humans" allow AI to write code directly to key devices and systems, without a human readable format intermediary, then we are giving all the agency to the AI, and removing all the agency from the human. A human should ALWAYS act as the supervisor... when we stop providing that service, we will become too lazy and too ignorant to understand what is happening, and we will fall into the great decline.
In principle, they could just write machine code directly. In practice though, the hallucination problem means that they require human supervision, and thus for now they need to write human understandable code.
The biggest problem for most development is not writing code but writing the code in a *secure* environment. Permission requests ++. Security is hard. Requirements change and remembering the decision process history and reasons why it wasn't done like that is important and LLM's won't have the whole visibility of all the interactions of all people. Programmers should be nervous, but developers will be fine. They'll just move up the stack. This has been the trend and it will continue.
Great video! While open data is crucial for kickstarting AI, the real game-changer lies in the shift toward real-time data streams from devices directly interacting with users and the world. These streams will train local models, feeding into federated learning systems to refine and adapt base models continuously. The future of AI is deeply embedded in the dynamic, real-world feedback loops!
Have you tried Replit Agent or Cursor AI? Code generation is magic with these early stage tools. Wait until big companies like Microsoft and Google launch their solutions.
About "we are also autocompletes on steroids". We are actually world simulators. We build and run a world simulation in our mind, to predict possible futures and make decisions. We were strugling to build abstract enough systems that can do this. With llms we got close over language. Now on top of that we can build and look for even better architectures. LLMs are far from perfect, but they give us foothold to stand on and reach further.
Matt, I love your enthusiasm for AI and the info you give the community. Thanks for all your efforts. The problem with AI writing complex software (-if it ever gets there-) is that the source code becomes a black box just like the model files. No one in their life time will have the time or inclination to debug obfuscated code. So, unless is some harmless application, no corporation in their right mind will ever employ such a thing, so therefore it will remain a toy that could get out of hand because of fearless minds out there will take risks in duplicating controlled software and call it safe and controlled. Yes, people can read assembly language. It is only strange to the python generation.
LLMs passed the Turing test a long time ago. If they are intelligent (have a deeper understanding) or not, it doesn't matter they passed the Turing test.
Many services like e-commerce , banking, government services could be just a single column database that holds every conversation between a bot and the human. Whenever a human starts a conversation, the llm behind the bot can use the entire conversation history of the user as context to determine that latest user state and answer accordingly and store the latest conversation back.
Exactly! Hallucinations involve perceiving something that isn’t there, which doesn't apply to LLMs. They aren't 'seeing' anything-they're generating responses based on patterns in data. Confabulation, on the other hand, refers to creating a plausible but inaccurate narrative, which is much closer to what LLMs do when they produce incorrect or misleading information. It's all about filling in gaps, not perceiving non-existent realities.
I think this is an insightful distinction, especially since the AI "hallucinations" are highly correlated with lack of training data in a context. This results in creating false memories (confabulations) rather than having a false perception (hallucination). IMO true "hallucinations" would have less correlation to training and memory, this "confabulation" is a better representation of what is going on/
Please stop misinforming. It is not true that everyone is using "cloud computing". Small companies still use dedicated servers, due to less costs and less complexity.
everyon means everyone in day to day lives ,small companies are small companies, though most still use cloud computing to save costs and infrastructure, till they become big enough to afford own infra.
Code / algorithms are just another form of data. The world of math is infinitely rich. For every level of intelligence no matter how high there is a (useful if solved) math problem vastly exceeding it's capabilities.
What are you talking about @13:35? The LLM can not store anything... Everything has to be presented to it in the context window. It definitely does not know when to "wakeup" and remind you. There has to be a framework built around it to do those things. Someone or something would have to write an app that calls LLM every minute or so with your data and parse the results. This would be very cost prohibitive right now. The fundamental models might not change the way they work in the future since you can't put personal information into their training dataset for privacy reasons. In 10 years you could have a personal model running at home but I doubt corporate will want that.
There is something about Matts comment that got me thinking about the simulation question: ".."when we are talking, when we're responding, maybe we are just trying to predict the next token in our own sentence.." (mic drop)(mind blown)
@13:57 "... the application layer will be going away", i'm with you; do you see an LLM between me and my favorite youtube channels (eg. Matthew Berman Tech and Futurism Channel)? i can imagine the "amazing" (healthy paranoia applied) curated content presented to me, but i can also imagine how excited the curator would be with this capability.
in your "to-do list" example. Wouldn't that still need to be an app? What if you want to see a list of your to-do items? You need to look at an app...right?
9:37 the thing I’m the most scared about on a daily basis is Human Hallucinations! And little to nothing is done to fix this one. Human in the corporate world and IT world hallucinate a lot, as everyone want to come with his own definition of everything or anything, at a point that even for simple thing like Analytics I have to ask people what does it mean for them as too many have a different definition of this simple word. So yes LLM hallucinate, maybe because they’re learning from our human messy data, which in earlier LLM wasn’t as clean as in current datasets, as we now know data quality affect drastically accuracy. The thing that bug me as you rightly point out is human are not perfect, and LLM might eventually never be as well, then why do we want such higher standard of perfection for LLM than for ourselves? Why do we human want LLM to be perfect when we are not?
Last step will be machines realizing what is needed and creating it, most of us will be so pampered that we will not even care of what's going on, we will expect it to be ubiquitous and fall into entitlement. We will fuse and be absorbed until extiction of carbon entities replaced by silicon etc, the first wave will be from medical issues (blindness, disabilities) and access to information, and the expectation to live forever, also to allow space exploration (more resistant). My greatest concern is the loss of soul.
You are wrong, AI in its current fundamentals will never be able to target deep contexts, it is just out of the possibilities of any traditional hardware, no matter how scaled up it is. Unless the transformers support discrete mathematics in its fundamentals, it will never get rid of hallucination.
2:48 - Binary and "machine code" (hexadecimal) was pretty much the same time. I think the reason for using base-16 over base-10 (what humans generally use to count with) was down to the number of pins on a standard microchip.
@@J2897Tutorials it's because base 10 is actually very awkward for binary computers to deal with. Base 16 (hexadecimal) is a more compact representation of binary that can yet be easily converted directly back into binary by humans. Decimal is awkward because the set of decimal digits 0 to 9 requires more than three bits to encode but less than 4 bits; whereas the set of hexadecimal digits fits exactly in four bits. Decimal numbers (of an arbitrary number of digits) are also surprisingly difficult for computers to convert into binary, and there was actually a bug in a commonly used piece of computer code to do so. Machine code is the specific instructions that tell computers what to do; in other words, they're what fundamentally make up machine executable programs. Machine code is often called binary code because it's represented in binary in modern digital computers. OK, that was probably more info. than you were looking for.
Mr. Berman I have to say I have never heard about the idea, that codes can become incomprehensible to humans, and it is comprehensible today only because humans need to read it. What a fascinating idea! Can you elaborate more on it, just your thoughts and no reaction to someone else, no distractions?
When you abstract up from assembly, you are applying the same principles but gluing together higher and higher level libraries. It's deterministic and no matter which level you write at you will get the same output if you use the same algorithms. With AI, to make it deterministic enough you will need to detail every single input and output to the point you may as well code it. For simple projects, AI is amazing as it automates 90% of the boilerplate code for you. It's hard to see it doing more complex systems other than non essential like games. Unless it's modifying something that already exists. Phillip.
15:32 everyone forgets about the business logic, that will never be something that is easily handed off to an AI. It's the niggly details that have to get programmed and most times these details are decided in some sort of committee
Yes. Embedded applications require deterministic code. But I use AI extensively now. I told an LLM to Simulate a Z80 micro. I fed it some Op-codes (machine code) and asked for the status of the internal registers. Using it's knowledge of this microprocessor it can not only write code, It can simulate its' entire function.
The todo app is still, underlyingly, an app. You might be able to use LLMs to write the app and make the app accepts natural language (parsed by LLM). However, you still need another program code that counts the timer (not an LLM).
Today, I just forgot my sense of humor bag on the way out of the house, but ‘he is the creator and lead developer of the Linux kernel. It’s not a big deal.’ Is this supposed to be some kind of sarcasm?
I find it very interesting that we don't see hallucinations in LLMs as a feature. If you never get anything wrong you don't expand the current mental model. Inductive reasoning vs deductive reasoning. Infer and test is a powerful model. Perfectionists acheive very little.
I had thought of something similar. If it gets it wrong and i tell it so, does it actually "learn" from that experience? I still dont think i know what it means when people say the model learns. Does it mean the code changes, or is that the data set changes and then the next time that data type is used, does it see the updated data? But then i ask does this require a totally new retraining? Apologies for my lack of understanding as i am trying to learn more here.
@@malcolmvanhilten125 This is a really good question. At this stage most of the LLM providers will be capturing the feedback of thumbs up or your sentiment to the response. Do you ask more questions or respond posibively and prompt more. If you like we are free feedback. So the human in the loop is being at the minimum captured you can bet on this for future training. It is also possible to capture feedback and store it for further queries or lookups by the LLM. Many companies when implementing LLMs will do this to hold proprietary and unique data. But what is comming is more dynamic models that will learn in real time. This will provide a way of adjusting the weights in the model as you go and probably lead to a next level of break through. I am sure it will be already developed but possibly not in the public domain as yet. Clearly the checks and balances to ensure the model does not get corrupted or lose its way over time are key when you go to live updating a model. Much more to talk about here! Great to see you are learning in this area. It's very exciting space.
One thing that I have figured out is that the tech industry tends to way overstate what powers they really have. I still haven’t found a good image generator that can look realistic nor one that would do whatever ask it to without trying to stop me. Video generationAI is basically nonexistent still. I don’t see tech jobs going away anytime in the near future due to AI.
I can't honestly claim that my own brain is doing anything more sophisticated than predicting the next token. When I reflect, that seems a plausible explanation for what is happening.
What the f? "Predicting the next token" _requires_ intelligence, if it has to be done correctly. Predicting the next token involves understanding the context and making an educated guess based on that context. A good author is predicting the next token all the time. It requires coherence, understanding the characters, establishing and following a plot, language and style, creativity. A musician "predicts the next token". A programmer, puzzle solver, doctor, mathematician, everyone "predicts the next token". Such gobbledygook I have never heard, I think people have never been actually able to leave behind anthropocentrism since Copernicus' day despite the tremendous progress in science and technology.
Bro you do realise that the Turing machine is just "predictive token" 0s and 1s is just "predictive token" if I code a Turing machine to pour my coffee when heat =100 it isnt intelligence in any way shape or form, you mids are annoying 😂
It would be nice if you provided the link to the original recording of the talk, you interrupt temp so often that you might have as well just not included the fragment of their conversation at all.
He created the kernel for the _libre_ operating system known as GNU. Libre is a Spanish word that means free, which has nothing to do with money. It means _free_ as in freedom, not as in price. And version 1.0 was quite small in comparison to the rest of the OS.
Imagine being able to juggle 7 things at once. It's crazy hard. But after 20 seconds of amazement watching you, a lot of people will say, "okay but can you do 8?" And if you then do 8, they will say, "okay now do 9." The biggest cause of AI skepticism is that we can see a miracle today and find it mundane tomorrow.
this is what ray kurzweil said like 30 years ago, a new tech comes out and everyone goes ah thats cool but it has bugs and makes mistakes, then in a few years later it works perfectly but its been around for a while and people just think oh yeah well its old tech now it should work perfectly, forgetting that it was ground breaking when it was released
Moving the goal posts further and further. Show this tech 20 years ago and it would look like Gandalf’s magic. And in 20 years skeptics will be like “sure, AI solved commercial fusion but can it make it pocket sized?! Checkmate!” And in 20 more they’ll say pocket sized is too large.
Yeah, I just saw a guy "complaining" that AI was not able to create completely new texts for a new novel he is working on, in pretty much exactly his writing style....😮😮 Expectations keep going up and we don't seem to realize how incredible all this would have seemed just a few months ago😮😮😮
Well I still doubt that LLM will be able to completely replace apps, especially backend and devops. Frontend devs for sure. Some part of backend devs as well, mainly related to simple API layer and work with DB. But architecture vise, scale wise, and high loaded projects. Will still be managed or at least reviewed by very experienced devs. Maybe when AGI will be created, then possibly that would replace everyone...
I just completed a 50K line webapp. I still needed to hire a visual designer. I still needed to become expert in CSS to implement that design. I eagerly await the day when it can rewrite my front end for me non-interactively, but we’re nowhere near that.
@@markplutowski and now imagine big companies or at least medium sized companies. Where codebase is huge, plus micro-services architecture. I highly doubt that it can comprehend and connect multiple services to implement a required feature. Plus no one will allow that code until it is reviewed but tons of real human devs...
no, developers may stop using an IDE, but consumers will always want a nice UI. When vanity was introduced into computing (read: apple), adoption by consumers skyrocketed.
Coding AI is great for easy cumbersome task. If I need to call a function a few times but with various arguments, the copilot is able to just fill all of that for me quickly.
It seems to me that we are digging ourselves into a rabbit hole we may not get out of. Once AI uses its own language, not understood by humans, we lose control and input for better or worse. Maybe for the worse.
Just because something 'powers trillion dollar industries' doesn't mean they are not 'hype' (look up the Tulipmania), I think whether something is hype is somewhat a retrospective evaluation, if it didn't last and everyone wants to pretend they didn't get 'caught up' in the marketing frenzy then it was hype.
With AI granting unprecedented power to individuals, I believe that in the near future, its public access will be heavily restricted under the pretext of safety.
"we are just trying to predict a next token in our own sentence" - yes, that's pretty obvious if you know mathematics of transformers - the tokens and sentences are abstract structures. It doesn't need to imply any normal language.
People keep talking about AIs writing programs, but I see things a little different. We won't need programmers, not because the AIs will be programming for us, but because the AIs will BE the programs. You don't need a spreadsheet if the AI can give you a UI and handle all the background calculations. Want to play a submarine game? The AI will BE the submarine game. It won't need to be programmed, nothing will. If you need the computer to do something, you tell the AI to do it, and it does it. Programs will not be needed for that reason. Not because the AIs will be programming them, but because the AIs will be doing what the programs once did.
@@AardvarkDream Maybe I was little bit harsh, but if AI can do such complex tasks on the fly like creating a pseudo spreadsheet, a submarine game etc. it rivals human intelligence, both the analytical side and the creative side, and then it can do any other job out there that requires high level thinking, not just programmers will become extinct. You just don't realize how complex the tasks you've described are.
@@Teodor-ValentinMaxim I do realize. I am a retired software engineer who has a perfectly adequate understanding of current tech. But I'm not talking about current tech, I'm talking about five years from now. Maybe ten, but at this point I see development happening at an accelerating rate. We'll have chips optimized for inference. We'll have logic engines (possibly). We'll have a host of tools that the AIs are capable of using. Today they generate text relatively quickly, images slower, video even slower than that. Unusably slow. But that's just right now. Five years from now isn't right now, and a lot of companies will have put a lot of resources into this by then. At some point the AIs will be capable of generating UIs that are optimized for whatever task is being performed, they will be capable of generating video on the fly. They will have memories that they can write to and read from and update. That they can't do that today is irrelevant, we're still in the DOS 1.0 era of AI, but that era will end. The Windows equivalent is still ahead of us. But when it happens, that's the end of most programming. There just won't be a need for it.
@@AardvarkDream lol :)) bruv. You are clearly not a software engineer, most of your time spent is thinking about the problem and how to design a solution around it, and then you spend like 5% of your time to write the code. You could explain your solution to an AI in english or whatever to generate the code for you, but since human language is so nuanced and subjective, you are more faster than an LLM if you went to writing the code straight by yourself. You are just assuming that every problem will be fixed by an AI, but programs are ever evolving, and the training material can't keep up with how many new problems arrive everyday. A programmer's job is so much more than writing code. What AI can replace is code monkeys from outsourcing countries like India, Brazil, Argentina, Philiphines, etc, where they are given an exact task with minute specifications and details.
Will there be an application layer in the future?
AI natively running on the metal ;)
For sure. In education, documenting the reasoning for how you approach one kid's way of learning versus another's is the goal of individualized education. Scientific discovery. We'll need agents that can act on the reasoning and thinking. And ways to determine if the agent properly acted on the reasoning both in the digital space but in the physical real world. Exciting times.
I like to imagine that we'll use with computers like we've seen in scifi movies/shows. A simple example is like in star trek.When there's a problem, "computer, what is the ...." or similar then the guys in the shirts go do things. I agree with Gary Vaynerchuk when he talks about the technology (it's development and adoption of use), it is undefeated. Things will change. Yes, some jobs will go away, and some new ones will come. I think Gary said once when electricity was finally available at a citizens home some people didn't want it because they thought demons were electricity.
Yes, because not every application lends itself to being a conversation. I want to play a game of chess, browse through my personal photos, or design a house. I think rather the paradigm of application development shifts to where it's no longer an advanced skill learned and practiced by a few, but that basically everyone that uses software and has any ideas for how to customize or improve it becomes a software developer, or rather the AI develops all or most software just based on the ideas and customizations submitted by users of that software. I imagine the process for customizing your applications by having AI modify it and then also submitting and ranking these changes and ideas to open repositories becomes streamlined and ubiquitous to the point that there is very little distinction left between software user and software developer.
It will continue to exist, but it will coexist with AI, just as radio coexisted with TV, and both with the internet.
I think there's a huge conceptual error when talking about work and AI, and this error becomes apparent when the word "replace" is used (e.g., "AI will replace your job").
AI won't replace many jobs, but it will make a large number of them obsolete.
Many jobs exist only because we are human, and as such, we are limited in our capabilities. But when a tool appears that doesn't have our limitations, those jobs become unnecessary.
I always say that a model will never be trained to be a film lighting technician, but a video generator makes the demand for lighting technicians drop to the point where many of them will have to abandon their jobs.
It's worth noting that there are people (like the guy that is interviewing Linus) who clearly show a kind of rejection, in my opinion, driven by fear. You can see they're constantly on the defensive.
If these people were in positions like the lighting technician in the previous example, they would be saying, "AI will never replace me," because the fear of discovering that their job might disappear blinds them.
🎯 Key points for quick navigation:
00:15 *Famous computer scientist*
01:48 *AI code assistance*
02:58 *Next abstraction level*
03:41 *Language model evolution*
05:47 *Obvious bug detection*
07:30 *Human pattern recognition*
10:14 *Continuous human errors*
11:55 *Caution against hype*
12:34 *AI's positive influences*
14:05 *Natural language applications*
15:27 *Improved development tools*
18:14 *Importance of open data*
Made with HARPA AI
As a comparison, here is ChatGPT's attempt:
0:00:00 - Introduction and Overview of Linus Torvalds
0:01:32 - Linus Torvalds on AI Written Code
0:03:17 - The Evolution of Abstraction in Programming
0:04:40 - The Future of AI-Generated Coding Languages
0:05:49 - AI's Potential in Code Review and Bug Detection
0:07:25 - Human Error vs. AI Error in Coding
0:08:30 - Linus Torvalds' Optimism and Skepticism About AI
0:10:28 - Concerns About AI Hallucinations in Coding
0:11:04 - AI Hype and Linus Torvalds' Perspective on Its Impact
0:12:59 - AI's Positive Impact on the Kernel Community
0:14:59 - The Potential Decline of Application Developers
0:16:10 - The Evolution of Development Tools with AI
0:18:01 - The Importance of Open Data vs. Open Algorithms
0:19:21 - Closing Remarks and Audience Engagement
And FYI, the prompt
`This is a transcript of a youtube video. Can you pick out the key sections of the video, a one line title and the timestamp in the format:
hours:minutes:seconds - section title`
Hi @MikePreston-darkflib, just a quick note: your prompt for the transcript extraction is clear and precise. I'll make sure to provide the key sections, one-line title, and timestamps as requested.
Great overview! Could you provide more details on the specific points covered, especially any highlights from Linus Torvalds' thoughts on AI in coding? It would be interesting to know which of his insights you found most compelling.
enterprise-ai AI fixes this. Torvalds discusses AI's programming future.
Great content, we're looking forward to more AI breakthrough updates!
a nuclear plant is just some refined earth making heat
A nuclear powerplant is just a molecular steam engine too. Nothing fancy about that.
no it's not... there's a huge complex metabolism keeping the thing in equilibrium, which requires tons of thought, theory and experimentation.. but, to go with your analogy, we humans are just some some collection of atoms typing on youtube. and these atoms are just some deterministic state of the wave function of universe.
Humans are just big bags of water
@@ai_serf It's not a 'huge complex metabolism' you just get fuel rods (condensed uranium ore) close enough you get a chain reaction and that makes a lot of heat, which boils water and spins turbines just like any other power plant. A kid built one in is back yard.
A rocket is just a can that farts
To an observer who doesnt consider the nuances, when I am coding all Im doing is writing the next token.
and to me, you are giving it the lif, the human touch, and the debuggers, that AI didnt see. Coders will be the money makers, people who think you dont have to code, will be consumers. Thank you for your contribution ( token)
Who will write the next AI algorithms? AI? Nahhh.. high paid people like us
@@AjarnSpencer90% of programmers are not creating new algorithms, they are taking already existing algorithms and fitting them together. You’re right that there will still be programmers that are pushing the field forward, but the point is it’ll be way less than it used to be. Even those last few who were able to keep their job due to being on the cutting edge of discovery may be replaced someday as well. There’s already AI that can be used to discover new mathematical proofs that can compete on a global scale.
You are right. I'm so tired of devs pretending they are inventors or developing new libraries. Almost all dev work is maintaining and building upon current business apps. Very small percentage of devs are building new apps libraries or languages. @@TerraGlide
Well…that *is* what you’re doing from the point of view of the *outcome*, although perhaps not the process.
But as programmers we should understand that it’s the outcome that matters right?
@@AjarnSpencer Human touch is a huge cope
LLMs are not just predicting the next token, they actually map the representational space and this can be seen across different human languages where the models activate the same patterns. They are actually learning something besides the next token.
I think you are overestimating what ever you read once and took as fact.
@@T_Time_they’re describing the embedding layer
Aren't you just defining neural networks. It is still just using statistical algorithms to predict the next token.
What does this even mean. Of course, the purpose of embedding and representation learning is to determine the correct distribution that best approximates human language distribution.
There is nothing really... More than that to it. That is not what we generally call "reasoning", though.
Rather, reasoning might be considered as the cause to such a distribution: LLMs learn only the distribution, not its cause, though.
@@albertonovati451819:46 the learn a proxy to the cause. From ONLY 2d data they learn to infer a 3d world (that sorta causes the 2d)
I think you're completely wrong on two specific points:
1) It's possible simplistic apps like a "to-do" app will be accomplishable via an LLM/AI directly, but enterprise grade software at scale is not even on the horizon for something AI will be able to do natively, so this idea that applications are going away is ridiculous. Maybe simple consumer apps on your smartphone, but not one meaningful and scalable application is even close to being replicated natively by AI.
2.) You've spoken before about this idea that one day LLMs might just write code in a completely esoteric and proprietary language only other AI systems can understand, and I find that to be the most horrible and dystopian idea ive ever heard. Having software and systems out there running and interfacing with humans lives whereby we have zero people on the planet who can read the underlying code and/or validate how these systems actually work under the hood is absolutely HORRIBLE.
I cannot think of one singular system we have created thats out there running today, whereby there are zero humans on the planet that can validate for us how it works. Having software written in completely unreadable code should legitimately be outlawed at the governmental level; this idea is just that toxic and profoundly dangerous to humanity.
There!
Maybe we don't have systems we created, but there are systems we don't know how they work on earth, they even kill us, and nobody care. But some people want to understand them, we call them scientists.
This guy probably never wrote anything more than a 20 line python script, even just basic apps are gonna be incredibly difficult to just embed into LLMs, he should remember, most AI today just output text, in order for that idea to become reality you really need to rewrite your operating system entirely to rely on LLM, so it can by itself access every low level info in real time
You will be surprised by what you can do with LLM/AI. I felt the same way you did until I started writing. There are many issues when using LLM; its memory is one thing, but the potential is there. I wrote tens of thousands of lines of code using LLM, but I had to break down everything to a manageable size because of the limits they had put on it. And yes, its logic is good but not good enough; many times, I had to teach it to take a different solution. A new developer can't manage it, but in the hands of an experienced developer, LLM/AI is a huge help. It sure helps in debugging, but it can't debug itself; it still needs an experienced developer. An experienced developer can use it to program in any environment, in any language, and accomplish any programming goal. I love it.
my son uses llms to write his projects, buts its a matter of getting it to do 20 line functions and then fixing them and pasting them together. As he's gotten better at a language, he's finds using the LLM less and less is more productive because you don't have to deal with the constant hallucinations, or that you can never design a prompt to perfectly match what you really want.
And now LLMs are training us to write in the language called "Prompt" which marginally resembles English. That is if you actually want the code to work.
I like the term 'LLMs are training us' that is a very good way to say it.
It is your English teacher now... just think of how our language will change in 10 years of this as an unregulated language medium...
And these data entry jobs are renamed as prompt engineer so that we can associate some pride is thisshitty jobs of prompt engineer
enterprise-ai AI fixes this. Future of programming with AI.
Torvalds is a rare breed. Intelligent, creative, self aware and detailed. Very rarely these attributes come together.
Intelligent is an understatement.
happy to say i have finally reached that but feel stupid and self judgy that it took me this long, in my early 40s. Oh well. Yes Torvalds holds his core values to heart and is humble (unlike most, including me trying)
Two types of devs, humble, open to change. Elitest, closeminded.
I think Linus is really spot on with a insightful and pragmatic view of AI's potential. I also think this could be very disruptive to the software development industry, but in a way that could be very good for consumers and end users as the millions of mediocre and bad developers who are currently producing the bug riddled garbage we currently suffer with will be rendered redundant.
LLMs will likely transform how we interact with the application layer by simplifying access and creating more natural, conversational interfaces. However, they won’t completely eliminate the need for dedicated applications, especially those with complex functionalities, specialized workflows, or stringent security requirements. Instead, LLMs will act as a bridge between users and applications, enhancing the user experience by providing a more intuitive way to interact with software.
In short, LLMs will change the role of the application layer rather than do away with it altogether.
1:10 he doesn't give an opinion about ai, it is an opinion about LLMs
That’s the same- nvm
So after like 3-4 months of using AI tools to code, I’ve realized something. The whole "NO MORE CODING" thing in the future might actually be kinda true, but "NO MORE SOFTWARE ENGINEERS" nah, that’s not happening.
AI is cool and all, but it gets stuck in these dumb infinite loops sometimes-like it keeps trying to fix itself but just ends up making the same mistake over and over. That's where we humans come in. We can spot those mistakes and come up with new solutions AI just can't think of.
Plus, there’s literally endless stuff to innovate in tech. We’re always gonna need engineers to come up with new ideas, like inventing algorithms AI can’t even imagine. For example, a human might invent a compression algorithm that’s never existed before, something AI isn’t gonna do on its own.
So yeah, AI might cut down on the actual coding we do, but we’re still gonna need engineers to keep pushing the limits and creating new things. Things like creating landing pages , simple web or mobile apps for small bussinesses would become more simple with ai. but creating new things and new technology i think that is what engineers would be doing in the future.
same what I said, engineering is thinking and planning and that is not going away, but "doing" maybe, coding will be automated and coders will not be needed, but engineers will know how to code
You do realize that you are moving goalpost. 1-2 years ago it could not do working code most of the time.
You are claiming that it will never be able to do large scale planing, and maping from that level to lower level actions it can do now.
@@EduardsRuzga Every time I see someone post that AI "will never be able" to do X they only look at the state of AI at the time. If you never factor in AI development then you're always going to end up with this moving goalposts scenario. Sometimes I even see people claim AI won't ever be able to do something that it can already do at the time because the state of AI that they're basing their claim on is outdated.
Problem with people like you is you do not look in the future.
You are describing this point of time and still thinks AI not be the same in the future.
@@mirek190No one can look into the future. It's pure speculation.
Good job brother. I agree with your insight. Started programming in 1985 with a Commodore 64 and BASIC. AI is a tool and a very usable tool when you know what it actually does vs. magic.
I think of prompting as meta-programming. Do what I want, not what I ask.
I love how you fill in the details as we move through the interview - SOOO helpful
Thanks Matt
I lead teams of developers and have always told my team, 'Our job in writing code is to automate ourselves out of existence!' I didn't think it would happen so soon. Developers love shiny new tools!
at a former job, a decade ago, I did a lot of that stuff, automate the obstacles or the tedious stuff everyone was doing for years (mainly because I hated it).
On my last day, a colleague teased me by putting a post-it on my screen "Tom automated all the things, now he's gone (and added a a big smiley)"
Which was quite funny tbh. Now it's on another level and scale.
Recently discovered Linux. Installed it on my server PCs where I use it with ollama and LMStudio.
Even brought back to life a 9 year old laptop that was literal garbage with windows. Now it's like new.
I'm so happy with it that I'm seriously considering switching my main to linux.
Welcome to linux :)
See you in 5y on arch/gentoo/nix with custom a rice..
Except it won't be like new. It's 9 years old and will have the performance of a 9 year old laptop. Yeah, the OS will be more responsive becaause its more basic and cut down, but apps will run as they do on a 9 year old laptop.
I have an old i3 laptop with 4Gb of RAM. It won't run Windows 10. I could do what you did. I'm not sure it's worth it. There'll be driver issues, the battery probably needs replacing, the drive is old... etc
@@toby9999 def worth it, try installing linux mint xfce edition, only takes 10 min to install... and put firefox instead of chrome as only 4gb ram... old laptops in linux never have driver issues unlike in windows
Human language is not the last step, it is just thinking and visualizing
It’s the last step for human interaction with machines. The last LAST step will be machine just using binary to communicate with itself… until it learns to reverse entropy. Anyways, it’s four dollars a pound.
@@guzmanjrmarco wait, what does that mean? reverse entropy? is that even possible?
Last step will be machines realizing what is needed and creating it, most of us will be so pampered that we will not even care of what's going on, we will expect it to be ubiquitous and fall into entitlement. We will fuse and be absorbed until extiction of carbon entities replaced by silicon etc, the first wave will be from medical issues (blindness, disabilities) and access to information, and the expectation to live forever, also to allow space exploration (more resistant). My greatest concern is the loss of soul.
isn't that an old video? i think i already saw that interview.
ok now you said it
and the primagen reacted to it, too, i think
yap, old
Old, and boring rehash. Hard pass...
There's a 2nd newer at the end were Linus is more optimistic but still very cautious.
you are a machine!
🫡
13:17 "What is the point of building an application when you can get what you want from an LLM?"
That's like saying "What's the point of manufacturing hand-held cooking utensils when you can have an automated factory make your meals?" What you want is sometimes not a goal but a process. If I enjoy cooking, there's a point in having hand-held cooking utensils for the process of cooking. Some apps are for artistic activities. I personally use apps like Ableton Live to make music for my own enjoyment. Sure, we're beginning to be able to completely automate music creation, and I welcome it as a great thing, but I'm going to still enjoy the process of manually creating melodies, harmonies, chord progressions, etc in a non-linear dedicated software environment, which can't be replaced with linear prompt interactions with an AI.
Dude they all hype train UI was created because talking to something is a pain and ambiguous
I’m with Linus on this. AI coding is getting better and better but you still have to be careful with what it produces. You still need to know the language and know general purpose coding concepts to have it work effectively for you. So at least for now it’s a tool developers can use to speed up development. You should not use it as an alternative to learning to code and expect to get a job.
I don’t really agree with Matthew here because LLMs aren’t writing code “better” than humans, they are writing code faster. The models are predicting the next token based on context and also on the data they were trained on. So it’s very easy for LLMs to introduce mistakes that are made by humans because they are trained off data that humans created. Not to mention the possibility of the model using outdated APIs or libraries.
I think an AI specific coding language would be a mistake at least with the current technology. Maybe in the future.
where is the links to the originals?
Linus is great for many reasons. One of the most interesting things for me is often his openess to adoption and adaptation. Often when you are as great as Linus you have to become "set in your ways". After doing something that has worked or made the world a little better for 30 yrs changing it might not make sense or be tough.
I could see him saying something like, "When we did it, we did great!" But "now we do it this new way" and it is better!
The AI is a tool, but it's a tool in the hands of the final user. A calculator is no longer a tool in the hands of the middlemen whose job was performing calculations by hand for a living. Those middlemen are gone and the calculator is now a tool in the hands of the final user. Same goes for AI.
Good analogy. Bet you got that from Claude! 😅
@@raishallan
Ask David Bauman, I'm a better AI from the future, don't be disrespectful 😄
Esta en manos del usuario final si, pero ahora mismo sin conocimientos precisos no puedes hacerlo solo. Puedes intentarlo y conseguirlo pero a cambio de mucho tiempo. Sigue requiriendo una persona capacitada y entrenada en eso. Esto es aplicable a cualquier disciplina.
Sorry, no middlemen? Who's getting rich now? you?
@@melski9205
You're right 👍 I meant no human labour as middleman but one can consider the owners of AI+Robots as the last and ultimate middlemen which is a serious matter in itself.
Anyway I don't expect "jobs" to outlast ASI since no-one would be hiring humans and -if things go well, no-one will be willing to "work for a living"
Dude, thanks for this video and all of the others. It is a very good channel to be in touch with important AI and programming stuff. English is not my firt language but your english is so clear that I can undestand everything you say. Cheers!
Holding up a "todo" app as an example is pretty close to saying: no one's going to need a "hello world" app anymore. There will always be a demand for precision 3D modeling software, video editing software, etc Maybe not by noobs who would be content to ask an LLM, but certainly by those who can use them to their greatest potential. Likewise for the software itself. There will be less need for programmers sure, but the need will never completely eliminated.
This. Matthew Berman does not understand how coding works at scale.
We are not even close to be there yet.. and I’m saying this as someone who works with LLMs on a daily basis as a software developer. Most of the hype is just that.. hype.
@@ninjarogue don't let your thoughts fool yourself, otherwise it will be an unexpected "bye bye" for you... Time will teach you :)
@@ey00000 I never said we won't get there one day, on the contrary I believe just as you said, it is a matter of time. What I am saying is we are not there yet. I also mentioned that I use these tools everyday.. so I'm pretty sure there won't be an "unexpected bye bye" for me lol.
@@ninjarogue pretty soon it won't be a tool anymore, but more like a better version of you.
Thanks for your perspective, Matthew❤
AI Hater: "AI is just autocorrect on steroids" Counter: "Humans are just biological autocorrect on caffeine" - Kyle Hill had a great video about human intelligence, and the key thing was that it's buffered from realtime input - that is, our mind just predicts its responses, and is recalibrated by the incoming sensory data. AI and natural intelligence are kin, in that way. AI haters use it as a slur, without being self aware enough to realize they are not special, just a highly evolved biological entity that has been shaped over a billion years to optimize those prediction machines in our skulls.
Currently they are just autocomplete on steroids. LLMs I mean.
> Counter: "Humans are just biological autocorrect on caffeine"
This is not a counter LOL, this is factually wrong (and does not even make any sense at all).
@@diadetediotedio6918explain why it’s factually wrong
@@alst4817
It is literally obvious, you have conscious experiences, you have independent thoughts, you have volitive and intentional action, you have physical direct interactions with the world, there's nothing "autocomplete" in what you are, and more than that, there is nothing to complete.
@@diadetediotedio6918you’re referring to the ‘Chinese room’ argument, but it’s circular. You have to first assume that humans and computers are different before the argument makes sense. You’re begging the question.
"Doesn't matter if the AI replaces programming, at the end the problem is to teach people to think. Believe not many people have the gift. On the other hand, it is to understand the situation and explain it to the computer. It will be the same to use logic, math, code, or simple words."
people will think less, in fact as AI proliferates.
Would like to take some exception with Gregg. I suspect many may use AI as a substitute for thinking but that does not have to be. As for myself I find AI to be a great stimulus to thinking. Just have to recognize what makes sense, make queries from different angles.
Hey thanks for this, is a great video, I agree with several points, and I love how they aren't afraid to give common opinions about those tools, as well as they challange how we think about it. Cheers up
It sounds premature to assume that application-level programmers will no longer be needed. It rests on two key assumptions: 1) There will be no further development of new high-level programming languages, like when GO or Java emerged, or Swift six years ago. 2) There will be no issues with copyright or conflicts arising from using coding logic across companies.
It also seems to oversimplify things, as if every company’s coding process is the same.
That’s why I tend to agree with Linus-it's better to take a 'wait and see' approach. Who knows? We might even see a new programming language specifically for AI, or a convergence between iOS and Android, with a single common language for both platforms and so on.
We called it constructive learning rather than representation learning current systems are just mimicking representation rather than constructive reasoning
totally agree and for a reason, human capacity is exceeded very quickly and the views of the world have something to do with that. The ai can guid various perspective on reality. (Non coder -prompter guy)
I had literally never even written a single line of code in my life until about 3 months ago. Using ChatGPT and Claude, I have now managed to code a very simple "to-do list" program (as a test) and I'm currently putting the finishing touches to a database's GUI that can fetch images from my google drive. And speaking of debugging, it was Claude who pointed out that my GUI script couldn't fetch the images from my drive because I needed to use the Google API to do so.
I'm also about to launch an even more ambitious and complicated project soon.
AI is a game changer.
I guess... but it sounds like you still have a long way to go. Good luck though.
@@roguegryphonica3147 Of course I do, that's the whole point.
I just love those facinating interviews.
"At a certain point LLMs will write a code in a language which does not look familiar to us at all, because they don't need to." 100% agree. I was trying to make the same point in different programming forums to spark a discussion, but this did not attract much interest. Really, everybody asks LLMs to write a snake game in python as a benchmark. Why python? Just give me a file which I can run on my device. Or give me a document which I can browse on my desktop. I don't care which language stack is used inside. All programming languages and development tools are optimized to make a job of human coders easier, but do we still need them if 90% of the code is generated by AI anyway?
That’s an interesting point that I hadn’t considered.
Absolutely correct 💯
Ppl ll
Like I said, when "Humans" allow AI to write code directly to key devices and systems, without a human readable format intermediary, then we are giving all the agency to the AI, and removing all the agency from the human. A human should ALWAYS act as the supervisor... when we stop providing that service, we will become too lazy and too ignorant to understand what is happening, and we will fall into the great decline.
In principle, they could just write machine code directly. In practice though, the hallucination problem means that they require human supervision, and thus for now they need to write human understandable code.
Let's just take a moment and recognize Matthew for his amazing breakdown of what is actually happaning on the frontiers of tech!
this video is old and he is a click baiter
Not sure how an Operating systems expert is the best person to ask about AI
The biggest problem for most development is not writing code but writing the code in a *secure* environment. Permission requests ++. Security is hard. Requirements change and remembering the decision process history and reasons why it wasn't done like that is important and LLM's won't have the whole visibility of all the interactions of all people. Programmers should be nervous, but developers will be fine. They'll just move up the stack. This has been the trend and it will continue.
Great video! While open data is crucial for kickstarting AI, the real game-changer lies in the shift toward real-time data streams from devices directly interacting with users and the world. These streams will train local models, feeding into federated learning systems to refine and adapt base models continuously. The future of AI is deeply embedded in the dynamic, real-world feedback loops!
I would suggest all to pronounce his name as LEE NOOS , I love all his work for over 20 years.
Thanks for the video and commentary.
Have you tried Replit Agent or Cursor AI? Code generation is magic with these early stage tools. Wait until big companies like Microsoft and Google launch their solutions.
“Autocorrect on steroids”
About "we are also autocompletes on steroids". We are actually world simulators. We build and run a world simulation in our mind, to predict possible futures and make decisions.
We were strugling to build abstract enough systems that can do this. With llms we got close over language.
Now on top of that we can build and look for even better architectures.
LLMs are far from perfect, but they give us foothold to stand on and reach further.
Matt, I love your enthusiasm for AI and the info you give the community. Thanks for all your efforts. The problem with AI writing complex software (-if it ever gets there-) is that the source code becomes a black box just like the model files. No one in their life time will have the time or inclination to debug obfuscated code. So, unless is some harmless application, no corporation in their right mind will ever employ such a thing, so therefore it will remain a toy that could get out of hand because of fearless minds out there will take risks in duplicating controlled software and call it safe and controlled. Yes, people can read assembly language. It is only strange to the python generation.
LLMs passed the Turing test a long time ago. If they are intelligent (have a deeper understanding) or not, it doesn't matter they passed the Turing test.
The Turing test is a little vague, though. Like, an AI expert could easily spot the AI by making questions that test stuff most people wouldn't.
No one makes toasters from scratch yet there’s value in appreciating and understanding
Many services like e-commerce , banking, government services could be just a single column database that holds every conversation between a bot and the human. Whenever a human starts a conversation, the llm behind the bot can use the entire conversation history of the user as context to determine that latest user state and answer accordingly and store the latest conversation back.
It is always refreshing to hear Linus. I don’t think many people have influenced humanity as much as he has.
Hallucinations is such a dumb term, LLMs don't hallucinate they confabulate.
Exactly! Hallucinations involve perceiving something that isn’t there, which doesn't apply to LLMs. They aren't 'seeing' anything-they're generating responses based on patterns in data. Confabulation, on the other hand, refers to creating a plausible but inaccurate narrative, which is much closer to what LLMs do when they produce incorrect or misleading information. It's all about filling in gaps, not perceiving non-existent realities.
Let's say it how it is; they make it up as they go along.
Not nearly cool enough, sorry. 😎
I think this is an insightful distinction, especially since the AI "hallucinations" are highly correlated with lack of training data in a context. This results in creating false memories (confabulations) rather than having a false perception (hallucination). IMO true "hallucinations" would have less correlation to training and memory, this "confabulation" is a better representation of what is going on/
They confabulate just like children do (and some grown-ups). Honestly, I'd be worried if they didn't, it gives "them" a very human touch! 😊
Please stop misinforming.
It is not true that everyone is using "cloud computing".
Small companies still use dedicated servers, due to less costs and less complexity.
everyon means everyone in day to day lives ,small companies are small companies, though most still use cloud computing to save costs and infrastructure, till they become big enough to afford own infra.
7 months ago I gave a talk about GenAI-first software development. The Prompt is more important than anything else.
Code / algorithms are just another form of data. The world of math is infinitely rich. For every level of intelligence no matter how high there is a (useful if solved) math problem vastly exceeding it's capabilities.
What are you talking about @13:35? The LLM can not store anything... Everything has to be presented to it in the context window. It definitely does not know when to "wakeup" and remind you. There has to be a framework built around it to do those things. Someone or something would have to write an app that calls LLM every minute or so with your data and parse the results. This would be very cost prohibitive right now. The fundamental models might not change the way they work in the future since you can't put personal information into their training dataset for privacy reasons. In 10 years you could have a personal model running at home but I doubt corporate will want that.
Thanks, Mathew. I pretty much share Linus's view.
There is something about Matts comment that got me thinking about the simulation question: ".."when we are talking, when we're responding, maybe we are just trying to predict the next token in our own sentence.." (mic drop)(mind blown)
finds month old video, thanks so much, how far back will we go? plenty of content in the past 🤔
You do have to be exact and precise, the LLM is on the spectrum!
Can you share the name if the tools you use for video edition?
Keep up the good work...
One thing I’ve learned with this whole AI boom…..is that the word “never” does not age well at all lol.
@13:57 "... the application layer will be going away", i'm with you; do you see an LLM between me and my favorite youtube channels (eg. Matthew Berman Tech and Futurism Channel)? i can imagine the "amazing" (healthy paranoia applied) curated content presented to me, but i can also imagine how excited the curator would be with this capability.
in your "to-do list" example. Wouldn't that still need to be an app? What if you want to see a list of your to-do items? You need to look at an app...right?
9:37 the thing I’m the most scared about on a daily basis is Human Hallucinations! And little to nothing is done to fix this one. Human in the corporate world and IT world hallucinate a lot, as everyone want to come with his own definition of everything or anything, at a point that even for simple thing like Analytics I have to ask people what does it mean for them as too many have a different definition of this simple word.
So yes LLM hallucinate, maybe because they’re learning from our human messy data, which in earlier LLM wasn’t as clean as in current datasets, as we now know data quality affect drastically accuracy.
The thing that bug me as you rightly point out is human are not perfect, and LLM might eventually never be as well, then why do we want such higher standard of perfection for LLM than for ourselves? Why do we human want LLM to be perfect when we are not?
Last step will be machines realizing what is needed and creating it, most of us will be so pampered that we will not even care of what's going on, we will expect it to be ubiquitous and fall into entitlement. We will fuse and be absorbed until extiction of carbon entities replaced by silicon etc, the first wave will be from medical issues (blindness, disabilities) and access to information, and the expectation to live forever, also to allow space exploration (more resistant). My greatest concern is the loss of soul.
You are wrong, AI in its current fundamentals will never be able to target deep contexts, it is just out of the possibilities of any traditional hardware, no matter how scaled up it is. Unless the transformers support discrete mathematics in its fundamentals, it will never get rid of hallucination.
2:48 - Binary and "machine code" (hexadecimal) was pretty much the same time. I think the reason for using base-16 over base-10 (what humans generally use to count with) was down to the number of pins on a standard microchip.
@@J2897Tutorials it's because base 10 is actually very awkward for binary computers to deal with. Base 16 (hexadecimal) is a more compact representation of binary that can yet be easily converted directly back into binary by humans. Decimal is awkward because the set of decimal digits 0 to 9 requires more than three bits to encode but less than 4 bits; whereas the set of hexadecimal digits fits exactly in four bits. Decimal numbers (of an arbitrary number of digits) are also surprisingly difficult for computers to convert into binary, and there was actually a bug in a commonly used piece of computer code to do so.
Machine code is the specific instructions that tell computers what to do; in other words, they're what fundamentally make up machine executable programs. Machine code is often called binary code because it's represented in binary in modern digital computers.
OK, that was probably more info. than you were looking for.
Mr. Berman I have to say I have never heard about the idea, that codes can become incomprehensible to humans, and it is comprehensible today only because humans need to read it. What a fascinating idea! Can you elaborate more on it, just your thoughts and no reaction to someone else, no distractions?
When you abstract up from assembly, you are applying the same principles but gluing together higher and higher level libraries. It's deterministic and no matter which level you write at you will get the same output if you use the same algorithms.
With AI, to make it deterministic enough you will need to detail every single input and output to the point you may as well code it. For simple projects, AI is amazing as it automates 90% of the boilerplate code for you. It's hard to see it doing more complex systems other than non essential like games. Unless it's modifying something that already exists.
Phillip.
I love your analysis. Very on point.
I think that people in general are in a Denial state. AI is going to be bigger than we can even fathom.
According to Gartner ai is in the "peak of inflated expectations" state ;)
15:32 everyone forgets about the business logic, that will never be something that is easily handed off to an AI. It's the niggly details that have to get programmed and most times these details are decided in some sort of committee
Yes. Embedded applications require deterministic code. But I use AI extensively now. I told an LLM to Simulate a Z80 micro. I fed it some Op-codes (machine code) and asked for the status of the internal registers. Using it's knowledge of this microprocessor it can not only write code, It can simulate its' entire function.
The todo app is still, underlyingly, an app. You might be able to use LLMs to write the app and make the app accepts natural language (parsed by LLM). However, you still need another program code that counts the timer (not an LLM).
This was in Japan isn’t it? I was sitting in that talk 🥳🥳🥳
7 month old video means this is all out of date.
Today, I just forgot my sense of humor bag on the way out of the house, but ‘he is the creator and lead developer of the Linux kernel. It’s not a big deal.’ Is this supposed to be some kind of sarcasm?
I find it very interesting that we don't see hallucinations in LLMs as a feature. If you never get anything wrong you don't expand the current mental model. Inductive reasoning vs deductive reasoning. Infer and test is a powerful model. Perfectionists acheive very little.
I had thought of something similar. If it gets it wrong and i tell it so, does it actually "learn" from that experience? I still dont think i know what it means when people say the model learns. Does it mean the code changes, or is that the data set changes and then the next time that data type is used, does it see the updated data? But then i ask does this require a totally new retraining? Apologies for my lack of understanding as i am trying to learn more here.
@@malcolmvanhilten125 This is a really good question. At this stage most of the LLM providers will be capturing the feedback of thumbs up or your sentiment to the response. Do you ask more questions or respond posibively and prompt more. If you like we are free feedback. So the human in the loop is being at the minimum captured you can bet on this for future training. It is also possible to capture feedback and store it for further queries or lookups by the LLM. Many companies when implementing LLMs will do this to hold proprietary and unique data. But what is comming is more dynamic models that will learn in real time. This will provide a way of adjusting the weights in the model as you go and probably lead to a next level of break through. I am sure it will be already developed but possibly not in the public domain as yet. Clearly the checks and balances to ensure the model does not get corrupted or lose its way over time are key when you go to live updating a model. Much more to talk about here! Great to see you are learning in this area. It's very exciting space.
One thing that I have figured out is that the tech industry tends to way overstate what powers they really have. I still haven’t found a good image generator that can look realistic nor one that would do whatever ask it to without trying to stop me. Video generationAI is basically nonexistent still. I don’t see tech jobs going away anytime in the near future due to AI.
I can't honestly claim that my own brain is doing anything more sophisticated than predicting the next token. When I reflect, that seems a plausible explanation for what is happening.
What the f? "Predicting the next token" _requires_ intelligence, if it has to be done correctly.
Predicting the next token involves understanding the context and making an educated guess based on that context. A good author is predicting the next token all the time. It requires coherence, understanding the characters, establishing and following a plot, language and style, creativity. A musician "predicts the next token". A programmer, puzzle solver, doctor, mathematician, everyone "predicts the next token".
Such gobbledygook I have never heard, I think people have never been actually able to leave behind anthropocentrism since Copernicus' day despite the tremendous progress in science and technology.
Bro you do realise that the Turing machine is just "predictive token" 0s and 1s is just "predictive token" if I code a Turing machine to pour my coffee when heat =100 it isnt intelligence in any way shape or form, you mids are annoying 😂
That is how children learn a token at a time, then experimenting with the results.
@@JackNorthrup if 99% of us believe we are highly intelligent, someone has to be wrong right? Not sure... 🤔
Is Linux getting paid or looking to get paid..lmao
It would be nice if you provided the link to the original recording of the talk, you interrupt temp so often that you might have as well just not included the fragment of their conversation at all.
He created the kernel for the _libre_ operating system known as GNU.
Libre is a Spanish word that means free, which has nothing to do with money.
It means _free_ as in freedom, not as in price.
And version 1.0 was quite small in comparison to the rest of the OS.
4:27 We are already on Modern #Hieroglyphs like;
Love You - ❤️👈👩❤️👨 (Heart + You)
Sunshine -☀️🌻 (Sun + Flower)
Happy Birthday - 🎉🎂🎈 (Party Popper + Cake + Balloon)
Peace - ☮️✌️ (Peace Symbol + Victory Hand)
Adventure - 🌍✈️🧗 (Earth + Airplane + Climbing)
May I ask what conferences were theses Linus' interviews, or interventions, extracted from?
Imagine being able to juggle 7 things at once. It's crazy hard. But after 20 seconds of amazement watching you, a lot of people will say, "okay but can you do 8?" And if you then do 8, they will say, "okay now do 9." The biggest cause of AI skepticism is that we can see a miracle today and find it mundane tomorrow.
this is what ray kurzweil said like 30 years ago, a new tech comes out and everyone goes ah thats cool but it has bugs and makes mistakes, then in a few years later it works perfectly but its been around for a while and people just think oh yeah well its old tech now it should work perfectly, forgetting that it was ground breaking when it was released
Moving the goal posts further and further. Show this tech 20 years ago and it would look like Gandalf’s magic. And in 20 years skeptics will be like “sure, AI solved commercial fusion but can it make it pocket sized?! Checkmate!” And in 20 more they’ll say pocket sized is too large.
Giving skeptics credibility, is like giving game journalists credibility. They have zero. Laugh at them and move on.
Yeah, I just saw a guy "complaining" that AI was not able to create completely new texts for a new novel he is working on, in pretty much exactly his writing style....😮😮
Expectations keep going up and we don't seem to realize how incredible all this would have seemed just a few months ago😮😮😮
@@klarad3978exactly, hit the nail on the head there😂😂
Well I still doubt that LLM will be able to completely replace apps, especially backend and devops. Frontend devs for sure. Some part of backend devs as well, mainly related to simple API layer and work with DB. But architecture vise, scale wise, and high loaded projects. Will still be managed or at least reviewed by very experienced devs.
Maybe when AGI will be created, then possibly that would replace everyone...
I just completed a 50K line webapp. I still needed to hire a visual designer. I still needed to become expert in CSS to implement that design. I eagerly await the day when it can rewrite my front end for me non-interactively, but we’re nowhere near that.
@@markplutowski and now imagine big companies or at least medium sized companies. Where codebase is huge, plus micro-services architecture. I highly doubt that it can comprehend and connect multiple services to implement a required feature. Plus no one will allow that code until it is reviewed but tons of real human devs...
no, developers may stop using an IDE, but consumers will always want a nice UI. When vanity was introduced into computing (read: apple), adoption by consumers skyrocketed.
Coding AI is great for easy cumbersome task. If I need to call a function a few times but with various arguments, the copilot is able to just fill all of that for me quickly.
Mate... that's Linus Torvalds! If he says it's hype, then it's hype. And what you think doesn't really matter
It seems to me that we are digging ourselves into a rabbit hole we may not get out of. Once AI uses its own language, not understood by humans, we lose control and input for better or worse. Maybe for the worse.
6:20 GPT-4o and Claude 3.5 are already pretty good at fixing the problems that Linters tell you about.
Just because something 'powers trillion dollar industries' doesn't mean they are not 'hype' (look up the Tulipmania), I think whether something is hype is somewhat a retrospective evaluation, if it didn't last and everyone wants to pretend they didn't get 'caught up' in the marketing frenzy then it was hype.
With AI granting unprecedented power to individuals, I believe that in the near future, its public access will be heavily restricted under the pretext of safety.
"we are just trying to predict a next token in our own sentence" - yes, that's pretty obvious if you know mathematics of transformers - the tokens and sentences are abstract structures. It doesn't need to imply any normal language.
Not "One of the most." He is "The Most" - He who IS CALLED IAM
People keep talking about AIs writing programs, but I see things a little different. We won't need programmers, not because the AIs will be programming for us, but because the AIs will BE the programs. You don't need a spreadsheet if the AI can give you a UI and handle all the background calculations. Want to play a submarine game? The AI will BE the submarine game. It won't need to be programmed, nothing will. If you need the computer to do something, you tell the AI to do it, and it does it. Programs will not be needed for that reason. Not because the AIs will be programming them, but because the AIs will be doing what the programs once did.
This one of the most stupid opinions I've ever read on the internet since the start of the AI hoax.
@@Teodor-ValentinMaxim Ya, OK.
@@AardvarkDream Maybe I was little bit harsh, but if AI can do such complex tasks on the fly like creating a pseudo spreadsheet, a submarine game etc. it rivals human intelligence, both the analytical side and the creative side, and then it can do any other job out there that requires high level thinking, not just programmers will become extinct.
You just don't realize how complex the tasks you've described are.
@@Teodor-ValentinMaxim I do realize. I am a retired software engineer who has a perfectly adequate understanding of current tech. But I'm not talking about current tech, I'm talking about five years from now. Maybe ten, but at this point I see development happening at an accelerating rate. We'll have chips optimized for inference. We'll have logic engines (possibly). We'll have a host of tools that the AIs are capable of using. Today they generate text relatively quickly, images slower, video even slower than that. Unusably slow. But that's just right now. Five years from now isn't right now, and a lot of companies will have put a lot of resources into this by then. At some point the AIs will be capable of generating UIs that are optimized for whatever task is being performed, they will be capable of generating video on the fly. They will have memories that they can write to and read from and update. That they can't do that today is irrelevant, we're still in the DOS 1.0 era of AI, but that era will end. The Windows equivalent is still ahead of us. But when it happens, that's the end of most programming. There just won't be a need for it.
@@AardvarkDream lol :)) bruv. You are clearly not a software engineer, most of your time spent is thinking about the problem and how to design a solution around it, and then you spend like 5% of your time to write the code. You could explain your solution to an AI in english or whatever to generate the code for you, but since human language is so nuanced and subjective, you are more faster than an LLM if you went to writing the code straight by yourself.
You are just assuming that every problem will be fixed by an AI, but programs are ever evolving, and the training material can't keep up with how many new problems arrive everyday. A programmer's job is so much more than writing code. What AI can replace is code monkeys from outsourcing countries like India, Brazil, Argentina, Philiphines, etc, where they are given an exact task with minute specifications and details.