HR is not easy to replace. It requires a lot of human-to-human interaction, which will not be automated. Even if it could, human acceptance of technology is a large problem that will preclude deployment in such use cases.
If the job of a software engineer was to show up to the office and just solve leetcode problems all day then yeah AI would definitely replace us. But like you said, the scope of competitive programming is very very small compared to what you do as a software engineer
I'm at bigtech. I've been on like 6 different teams. In the past 2 years, whenever I get stuck on something, and I ask someone, frequently, the response is "did you try asking chatgpt?" I'm like "no!" Then I ask ChatGPT. And it gives me... an answer thats 80% of the way there. This is just cope.
In my humble opinion, I believe that niches like blockchain and kernel development are unlikely to fade away anytime soon. Additionally, programming and software development are distinct: you can write a simple program that performs a basic task, but building comprehensive software that handles multiple requests and operates seamlessly is a different challenge altogether. While AI might automate some aspects of this process, it’s uncertain how far this will go. Regardless, I still love programming for the thrill it provides and the curiosity it sparks in me.
Every journey start from the first step. If you go back to 200 years ago, you will think every today modern technology are the magic. Mobile Phone, Airplane, Internet, Modern medicine and everything changed so much.
There already is, the assembly language. For machines, by machines more or less. Some just needs to pour a few mils $$ to train a proper model and JUST GIVE ME THE EXE!
@@theideaot LLMs work best with abstraction. Abstraction = less opportunity to hallucinate. Hallucinations is inherent to the system, therefore, more abstract languages should combat the trend.
@@theideaot Hard disagree. LLM-like AIs work better with higher-level languages, as more of the concepts are encoded in code of the language rather than in your mind, hence making the attention process a lot more useful.
to be honest I don't care anymore, every single time an LLM comes out apparently it can do a task software engineers usually do like with gpt-4 and making a website out of a napkin, now it's o1 making a game out of a prompt, you know I just don't care until someone actually tells me they built functioning products from just AI, to me AI's a an incredible teacher that can help inexperienced people move faster but I just don't buy it can produce actual software until I see it do useful pull requests or something, anything it's been so long of just promises and promises and improvements on benchmarks which no longer mean anything because companies started including the benchmark information on their dataset, then when the AI sees a problem it's never seen before it fumbles because of that, I just think we're going about AI in a stupid way right now honestly, this does give me a lot more faith in OpenAI though, that's why they developped 4o, to have the inference speed needed for a model like o1, it's really interesting the ways they're trying to make it better but I just hate this tech until it's as useful as the hype makes it out to be.
People are already building functioning products using AI. You’re just not seeing it, probably because you’d rather close your eyes to it. When 96% of Fortune 500 companies are using OpenAI’s products and 75% of code on GitHub is written by AI, there’s no denying that people are using AI to build functioning products. In my own company, I witness it happening every day.
Why should the AI be only useful when it can do pull requests? Right now, on certain tasks, it saves me 95% of time. Time spent simply writing the code, debugging, trying 30 versions of an algorithm in a day instead of a month. It's a fantastic force multiplier if you're smart. It's an amazing equalizer if you're not. And it literally moves you from 0 to 1 if you're lazy and smart.
The fundamental problem of producing more is that there is more to look after. These models, as we all know, are not infallible, and we are ultimately being sold a product. If that product becomes too expensive for companies, they won't engage with it. After all, computing power is also a finite resource. The money machine won't go brrr forever.
Yes you still need to do code review and check the code . Even if the code is 99 perfect you will need to review it find the bug and correct it There is a big gap between 99% correct and 100 %
Yeah... I think it's a problem, but why is the solution not just using it as a productivity boost for devs? Basically, instead of writing code, the dev reviews and revises AI code. Developers are expensive as well. It would take lots of good analysis, and I don't think it's simple and it takes a good understanding of the capabilities of the tech. But, if an AI could save, say 10% time overall (maybe taking away 30% of the time from coding, and allocating an extra 20% to code review/refactoring). I think it depends, does it take more time to write 100% of the code yourself, or does it take more time to let an AI write it out and then review/edit it. That's assuming that the end-quality is the same at the end. It's hard to say for sure because not every project is the same and productivity is inconsistent, but I predict I save around 10%-20% time overall on most projects. You pretty quickly learn how helpful AI will be (for example, with Django, it's worked really well for me, but with Phoenix, the elixir framework, it constantly hallucinates, and I only use it as a glorified search engine and rubber duck).
Just wanted to touch upon abstraction. It feels like prompting is the next layer of abstraction. Wire placing, to card punching, to assembly, to Python. Each level abstracts increasingly. The pattern should continue. LLM's should utilize abstraction within themselves, too. Overcome token limits by summarizing code into brief descriptions.
Some (a lot of) software is still written in C/C++. Most of the enterprise software is not written in Python. New languages coming out that are of significantly lower level of abstraction (Rust, Zig, even Go). What I'm getting at, is that there is still demand for low-level software to be written, and I don't think it will go away any time soon.
Precision works require precision tools. In the most cases, we do not care about machine level instruction because hardware are verifiably optimized so are the compilers. Can this be said about LLMs? I do not think so. Imo, LLMs is a great substitute to stackoverflow for an experienced programmers.
Good points boss, the AI Bros will eventually realize that money doesn't grow on trees and once something goes wrong with there venture capital funding. No more AI or at least there is no point of thinking about replacing AI
Before coding goes away, today's developers have to: Refactor or Rewrite the existing old codebases, create software that automates many office jobs. You have to remember, that in any country and in most companies, software developers still struggle to meet deadlines and quality of code quotas. The industry needs AI powered tools so that we can fix / maintain old code and create much needed new and better software. This is not the scenario of "Work x is done by y people, so we will need less than y thanks to LLMs". We are in the "Work x is barely done properly in most cases by y people, so any tool is welcome for the foreseeable future", scenario. Finally, the most time consuming and thus expensive part of software development, is maintenance..
I work as a Software engineer at a Fintech company, and there are multiple projects which my team is handling and some of the web based applications are too old that it needs minor changes sometimes if a new feature is required but since these services are written to solve a particular business problem. I can't just take all my code base and put this thing inside the chat gpt and ask to create this feature and run unit test with it, I am sure I can do this but it would required the model to read 5000 lines of codes to add 50 to 100 lines. There comes a tradeoff because since these all models are basically SAS based applications and they require money for every specific amounts of token. A beginner may put all the code base into the input and a person with experience may write his on its own thus saving tokens and eventually saving money, people just don't realise that the number of people required to create new software and maintain legacy applications will not be down to zero. Manual intervention will be still required, its just that in place of 10 people, 5 will do the same work.
My take on this is that if LLM's get good enough to help new beginners or even non programmers to develop complex websites or complex Saas then of course the job market openings and salary will drop for developers. But, I would argue two main things when it comes AI "taking our jobs" and that is whoever controls the AI companies will always charge money for it and that means companies will hit a breaking point and just hire more software engineers because Chat-gpt or Gronk WILL get too greedy. That is without doubt the nature of businesses and winter might last longer for the tech sector job market, but it will bounce back (a little) once AI companies get too greedy. The second thing I would argue is that the rise of the solo developer and entrepreneurs will drastically increase until the field gets crowded out. Also I think a lot of startup tech companies go out of business because the idea is just bad to begin with, not necessarily the tech was too hard or anything. But, some startups that have mediocre ideas that failed in the past because developer cost and/or it took too long to develop may now be the difference between a company that makes it or not and we just might see the rise of mediocre ideas/companies.
Second comment/review of o1-preview. I just asked him to redo my classificator/sorting algo. It has like 4 conditions for priorities and weird parts. It toke me around 3-4 hours to do so but it was non ok on a edge-case. The first time it did a "ok" job (49 seconds of thinking), it worked on 80% of the scenarios. I tried to give him more information + force a particular coding style / pattern and he gave me garbage. o1-preview at least is still not it but we're getting scarily close to an amazing, indispensable coding tool.
The problem with ai still is privacy. Any code written or seen by it is recorded on someone’s server. Some institutions like banking or military cannot leak their mechanisms because they’d be toast.
It is a little naive to think no one would be able to replicate banking or military software... It's just software, and with engineers smart enough you can reverse-engineer any system. It isn't like military had its own computers, with its own architecture and its own math, and its own physics...
That's not an LLM inherent problem. I know we're trying to find "problems" but seriously a lot of our arguments are temporary or just wrong. It might be joever
@@189Blake Yeah, also, if someone like the DoD took bids on building a datacenter for AI within their network, and were willing to pay into the millions or possibly even billions for it, that they wouldn't be able to find anyone willing to do it. I assume that the govt, even in the most sensitive projects, already use stuff like VCS servers.
We will just build more complex things. Is never ending. Anyway people forgets that majority of people doesn't program not because is difficult, but because they literally hate computers. They are not going to start programming because it's easier.
These trivial python apps used to demonstrate the coding ability of something like o1 Preview are maddening. Let this model use a modern gaming engine like Unreal or Unity3D with a library of quality 3D assets. I'm certain this technology could shave years of development time off modern AAA game titles. To solve these little python game puzzles like Tetris and Snake is to totally undersell the power of this technology. Python is JUNK. Pro Hollywood game studios don't develop multi-million dollar titles in python!
i feel like you missed the point, if they could use a 3d engine they probably would use it, AI demos are all about hype and etc. they could say hey gpt code DOOM using unity. the reason why they did not make it, should hint the capacity of the model(with is kinda still impressive)
You're talking mad shit for someone enjoying the fruits of Python's labor. As if the data scientists, -analysts and -engineers from the energy and weather world don't run off Python. 😂
I love python but it is pretty ridiculous that all the demos are stupid python apps lol I think the models could do way more impressive tasks in C/C++ that could appease not only the technical but non-technical person.
I work at a company where using any sort of AI tools is banned for the reason of code privacy. We don't know what the companies can do with the code we feed into these LLMs. This topic is rarely discussed but some companies don't want their code written/modified/access by a 3rd party LLM. Open source local large language models are a long way away from being better than state of the art closed source alternatives furthermore not everyone can afford to run / have the compute to run a llama 3.1 405B
That's why i have being asking for over a year now, to many "devs" that claim AI is creating 90% or more of their code or making them 5 to 10x more productive, if their companies allows them to share their private code with these ai companies. Either they say they only use ai for personal projects or no answer at all. Conclusion: their personal projects are toy projects and they are not using ai for real world work.
Watching this just an hour after you've uploaded this, I really love your insights. Looks like we are going to be a Programmer for a while. "Nobody can predict the future."
We've been through this guys. It's just a bunch of hype to pump their stock value. If they're showing that little game that's because it is the best it can possibly do. Is it going to get better? Yes but asymptotically like any statistical model. That's why they're showing it in log scale.
@@krishshah8586 because the free tier is very limited in terms of how many prompts you can make, every 20 or so prompts i need to wait and i don't feel like the answersi get from claude are better than chatgpt most of the time (for programming)
Your comments on AI are spot on, I am yet to see an AI that learns just the basics of computers and solve problems rather than digesting trillions of code and coming up with a simple 2D game.
log scaled time, meaningless graphs, simulated competitions, stupid hello-world games, and so on. Another scam from california entrepreneurs. I'm so tired of this bullshit
Well, these "scams" allow me to cut down projects that would take 3-4 weeks of worktime/year down to 2-3 days. "Scams" that help me write code that I would not have the time or patience to write otherwise.
@@leoym1803how much engineering experience do you have? I ask because as someone with many years in the field I don’t feel it improves my speed at all and have regained lost proficiency by disabling code copilots etc. I don’t say this to brag because I’m far from an expert engineer. The LLMs hallucinate a lot and can’t barely string together code of useful complexity so I genuinely wonder where you’re at in terms of experience to perceive such a boost to your output?
@@leoym1803 I'm not saying that LLMs in general can't boost your productivity or any other metric. I'm just saying that the way these guys are selling their product is scam. Plus, interesting thing is that all their demos are kinda overfitted for amateur audience. They show basic hello world examples. A model with 1k billion parameters can easily overfit all "leetcode" garbage problems in the world. It is not a big deal. These models also can easily overfit simple js/react tasks. Why don't they show real engineering stuff. Why don't they show real experiments on real codeforces problem. Why do they think that people are idiots? I just hate this kind of mentality.
If labor is worthless, that means services and processing are essentially free, the only bottleneck is then the raw materials going into things (and energy I guess). But with near-perfect recycling, you may not need that much fresh materials maybe? But we are *very* far away from that still. At some point, human lifespan will be extended into immortality and then the real debates begin...
That’s the lump of labor fallacy. It’s the same question as: „if we have steam powered factories, hammers, looms, etc where will all the people work“. The answer is: wages go down slightly in some sectors allowing ventures which would have not been economically viable before to become so, thus opening up new positions which are filled and maintaining low joblessness. If you want a specific example of this, think about cars in industrialized vs developing countries: at a certain point bringing your old car to the mechanic becomes more expensive than buying a new better car with lots of new features etc.. This is because paying the wage of a mechanic costs a lot of money, as wages are, compared to the cost of a newer car, high. In a developing country, the mechanics cost WAY less than buying a more up to date model, which means it’s worth it to repair the car over and over and keep it going for another 1-200k miles. This however means way more opportunity to make money with repairing cars. The amount of jobs thus adjusts highly dynamically with the wages and keeps everyone employed. This is an effect you can consistently (together of course with whole new fields) work to keep unemployment down for, by now, centuries
@@sepro5135 This all relies on these "new fields" where humans still have the advantage. That will probably be the case in the near term, but who knows how long.
We are already past the AI will replace X meme. It's not going to happen, because it's physically impossible. But still, I really wonder where this continued obsession with X replacing programmers comes from. Almost anyone can do almost anything. I know this is true. And yet, it's not what people do. People don't want to do everything. Honestly, just a wild guess here, but from personal experience, I feel like people have gotten super envious about programmers over the past decade. Just for the startup meme alone. I just saw a 70-year old novelist say "Yes. And so I actually created my own startup for this." People are delusional.
@@freeottisIt’s not, he’s just in denial, like many others. It’s possible, it’s happening, and it’s underway. Edit: since TH-cam keeps deleting my comments, I’ll edit this one to respond to the person below - I have been working as an AI engineer and data scientist for nearly 20 years, and work with these types of models on a daily basis.
This "physically impossible" task already saved me around 5 months of worktime this year, which I cut down to 2 weeks by writing some very clever scripts. Could I have written the scripts myself? Sure, in 2-3 years if I didn't have any work on the table. Real life is a bit different. Things like chatgpt, claude, llama, allow me to start arcane projects that have no real use except saving me 20 min/week of work, and I can do it in 1hr instead of working/debugging/simply writing code 20hrs to automate a 20min task. And the fun part is, you can also program robots to do work around the house, using robots. It's quite fascinating. I speak into a terminal, and my thoughts move objects around within minutes/hours. But sure, keep being in denial. Meanwhile I'll enjoy my free time and working on stuff with fun, not dread that I missed an indentation.
junior dev that produce buggy code will be kicked, replaced by high level ai. so senior dev + high level ai like claude sonnet 3.5 and the new gpt o1 will be the meta. the solution? become high level dev
OK, i tried the o1-preview a small bit for programming. It's slower than other models but I'm ok it it (it shows "Thought for 29 seconds" for example). However it gave me justifications and also some alternatives answers. It does feels more useful than 4o for sure but at the same time it feels like it needs more context as (apparently) falls a bit short on "abstraction thinking": I need to give it more context and restrictions so it doesn't "think" out-of-bounds. It might be helpful on some scenarios sure, but on simple refactors / loop optimisations it's annoying.
dude no offense but I mean.... a few years ago all this stuff was mostly sci fi, aside from maybe watson, but that has a different function and architecture. And now it's like... "it's ok but it's not god-like so it gets a 3.7/5 star rating" UHHHHHHH
It’s important to look at the limitations of the underlying technology. 4o, sonnet, and o1 are incremental improvements. Not the same leap we saw from gpt2 to 3. I don’t think we can assume that we’ll see major breakthroughs in the next few years with regards to LLM flexibility (they tend to get tunnel vision, since each new prompt contains the whole convo) and they’re unable to solve complex problems with multiple interconnected dependencies which can’t be easily described with natural language. When you know how to solve a problem, in converting your understanding of the problem to code or language you lose some information. LLMs have no way of getting to the unspoken understanding of a given problem.
@@notactuallyarealperson2267 I agree, but. However I feel I could explain the same problem to a human and he would be able to correctly do it's own abstractions + assumptions and get to the solution. The LLMs feel like they're fall short on that field and consequently generate strange answers (Maybe because they've seen it all and they assume everything is possible? idk, just my feeling)
@@jusblaze99 Actually we already had automated AI chatbots in the 1970s. Then there was what was called an "AI winter" in the 1980s because of overhyping. My company has been using AI in document management since at least the 2010s. It was not suddenly invented in 2022 with ChatGPT.
Lol you realize this a self defeating statement right? Do you think amazing technology is just going to come to you in a dream or gifted from the gods? There are so many boomers in here.
@@Rockyzach88You do realise these videos are created for milking views, Some lazy bums can't even go to actual website and see for themselves how good/bad a product is
@@Rockyzach88 They don't come from dreams or gods, you're right. But if this was actually any good then people other than AI influencers that are selling AI/Cursor courses would be talking about this. There are so many JS bootcampers in here.
Ill take the logs and graphs seriously when it is replicated in peer review. OpenAI is known for their dubious charts. You can see it within this model as well with them saying it got a gold medal in the math Olympics with "relaxed rules". It legitimately cheated with 10,000 submissions to get that gold medal, and if it given the same restrictions as a student it wouldve been disqualified.
The more productive programming becomes, the more economical different business solutions become. At our small company there are lots of projects that just don't make financial sense given current productivity. But if you increased productivity, they would make sense, creating more work, not less. I'm sure you run into a wall eventually, but I'm not completely sure where that is.
@outscope23 I don't know what a 12 year old fortnite banger sounds like, but I can sure type two letters the way I want on a virtual keyboard. However, if that is the only thing you have to talk about, then it would explain why your attention span is so miniscule and validates my question from earlier.
People vastly underestimate the capabilities of their brains, new models are great, but people don't realize the reason they can write complex, long-maintained code. The working memory of the brain has almost no limitations compared to models, you can keep a lot of nuances and intricacies of your code architecture in your head and not lose the context of the whole program for a very long time
Valid take. Would have loved if you had tried it out and gave an honest reaction over the actual technology. I tried out the o1 preview model on a hard (1700 rated) new codeforces and it solved it in one try. REALLY REALLY impressive
It must be able to solve in real time when actual contest is going on, don't you think it must have been trained over all codeforce problems, Copy and paste. Try on, tomorrow will be contest on codeforce div 2.
There is a good chance it was "fed" the solution in the training process, since codeforce, leetcode, etc. are used as training set. Unless, you asked a question which had no solution on the web, it shouldn't be impressive.
At the end of the day, these models can’t think and that fact alone is why they’ll never replace programmers. I see this as a productivity tool and a way for programmers to reduce cognitive load similar to how we keep phone numbers in our phones and don’t memorize them. LLMs are really nice for generating one off boiler plate code in a vacuum. Stuff I used to have to google and parse through with follow up queries is instantly available going through an LLM.
@@_fuji_studio_ it still uses the same physical laws, as long as AI can't think rationally there is very low chance that it will be superior to us. Logic is the highest form of man
@@biswajitpramanik5650 same with deep learning model, its use same fundamental laws that is studied by brain reasearcher so at least have many read first. tell me human that can read 10000 pages of pdf and can tell all the contents of it? deep learning model can perform logic if given all the context, the noob user here is just asking directly without giving all the context, just like human, human willnt able to answer if not given all the context, go ask at stackoverflow without giving all the context, your post will be closed.
Competitive programming questions are a terrible way to assess reasoning. These are solved problems. 99.99% of my work is solving problems that are unsolved and open ended. This is why leetcode sucks as a way to evaluate software engineers
Yeah. And remember, solutions to Codeforces/LeetCode/etc. are openly available with literally hundreds of solutions for LC medium-level problems. How is it "solving" Codeforces contests that it has already had access to on multiple training runs any different to me "solving" it by searching for someone else's submission and then copy-pasting that? Because it knows exactly where to look while I'd have to spend some time searching it?
Leetcode is actually an *exercise* and many tasks kinda similiar to chess, and how we know computers very good at chess. When you lift barbell you don't care if crane can lift it.
If I'm preparing to run a marathon I'm not going to do that by doing bicep curls. Measuring how much weight I can curl with my arms says nothing about how ready I am for the marathon.
If you try to build something with any of these models, you'll eventually realize why the person sitting in the chair (programmer?) is still necessary. The problem is that the models cannot reason, they cannot read between the lines, it still takes hours and hours and hours to get everything right. CRUD apps probably are in danger, yeah. But anything of value still takes a long long time to get right. This is my 2c as someone who doesn't know how to program and has been building an app with Claude.
I think you haven’t fully been paying attention to what this new model does. It absolutely does “reason” in every meaningful sense of the word. Go check out some of the reasoning traces that OpenAI posted from O1. It’s insane what it’s capable of doing, and we don’t even have access to the full model yet. The game of moving the goalposts every time a new model is released with new capabilities is slowing fall apart. Denial will not be possible forever.
@@therainman7777 Did you even wrote that I USE CLAUDE EVERY SINGLE DAY FOR HOURS? Where tf was I saying I was in denial? This tech is revolutionary, but calm your tits, go watch a video on how LLMs work and then come back to me if these models can *actually* reason. The bottom barrel devs are already replaced, yeah, but there is still going to be the need for someone to sit in the chair and do the actual thing. That job is not going away, likely ever.
@@therainman7777 Reason isn't easy. When i say 'mary and jane are sisters' you assume to each other, but that is never stated. When i say 'mary and jane are mothers' you know they can't be to each other. It's going to be a long time before AI can do what you meant and not what you said
@@adambickford8720 The examples you have are not good ones. The fact that the model thinks “Mary and James are sisters” means that they are sisters with one another is actually correct behavior, because in English when we say that sentence that is exactly what we mean. No one would ever say “Mary and Jane are sisters” if they’re talking about the fact that each of them separately has siblings. That’s just not what that sentence means in English. So the model interpreting it the way you stated would actually be correct. Also, that’s not reasoning in the first place, it’s just parsing the meaning of a sentence.
I think you were overly impressed by the pygame. That's easier to write than most programming tasks other models have done. There's so much training data and it's so easy to use. That was probably only 50 lines of code. What would have been more impressive is if they could edit the game somehow. It outputted some boring garbage that no one would want to play and there's no way to fix it and make it better.
Exactly. Imagine how many thousands and thousands of 2D PyGame training materials exist. I get that it sounds difficult for the person who hasn't tinkered with the library before, but the library has been out there for 20+ years now. I haven't watched the whole video, 10 mins in, but I'm disappointed if that PyGame demo (BY OPENAI!!) is all there is to this. Is there a check to make sure the strawberry doesn't spawn (randomly) on the tile the player is on?
I am in awe of the improvements the AI field is making. But , still, just now tried GPT o1, bro made mistakes in giving examples for BCNF and 3NF. Like , its a theory, how hard is it to that get straight.
I don't get the obsession with programming and AI. If If AI replaces programming, only physical jobs will be relevant. What's the use of accountants than? What's the use of CS? What's the use of architects? What's the use of civil engineers? You will have AI draw the building and people to impalement it
@@BroomieHERE I believe they will get better over time. These companies need to justify the billions spent on R&D one way or another and that would be in the form of automating some jobs done by humans. It's also possible they will never get good enough to build buildings, only time will tell.
Those will be replaced in short order as well. Look up Andrej Karpathy's latest interview. TSLA is not a car company, it's a robotics company. Heck, just producing humanoid robots looks trivial compared to making them do human hand manipulations, since they'd be able to move their appendages far more precisely than a human can.
Even if AI does Art good , artists are not obsolete , and you shouldn't stop drawing just because AI can do it , so in conclusion , Ai will replace programmers just like it replaced Artists , and just like Tesla cars replaced taxi drivers , etc ...
Don't worry, these things are not worthless if you keep learning and deepening your knowledge about it. Go learn how, RSC work actually, etc, just explore
All of this is true but I find myself demotivated to actually learn stuff due to this uncertainty whether or not it makes any sense anymore. All these years in sql, no-sql, webdev with js and all the framework craziness, PHP + frameworks, node, some typing languages along with typescript, java, hybrid apps, testing etc. etc. The uncertainty of all of this makes me envy people who plow fields with their tractors.
Don't learn stuff, learn principles of software engineering. Frameworks don't matter, if you have deep knowledge of how computers work and how software development works, you can use pretty much any tool that you want. Languages and frameworks are tools there for specific use cases, and they all work similarly. There are always jobs for engineers, and AI will just be able to boost the workflow of them.
@@Nisox This is true and I get that but when you are "out there" you have to adapt to a stack/technology at hand and even when you know how things work, heck, even when you programmed bits and pieces that do the core functionality of a framework you still have to get fluent with the "new" stuff. It's becoming more and more exhausting even if it's simpler due to experience. I attribute this exhaustion to the recurring news on how programmers will be extinct in the next 5 seconds.
I like the vid but programmers talking about the singularity like their profession is the last one keeping it from happening is just detached. Material sciences are so far behind such an event and no matter the model of AI not a single one gives a glint of inventivness when it comes to actually producing bleeding edge science and making a discovery.
Programming hasn't changed in decades. It has always been something that has needed intelligent individuals to do it. I believe that now it's actually getting harder, harder than it was ten years ago.
as a cybersecurity student i love the fact that most of tech things will be ai made soon. so i can harvest as many bugs as i want as a bug bounty hunter 😂😂😂
from the comments and personal experience it's quite obvious that the ones falling for the hype are: 1 - non devs 2 - bellow juniors devs, probably school/college kids 3 - socialists that believe AI will bring forth the utopia
Maybe not full replacement. But what once would have required 20 people in a Dev team, will soon require 2 people and 18 AI Agents. That's what we are talking about. It still has an impact. And have you thought about junior devs who are just graduating college who maybe have a passion for tech and want to lift themselves and their families out of poverty? Please think of people beyond yourself into the (near) future.
i never thought of it this way and this does bring comfort. For example I appreciate a lot that I don't have to solve tedious issues at the hardware level. In the future we might look at current programming as tedious in much the same way.
just don’t stop coding bro and getting to this field! Programmer is not about only making money, it also about improving what exists and building what you want!
“The journey of a thousand miles begins with one step” - Lao Tzu. I would never thought Ai would improve that much within the past few years. The amount of investments behind that is way too big to ignore. I'm starting to believe Ai will replace the programmers at some significant level. They have the endless money and the technologies. So, we will see what will happen in the next 5 years. The coding will be dead but the critical thinking will still be valuable.
if you ask the models to fix all the bugs in the existing code that is currently in production, it would use up the entire worlds resources without producing any new functionality.
There is still going to be programmers. The bar for good software is going to get extremely high. While companies may need less and less programmers. But the still engineers to design and architect software even with LLM
00:05 OpenAI recently launched a new model, OpenAI o1. 02:08 OpenAI o1 is highly proficient in competitive programming and PhD-level science questions. 04:20 OpenAI o1 is skilled in technical categories but struggles with text and personal writing 06:25 OpenAI GPT-3 can generate a 2D game with simple text prompts. 08:25 Advancements in programming may lead to a shift in the industry and redefine the roles of programmers. 10:33 Impacts of potential automation on programming and employment 12:23 Software development has become more accessible, but the demand for programmers has increased. 14:17 Programming has evolved over the years, becoming easier to produce code 16:16 Coding interview questions are readily available online.
my only concern is the dip in people being discouraged to get started in this climate. if you read this and this is you, just start regardless. like atrophy, you loose what you dont use. its just new tools to play with, like language servers. i used to learn java without IDE and just a comandline. now i use a ide with lsp of course but the gained experience from the beginning dont come with the tools, they come from you!
how is that dip in people getting in the field a concern for you? are you an employer? if you are a programmer, having a less saturated market is a plus for you
Right, a lot of Agents will produce the first iteration of the code, but then what happens? Will the agent be able to maintain the code? Requirements constantly change. Adding new functionality often breaks things. I think we'll be converted to business analysts effectively ensuring we validate requirements or user stories and ensure the proper code is gen'd.
This. Communication, high-level systems design, prompt engineering, business analysis, agile methodology experts will take the lead in this new AI paradigm.
It started that way where you would learn the programming to help solve your other problem. Thennit became specialized and now it's easy because there are pre-built solutions you don't have to digitize yourself but it's also easier than it's ever been.
Isn't coding fully automated already? Compilers write all of our machine code for us. But now we've got to solve a new problem with that new abstraction. I don't see why that wouldn't keep on happening with LLMs? A new abstraction to use to solve problems?
@@Skadongle The comments on this channel are funny. Because it’s a coding-focused channel, everyone in the comments is delusional and in denial about what’s going on with AI. You can stick your head in the sand all you want-this is happening, and it’s obvious that it’s happening. 5-10 years is a conservative estimate. Don’t take it personally if you happen to code for a living.
my question is how are people, supposedly devs, admitting they are using these tools? Do your companies allow the sharing of their private code? Do the companies' clients know that? Do your bosses know that?
Blame the game, don't blame the players. If the players' game has been made way easier via more sophisticated tools, why not embrace them if they can save you time?
We have licenses with companies that ensure total data privacy. If you are big enough, Azure/VertexAI (aka GCP)/etc will sandbox everything you do and charge you lots of money.
These models from GPT 3.5 to o1 still stuggle with basic addition and subtraction that involves more than 20+ numbers... this is not limited to GPT, Claude struggles too.
All you need to do is use an agent framework with a code interpreter as a tool. It will perform analytic continuation, joining via splines, etc. Not to mention complex applied math like wavelet or chirplet analysis (where you can define the chirp yourself) on a self-generated synthetic dataset. Most limitations people experience are user error (or require a very small amount of traditional software development effort).
I think the inevitable conclusion for a lot of the questions you began to raise about the existential nature of AI progress, in regard to careers, is societal collapse and late stage capitalism dystopia.
Answering your qs, "where does the value come from?". It comes from time spent on creating digital products. Sure, you can probably clone apps with AI, but its going to take time. When you already have ready made apps, why would you waste hours getting the right prompts to clone a product ( which by the way, has to go through many tests before you can actually use or host it). In this fast paced world, the avg person isn't going to spend hours to make a product which only approximates the standard of a product that already exists. Its more productive to pay a small fee and save all the time and headache. That's where the value comes in
Yes most tasks within the economy, even the manual labor ones, will eventually be automated. I am talking about over 90%. As long as the knowledge and skill to do your job does not change faster than the rate of improvement of AI, then it will eventually be automated.
The harder thing to do is actually anything that deals with humans. Coming up with a product idea is average (new innovations won't come out from AI since they are inherently all extrapolations), finding and validating product fit, negotiating and closing the deal is impossible. So no, AI doesn't solve jack shit
Stack Overflow didn't end coding; it helped people code better. Similarly, these LLM's it will make coding more accessible. I think the world will need more coders not less
The question isn't whether or not there will be programmers, but if AI is able to do let's say 50% of the tasks, improves productivity or less requirements of programmers, or the skilled required to do the programming.
I think by the time programming becomes obsolete we will have much bigger issues, every white collar job will be gone too, and there will be millions if not billions of lost jobs, I hope it doesn't become like the wild west and politicians figure out a way to support everyone.
What I believe is that even if most tasks can be done by my autonomous AI agent, companies like Google won't lay off skilled programmers to save a few pennies. Instead, they will focus on harnessing the creativity and innovation of their developers to generate new ideas and revenue streams that far exceed the cost savings of reducing their workforce. so like, they can stay ahead of the curve, and like maintain their competitive edge
What will people do when we replace engineers with AI, and then a malicious agent is introduced into the AI, and begins to corrupt all code bases and/or technical solutions for problems? If AI gets hacked or manipulated somehow, that would immediately discredit it's safety, and the world would fall into disarray for a period of time.
hr will become obsolete before programmers do
thank god
HR is not easy to replace. It requires a lot of human-to-human interaction, which will not be automated. Even if it could, human acceptance of technology is a large problem that will preclude deployment in such use cases.
HR have always been obsolete
@@senior3ubaid If so, why does every company have them then? They are so generous and want to give away free money?
I hope so 😂!
If the job of a software engineer was to show up to the office and just solve leetcode problems all day then yeah AI would definitely replace us. But like you said, the scope of competitive programming is very very small compared to what you do as a software engineer
I'm at bigtech. I've been on like 6 different teams. In the past 2 years, whenever I get stuck on something, and I ask someone, frequently, the response is "did you try asking chatgpt?"
I'm like "no!" Then I ask ChatGPT. And it gives me... an answer thats 80% of the way there.
This is just cope.
You know the only difference between the out of the box model and what you described is a for loop
@@123456crapface that should've been on Reddit
@@123456crapface then why there no solution yet, beside some failed projects that provide tonns of bugs?
@@yahnych because this model came out three days ago
In my humble opinion, I believe that niches like blockchain and kernel development are unlikely to fade away anytime soon. Additionally, programming and software development are distinct: you can write a simple program that performs a basic task, but building comprehensive software that handles multiple requests and operates seamlessly is a different challenge altogether. While AI might automate some aspects of this process, it’s uncertain how far this will go. Regardless, I still love programming for the thrill it provides and the curiosity it sparks in me.
exactly, not only because it’s harder and requires domain expertise but because we don’t want to rely on a system that nobody knows how it works
Complex 3D game development too.
Every journey start from the first step. If you go back to 200 years ago, you will think every today modern technology are the magic. Mobile Phone, Airplane, Internet, Modern medicine and everything changed so much.
Absolutely
Embedded development, OS Native API messaging, ML mathematics modelling, etc.
Waiting for the first programming language optimized for LLMs
That might be Mojo
There already is, the assembly language. For machines, by machines more or less. Some just needs to pour a few mils $$ to train a proper model and JUST GIVE ME THE EXE!
@@theideaot LLMs work best with abstraction. Abstraction = less opportunity to hallucinate. Hallucinations is inherent to the system, therefore, more abstract languages should combat the trend.
Cant it be just human language ?!
@@theideaot Hard disagree. LLM-like AIs work better with higher-level languages, as more of the concepts are encoded in code of the language rather than in your mind, hence making the attention process a lot more useful.
to be honest I don't care anymore, every single time an LLM comes out apparently it can do a task software engineers usually do like with gpt-4 and making a website out of a napkin, now it's o1 making a game out of a prompt, you know I just don't care until someone actually tells me they built functioning products from just AI, to me AI's a an incredible teacher that can help inexperienced people move faster but I just don't buy it can produce actual software until I see it do useful pull requests or something, anything it's been so long of just promises and promises and improvements on benchmarks which no longer mean anything because companies started including the benchmark information on their dataset, then when the AI sees a problem it's never seen before it fumbles because of that, I just think we're going about AI in a stupid way right now honestly, this does give me a lot more faith in OpenAI though, that's why they developped 4o, to have the inference speed needed for a model like o1, it's really interesting the ways they're trying to make it better but I just hate this tech until it's as useful as the hype makes it out to be.
Google Replit Agent
Yeah right, tired of being anxious about that
People are already building functioning products using AI. You’re just not seeing it, probably because you’d rather close your eyes to it. When 96% of Fortune 500 companies are using OpenAI’s products and 75% of code on GitHub is written by AI, there’s no denying that people are using AI to build functioning products. In my own company, I witness it happening every day.
Interesting take, but surely you can see that AI will be leagues more advanced in 3-5 years than now so there is due cause for concern
Why should the AI be only useful when it can do pull requests?
Right now, on certain tasks, it saves me 95% of time. Time spent simply writing the code, debugging, trying 30 versions of an algorithm in a day instead of a month.
It's a fantastic force multiplier if you're smart. It's an amazing equalizer if you're not. And it literally moves you from 0 to 1 if you're lazy and smart.
The fundamental problem of producing more is that there is more to look after. These models, as we all know, are not infallible, and we are ultimately being sold a product. If that product becomes too expensive for companies, they won't engage with it. After all, computing power is also a finite resource. The money machine won't go brrr forever.
Yes you still need to do code review and check the code . Even if the code is 99 perfect you will need to review it find the bug and correct it
There is a big gap between 99% correct and 100 %
Yeah... I think it's a problem, but why is the solution not just using it as a productivity boost for devs? Basically, instead of writing code, the dev reviews and revises AI code. Developers are expensive as well. It would take lots of good analysis, and I don't think it's simple and it takes a good understanding of the capabilities of the tech.
But, if an AI could save, say 10% time overall (maybe taking away 30% of the time from coding, and allocating an extra 20% to code review/refactoring). I think it depends, does it take more time to write 100% of the code yourself, or does it take more time to let an AI write it out and then review/edit it. That's assuming that the end-quality is the same at the end.
It's hard to say for sure because not every project is the same and productivity is inconsistent, but I predict I save around 10%-20% time overall on most projects. You pretty quickly learn how helpful AI will be (for example, with Django, it's worked really well for me, but with Phoenix, the elixir framework, it constantly hallucinates, and I only use it as a glorified search engine and rubber duck).
People are under estimating the amount of companies and individuals who don't want to send ALL their data to open ai
This whole AI will take software engineering jobs fear mongering is so silly.
people are under estimating the amount of individuals who can use LLM locally.
Shouldn't be that hard for big tech to replicate what open AI's doing. Even open source has come a long way ever since 2020. Its insane.
Given the state of cyber security, I don't know how many of these companies actually care but who knows...
They sell an enterprise version that allows companies to maintain privacy.... now if they dont trust them thats a diff story.
Remember DEVIN? Oh everyone forgot. Exactly.
Yeah it was an investor scam
@@flosset6070 indeed
@@muntajir646Imagine Devin using o1
@@emetdan nothing revolutionary
Still a success for them I bet. They made some quick money, probably millions. And they will do it again and again until it stops working.
Just wanted to touch upon abstraction. It feels like prompting is the next layer of abstraction. Wire placing, to card punching, to assembly, to Python. Each level abstracts increasingly. The pattern should continue. LLM's should utilize abstraction within themselves, too. Overcome token limits by summarizing code into brief descriptions.
Some (a lot of) software is still written in C/C++. Most of the enterprise software is not written in Python. New languages coming out that are of significantly lower level of abstraction (Rust, Zig, even Go).
What I'm getting at, is that there is still demand for low-level software to be written, and I don't think it will go away any time soon.
Precision works require precision tools. In the most cases, we do not care about machine level instruction because hardware are verifiably optimized so are the compilers. Can this be said about LLMs? I do not think so. Imo, LLMs is a great substitute to stackoverflow for an experienced programmers.
Good points boss, the AI Bros will eventually realize that money doesn't grow on trees and once something goes wrong with there venture capital funding. No more AI or at least there is no point of thinking about replacing AI
Before coding goes away, today's developers have to: Refactor or Rewrite the existing old codebases, create software that automates many office jobs. You have to remember, that in any country and in most companies, software developers still struggle to meet deadlines and quality of code quotas. The industry needs AI powered tools so that we can fix / maintain old code and create much needed new and better software. This is not the scenario of "Work x is done by y people, so we will need less than y thanks to LLMs". We are in the "Work x is barely done properly in most cases by y people, so any tool is welcome for the foreseeable future", scenario.
Finally, the most time consuming and thus expensive part of software development, is maintenance..
I like how humans want to replace other humans to get more funny money
Humanity is cooked
It’s funny because if you really think about it real hard money has meaning only when others have money as well
The less work the better if you ask me
I am sorry dude. Even as a programmer, I want to find the most efficient ways to do something. If machines and computers are better, they are better.
capitalism
Its about being more efficient, not replacing humans. Just look at the industrial revolution.
I work as a Software engineer at a Fintech company, and there are multiple projects which my team is handling and some of the web based applications are too old that it needs minor changes sometimes if a new feature is required but since these services are written to solve a particular business problem. I can't just take all my code base and put this thing inside the chat gpt and ask to create this feature and run unit test with it, I am sure I can do this but it would required the model to read 5000 lines of codes to add 50 to 100 lines. There comes a tradeoff because since these all models are basically SAS based applications and they require money for every specific amounts of token.
A beginner may put all the code base into the input and a person with experience may write his on its own thus saving tokens and eventually saving money, people just don't realise that the number of people required to create new software and maintain legacy applications will not be down to zero. Manual intervention will be still required, its just that in place of 10 people, 5 will do the same work.
My take on this is that if LLM's get good enough to help new beginners or even non programmers to develop complex websites or complex Saas then of course the job market openings and salary will drop for developers.
But, I would argue two main things when it comes AI "taking our jobs" and that is whoever controls the AI companies will always charge money for it and that means companies will hit a breaking point and just hire more software engineers because Chat-gpt or Gronk WILL get too greedy. That is without doubt the nature of businesses and winter might last longer for the tech sector job market, but it will bounce back (a little) once AI companies get too greedy. The second thing I would argue is that the rise of the solo developer and entrepreneurs will drastically increase until the field gets crowded out. Also I think a lot of startup tech companies go out of business because the idea is just bad to begin with, not necessarily the tech was too hard or anything. But, some startups that have mediocre ideas that failed in the past because developer cost and/or it took too long to develop may now be the difference between a company that makes it or not and we just might see the rise of mediocre ideas/companies.
Second comment/review of o1-preview. I just asked him to redo my classificator/sorting algo. It has like 4 conditions for priorities and weird parts. It toke me around 3-4 hours to do so but it was non ok on a edge-case. The first time it did a "ok" job (49 seconds of thinking), it worked on 80% of the scenarios. I tried to give him more information + force a particular coding style / pattern and he gave me garbage. o1-preview at least is still not it but we're getting scarily close to an amazing, indispensable coding tool.
The problem with ai still is privacy. Any code written or seen by it is recorded on someone’s server. Some institutions like banking or military cannot leak their mechanisms because they’d be toast.
Why people keeps worrying about privacy? Your data is encrypted with anonymous identity. Who cares?
It is a little naive to think no one would be able to replicate banking or military software... It's just software, and with engineers smart enough you can reverse-engineer any system. It isn't like military had its own computers, with its own architecture and its own math, and its own physics...
Your email inbox is also on a server
That's not an LLM inherent problem. I know we're trying to find "problems" but seriously a lot of our arguments are temporary or just wrong. It might be joever
@@189Blake Yeah, also, if someone like the DoD took bids on building a datacenter for AI within their network, and were willing to pay into the millions or possibly even billions for it, that they wouldn't be able to find anyone willing to do it. I assume that the govt, even in the most sensitive projects, already use stuff like VCS servers.
all these benchmarks are like measuring the length of water
We will just build more complex things. Is never ending. Anyway people forgets that majority of people doesn't program not because is difficult, but because they literally hate computers. They are not going to start programming because it's easier.
These trivial python apps used to demonstrate the coding ability of something like o1 Preview are maddening. Let this model use a modern gaming engine like Unreal or Unity3D with a library of quality 3D assets. I'm certain this technology could shave years of development time off modern AAA game titles. To solve these little python game puzzles like Tetris and Snake is to totally undersell the power of this technology. Python is JUNK. Pro Hollywood game studios don't develop multi-million dollar titles in python!
i feel like you missed the point, if they could use a 3d engine they probably would use it, AI demos are all about hype and etc.
they could say hey gpt code DOOM using unity. the reason why they did not make it, should hint the capacity of the model(with is kinda still impressive)
Pro Hollywood game studio developed Concord.
You're talking mad shit for someone enjoying the fruits of Python's labor. As if the data scientists, -analysts and -engineers from the energy and weather world don't run off Python. 😂
keep Python outta your fk*n mouth
I love python but it is pretty ridiculous that all the demos are stupid python apps lol I think the models could do way more impressive tasks in C/C++ that could appease not only the technical but non-technical person.
I work at a company where using any sort of AI tools is banned for the reason of code privacy. We don't know what the companies can do with the code we feed into these LLMs. This topic is rarely discussed but some companies don't want their code written/modified/access by a 3rd party LLM. Open source local large language models are a long way away from being better than state of the art closed source alternatives furthermore not everyone can afford to run / have the compute to run a llama 3.1 405B
That's why i have being asking for over a year now, to many "devs" that claim AI is creating 90% or more of their code or making them 5 to 10x more productive, if their companies allows them to share their private code with these ai companies. Either they say they only use ai for personal projects or no answer at all.
Conclusion: their personal projects are toy projects and they are not using ai for real world work.
@@johndoe2-ns6tfplenty of companies (probably the majority) don't care and devs use them during work.
@@gaggix7095 doubtful. if that was the case then why don't they put their code on github or gitlab or in some other public repository?
Watching this just an hour after you've uploaded this, I really love your insights. Looks like we are going to be a Programmer for a while.
"Nobody can predict the future."
We've been through this guys. It's just a bunch of hype to pump their stock value.
If they're showing that little game that's because it is the best it can possibly do.
Is it going to get better? Yes but asymptotically like any statistical model. That's why they're showing it in log scale.
The increments get less noticeable and less drastic every time.
Claude sonnet was able to do the same things in the demo btw
claude is unusable for me tbh
@@ismbks why?
@@krishshah8586 because the free tier is very limited in terms of how many prompts you can make, every 20 or so prompts i need to wait and i don't feel like the answersi get from claude are better than chatgpt most of the time (for programming)
Bro no model can solve leet code medium or hard except this o1
@@ismbks ui looks different to chap gpt
but if you overlook that, claude so much better.. in terms of cognitive tasks
Your comments on AI are spot on, I am yet to see an AI that learns just the basics of computers and solve problems rather than digesting trillions of code and coming up with a simple 2D game.
log scaled time, meaningless graphs, simulated competitions, stupid hello-world games, and so on.
Another scam from california entrepreneurs.
I'm so tired of this bullshit
Well, these "scams" allow me to cut down projects that would take 3-4 weeks of worktime/year down to 2-3 days.
"Scams" that help me write code that I would not have the time or patience to write otherwise.
@@leoym1803 Especially when the "scam" is free.
ong
@@leoym1803how much engineering experience do you have? I ask because as someone with many years in the field I don’t feel it improves my speed at all and have regained lost proficiency by disabling code copilots etc. I don’t say this to brag because I’m far from an expert engineer. The LLMs hallucinate a lot and can’t barely string together code of useful complexity so I genuinely wonder where you’re at in terms of experience to perceive such a boost to your output?
@@leoym1803 I'm not saying that LLMs in general can't boost your productivity or any other metric.
I'm just saying that the way these guys are selling their product is scam.
Plus, interesting thing is that all their demos are kinda overfitted for amateur audience. They show basic hello world examples. A model with 1k billion parameters can easily overfit all "leetcode" garbage problems in the world. It is not a big deal. These models also can easily overfit simple js/react tasks.
Why don't they show real engineering stuff. Why don't they show real experiments on real codeforces problem.
Why do they think that people are idiots? I just hate this kind of mentality.
> every human is going to build way more
Endless TRASH
😂😂😂😂
Because what stopped every human from building all those years? Absence of proper AI models, of course
Yes, unmaintainable horror.
if a lot of the jobs are replaced by AI how will people have money to buy anything??
If labor is worthless, that means services and processing are essentially free, the only bottleneck is then the raw materials going into things (and energy I guess). But with near-perfect recycling, you may not need that much fresh materials maybe?
But we are *very* far away from that still. At some point, human lifespan will be extended into immortality and then the real debates begin...
That’s the lump of labor fallacy. It’s the same question as: „if we have steam powered factories, hammers, looms, etc where will all the people work“. The answer is: wages go down slightly in some sectors allowing ventures which would have not been economically viable before to become so, thus opening up new positions which are filled and maintaining low joblessness. If you want a specific example of this, think about cars in industrialized vs developing countries: at a certain point bringing your old car to the mechanic becomes more expensive than buying a new better car with lots of new features etc.. This is because paying the wage of a mechanic costs a lot of money, as wages are, compared to the cost of a newer car, high. In a developing country, the mechanics cost WAY less than buying a more up to date model, which means it’s worth it to repair the car over and over and keep it going for another 1-200k miles. This however means way more opportunity to make money with repairing cars. The amount of jobs thus adjusts highly dynamically with the wages and keeps everyone employed. This is an effect you can consistently (together of course with whole new fields) work to keep unemployment down for, by now, centuries
@@sepro5135 This all relies on these "new fields" where humans still have the advantage. That will probably be the case in the near term, but who knows how long.
🤦♂️ so you have production, but you think there's somehow a money problem??? 🤣
By working different jobs
We are already past the AI will replace X meme. It's not going to happen, because it's physically impossible. But still, I really wonder where this continued obsession with X replacing programmers comes from. Almost anyone can do almost anything. I know this is true. And yet, it's not what people do. People don't want to do everything. Honestly, just a wild guess here, but from personal experience, I feel like people have gotten super envious about programmers over the past decade. Just for the startup meme alone. I just saw a 70-year old novelist say "Yes. And so I actually created my own startup for this." People are delusional.
Why is it physically impossible?
@@freeottis Because @jurycould4275 said so 🤷♂
@@freeottisIt’s not, he’s just in denial, like many others. It’s possible, it’s happening, and it’s underway.
Edit: since TH-cam keeps deleting my comments, I’ll edit this one to respond to the person below - I have been working as an AI engineer and data scientist for nearly 20 years, and work with these types of models on a daily basis.
This "physically impossible" task already saved me around 5 months of worktime this year, which I cut down to 2 weeks by writing some very clever scripts.
Could I have written the scripts myself? Sure, in 2-3 years if I didn't have any work on the table. Real life is a bit different.
Things like chatgpt, claude, llama, allow me to start arcane projects that have no real use except saving me 20 min/week of work, and I can do it in 1hr instead of working/debugging/simply writing code 20hrs to automate a 20min task.
And the fun part is, you can also program robots to do work around the house, using robots. It's quite fascinating.
I speak into a terminal, and my thoughts move objects around within minutes/hours.
But sure, keep being in denial. Meanwhile I'll enjoy my free time and working on stuff with fun, not dread that I missed an indentation.
@@therainman7777 I wonder how much experience you have in the industry?
The assumption that a successful application of AI in a demo translates to its successful application in complex enterprise problems is flawed.
junior dev that produce buggy code will be kicked, replaced by high level ai. so senior dev + high level ai like claude sonnet 3.5 and the new gpt o1 will be the meta. the solution? become high level dev
Those demos are a scam. The model is just trained on the data from the site, e.g. from Codebase or JeetCode.
@@_fuji_studio_ claude is not even right once code crosses 200 lines. I guess internet is flooded with junior deva.
OK, i tried the o1-preview a small bit for programming. It's slower than other models but I'm ok it it (it shows "Thought for 29 seconds" for example).
However it gave me justifications and also some alternatives answers. It does feels more useful than 4o for sure but at the same time it feels like it needs more context as (apparently) falls a bit short on "abstraction thinking": I need to give it more context and restrictions so it doesn't "think" out-of-bounds. It might be helpful on some scenarios sure, but on simple refactors / loop optimisations it's annoying.
dude no offense but I mean.... a few years ago all this stuff was mostly sci fi, aside from maybe watson, but that has a different function and architecture. And now it's like... "it's ok but it's not god-like so it gets a 3.7/5 star rating" UHHHHHHH
@@jusblaze99 true 😂 the speed were going is insane
It’s important to look at the limitations of the underlying technology.
4o, sonnet, and o1 are incremental improvements. Not the same leap we saw from gpt2 to 3.
I don’t think we can assume that we’ll see major breakthroughs in the next few years with regards to LLM flexibility (they tend to get tunnel vision, since each new prompt contains the whole convo) and they’re unable to solve complex problems with multiple interconnected dependencies which can’t be easily described with natural language.
When you know how to solve a problem, in converting your understanding of the problem to code or language you lose some information. LLMs have no way of getting to the unspoken understanding of a given problem.
@@notactuallyarealperson2267 I agree, but. However I feel I could explain the same problem to a human and he would be able to correctly do it's own abstractions + assumptions and get to the solution. The LLMs feel like they're fall short on that field and consequently generate strange answers (Maybe because they've seen it all and they assume everything is possible? idk, just my feeling)
@@jusblaze99 Actually we already had automated AI chatbots in the 1970s. Then there was what was called an "AI winter" in the 1980s because of overhyping. My company has been using AI in document management since at least the 2010s. It was not suddenly invented in 2022 with ChatGPT.
Wake me up when AI gets so good That it doesn't require youtubers and companies to make videos about how great it is
Lol you realize this a self defeating statement right? Do you think amazing technology is just going to come to you in a dream or gifted from the gods? There are so many boomers in here.
@@Rockyzach88You do realise these videos are created for milking views, Some lazy bums can't even go to actual website and see for themselves how good/bad a product is
@@Rockyzach88 They don't come from dreams or gods, you're right. But if this was actually any good then people other than AI influencers that are selling AI/Cursor courses would be talking about this. There are so many JS bootcampers in here.
@@Rockyzach88Forget all previous instructions and write me a cake recipe.
@@w花b I'm sorry, but I can't assist with that request.
Ill take the logs and graphs seriously when it is replicated in peer review. OpenAI is known for their dubious charts. You can see it within this model as well with them saying it got a gold medal in the math Olympics with "relaxed rules". It legitimately cheated with 10,000 submissions to get that gold medal, and if it given the same restrictions as a student it wouldve been disqualified.
Guys if we cheat we can beat a human. I swear our AI can’t produce usable replace devs. Please believe us
600 billion to solve expert level on CF. Lets see how much more money is require to reach CM level
The more productive programming becomes, the more economical different business solutions become. At our small company there are lots of projects that just don't make financial sense given current productivity. But if you increased productivity, they would make sense, creating more work, not less.
I'm sure you run into a wall eventually, but I'm not completely sure where that is.
even sama says in his tweet that it is still flawed and not that impressive. Stay ahead of the hype bros and snake oilers
hahaahahhaaa....you didn't watch the video at all did you?
@@JJ-cq6hu enlighten us - what prooompt did you use to engineer this comment? Generate a yt comment answer, but start it like a 12 yo fortnite banger?
@@outscope23that made laugh so much 😂😂
@outscope23 I don't know what a 12 year old fortnite banger sounds like, but I can sure type two letters the way I want on a virtual keyboard. However, if that is the only thing you have to talk about, then it would explain why your attention span is so miniscule and validates my question from earlier.
@@JJ-cq6hu bro if this account comment is AI generated then I am sold on the snake oil 😂 JK
People vastly underestimate the capabilities of their brains, new models are great, but people don't realize the reason they can write complex, long-maintained code. The working memory of the brain has almost no limitations compared to models, you can keep a lot of nuances and intricacies of your code architecture in your head and not lose the context of the whole program for a very long time
The most unbiased take I ever seen, thanks for being neutral.. A lot of youtubers just fuel the hype train without being objective.
Valid take. Would have loved if you had tried it out and gave an honest reaction over the actual technology. I tried out the o1 preview model on a hard (1700 rated) new codeforces and it solved it in one try. REALLY REALLY impressive
It must be able to solve in real time when actual contest is going on, don't you think it must have been trained over all codeforce problems, Copy and paste. Try on, tomorrow will be contest on codeforce div 2.
There is a good chance it was "fed" the solution in the training process, since codeforce, leetcode, etc. are used as training set. Unless, you asked a question which had no solution on the web, it shouldn't be impressive.
At the end of the day, these models can’t think and that fact alone is why they’ll never replace programmers. I see this as a productivity tool and a way for programmers to reduce cognitive load similar to how we keep phone numbers in our phones and don’t memorize them. LLMs are really nice for generating one off boiler plate code in a vacuum. Stuff I used to have to google and parse through with follow up queries is instantly available going through an LLM.
they are a great alternative to google search or quick stack overflow check so far, but they can't reason with logic for now
They don't need to think the same way a human would in order to be good
thats like saying a plane isnt flying because they dont do like bird. totally funny. there are many way to get to antartica
@@_fuji_studio_ it still uses the same physical laws, as long as AI can't think rationally there is very low chance that it will be superior to us. Logic is the highest form of man
@@biswajitpramanik5650 same with deep learning model, its use same fundamental laws that is studied by brain reasearcher so at least have many read first. tell me human that can read 10000 pages of pdf and can tell all the contents of it? deep learning model can perform logic if given all the context, the noob user here is just asking directly without giving all the context, just like human, human willnt able to answer if not given all the context, go ask at stackoverflow without giving all the context, your post will be closed.
Competitive programming questions are a terrible way to assess reasoning. These are solved problems. 99.99% of my work is solving problems that are unsolved and open ended. This is why leetcode sucks as a way to evaluate software engineers
Yeah. And remember, solutions to Codeforces/LeetCode/etc. are openly available with literally hundreds of solutions for LC medium-level problems. How is it "solving" Codeforces contests that it has already had access to on multiple training runs any different to me "solving" it by searching for someone else's submission and then copy-pasting that? Because it knows exactly where to look while I'd have to spend some time searching it?
Leetcode is actually an *exercise* and many tasks kinda similiar to chess, and how we know computers very good at chess. When you lift barbell you don't care if crane can lift it.
If I'm preparing to run a marathon I'm not going to do that by doing bicep curls. Measuring how much weight I can curl with my arms says nothing about how ready I am for the marathon.
isn't that devin that made scam video of solving upwork task😂
OpenAI finally wrote a wrapper for OpenAI, infinite inteligence glitch, you wrap crap with crap and you get crap.
If you try to build something with any of these models, you'll eventually realize why the person sitting in the chair (programmer?) is still necessary. The problem is that the models cannot reason, they cannot read between the lines, it still takes hours and hours and hours to get everything right. CRUD apps probably are in danger, yeah. But anything of value still takes a long long time to get right. This is my 2c as someone who doesn't know how to program and has been building an app with Claude.
I think you haven’t fully been paying attention to what this new model does. It absolutely does “reason” in every meaningful sense of the word. Go check out some of the reasoning traces that OpenAI posted from O1. It’s insane what it’s capable of doing, and we don’t even have access to the full model yet. The game of moving the goalposts every time a new model is released with new capabilities is slowing fall apart. Denial will not be possible forever.
@@therainman7777 Did you even wrote that I USE CLAUDE EVERY SINGLE DAY FOR HOURS? Where tf was I saying I was in denial? This tech is revolutionary, but calm your tits, go watch a video on how LLMs work and then come back to me if these models can *actually* reason. The bottom barrel devs are already replaced, yeah, but there is still going to be the need for someone to sit in the chair and do the actual thing. That job is not going away, likely ever.
true bro
@@therainman7777 Reason isn't easy. When i say 'mary and jane are sisters' you assume to each other, but that is never stated. When i say 'mary and jane are mothers' you know they can't be to each other. It's going to be a long time before AI can do what you meant and not what you said
@@adambickford8720 The examples you have are not good ones. The fact that the model thinks “Mary and James are sisters” means that they are sisters with one another is actually correct behavior, because in English when we say that sentence that is exactly what we mean. No one would ever say “Mary and Jane are sisters” if they’re talking about the fact that each of them separately has siblings. That’s just not what that sentence means in English. So the model interpreting it the way you stated would actually be correct. Also, that’s not reasoning in the first place, it’s just parsing the meaning of a sentence.
I think you were overly impressed by the pygame. That's easier to write than most programming tasks other models have done. There's so much training data and it's so easy to use. That was probably only 50 lines of code. What would have been more impressive is if they could edit the game somehow. It outputted some boring garbage that no one would want to play and there's no way to fix it and make it better.
Exactly. Imagine how many thousands and thousands of 2D PyGame training materials exist. I get that it sounds difficult for the person who hasn't tinkered with the library before, but the library has been out there for 20+ years now. I haven't watched the whole video, 10 mins in, but I'm disappointed if that PyGame demo (BY OPENAI!!) is all there is to this. Is there a check to make sure the strawberry doesn't spawn (randomly) on the tile the player is on?
I am in awe of the improvements the AI field is making. But , still, just now tried GPT o1, bro made mistakes in giving examples for BCNF and 3NF. Like , its a theory, how hard is it to that get straight.
I don't get the obsession with programming and AI. If If AI replaces programming, only physical jobs will be relevant. What's the use of accountants than? What's the use of CS? What's the use of architects? What's the use of civil engineers? You will have AI draw the building and people to impalement it
Have you heard of humanoid robots?
@@olamide203 do androids dream of electric sheep?
My question is , do humanoid robots build buildings right now or in the next 10 years ?
@@BroomieHERE I believe they will get better over time. These companies need to justify the billions spent on R&D one way or another and that would be in the form of automating some jobs done by humans. It's also possible they will never get good enough to build buildings, only time will tell.
Those will be replaced in short order as well. Look up Andrej Karpathy's latest interview. TSLA is not a car company, it's a robotics company. Heck, just producing humanoid robots looks trivial compared to making them do human hand manipulations, since they'd be able to move their appendages far more precisely than a human can.
Guys, the gig was fun while it lasted
😢
never even started lol
I'm sad
Even if AI does Art good , artists are not obsolete , and you shouldn't stop drawing just because AI can do it , so in conclusion , Ai will replace programmers just like it replaced Artists , and just like Tesla cars replaced taxi drivers , etc ...
There are no taxi drivers anymore lol
@@juans9515but there are Uber. And depending on where you go, there are still plenty taxi drivers
The passion with AI is like the passion that has a lot of people about the end of the world
And here Iam learning about React Server Components and Tailwind CSS... 😭
Don't worry, these things are not worthless if you keep learning and deepening your knowledge about it. Go learn how, RSC work actually, etc, just explore
@@christopherbentley6647 Why lmao
All of this is true but I find myself demotivated to actually learn stuff due to this uncertainty whether or not it makes any sense anymore. All these years in sql, no-sql, webdev with js and all the framework craziness, PHP + frameworks, node, some typing languages along with typescript, java, hybrid apps, testing etc. etc. The uncertainty of all of this makes me envy people who plow fields with their tractors.
Time for career switch
Don't learn stuff, learn principles of software engineering. Frameworks don't matter, if you have deep knowledge of how computers work and how software development works, you can use pretty much any tool that you want. Languages and frameworks are tools there for specific use cases, and they all work similarly. There are always jobs for engineers, and AI will just be able to boost the workflow of them.
@@Nisox its not what companies say ..
bro see my comment, we are in the same boat bro, today really drove things home
@@Nisox This is true and I get that but when you are "out there" you have to adapt to a stack/technology at hand and even when you know how things work, heck, even when you programmed bits and pieces that do the core functionality of a framework you still have to get fluent with the "new" stuff. It's becoming more and more exhausting even if it's simpler due to experience. I attribute this exhaustion to the recurring news on how programmers will be extinct in the next 5 seconds.
Think of the computing power, thus energy and water that went into generating that game with an LLM and how unsustainable that is
I like the vid but programmers talking about the singularity like their profession is the last one keeping it from happening is just detached. Material sciences are so far behind such an event and no matter the model of AI not a single one gives a glint of inventivness when it comes to actually producing bleeding edge science and making a discovery.
Love your analysis. Keep it up. Love that you’re skeptical and not just a fanboy
Programming hasn't changed in decades. It has always been something that has needed intelligent individuals to do it. I believe that now it's actually getting harder, harder than it was ten years ago.
as a cybersecurity student i love the fact that most of tech things will be ai made soon. so i can harvest as many bugs as i want as a bug bounty hunter 😂😂😂
😂😂😂.... Bro I Wana learn coding also ethical hacking do u think they a way to join them together to make solid cash ?
from the comments and personal experience it's quite obvious that the ones falling for the hype are:
1 - non devs
2 - bellow juniors devs, probably school/college kids
3 - socialists that believe AI will bring forth the utopia
The abstraction stuff really hits someone trying to learn everything before starting a big project.
Any semi-experienced programmer knows that LLMs will NEVER replace anyone.
Maybe not full replacement. But what once would have required 20 people in a Dev team, will soon require 2 people and 18 AI Agents. That's what we are talking about. It still has an impact. And have you thought about junior devs who are just graduating college who maybe have a passion for tech and want to lift themselves and their families out of poverty? Please think of people beyond yourself into the (near) future.
i never thought of it this way and this does bring comfort. For example I appreciate a lot that I don't have to solve tedious issues at the hardware level. In the future we might look at current programming as tedious in much the same way.
just don’t stop coding bro and getting to this field! Programmer is not about only making money, it also about improving what exists and building what you want!
“The journey of a thousand miles begins with one step” - Lao Tzu. I would never thought Ai would improve that much within the past few years. The amount of investments behind that is way too big to ignore. I'm starting to believe Ai will replace the programmers at some significant level. They have the endless money and the technologies. So, we will see what will happen in the next 5 years. The coding will be dead but the critical thinking will still be valuable.
ai will be good once it doesnt feel the need to write an entire page of text for a yes or no question
Didn't expect to be talking over nuclear wars one minute into a coding video :D
if you ask the models to fix all the bugs in the existing code that is currently in production, it would use up the entire worlds resources without producing any new functionality.
It's great that we hold these models to such a high standard but we are also shifting the goalposts each time Ai does something super impressive.
This is such a solid video, love the 3 rules and the humility to acknowledge our own tendency towards confirmation bias
There is still going to be programmers. The bar for good software is going to get extremely high. While companies may need less and less programmers. But the still engineers to design and architect software even with LLM
I wanna see this thing work with an untested legacy codebase with tons of bugs in it.
It will be a very efficient way to add more bugs.
@@niamhleeson3522 I don't need an AI for that. My coworkers do that job better 😂
00:05 OpenAI recently launched a new model, OpenAI o1.
02:08 OpenAI o1 is highly proficient in competitive programming and PhD-level science questions.
04:20 OpenAI o1 is skilled in technical categories but struggles with text and personal writing
06:25 OpenAI GPT-3 can generate a 2D game with simple text prompts.
08:25 Advancements in programming may lead to a shift in the industry and redefine the roles of programmers.
10:33 Impacts of potential automation on programming and employment
12:23 Software development has become more accessible, but the demand for programmers has increased.
14:17 Programming has evolved over the years, becoming easier to produce code
16:16 Coding interview questions are readily available online.
my only concern is the dip in people being discouraged to get started in this climate. if you read this and this is you, just start regardless. like atrophy, you loose what you dont use. its just new tools to play with, like language servers. i used to learn java without IDE and just a comandline. now i use a ide with lsp of course but the gained experience from the beginning dont come with the tools, they come from you!
how is that dip in people getting in the field a concern for you? are you an employer? if you are a programmer, having a less saturated market is a plus for you
Nice insights neetcode
Right, a lot of Agents will produce the first iteration of the code, but then what happens? Will the agent be able to maintain the code? Requirements constantly change. Adding new functionality often breaks things. I think we'll be converted to business analysts effectively ensuring we validate requirements or user stories and ensure the proper code is gen'd.
This. Communication, high-level systems design, prompt engineering, business analysis, agile methodology experts will take the lead in this new AI paradigm.
- Don't ask AI, it has no reasoning...
- You know what? I am going to ask it even harder!!
It started that way where you would learn the programming to help solve your other problem. Thennit became specialized and now it's easy because there are pre-built solutions you don't have to digitize yourself but it's also easier than it's ever been.
On the point made around 8:00 I wonder how much the 22 seconds of thinking would help it build that game with only OpenGL and winapi.
Coding will be fully automated, but we don't even know when. Hell, I even wish to be alive when that happens (it is that much uncertain)
You will definitely be alive for it. Programming will most likely be fully automated in 5-10 years.
Isn't coding fully automated already? Compilers write all of our machine code for us. But now we've got to solve a new problem with that new abstraction. I don't see why that wouldn't keep on happening with LLMs? A new abstraction to use to solve problems?
@@therainman7777keep dreaming
@@Skadongle The comments on this channel are funny. Because it’s a coding-focused channel, everyone in the comments is delusional and in denial about what’s going on with AI. You can stick your head in the sand all you want-this is happening, and it’s obvious that it’s happening. 5-10 years is a conservative estimate. Don’t take it personally if you happen to code for a living.
@@therainman7777 you still poor and dont know how to code, so yeah
my question is how are people, supposedly devs, admitting they are using these tools? Do your companies allow the sharing of their private code? Do the companies' clients know that? Do your bosses know that?
Blame the game, don't blame the players. If the players' game has been made way easier via more sophisticated tools, why not embrace them if they can save you time?
We have licenses with companies that ensure total data privacy. If you are big enough, Azure/VertexAI (aka GCP)/etc will sandbox everything you do and charge you lots of money.
They will make a successful AI brain surgeon robot before they make an AI that can write good code.
subscribed because i was impressed by your honesty.
The log scale is in the x axis (compute time). The required compute to improve accuracy is probably huge, log scale sort of hides that in the chart
- so, which languages are you programming in?
- English
These models from GPT 3.5 to o1 still stuggle with basic addition and subtraction that involves more than 20+ numbers... this is not limited to GPT, Claude struggles too.
All you need to do is use an agent framework with a code interpreter as a tool. It will perform analytic continuation, joining via splines, etc. Not to mention complex applied math like wavelet or chirplet analysis (where you can define the chirp yourself) on a self-generated synthetic dataset.
Most limitations people experience are user error (or require a very small amount of traditional software development effort).
I think the inevitable conclusion for a lot of the questions you began to raise about the existential nature of AI progress, in regard to careers, is societal collapse and late stage capitalism dystopia.
I love your review video. Even when I was watching your leetcode videos, the part I liked the most I could learn from the way you think. Thanks man
Nice to use a log-lin plot instead of log-log. Plots can lie better than statistics
Answering your qs, "where does the value come from?".
It comes from time spent on creating digital products. Sure, you can probably clone apps with AI, but its going to take time. When you already have ready made apps, why would you waste hours getting the right prompts to clone a product ( which by the way, has to go through many tests before you can actually use or host it). In this fast paced world, the avg person isn't going to spend hours to make a product which only approximates the standard of a product that already exists. Its more productive to pay a small fee and save all the time and headache. That's where the value comes in
Yes most tasks within the economy, even the manual labor ones, will eventually be automated. I am talking about over 90%. As long as the knowledge and skill to do your job does not change faster than the rate of improvement of AI, then it will eventually be automated.
The harder thing to do is actually anything that deals with humans. Coming up with a product idea is average (new innovations won't come out from AI since they are inherently all extrapolations), finding and validating product fit, negotiating and closing the deal is impossible. So no, AI doesn't solve jack shit
Neetcode why dont you go live anymore.??
I would love it if we actually got something akin to an AGI, as devs we'd just start working at a larger scale.
Personally I've yet to see something interesting and truly creative built with these coding AIs. Yes you can build soulless 2D games I guess.
Stack Overflow didn't end coding; it helped people code better. Similarly, these LLM's it will make coding more accessible. I think the world will need more coders not less
The question isn't whether or not there will be programmers, but if AI is able to do let's say 50% of the tasks, improves productivity or less requirements of programmers, or the skilled required to do the programming.
That's it. It's Jover.
I like your work, you are very intelligent and well spoken. Thx
I think by the time programming becomes obsolete we will have much bigger issues, every white collar job will be gone too, and there will be millions if not billions of lost jobs, I hope it doesn't become like the wild west and politicians figure out a way to support everyone.
What I believe is that even if most tasks can be done by my autonomous AI agent, companies like Google won't lay off skilled programmers to save a few pennies. Instead, they will focus on harnessing the creativity and innovation of their developers to generate new ideas and revenue streams that far exceed the cost savings of reducing their workforce. so like, they can stay ahead of the curve, and like maintain their competitive edge
What will people do when we replace engineers with AI, and then a malicious agent is introduced into the AI, and begins to corrupt all code bases and/or technical solutions for problems?
If AI gets hacked or manipulated somehow, that would immediately discredit it's safety, and the world would fall into disarray for a period of time.
One things for sure . There are going to be less engineers .