People around me have been pushing "natural language code gen" for a while now in the data analysis space, to which I say: anyone who can execute a clear and unambiguous data ask using _natural language_ more efficiently than they can construct the ideal SQL query or DataFrame op is a savant, of one form or another.
I want to see how the future "pro AI" managers who fired all the developers do when the client tells them the app just stopped working, with no other details, and they have to find the error in a 20k-line codebase that passes hundreds of pieces of state up and down the component tree like a seesaw.
@@Gawroon7 I asked whether by "there are two Rs" ChatGPT meant that there are only two phonemes of R. The reply was very off, something like: "Yes, I mean actual graphemes. Even though the second R might be hard to perceive, there are still 2 Rs in the word 'strawberry' in correctly spelled English." It's very funny.
You guys realize that they will get cheaper, right? It has not even been 2 years since ChatGPT 3.5 was released. It's been about 7 years since transformers were invented. So 7 years at most: about 1.5 years of large-scale effort, and 5.5 years of niche work before that. Keep coping; how old will you be in 2035?
So, apparently this new million dollar idea from openai is just a self-proompter? Ironic how prompt "engineers" got replaced way before programmers ever could be
I've never been more unsure of a joke. Are you saying it's easy to write proper HTML, it's just that no one does it? Or do you think it's hard to write proper HTML because everyone has their own opinion or something? Because it really is easy to write proper HTML; nobody does it because they don't see learning it, or taking the time, as worth the effort for their genius brains.
another man with a decade of engineering experience, and a CS degree, using AI will* Which is not too different from what was happening before AI: there have always been guys who are drastically faster than the average. The issue is that they're always rare, and as tools and tasks become more complicated, they become rarer.
@@rumfordc Yep, exactly. It's an eternal regularity, and "using AI" is a coincidence here. They will win not because of "using AI" but because of being "at the top of their game", which *coincidentally* may now involve using AI, or may not. Different times, different tools. You may even find your own. Looking at the broad picture, it's "staying ahead" that matters, not "using AI" per se. Those are not equal yet and hardly ever will be, at least for some parts of the IT industry.
@@rumfordc There will be a day when AI won't need humans for anything, and it is coming within 5-6 years. So your quote, HUMAN USING AI WILL REPLACE HUMAN WITHOUT AI, a parrot quote repeated by many AI supporters, is blind and misleading. They are working to make AI more intelligent than humans; they won't need human intervention in AI.
Most of my job as a software engineer is meetings, design, documentation, and watching Fireship. Sitting down to code probably only accounts for 20%. I'm either totally safe or I'm doing it wrong and I'm in imminent danger.
I'm a data engineer. I spend more time talking to humans to figure out the requirements, quelling indecisive humans to create the requirements, translating the requirements into foundational/architectural decisions, clicking some stuff in whatever cloud tool I'm using and then, for a brief period of time, I code and maintain some intermediate level SQL in an 800-line query.
It's exactly how it should be. People just don't know how many projects companies (mostly the big ones, speaking from experience) have on hold or delayed. For at least the next 5 years, I guarantee there is no need to panic; it will push more interns/juniors into projects they wouldn't have been able to join beforehand. The question should instead be: what happens in the far future if there aren't enough projects (or enough need for more)? It's less likely in the upcoming years, but I'm sure it's a very plausible situation. And there is a rise in CS degrees already, so yeah, there is a case here, but at the very least not in the near future.
Your job isn't in danger, at least for now; it's juniors who should be concerned, especially the ones graduating in 3 or 4 years. The barrier to entry has grown and will keep growing exponentially.
I think it's pretty amazing they managed to build the equivalent of an all-knowing but also friendly and helpful person on Stack Overflow, considering the lack of real training data.
If only a PhD were about skills like programming and solving equations. Literally every PhD student uses solvers for anything more complex than basic calculus anyways. The challenge of a PhD is learning how to think about things in unique ways and pushing boundaries and exploring new possibilities.
I've been seeing people freaking out about this new model, "it's better than PHD humans at X,Y,Z!" where X,Y,Z basically amounts to data processing... like oh my god??? A computer can process data faster than a person???? WHAT???? lmao
Literally any modern computer can process data 'faster' than a human brain. Because a human brain is doing a whole bunch of shit at once in ADDITION to that data processing, while a computer does far less at any one time simply maintaining its 'active' state, and therefore has more processing power to allocate for useful computation.
"It can beat programmers in olympics" Yeah if given unlimited amount of submissions, those same issues that are either ENTIRELY on the web or every single concept is on the web already, most of those olympics are for undergrad students
What exactly does "it can beat PhD students" mean? I suspect it's just faster at pretty well-known problems that are well documented all over the internet lol, so totally worthless.
@@gabrielbarrantes6946 well, it can either mean beating them in a fist fight, or getting more correct answers than they can. I'm not sure which one though🤔
The fact that everyone is forgetting for some reason is that AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society. Why should anyone be excited? We are witnessing the start of something really bad.
more likely a limitation from how the tokenizer breaks the word down (i.e. it's not aware of individual characters) than something fundamentally wrong with the model itself.
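For illustration, here's a rough sketch of why subword tokenization hides character counts from the model; the token split below is hypothetical, since real BPE tokenizers produce their own merges:

```python
# Hypothetical subword split of "strawberry"; a real BPE tokenizer
# (like the ones GPT models use) produces its own merges.
tokens = ["str", "aw", "berry"]

# The model sees opaque token IDs, not characters, so the three "r"s
# are buried inside the pieces rather than visible as units.
word = "".join(tokens)
char_count = word.count("r")
print(len(tokens), char_count)  # 3 tokens, 3 r's
```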
@HessW No, wronger, it's even deeper. The car with his horsepower would bestow the horse, revealing a zero sum. Which after would divide the AI capability of coding.
@@jamaludeenameen5361 The phrase "A car won't take your job, another horse driving a car will" can be interpreted to mean that technology (like AI or cars) on its own doesn't inherently replace humans or living creatures in a direct way. Horses can't drive cars, just like AI can't independently replace the complex, nuanced roles humans perform. Instead, it's humans who use AI or other technologies effectively that change the job landscape.

In the context of AI, this means that AI alone isn't going to take jobs. It doesn't have the inherent ability to think, adapt, or make decisions like humans can. Instead, humans who adapt and incorporate AI into their work will have the advantage. They'll be the ones who change industries, outperform their peers, and potentially replace those who don't evolve with the times.

The point is that AI, like a car, is just a tool. It requires a driver: someone capable of steering it effectively. The future of jobs won't be one where AI takes over, but one where people who master AI technology will reshape industries, and those who don't learn to "drive" will be left behind.

In essence: AI won't replace humans because it isn't natural for it to perform human roles. But humans who learn how to harness AI will redefine how those roles are performed, much like a person who learned to drive a car left behind those relying on horses for transportation.
Very true. All these AI models look amazing, but once you use them for anything besides rudimentary stuff, they fall apart very quickly.
But each version pushes further up against the rudimentary limit. The first cars randomly exploded and had to have horses travelling behind to carry extra fuel.
@@michaelnurse9089 You can't equate past advances in one field with advances in a completely different one. Quite a few parameters are different. You can, however, try to formulate rules for technological advancement in general. Processes like these tend to follow a logistic curve, and the question is what point of the curve we're at right now. I would argue we're about to hit the plateau.
@@michaelnurse9089 Many people are forgetting for some reason that it's not only affecting developers. AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society. Why should anyone be excited and be joking? Now this is what should be concerning, nothing else. We are witnessing the start of something really bad.
Unemployment will go up in the future and people will wonder why. Very few will get a lot richer; the masses will be poor. We're just really bad at thinking about the future and the consequences of what we do. Just look at how long we've already known about climate change.
I've been having a blast with it. I used GPT-4 to set up the bare bones of a MUD-like text game. I've got a compass in every room showing the direction of exits, inventory, can equip and unequip items, drop items from inventory, pick them up, place monsters, and really simple combat (saving the in-depth stuff for later). But what I couldn't do with GPT-4 or GPT-4o was make a top-down map that shows all the rooms and their connections in relation to each other using just unicode characters. No matter how I tried to break the problem down and describe it, I just couldn't get useful code. o1 produced the code and put in a legend. I'm talking with it about branching dialogue solutions and think it may be able to help me import TWINE exports as JSON as a solution for doing branching dialogue. I literally could never have done any of this without these tools; I'm in love.
@@Demoralized88 I played Gemstone IV briefly years ago. I don't think I ever gave DragonRealms a try; may have to rectify that. I mostly played around in the infinite supply of mediocre MUDs, searching The Mud Connector and similar listing sites.
Fuck it, I’m becoming a plumber. I’m also tired of these “snake game” examples. It’s just a glorified google at that point. Tons of snake examples on the web.
Buddy, the robots will be the plumbers. No job is safe. Plus, you're not even guaranteed to become a plumber, since the workforce will be saturated by all the people who lost their jobs turning into plumbers.
I used to be hopeful that AI could help me out a little through school, but if this stuff is already doing PhD-level physics, I might not have school to finish.
@@ryzikx Now the calculator can automatically do every job on Earth at 100 times the speed you can for 1/1000th of the cost, so you have no reason to be alive according to Capitalism
@@maxave7448 We're getting better at making software that throws sh*t on the wall and sees what sticks. Also known in the human world as a sh*tty programmer.
Many people are forgetting for some reason that AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society. Why should anyone be excited? We are witnessing the start of something really bad.
@@RokeJulianLockhart.s13ouq Well, if an AI someday gets created that is as smart and conscious as a human, if not more, of course it can replace those jobs I mentioned as well. Edit: Before you mention it, I know there is no such thing yet as a conscious AI, and hopefully never will be. The speed of change in society would be so quick that it would mean hard times worldwide.
@@Tozu25 LLMs are search engines, like Google is. They're nothing more than correlators. They're not a form of intelligence, as their confident incorrectness when they get stuck in recursive loops demonstrates.
@@Tozu25 It is used as a tool; stop being dumb. You need human interaction even in programming. It's not like I would give an AI model full access to my business.
Is it just me who feels so sad that words are disappearing from the internet ? In this video, the word drug is censored just to please an algorithm. The other day I even saw someone who censored the word hate in «she hates being called wifey» smh
Many people are forgetting for some reason that it's not only affecting developers. AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society. Why should anyone be excited and be joking? Now this is what should be concerning, nothing else. We are witnessing the start of something really bad.
@@Tozu25 I'm not sure whether this will really replace doctors and the like. Being a surgeon or dentist requires very fine motor control, extremely reliable expertise and knowledge, accountability, personality, etc., so as not to make a single mistake and to always navigate the patient's ill state perfectly. AIs and robots, which at this stage are far from known for rigid foundations in any of these things, definitely have no ability to take any of these jobs. Moreover, if we really do eventually "solve" jobs, so that no one ever needs to work again, then we can rejoice that no one will be required to toil again. Things like UBI will become possible. The real doomsday scenario is if AI only succeeds in taking creative and artistic jobs, leaving humanity to do all the dead, manual labour. That is what I fear, not that doctors or actual trained professionals will be replaced.
@@spaghettiking653 I was diagnosed by an AI chatbot when I got my paid sick leave. I told the AI my symptoms and got questions, and then a real doctor signed the digital document and left. So it's already happening. As with anything, the AI does the task and then someone checks the result. But it's good that you are critical about AI and looking both ways. You're the first one out of anyone, and I've spoken to like 15 people. That says something about your intelligence.
@@Tozu25 No, mass unemployment = new economic system and a break from the relentless capitalism dystopia we're experiencing. In big cities like London, regular new graduates can't even afford to buy houses on good salaries. The system is bullshit and needs to be torn down.
Before, GPT used to be bad at even basic force questions. But I gave o1 my fluid mechanics problem and it was able to do it, and I didn't even upload the diagram pictures. It's gotten really good now.
OpenAI needs money, so it releases some reskinned GPT-3.5 that secretly asks itself "are you sure" and sends the response after that to the user, to maintain the hype, the investor money, and Altman's job. Same bubble. Same hot (AI)r.
Yeah, this was plain disappointing. I was expecting some major architectural change with all the hype around 'Q*', but this is just another chatbot, except it's trained to ask itself 'are you sure about that?' a couple of times and provide long CoTs, with a fancy UI to hide the complexity from users who don't know how to prompt worth a dang.
Many people are forgetting for some reason that it's not only affecting developers. AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society. Why should anyone be excited and be joking? Now this is what should be concerning, nothing else. We are witnessing the start of something really bad.
@@mr.nixtheboarddrawer1175 Well, the future products made by AI are not going to be handed to you for free, unless society becomes socialist, and I don't think that's any better.
@@Tozu25 None of that is going to happen. I wouldn't trust AI to do heart surgery even in 1,000 years. AI is AI; it's all guesswork. I would be more scared of *computers and simulations, as they actually involve math and physics, while AI just involves numbers multiplied by numbers multiplied by more numbers that eventually have an error small enough that it works "good enough"*. Imagine that as your doctor: a doctor that MAYBE quite POSSIBLY will do the job right. Also, do you really think everyone's gonna lose their jobs in one night? Have you considered *us humans wanting the same thing as you, a livable society, preventing any of this from happening, finding a solution, doing anything to make it all work out?* tl;dr AI is guesswork, and we should worry more about nukes and simulations, as simulations actually get the math right (AI cannot make complex simulations, because it will get this line wrong or this number slightly off).
Both Angular and Firebase are currently being re-obsoleted (by React, htmx, and Svelte, or some combination). Firebase was dead about 8 years after it was born. Most wise programmers never used Firebase.
That's what I stumbled across. A channel that's supposed to be Firebase documentation is doing all kinds of crazy stuff in the name of Firebase. How could that be? Thank you, now I get it.
Call me when it can become a professional poker player or blackjack counter so I can make millions at Stake, or how about a pro stock trader or something? Why has no one used openAI for this yet? In the future OpenAI might run entire countries GDP systems💀 Welcome our overlords.
3:17 I've just tried asking the o1-preview model `How many "r" in the word strawberry?`, it answered 3 "r"s correctly at first try. Then in the same chat, I switched to 4o model, it said 2. 🤷 Then switched back to o1-preview, it even apologized for the mistake in the previous answer made by 4o. Pretty smart to me. 🎉
2:08 the reason many people are moving over to Claude is because Claude isn't censored and is more useful for things like generating erotic content and conversations that don't sound like you're talking to HR, which is all that the majority of people care about. The o1 model is going to be great for jobs, it's a little more reliable for perfect answers, but the problem remains that corporations want something that's specifically useful and not generally useful, a lot of them have internal systems and custom setups that don't generalize, and they worry about data leaks, and would prefer the ability to run all of this in-house. The majority of AI users are fine with some generalization, can't afford to run the best ones in-house, and want it uncensored. Unless Microsoft can stay ahead, people will move on the moment something almost as good comes out that isn't censored, and Microsoft will be stuck catering to corporations who have demands.
You're thinking about this all wrong. Consumer software is not where the money is at. Most profitable MS divisions are all centered around business products. They obviously want to sell AI to the business first and foremost. If you thought MS expects regular consumers to buy the Copilot+ computers, you're dead wrong. They don't care if literally no one buys it. Because business will eat that shit up. And big companies will pay insane money to get as you say their own specialized AI solutions. While things like Claude, will struggle to finance anything after they run out of venture capital.
In my experience Claude censors more. I tried asking it a question about what a stolen vehicle could be used for (a screenshot from a driver’s license exam) and it said nope. Chatgpt answered it.
I mean, as a professional dev, it seems to me that 74.2% of problems are the first 10% of time spent on a project the other 90% is the other 26.8% of issues, and we're still safe there. It's actually nice that AI will get us there quicker.
This is always my question, but the answer is always hard to find: where would they even get all these completely original coding questions to test these models on?
The core innovation driving o1 was made public about 6 months ago. And it really works, but we still have a long way to go. I tried it on 2 challenging problems, and it almost didn't suck.
I expected something crazy, but when I saw the benchmarks, they're really not that groundbreaking. o1's reasoning token paradigm serves as a middle layer for handling complex instructions, so it's more internally organised, but that doesn't necessarily mean the underlying architecture has substantially improved. Coding, maths and science are all topics where handling information in a purely linguistic context by default is detrimental, so it naturally follows that it would be more effective to logically deconstruct problems. However, you might see similar improvements with any other LLM by manually creating an intermediary prompting stage. This is still an improvement, but remember, a significant leap ahead at this stage would mean something as groundbreaking relative to transformers as transformers were to RNNs, and this is nowhere close. Make no mistake, this is part of the plateau. There will still be progress, and we should be looking to concentrate it on building tools to aid developers, rather than attempting to replace them.
I'm pretty sure GPT-4 also prompts itself, at least somewhat, because I remember one time it accidentally showed me its internal prompting. It said something like "user wants to understand blah blah..." then abruptly switched to explaining what I wanted.
You are correct, this is something ChatGPT does. It basically tries to create a more sophisticated prompt out of your prompt before actually addressing it. However, what these new models essentially do is check their answer and try to sanity check themselves several times before giving you the final response.
@@Caphalem I figured something like that. I just thought this distinction wasn't totally clear in the video, or maybe I wasn't paying enough attention. Thanks for the reply
Hi Jeff, I'm writing this comment to delightfully let you know that I absolutely like the way you do the "last kick" at the end of your videos sometimes. Beautifully crafted kick! Thanks. ❤
Well, what you just said is quite obvious, because if we think about it, no company is going to redesign the entire algorithm again to come up with a new model.
@@SkegAudio Nobody does the developer / comedy / memes / but still informative style he has. He's one of those "never miss a video" channels I have to watch on the spot.
As long as it can't solve the _"Okay, so hear me out."_ problems the client has with all the help of _"I'm sure you'll figure it out!"_ and (of course) no further details, I think my job is pretty safe.
To me, the worst part is that it fails the strawberry test. For something that is a recursive self-prompter, it sucks at prompting, because constructing a proper prompt is literally the easiest way to pass the test.
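For what it's worth, the counting itself is trivial once you drop to the character level, which is the kind of step a self-prompter could delegate to a code tool; a minimal sketch (the function name here is just for illustration):

```python
# Deterministic letter counting: the kind of sub-task a recursive
# self-prompter could hand off to a code interpreter instead of
# answering from token statistics.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```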
As a coder and developer, I have no fear of "LLMs" taking my job. A lot of the stuff I code is too specific and niche for an LLM to figure out without having hella bugs.
@@fullstackweebdev well said. I push back on the garbage requirements I receive and help point the customer in the right direction for something more sane. A.I. will happily write a clucking fsck.
Based on what you said, I think this confirms that they are now at the phase where they're doing clever implementations of the LLMs and being more specific about what each should generate well. In my opinion this is a sign that the technology is maturing, and the real, potentially world-changing products are coming. But it may also be a sign that this technology is at its peak; when you can't go up, you go sideways.
I love how, by this point, people should've already realized they shouldn't freak out when new AI DLC drops, yet it all follows the same hype trend. They keep being like "oh, but this time it's for real", but until we see a real and fair example of it actually doing all these revolutionary things, it's illogical to assume things will be any different. It's not copium, it's just a matter of proof of concept
Just use it. AI has become an amazing pair programmer and conversational wiki page. I like bouncing logic off of it and getting its feedback, and its ability to answer questions I'd normally send to Stack Exchange. If anything is losing its job, it will be Stack Exchange 😂 Just leverage the tool already. What it will make obsolete are low-level junior programmers with no AI skills, because AI fills in skill gaps. Junior devs will be expected to do more, and senior devs will be expected to do more. If anything, AI will just make our jobs demand more of us; we'll be expected to have faster turnaround times or to ship twice as much code.
@@jonwinder6622 I mean, good luck throwing a 10,000-line project at ChatGPT. As a matter of fact, go and create a simple 2,000-line Vite project; let's keep it small and simple and say 10 scripts of vanilla JS, a simple small game on an HTML canvas. No AI in the world comes even close to having enough tokens to even just read through that small-af project, let alone provide good additional code that doesn't suck absolute balls without spending hours proompting, at which point you may as well just write it yourself. AI is cool for stuff like: "How did flexbox go again? I'm too lazy to google, AI do it" or "ah crap, I forgot the syntax for a switch case in some niche language; AI, you do it"
Experienced the same with coding, its initial output was impressive but I also hit that limit pretty quickly on what it could accomplish and it failed at certain tasks. So a marginal improvement from GPT-4o, which in itself is pretty impressive. Another huge leap in capabilities is still hard to imagine, but looking forward to it.
@@w.mcnamara I just remind them of how crazy their ideas are. I remind them that they are claiming that linear algebra and statistics have literally become living beings and can now reason like humans. The hype is just silly at this point. I just ask them how it's possible that literal math became conscious, and I never get a reply back.
@@iraniansuperhacker4382 Probably the same way a few neurons sending signals back and forth can become conscious, aka we don't know. We don't know what consciousness is, what you need for it, or how it comes to exist. Maybe even math can become conscious, who knows. That said, I'm not saying any AI is conscious, or even that it will ever reach consciousness, just that we don't know if it is possible.
@@micca971 I would go as far as to say that math being processed on a silicon chip becoming conscious is physically impossible, no matter how complex the system is. This is like saying that if we write a sufficiently advanced piece of literature, it will eventually be able to think or reason in some way. It just fundamentally doesn't make any sense.
@@iraniansuperhacker4382 That's not the same at all. A piece of literature does not compute or process anything; it does not receive and manipulate energy, therefore it cannot do anything on its own. If, however, you said a lot of monkeys were writing books, then possibly the entire collective of monkeys writing books (a lot of them, trillions or quadrillions at least, or maybe more) could become conscious, or at least exhibit intelligent behaviour, as we see with current AI. Aka it's not just about complexity; it's about manipulating energy and data using some logic. Also keep in mind this is all very hypothetical, but you can't say it is fundamentally wrong. We just don't know.
Something to keep in mind: AI might get better, polishing existing tools etc., but in reality, to ship a production-grade software solution you always need a bunch of people, human thinking, applying exception rules here and there, making tradeoffs between technical debt and performance at multiple stages of the product's life. So one or two queries to build something up aren't going to go anywhere.
Probably not, because at that point none of the models would even be public to anyone. .05% is a lot, but yeah, it is getting pretty resource-dependent.
Not really. Coding has tons of sample data to train on. There are tons of obscure roles and tasks in the business world that could be replicated if the right training data were available, but it isn't, since it's only in some guy's head.
When we eventually get AGI it will be so expensive to run that we will only be able to turn it on for a fraction of a second to resolve all of humanity's problems. It will then take 10 years to work through all of the data created.
Weirdly, I've had the exact opposite experience from you. My first question was the strawberry question, and it answered correctly and showed me the thought process. The code I've asked it to generate has been flawless, and I've not experienced a single hallucination. Very strange.
First thing I asked o1 was what the difference between o1 and 4o was. It ran in circles for a little bit and ultimately asked me for more information. I said “it’s you. It’s gpt models” and it took like 25 more seconds of thought and came up with the answer it had no idea what I was talking about because its training was capped to Sept 2023. I then gave it a prompt about colostomy bags, and it’s only here in this video I’m now learning about that these steps I’m getting it to take might one day cost me extra money. Well nuts to that, the subscription is already expensive enough and barely justifiable. Guess I’ll stick with 4o for most things
What I was most disappointed about with this new "thinking" preview model is that it still has almost no awareness of anything relating to itself. Whenever I ask a question about itself, its hallucination rate is like 85%.
It will take years for AI to plateau, sure the specific method like GPT might plateau, but not the field in general. We have barely started with this and I 100% believe that the improvements are going to be even faster and better now
@fireship Devin did not go to 74% with the o1 model; that is Devin's own production model (some fine-tuned version of existing models). The comparison was between base GPT-4o and o1, and it got up to 51%.
So it's basically just a custom version of GPT-4o which iteratively prompts itself until it has found the desired solution? Or is there something more to it?
Most likely what happened is they trained a model that is better, or rather fine-tuned, for looking at a previously generated context window of questions and thought processes: the "reasoning model". It was likely trained from the same distilled data as GPT-4o, since this model only goes to Oct 2023 while current 4o goes to December. They take the question, and the "reasoning tokens" are a 4o or 4o-mini variant that creates all kinds of prompts and potential solutions. Then this new model "reasons" about them, as it's designed to do based on training on looking at potential options, and tries to come up with a better solution. Hence the chain of thought is really all they did, and it's something people built pretty much in the first 2 months of LLaMA coming out; it has been a concept people have had high success with ever since. So likely nothing truly special here. People had already reported smashing zero-shot benchmarks with chain of thought on math and other stuff.
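The self-critique loop being described can be sketched in a few lines; `ask_model` below is a deterministic stub standing in for a real LLM call, not OpenAI's actual API:

```python
# Minimal "draft, then self-check" loop. ask_model is a stub that
# pretends the model corrects its answer when asked to double-check.
def ask_model(prompt: str) -> str:
    if "Are you sure" in prompt:
        return "Revised: 3"
    return "Draft: 2"

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    answer = ask_model(question)
    for _ in range(rounds):
        # Feed the previous answer back with a self-check prompt.
        answer = ask_model(f"Are you sure? Question: {question} Answer: {answer}")
    return answer

print(answer_with_reflection('How many "r"s in strawberry?'))  # Revised: 3
```

This is the same pattern people were hand-rolling on top of earlier open models, which is the commenter's point.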
That's not the raw chain of thought, just a summary. This is what OpenAI says "After weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring, we have decided not to show the raw chains of thought to users"
5:03 Did you try using language to explain to gpt that you wanted the original code provided but to look out for the errors? I've found you can often convince it to fix the error it made if proper language and goal-seeking is used
Correct me if I'm mistaken, but doesn't this new model just apply (invisible) chain-of-thought reasoning to any given prompt, which is what good prompters used to do themselves when the model got stuck? It's still useful because the user doesn't need to know how, or take the time, to craft their prompt that way (I get it: I've only done CoT once or twice myself out of laziness), but is it actually BETTER than the older models, or is it just comparable to using optimized prompts for each question?
@@PatrickHoodDaniel LLMs don't give consistent answers because 1) they're rate limited, and the amount of compute spent changes the answer; 2) they have a "temperature" parameter, which is effectively just RNG when selecting from the top token candidates; and 3) every single character you type is a completely new input, so something as simple as leaving out a question mark can potentially get a different answer
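Point 2 is easy to demonstrate with a toy sketch of temperature sampling (the logits below are made-up numbers, not any real model's):

```python
import math
import random

def sample_token(logits: dict, temperature: float, seed: int) -> str:
    """Pick one token from a logit distribution, the way LLM decoders do."""
    rng = random.Random(seed)
    # Softmax with temperature: higher T flattens the distribution, so the
    # RNG matters more; lower T makes the top token dominate.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # numerical edge case: fall back to the last token

logits = {"yes": 5.0, "no": 1.0}
# Low temperature: stable answer. High temperature: varies with the seed.
print(sample_token(logits, 0.5, seed=0))
```

Same seed, same answer; different seed at high temperature, possibly a different answer, which is exactly the inconsistency people observe.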
74% might sound like a lot to a non-technical person, but for those who know what an SLA is and how hard it is to go from 99.9 to 99.99, 74% is not even worth looking at. Though I have doubts LLMs will ever reach 99%
Right, being 95 percent accurate in your computation is terrible for most things. Imagine getting 1 in 20 of the words you speak or interpret wrong while not even knowing they were wrong. Errors would compound all over.
"Or maybe I'm just a horse influencer saying, a car won't take your job, but a horse driving a car will"...deep stuff man
Need tshirt with this written on it
yup, that one made my day
Same 🤯
Damn, I had no idea BoJack Horseman was an Uber driver.
that didn't age well
If my job was coding solutions to problems with rigorously-defined requirements, this would be concerning.
If my job ever had a single rigorously-defined requirement, I would be happy
🤣🤣
People around me have been pushing "natural language code gen" for a while now in the data analysis space, to which I say-anyone who can execute a clear and unambiguous data ask using _natural language_ more efficiently than they can construct the ideal SQL query or DataFrame op is a savant, of one form or another.
I want to see how future pro-AI managers, who fired all the developers, do when the client tells them the app just stopped working, no other details, and they have to find the error in a 20k-line codebase that passes hundreds of states up and down the component tree like a seesaw
That sounds like aerospace software development. I assure you they do not want AI code in their planes 😄
I like how Turing test now is how many r's are there in Strawberry.
lol
hahahaha
I have a friend who manages to say "strawberry" without using any of the "r" in it.
This example shows that it's also a philosophical issue.
@@Gawroon7 I asked whether by "there are two Rs" ChatGPT meant that there are only two phonemes of R. The reply was very off, something like: "Yes, I mean actual graphemes. Even though the second R might be hard to perceive, there are still 2 Rs in the word "strawberry" in correctly spelled English"
It's very funny.
why is this task so hard anyway?
PhD student here, the key to beating any LLM is to use a stick
I'll beat you with that, you are useless now.
Or a strawberry
Also, unplugging it from the wall socket xD
@@last_fanboy_of_golb where to find this "stick" Is that some software?
@@roosterru A strawberry on a stick.
EDIT: Sorry, Strawbery.
O1 is a hilarious name for a program which has an exponential energy bill
LOL
This comment section is next level.
so many were freaking out about crypto energy costs, but since AI, everyone is like "well, we gotta advance"
You guys realize they will get cheaper, right? It hasn't even been 2 years since ChatGPT 3.5 was released. It's been about 7 years since transformers were invented.
So 7 years AT most, about 1.5 years of large scale efforts, and 5.5 years of niche work before that.
Keep coping, how old will you be in 2035?
Thanks to fireship for almost giving me a heart attack at the beginning and then relieving me at the end lol
That's literally his formula
So, apparently this new million dollar idea from openai is just a self-proompter? Ironic how prompt "engineers" got replaced way before programmers ever could be
@@maxave7448 good.
He's mastered that
@@maxave7448 >prompt "engineers" got replaced
hilarious how you pointed that out lol
My HTML job is really gone now
cry more😂
Don't worry: no one knows how to write good HTML, not even the AI
Front Page Express, Windows 98! 😊😊😊
You're still coding in html? Oh, sh*t. 😂😂
I've never been more unsure about a joke. Are you saying it's easy to write proper HTML, it's just that no one does it? Or do you think it's hard to write proper HTML because everyone has their own opinion or something?
Because it really is easy to write proper HTML; nobody does it because they don't see learning it, or taking the time, as worth the effort for their genius brains.
5:40 *"Ai won't take your job, but another man using Ai will.."*
3-x
another man with a decade of engineering experience and a CS degree, using AI, will*, which is not too different from what was happening before AI. There have always been guys who are drastically faster than average. The issue is that they're always rare, and as tools and tasks become more complicated, they become rarer.
@@rumfordc yep, exactly. It's an eternal regularity and "using AI" is a coincidence here. They will win not because of "using AI", but because of being "at the top of their game", which *coincidentally* may now involve using AI, or may not. Different times different tools. May even find your own. Looking at the broad picture it's "staying ahead" what matters, not "using AI" per se. Those are not equal yet and hardly ever will be, at least for some parts of IT industry.
@@rumfordc there will be a day when AI won't need humans for anything, and it's coming within 5-6 years. So your quote "A HUMAN USING AI WILL REPLACE A HUMAN WITHOUT AI", a parroted line repeated by many AI supporters, is blind and misleading.
They are working to make AI more intelligent than humans; they won't need human intervention in AI
@@moonwine7398 😆🤦♂ come back when you know what a quote is
Most of my job as a software engineer is meetings, design, documentation, and watching Fireship. Sitting down to code probably only accounts for 20%. I'm either totally safe or I'm doing it wrong and I'm in imminent danger.
I'm a data engineer. I spend more time talking to humans to figure out the requirements, quelling indecisive humans to create the requirements, translating the requirements into foundational/architectural decisions, clicking some stuff in whatever cloud tool I'm using and then, for a brief period of time, I code and maintain some intermediate level SQL in an 800-line query.
It's exactly how it should be. People just don't know how many projects companies (mostly the big ones, speaking from experience) have on hold or delayed. For at least the next 5 years, I guarantee there's no need to panic; it will push more interns/juniors onto projects they would never have been able to join beforehand.
The real question is what happens in the far future if there aren't enough projects (or the need for more). It's unlikely in the upcoming years, but I'm sure it's a very possible situation, and there's already a rise in CS degrees. So yes, there is a case here, but at the very least not in the near future.
Your job isn't in danger, at least for now; it's juniors who should be concerned, especially the ones graduating in 3 or 4 years. The barrier to entry has grown and will keep growing exponentially.
I'm pretty sure ChatGPT 4o is great at meetings. ;)
This will change with AI agents
I think it’s pretty amazing they managed to build the equivalent of an all knowing but also friendly and helpful person on stackoverflow considering the lack of real training data.
This is outrageously funny. They probably had to mash together Pinterest or a recipe blog with stack overflow answers just to make it palatable.
If only a PhD were about skills like programming and solving equations. Literally every PhD student uses solvers for anything more complex than basic calculus anyways. The challenge of a PhD is learning how to think about things in unique ways and pushing boundaries and exploring new possibilities.
No no no you got it all wrong, you get a PhD to solve standardized questions on a test!
it has learning tokens now, wait another 2 models and get back to me
@@o1-preview facts
There are too many PhDs with closed minds out there for it to be true...
@@o1-preview just another 2 models bro ... trust me
I've been seeing people freaking out about this new model, "it's better than PHD humans at X,Y,Z!" where X,Y,Z basically amounts to data processing... like oh my god??? A computer can process data faster than a person???? WHAT???? lmao
Literally any modern computer can process data "faster" than a human brain, because a human brain is doing a whole bunch of shit at once in ADDITION to that data processing, while a computer does far less at any one time simply maintaining its "active" state, and therefore has more processing power to allocate to useful computation.
Not surprising, since most people hyping AI have no idea what a PhD actually is.
"It can beat programmers in olympiads." Yeah, if given an unlimited number of submissions, on problems that are either ENTIRELY on the web already or where every single concept is on the web already; and most of those olympiads are for undergrad students
@@deividfost it doesn't matter, it's evolving fast; in 10 years it will be better than humans at everything, EVERYTHING
Exactly, it's as if one were trying to compete with the calculator hahahahahahaha
Impressive it can beat PhD students. But remember a PhD in breakdancing is not the same as being a breakdancer.
This one could be called GPT-Raygun.
😂 good one
what exactly does it mean that it can "beat PhD students"? I suspect it's just faster at pretty well-known problems that are well documented all over the internet, lol, so totally worthless.
@@gabrielbarrantes6946 well, it can either mean beating them in a fist fight, or getting more correct answers than they can. I'm not sure which one though 🤔
If AI had feelings, it would definitely be hurt by this insult
PhD students are also still learning. How does it compare to the pissed-off postdoc who's been stuck in academia for 15 years after graduating…
“It’s basically just like GPT4 with the ability to recursively prompt itself”. Exactly. We are in the parlor tricks phase of this hype cycle.
*How many 'r' characters are in the word "strawberry" ?*
GPT-4 : TWO!!
GPT-o1: "I have the answer for realsies, but it'll cost you $2,000"
Strawbery obviously has 2 R's, idk what all the hubbub is about....
@@kindlin just trolls
The cutting edge of Code Reports.
EDGE
@@perthecther__203 EDGE OR Chrome 😭😭😂😂
OF
The fact that everyone is forgetting for some reason is that AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society.
Why should anyone be excited? We are witnessing the start of something really bad.
@@Tozu25 To be fair, if billions of people have nothing to lose, I can't imagine companies can keep such a status quo going for long. I hope, anyway.
It still can't count how many r's are in strawberry.
I think we're good for a while...
I too hope the sarcasm holds us above water... at least for a week or two! 😂😂😂
It can...
more likely a limitation of how the tokenizer breaks the word down (i.e. it's not aware of individual characters) than something fundamentally wrong with the model itself.
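Exactly. Here's a toy illustration of why, with a made-up vocabulary (real BPE merge tables differ; this is just the idea):

```python
# Made-up subword vocabulary; real tokenizers learn theirs from data.
TOY_VOCAB = ["straw", "berry", "ber", "ry", "st"]

def toy_tokenize(word: str) -> list:
    """Greedy longest-match segmentation, a crude stand-in for BPE."""
    tokens, i = [], 0
    while i < len(word):
        for piece in sorted(TOY_VOCAB, key=len, reverse=True):
            if word.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(word[i])  # unknown character becomes its own token
            i += 1
    return tokens

# The model "sees" two opaque token IDs, not ten characters, so counting r's
# requires knowledge about the insides of tokens it was never shown directly.
print(toy_tokenize("strawberry"))  # ['straw', 'berry']
print("strawberry".count("r"))     # 3 (trivial when you can see characters)
```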
there are two r's in strawberry though. There are also three r's and one r.
How many r's are there in strawberrry though?
They may replace PhDs. But never will they approach your PhD in sarcasm.
If I have learned anything...everything is a few models away
Try prompting it to write the office starting scene.
Have you seen Neuro on Twitch? That little AI is the master of sarcasm. It's so strong that you can even tell despite the monotone tts.
They can replace PhDs. In the sense that they can answer standard questions that a PhD can answer in theory.
@@soulsmith4787 you mean that AI loli Vtuber that sings songs like Bury the Light and Never Gonna Give You Up?
Thanks mate 🙏
0:19 - it is now 100% proven that English is the hardest subject.
Also this is O(#); that is, the number of prompts until an AI that can't count letters properly thinks its answer is correct.
Hearing a slight raspiness in Fireship's voice is a subtle reminder that it is not AI-generated yet.
Didn't someone else clone his voice, and he said that he didn't mind?
*Yet.*
Or maybe that's a sign this video was... For the first time
prompt: add raspiness, increase by 15.000%
fireship cloned his own voice waaaay back when he had very few subs and used it for a couple of vids
"A car won't take your job, but another horse driving a car will." That hit way harder than it needed to.
I don't understand it, please explain
@@jamaludeenameen5361 this new technology won't take your job, but someone who knows how to use that technology will; it's not the machine itself.
@HessW No, wronger, it's even deeper. The car with his horsepower would bestow the horse, revealing a zero sum. Which after would divide the AI capability of coding.
No worries guys. Afghanistan still has a big market for horses.
@@jamaludeenameen5361
The phrase "A car won’t take your job, another horse driving a car will" can be interpreted to mean that technology (like AI or cars) on its own doesn't inherently replace humans or living creatures in a direct way. Horses can't drive cars, just like AI can't independently replace the complex, nuanced roles humans perform. Instead, it's humans who use AI or other technologies effectively that change the job landscape.
In the context of AI, this means that AI alone isn’t going to take jobs. It doesn’t have the inherent ability to think, adapt, or make decisions like humans can. Instead, humans who adapt and incorporate AI into their work will have the advantage. They’ll be the ones who change industries, outperform their peers, and potentially replace those who don’t evolve with the times.
The point is that AI, like a car, is just a tool. It requires a driver: someone capable of steering it effectively. The future of jobs won't be one where AI takes over, but one where people who master AI technology will reshape industries, and those who don't learn to "drive" will be left behind.
In essence: AI won’t replace humans because it isn’t natural for it to perform human roles. But humans who learn how to harness AI will redefine how those roles are performed, much like a person who learned to drive a car left behind those relying on horses for transportation.
Very true. All these AI models look amazing, but once you've used one for anything besides asking it rudimentary stuff, it falls apart very quickly.
But each version pushes further up against the rudimentary limit. The first cars randomly exploded and had to have horses travelling behind to carry extra fuel.
@@michaelnurse9089 You can't equate past advances in one field with advances in a completely different one. Quite a few parameters are different. You can, however, try to formulate rules for technological advancement in general. Processes like these tend to follow a logistic curve, and the question is what point of the curve we're at right now. I would argue we're about to hit the plateau.
@@Pfennigfuchs-z7v Also, it's just confirmation bias. For every technological innovation, there's a problem that has gone unsolved for decades
@@michaelnurse9089 Many people are forgetting for some reason that it's not only affecting developers. AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society.
Why should anyone be excited and joking? This is what should be concerning, nothing else. We are witnessing the start of something really bad.
@@Tozu25 You have a fundamental misunderstanding of how LLMs work if you think they could ever replace engineers and doctors.
all this energy to just not pay employees properly, it's crazy
True
Unemployment will go up in the future and people will wonder why. Very few will get a lot richer; the masses will be poor. We're just really bad at thinking about the future and the consequences of what we do. Just look at how long we've known about climate change.
I've been having a blast with it. I used gpt4 to setup the bare bones of a mud-like text game, I've got a compass in every room showing the direction of exits, inventory, can equip and unequip items, drop items from inventory, pick them up, place monsters, really simple combat (saving the in depth stuff for later) but what I couldn't do with gpt4 or gpt4o was make a top down map that shows all the rooms and their connections in relation to each other just using unicode characters. No matter how I tried to break the problem down and describe it I just couldn't get useful code.
o1 produced the code and put in a legend. I'm talking with it about branching dialogue solutions and think it may be able to help me import TWINE exports as json as a solution for doing branching dialogue.
I literally could never have done any of this without these tools, I'm in love.
you by chance a former or current dragonrealms player?
@@Demoralized88 I played GemStone IV briefly years ago; I don't think I ever gave DragonRealms a try, may have to rectify that. I mostly played around in the infinite supply of mediocre MUDs, searching Mud Connector and similar listing sites.
Fuck it, I’m becoming a plumber.
I’m also tired of these “snake game” examples. It’s just a glorified google at that point. Tons of snake examples on the web.
and they mostly suck, which is what this "ai" is using to teach itself. Garbage in, garbage out.
I laughed out loud at these coding demos
I've already given up on programming, and am just learning how to use already-created software. 😢😢😢
Buddy, the robots will be the plumbers. No job is safe, plus you're not guaranteed to stay a plumber, since the workforce will be saturated by all the people who lost their jobs turning into plumbers
@@SMGA14 Nah, robots are California tech bro copium. Trade jobs are mostly safe for the next 20 years
I used to be hopeful that AI could help me out a little through school, but if this stuff is already doing PhD-level physics, I might not have school to finish
Atm there is no point in studying.
calculators can do arithmetic better than any humans why learn math ?
@@ryzikx Now the calculator can automatically do every job on Earth at 100 times the speed you can for 1/1000th of the cost, so you have no reason to be alive according to Capitalism
@@ryzikx its cool
@ryzikx because then the AI realises you are stupid and will tell you that 2 + 2 = 5 and so on; you will end up becoming its dog.
This is concerning; it took the AI over 10,000 attempts, with access to every relevant example on the internet, to get gold in a contest lmao
It basically tried everything until something worked lol
Like dr strange searching through every possibility to win against Thanos
@@maxave7448 We're getting better at making software that throws sh*t on the wall and sees what sticks. Also known in the human world as a sh*tty programmer.
It's not about those 10,000 attempts, but how long it takes.
@@genpotrait2274 Not really; it's not viable to run 10,000 attempts. In reality it won't know which scenario is the correct one
The improvements are impressive, but there's still a lot to uncover about the true impact and capabilities of these models.
Many people are forgetting for some reason that AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society.
Why should anyone be excited? We are witnessing the start of something really bad.
@@Tozu25 I disagree.
@@RokeJulianLockhart.s13ouq Well, if an AI someday gets created that is as smart and conscious as a human, if not more, of course it could replace those jobs I mentioned as well.
Edit: Before you mention it, I know there's no such thing yet as a conscious AI, and hopefully never will be. The speed of change in society would be so quick that it would mean hard times worldwide.
@@Tozu25 LLMs are search engines, like Google is. They're nothing more than correlators. They're not a form of intelligence, as their confident incorrectness when they get stuck in recursive loops demonstrates.
@@Tozu25 It is used as a tool; stop being dumb. You need human interaction even in programming. It's not like I would give an AI model full access to my business.
The potential of AI is indeed vast yet it falls short at times. In the end, it's a tool, at least for now.
Is it just me who feels so sad that words are disappearing from the internet? In this video, the word drug is censored just to please an algorithm. The other day I even saw someone censor the word hate in «she hates being called wifey», smh
You're lucky the word "wifey" survived. Gotta cherish what we have.
even scarier, we are now using words like "unalive" in real life, which stems directly from online advertising censorship. Corpo speak
*t’s n*t j*st y*o b*d 😢
The other day I replied to a comment with a 100% innocent sentence, no reason to censor it, yet it was deleted. Soon we won't be able to say anything.
that's just how language works. The internet isn't special here
They took our jerbs!
They Turk are Durrr
took yer durr!!!
Tuk yer jerbs !!!!!
Yarrrrr haarrrr
Make no mistake, they need that to happen to pay for the billions they’ve sunk into training these models. (It won’t work though).
This is a huge leap forward in Sam Altman's ability to separate AI bros from their trust funds and crypto hodlings.
Many people are forgetting for some reason that it's not only affecting developers. AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society.
Why should anyone be excited and joking? This is what should be concerning, nothing else. We are witnessing the start of something really bad.
@@Tozu25 I'm not sure whether this will really replace doctors and the like. Being a surgeon or dentist requires very fine motor control, extremely reliable expertise and knowledge, accountability, personality, etc., so as not to make a single mistake and to always navigate the patient's condition perfectly. AIs and robots, which at this stage are hardly known for solid foundations in any of these things, definitely have no ability to take those jobs. Moreover, if we really do eventually "solve" jobs, so that no one ever needs to work again, then we can rejoice that no one will be required to toil again. Things like UBI will become possible. The real doomsday scenario is if AI only succeeds in taking creative and artistic jobs, leaving humanity to do all the dead, manual labour. That is what I fear, not that doctors or actual trained professionals will be replaced.
@@spaghettiking653 I was diagnosed by an AI chatbot when I got my paid sick leave. I told the AI my symptoms, answered its questions, and then a real doctor signed the digital document, and that was it. So it's already happening. As with anything, the AI does the task and then someone checks the result. But it's good that you're critical about AI and looking both ways. You're the first one out of anyone, and I've spoken to like 15 people. That says something about your intelligence.
@@Tozu25 No, mass unemployment = new economic system and a break from the relentless capitalism dystopia we're experiencing. In big cities like London, regular new graduates can't even afford to buy houses on good salaries. The system is bullshit and needs to be torn down.
Cope.
"officer hardass" kills me every time with that picture 😭😭
Before, GPT used to be bad at even basic force questions. But I gave o1 my fluid mechanics problem and it was able to do it, and I didn't even upload the diagram pictures. It's gotten really good now
OpenAI needs money, so they release some reskinned GPT-3.5 that secretly asks "are you sure?" and sends the response after that to the user, to maintain hype, investor money, and Altman's job. Same bubble. Same hot (AI)r.
Yeah, this was plain disappointing. I was expecting some major architectural change with all the hype around 'Q*', but this is just another chatbot, except it's trained to ask itself "are you sure about that?" a couple of times and provide long CoTs, with a fancy UI to hide the complexity from users who don't know how to prompt worth a dang.
(AI)r = Air. I see what you did there 😏
@@justanotherchannelname1273 How in the hell is consistently beating human experts in several abstract fields not impressive to you?
@justano so what do you expect from a new AI?
Altman wrote a for loop in the ChatGPT UI
Ah! 0 days since AI again?
Spoiler alert-this will happen every time Fireship uploads about AI
Many people are forgetting for some reason that it's not only affecting developers. AI will also take doctors, engineers, architects, creators, actors, editors, pretty much everyone's jobs. It will be mass unemployment = no livable society.
Why should anyone be excited and joking? This is what should be concerning, nothing else. We are witnessing the start of something really bad.
@@Tozu25 people don't want to work, that's why
@@mr.nixtheboarddrawer1175 Well, whatever future products are made by AI are not gonna be handed to you for free, unless society becomes socialist, and I don't think that's any better.
@@Tozu25 None of that is going to happen. I wouldn't trust AI to do heart surgery even in 1,000 years. AI is AI; it's all guesswork. I would be more scared of *computers and simulations, as they actually involve math and physics, while AI just involves numbers multiplied by numbers multiplied by more numbers that eventually have an error small enough to work "good enough".* Imagine that as your doctor: a doctor that MAYBE, quite POSSIBLY, will do the job right. Also, do you really think everyone's gonna lose their jobs in one night? Have you considered *that we humans want the same thing as you, a livable society, and will try to prevent any of this from happening, finding a solution, doing anything to make it all work out?*
tl;dr AI is guesswork, and we should worry more about nukes and simulations, as simulations actually get the math right (AI cannot make complex simulations, because it will get this line wrong or that number slightly off)
It amazes me every time to think that this channel was all about Angular and Firebase back in the day, and look where it is now.
That's like a startup pivoting when they discover what the customers really need
Both Angular and Firebase are currently being re-obsoleted (by React, htmx, and Svelte, or some combination). Firebase was dead within about 8 years of being born. Most wise programmers never used Firebase.
That's what I stumbled across. A channel that was supposed to be Firebase documentation is doing all this crazy stuff in the name of Firebase. How could that be? Thank you, now I get it.
No kidding! lol
Call me when it can become a professional poker player or blackjack counter so I can make millions at Stake, or how about a pro stock trader or something? Why has no one used OpenAI for this yet? In the future OpenAI might run entire countries' GDP systems 💀 Welcome, our overlords.
LOL that probably exists already but you can't rly share that with the public, can u?? use ur brain
1.4k likes and nobody has mentioned that AI has been, and is, used for both atm; you are in for a wild ride pretty soon 😵💫
@@peyopeev8909 yep. Botted likes?
Been there done that
"GDP systems"
3:17 I've just tried asking the o1-preview model `How many "r" in the word strawberry?`; it answered 3 "r"s correctly on the first try. Then in the same chat I switched to the 4o model, and it said 2. 🤷 Then I switched back to o1-preview, and it even apologized for the mistake made by 4o in the previous answer. Pretty smart to me. 🎉
then you're not very smart
2:08 the reason many people are moving over to Claude is because Claude isn't censored and is more useful for things like generating erotic content and conversations that don't sound like you're talking to HR, which is all that the majority of people care about. The o1 model is going to be great for jobs, it's a little more reliable for perfect answers, but the problem remains that corporations want something that's specifically useful and not generally useful, a lot of them have internal systems and custom setups that don't generalize, and they worry about data leaks, and would prefer the ability to run all of this in-house. The majority of AI users are fine with some generalization, can't afford to run the best ones in-house, and want it uncensored. Unless Microsoft can stay ahead, people will move on the moment something almost as good comes out that isn't censored, and Microsoft will be stuck catering to corporations who have demands.
You're thinking about this all wrong. Consumer software is not where the money is at. Most profitable MS divisions are all centered around business products. They obviously want to sell AI to the business first and foremost. If you thought MS expects regular consumers to buy the Copilot+ computers, you're dead wrong. They don't care if literally no one buys it. Because business will eat that shit up. And big companies will pay insane money to get as you say their own specialized AI solutions. While things like Claude, will struggle to finance anything after they run out of venture capital.
I just asked Claude for erotic content and he treated me like a pervert
You think the reason most people use Claude is for "erotic content"? Dude, you need to go outside and talk to actual humans more
Claude is much more censored. I can't get it to help me with the CTFs in my ethical hacking course.
In my experience Claude censors more. I tried asking it a question about what a stolen vehicle could be used for (a screenshot from a driver’s license exam) and it said nope. Chatgpt answered it.
How can someone be so funny and so informative at the same time in just 5 minutes
Something, not someone
Humans are the original AI
And so biased. No our jobs are never going away!!!! 😡😡😡😭😭😭
@@turolretar wdym
He's from 4chan, that's why.
1:19 good to see o1 is struggling big time with chemistry, gonna make a lot of chemists happy.
I’ll be the 25th Chemist to give that a thumbs up 👍
I mean, as a professional dev, it seems to me that the 74.2% of problems it can solve are the first 10% of time spent on a project; the other 90% of the time is the remaining 25.8% of issues, and we're still safe there. It's actually nice that AI will get us there quicker.
Thanks!
But can this center a div?
not yet. It can plagiarize the code for a snake game though.
😂
Cursor can, i think?
But can it do this?
*bends chair backwards*
@@theterribleanimator1793 Why do you people always accuse it of ‘plagiarism’ like that even makes any sense
In those competitions, were they using new challenges, or old ones that the AI might have gone through during training?
They were using old ones lmao
this is always my question, but the answer is always hard to find. Where would they even get all these completely original coding questions to test these models on?
@@veloce5491 For GPT-4 they published a paper and showed the results for both. Can't find the paper for this model, but I didn't look that hard.
"A car won't take your job, but a horse driving a car will" .... damn!!!! deeeeeeeeeeeppp
This is the best overview of o1 I have seen yet 😊😊😊
3:23 you can really feel the frustration, amazing
Whenever i see your video notifications, i start laughing even before watching the video😂
The core innovation driving o1 was made public about 6 months ago. And it really works, but we still have a long way to go. I tried it on 2 challenging problems, and it almost didn't suck.
where is it posted?
Fireship's definitely my favorite horse influencer
most based comment ever
"And O stands for ohh sh*t we are gonna d*e" is so apt and hilarious lmao
0:25 Man, that clip was perfect lol
I expected something crazy, but when I saw the benchmarks, they're really not that groundbreaking.
o1's reasoning token paradigm serves as a middle layer for handling complex instructions, so it's more internally organised, but that doesn't necessarily mean the underlying architecture has substantially improved.
Coding, maths and science are all topics where handling information in a purely linguistic context by default is detrimental, so it naturally follows that it would be more effective to logically deconstruct problems. However, you might see similar improvements with any other LLM by manually creating an intermediary prompting stage.
This is still an improvement, but remember, a significant leap ahead at this stage would mean something as groundbreaking to transformers as transformers were to RNNs, and this is nowhere close.
Make no mistake, this is part of the plateau. There will still be progress, and we should be looking to concentrate it towards building tools that aid developers, rather than attempting to replace them.
We should be aiming to replace everyone. Always aim high.
Ok so when will they replace HR?
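The "intermediary prompting stage" mentioned earlier in this thread is easy to sketch. Here's a minimal, hypothetical two-stage chain-of-thought wrapper in Python; `call_llm` and `answer_with_cot` are made-up names standing in for whatever LLM client you use, not any real API:

```python
# Minimal sketch of a manual chain-of-thought "intermediary prompting stage".
# `call_llm` is a hypothetical stand-in: any function that takes a prompt
# string and returns the model's text response.
def answer_with_cot(question: str, call_llm) -> str:
    # Stage 1: ask the model to decompose the problem before answering.
    plan = call_llm(
        "Break this problem into small, logical steps. Do not solve it yet.\n\n"
        + question
    )
    # Stage 2: feed the plan back in and ask for the final answer.
    return call_llm(
        "Question: " + question + "\n\n"
        "Plan:\n" + plan + "\n\n"
        "Follow the plan step by step, then state the final answer."
    )
```

People were hand-rolling exactly this kind of two-pass prompting long before o1 baked it in.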
I am pretty sure GPT-4 also prompts itself somewhat, at least, because I remember one time it accidentally showed me its internal prompting. It said something like "user wants to understand blah blah..." then abruptly switched to explaining what I wanted.
You are correct, this is something ChatGPT does. It basically tries to create a more sophisticated prompt out of your prompt before actually addressing it. However, what these new models essentially do is check their answer and try to sanity check themselves several times before giving you the final response.
@@Caphalem I figured something like that. I just thought this distinction wasn't totally clear in the video, or maybe I wasn't paying enough attention. Thanks for the reply
My only concern is that AI goes full apocalypse mode after spending 2 days with my manager
Hi Jeff, I'm writing this comment to delightfully let you know that I absolutely like the way you do the "last kick" at the end of your videos sometimes. Beautifully crafted kick! Thanks. ❤
to be clear, the o1 models are not actually new models themselves; they're built on top of the GPT-4o models with extended inference abilities.
well, what you just said is quite obvious, because if we think about it, no company is going to redesign the entire algorithm again to come up with a new model.
Correcto. Now in a few months gpt5 is coming out with all these advancements.
Doesn't it use feedback now? Adding "one little change" can have profound effects...
0:23 was a legit lol moment. Oh wait, so was most of the video.
came here to take a break from coursework, that avocado bit had me laughing way too loud for a library 😂
@@SkegAudio Nobody does the developer / comedy / memes / but still informative style he has. He's one of those "never miss a video" channels I have to watch on the spot.
As long as it can't solve the _"Okay, so hear me out."_ problems the client has with all the help of _"I'm sure you'll figure it out!"_ and (of course) no further details, I think my job is pretty safe.
Thanks
The balance between the potential and the realistic expectations is much needed in these discussions.
To me the worst part is that it fails the strawberry test. For something that is a recursive self-prompter, it sucks at prompting, because constructing a proper prompt is literally the easiest way to pass the test.
Remember guys, we nerfed o1 when the hype was over, but o2 is gonna make a killing
Please, write the same statement but for o3 in the future
@@otpezdal Don't worry guys, o8 was a flop but o9 is gonna beat us all
As a coder and developer, I have no fear of "LLMs" taking my job. A lot of the stuff I code is too specific and niche for an LLM to figure out without having hella bugs.
as a 0.1x developer I am very afraid
agreed, same
To replace me, the customer would need to know what they want and accurately describe it to an AI.
I’m perfectly safe.
@@fullstackweebdev and then be able to debug the trashy code AI produces
@@fullstackweebdev well said. I push back on the garbage requirements I receive and help point the customer in the right direction for something more sane. A.I. will happily write a clucking fsck.
Like o1, Fireship's video production value and depth are getting better and better.
based on what you said, I think this confirms that they are now at the phase where they're doing clever implementations of the LLMs and being more specific about what they should generate well. In my opinion this is a sign that the technology is maturing, and the real, potentially world-changing products are coming. But it may also be a sign that this technology is at its peak: when you can't go up, you go sideways
I love how, by this point, people should've already realized they shouldn't freak out when new AI DLC drops, yet it all follows the same hype trend. They keep being like "oh, but this time it's for real", but until we see a real and fair example of it actually doing all these revolutionary things, it's illogical to assume things will be any different. It's not copium, it's just a matter of proof of concept
A horse walks into a bar. The bartender asks - why the long face?
I almost want fireship to stop posting. This channel is scaring the shit out of me and my career. This is fucking nuts
why, is your job creating tiny 300 line snake games?
Just use it. AI has become an amazing pair programmer and conversational wiki page. I like bouncing logic off of it and getting its feedback, and its ability to answer questions I'd normally send to Stack Exchange.
If anything is losing its job, it will be Stack Exchange 😂
Just leverage the tool already.
What it will make obsolete are low-level junior programmers with no AI skills, because AI fills in skill gaps. Junior devs will be expected to do more, and senior devs will be expected to do more.
If anything, AI will just make it so our jobs demand more of us; we'll be expected to deliver faster turnaround times or to ship twice as much code.
@@hiya2793 Oh if only that were all it could do.
@@jonwinder6622 I mean-
Good luck throwing a 10,000-line project at ChatGPT.
As a matter of fact-
Go and create a simple 2000-line Vite project; let's keep it small and simple and say 10 scripts of vanilla JS, a simple small game on an HTML canvas.
No AI in the world comes even close to having enough tokens to even just read through that small af project-
Let alone provide good additional code that doesn't suck absolute balls without spending hours proompting - at which point you may as well just write it yourself.
AI is cool for stuff like: "How did flexbox go again? I'm too lazy to google, AI do it"
or "ah crap I forgot the syntax for a switch case in some niche language - AI, you do it"
Lmao dude your comment got copied and stolen by a bot.
Thanks for updating us!
Experienced the same with coding, its initial output was impressive but I also hit that limit pretty quickly on what it could accomplish and it failed at certain tasks. So a marginal improvement from GPT-4o, which in itself is pretty impressive. Another huge leap in capabilities is still hard to imagine, but looking forward to it.
o1ways 2 steps ahead!
It can build a game of Snake because there are thousands of open source examples online.
This a million times over lmfao. If only the people hyping ai through the moon knew even the most basic aspects of how llms work
@@w.mcnamara I just remind them of how crazy their ideas are. I remind them that they are claiming that linear algebra and statistics have literally become living beings and can now reason like humans. The hype is just silly at this point; I just ask them how it's possible that literal math became conscious, and I never get a reply back.
@@iraniansuperhacker4382 Probably the same way a few neurons sending signals back and forth can become conscious, aka we don't know. We don't know what consciousness is, what you need for it, or how it comes to exist. Maybe even math can become conscious, who knows. That said, I'm not saying any AI is conscious, or even that it will ever reach consciousness, just that we don't know if it is possible.
@@micca971 I would go as far as to say that math being processed on a silicon chip becoming conscious is physically impossible, no matter how complex of a system it is. This is like saying that if we write a sufficiently advanced piece of literature it will eventually be able to think or reason in some way. It just fundamentally doesn't make any sense.
@@iraniansuperhacker4382 that's not the same at all; a piece of literature does not compute or process anything, it does not receive and manipulate energy, therefore it cannot do anything on its own. If however you said a lot of monkeys were writing books, then possibly the entire collective of monkeys writing books (a lot of them, trillions or quadrillions at least, or maybe more) could become conscious, or at least exhibit intelligent behaviour as we see with current AI. Aka it's not just about complexity, it's about manipulating energy and data using some logic. Also keep in mind this is all very hypothetical, but you can't say it is fundamentally wrong. We just don't know.
3:32 Well, the fruit orchard definitely gets a like
Something just to keep in mind: AI might get better, polishing existing tools etc., but in reality, to ship a production-grade software solution you always need a bunch of people, human thinking, applying exception rules here and there, making tradeoffs between technical debt and performance at multiple stages of the product's life. So one or two queries to build something up is not gonna go anywhere..
Okay but at this rate the next model will need .05% of the world's energy to solve a question
Probably not, because at that point none of the models would even be public to anyone. .05% is a lot, but yeah, it is getting pretty resource dependent
If ai can replace programmers, it can replace anyone
yup!
Spy from TF2: "It could replace you, it could replace me. It could even replace..."
@@CyanRooper "it could even be your mother !"I haven't seen it for a while
Sadly, we don't live in a fantasy world, and this thing will be massively disappointing
Not really. Coding has tons of sample data to train on.
There's tons of obscure roles or tasks in the business world that could be replicated if the right training data was available but it isn't since it's only in some guys head
When we eventually get AGI it will be so expensive to run that we will only be able to turn it on for a fraction of a second to resolve all of humanity's problems. It will then take 10 years to work through all of the data created.
Weirdly I've had the exact opposite experience from you. My first question was the strawberry question, and it answered correctly and showed me the thought process. The code I've asked it to generate has been flawless, and I've not experienced a single hallucination. Very strange.
First thing I asked o1 was what the difference between o1 and 4o was. It ran in circles for a little bit and ultimately asked me for more information. I said “it’s you. It’s gpt models” and it took like 25 more seconds of thought and came up with the answer it had no idea what I was talking about because its training was capped to Sept 2023.
I then gave it a prompt about colostomy bags, and it’s only here in this video I’m now learning about that these steps I’m getting it to take might one day cost me extra money. Well nuts to that, the subscription is already expensive enough and barely justifiable. Guess I’ll stick with 4o for most things
so cooked I'm watching this during comp sci class
Too late for a refund?
Have faith brother, see this AI scare as a good thing.
What’s cooking? Where’s mine
Me too 😢. Does anyone have any suggestions about how to stay relevant?
Don't go study what everyone does.
Go off the beaten path
But can it be monetised?
What I was disappointed most about with this new "thinking" preview model is that it still has almost no awareness of anything relating to itself. Whenever I ask a question about itself, its hallucination rate is like 85%
This is likely intentional. They said they are intentionally hiding the chain of thought from the users.
I mean yeah it still is a LLM
Why would it be aware of itself? It's just an LLM.
I'm pretty sure that ChatGPT is not aware of any concept it generates as output. Pattern recognition and awareness are two distinct things.
It will take years for AI to plateau; sure, a specific method like GPT might plateau, but not the field in general.
We have barely started with this and I 100% believe that the improvements are going to be even faster and better now
@fireship Devin did not go to 74% with the o1 model; that is Devin's own production model (some fine-tuned version of existing models). The comparison was between base GPT-4o and o1, and it got up to 51%.
So it's basically just a custom version of GPT-4o which iteratively prompts itself until it has found the desired solution? Or is there something more to it?
We don't know. The code is closed source
@@igorthelight OPEN AI :)
@@szymoniak75 Yeah, OpenAI is not open. Not really logical xD
Most likely what happened is they trained a model that is better, or rather fine-tuned, for looking at a previously generated context window of questions and thought processes - the "reasoning model". It was likely trained from the same distilled data as GPT-4o, since this model's knowledge only goes to Oct 2023 while current 4o goes to December. They take the question, a 4o or 4o-mini variant generates "reasoning tokens" with all kinds of prompts and potential solutions, and then this new model "reasons" about it, as it's designed to do based on training on weighing potential options, and tries to come up with a better solution. So the chain of thought here is really all they did, and it's something people built pretty much in the first 2 months of LLaMA coming out; it has been a concept people have had high success with ever since. So likely nothing truly special here. People had already reported smashing zero-shot benchmarks with chain of thought on math and other stuff.
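The "sanity check itself several times" behavior described in this thread can be sketched as a generate-critique-revise loop. This is a guess at the general shape, not OpenAI's actual implementation; `call_llm` and `answer_with_self_check` are hypothetical names for illustration:

```python
# Hypothetical generate-critique-revise loop. `call_llm` is a stand-in for
# any function that sends a prompt to an LLM and returns its text response.
def answer_with_self_check(question: str, call_llm, max_rounds: int = 3) -> str:
    # Draft an initial answer.
    answer = call_llm("Answer this question:\n" + question)
    for _ in range(max_rounds):
        # Ask the model to critique its own draft.
        critique = call_llm(
            "Question: " + question + "\nAnswer: " + answer + "\n"
            "If the answer is correct, reply with exactly OK. "
            "Otherwise, explain the flaw."
        )
        if critique.strip() == "OK":
            break  # the model is satisfied with its own answer
        # Revise the draft using the critique.
        answer = call_llm(
            "Question: " + question + "\nFlawed answer: " + answer + "\n"
            "Critique: " + critique + "\nWrite a corrected answer."
        )
    return answer
```

Each round burns extra tokens, which is exactly why the "reasoning tokens" cost extra even though you never see them.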
By the time I finish writing this comment, this model will already be outdated.
what if I don't finish reading your comment?
oh my god 😅😅
Didn't age well 😂😂
3:28 the Chain of Thought isn't hidden, you just have to click on it lol
That's not the raw chain of thought, just a summary. This is what OpenAI says "After weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring, we have decided not to show the raw chains of thought to users"
@@Fireship ohh ok gotcha
5:03 Did you try using language to explain to gpt that you wanted the original code provided but to look out for the errors? I've found you can often convince it to fix the error it made if proper language and goal-seeking is used
Correct me if I'm mistaken, but doesn't this new model just apply (invisible) chain-of-thought reasoning to any given prompt, which is what good prompters used to do themselves when the model gets stuck? It's still useful because the user doesn't need to know or take the time to craft their prompt out that way (I get it - I've only done COT once or twice myself out of laziness), but is it actually BETTER than the older models or is it just comparable to using optimized prompts for each question?
didn't expect a Nikocado cameo on Jeff Fireship's channel!
lol officer hardass with that image 😂
whos that?
@@officebatman9411 Officer Hardass
@@officebatman9411 someone who got fired for doing certain activities when she shouldn't have been
😂
@@ryzikx doing certain activities to the whole goddamn police dept
My prompt for the number of "r"s in the word "strawberry" got it right.
Mine didn't
@@Scrubzei interesting.
Even GPT-4 legacy got that one right for me.
@@PatrickHoodDaniel LLMs don't give consistent answers because 1) they're rate limited and the amount of compute spent changes the answer, 2) they have a 'temperature' parameter which is effectively just RNG when selecting from the top token candidates, and 3) every single character you type creates a completely new input, so something as simple as leaving out a question mark will potentially get a different answer
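The 'temperature' parameter mentioned in point 2 works roughly like this: logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution (near-deterministic picks) and high ones flatten it (more randomness). A small illustrative sketch, not any vendor's actual sampler; the function name and `rng` hook are made up:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.random):
    # Scale logits by 1/temperature: low T sharpens the distribution,
    # high T flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample a token index from the resulting distribution.
    r, acc = rng(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

At temperature near zero this almost always returns the highest-logit token; crank the temperature up and lower-ranked tokens start getting picked, which is the "RNG" the comment is talking about.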
1:44 Devin... Devin... What happened to the open-source alternative Devika? I don't see any updates there anymore although it looked so good.
It's so fun to see code report ❤❤❤
74% might sound like a lot to a non-technical person, but for those who know what an SLA is and how hard it is to go from 99.9 to 99.99, 74% is not even worth looking at. Though I have doubts that LLM models will ever reach 99%
Right, being 95 percent accurate in your compute is terrible for most things. Imagine interpreting 1/20 of the words you speak and hear wrong while not even knowing they were wrong. Errors would compound all over.
@@peterhorton9063 I'm sure it wouldn't be too bad. I suspect that most people would probably understand you just pineapple.
@@egodreas For people, yeah, but computers for sure need it to be accurate for them to work and solve problems.
Lil' bro, real human workers ain't pulling a 99.99 success rate. What are you yapping about?
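The compounding-error point in this thread is easy to put numbers on: if each independent step is right 95% of the time, the chance that a whole chain of steps is right decays geometrically.

```python
# If each step (word, token, query) is independently correct 95% of the
# time, the probability that an n-step chain is entirely correct is
# 0.95 ** n -- it collapses fast as chains get longer.
per_step_accuracy = 0.95
for n in (1, 5, 20, 100):
    chain_accuracy = per_step_accuracy ** n
    print(f"{n:4d} steps: {chain_accuracy:.1%} fully-correct chains")
# 20 steps already drops below 36%, and 100 steps is under 1%.
```

Which is why per-step accuracy in the mid-90s, impressive as it sounds, is nowhere near the four-nines reliability bar the SLA comment is pointing at.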