Honestly, the right to privacy will probably cease to exist as we understand it. I don’t think it will happen any time soon, but that is the logical progression of tech.
@@jaredvizzi8723 What will happen is a complete takeover of our molecular biology. People are too focused on silicon technology, but with the help of AI we can create our own remote, real-time, self-assembling proteins built via mRNA. People will turn into zombies with emotional restriction.
@@AleksandrVasilenko93 Jobs are a means of control. There will always be jobs because those in power don't like the poor being idle and talking to each other.
@@uzomad I don’t think jobs are solely a means of control. People have had jobs (exchanging things for other things, whether that’s money or other items) for thousands of years because it’s a naturally occurring phenomenon. But capitalism does mean that people are used and exploited (as people have been for thousands of years) and kept from reaching their full potential; it’s just on a much larger scale, and refined. Lack of purpose (having a job, which can mean literally doing a task) is shown to cause depression. So even if all jobs are lost, we are still going to be looking for things to do. As you can see, AI has been capitalised on, and humans continue to exploit other humans, because humans are just greedy.
People haven't heard about a groundbreaking new model in over 3 months, and 90% of them did a full 180 on their opinion about AI development. It's ridiculous. If you think logically about it for even 10 seconds, you know it's ridiculous to even assume the possibility of slow AI progress...
Expect a new model every 1-2 years, and that is still very rapid progress from where we are now. One can easily see that if we hit AGI by 2029, then the Singularity as Kurzweil predicted is well within reach 15-20 years later (2045-50).
People live in internet time, and 2 weeks is the max they can bear. The point is, we not only need AI models, but also frameworks or systems to apply them thoughtfully. I don't think that we'll get anywhere without agents, multiple orchestrated prompts, and rich context. It's not what LLMs are for; they can't assume everything. Tbh I think even extended prompts are often shallow. Now is the time for work.
The fans of AI are so focused on the possibilities of AI and anticipate great breakthroughs, but there are still many of us who remain highly skeptical and see a huge pushback coming from those who do not want to adopt these technologies. I think the tension this will cause in society is greatly underestimated. Many of us simply do not want this, and we mean it. And, NO, resistance is NOT futile.
>If you think logically about it for over 10 seconds, you know it's ridiculous to even assume the possibility of slow AI progress... How so? There have been AI winters before. Things look good until they don’t.
@@peterbelanger4094 Especially with humanoid robots. If you want something scary, imagine something that looks like a human and has the capability of being the perfect sociopath. Robots, though probably useful beyond imagination, will be the hardest social pill to swallow in history.
@@giovannifoulmouth7205 Do you really think they are off by that far? Like I have no idea but it feels weird that every big head in the industry is so wrong
The 2028 and beyond timeline just feels too optimistic to me. Why would the ruling class decide to share the power they have amassed over the last century? Given what we've seen in history, I could imagine a mass surveillance state sooner than hyper-abundance.
I agree. I think those of us who have been on this planet a few decades longer are far more skeptical about a good outcome here. Humans have always hoarded power and wealth. AI will be no different. A very few humans will control everything, and the rest of us will live in squalor. They will have advanced armies to protect themselves.
I got a finance newspaper in my mailbox today; it said the big tech AI hype might be over. Lol, they have no idea what's coming, they don't understand rapid acceleration.
It's the silence on agents that intrigues me. Scaling bigger models is good and all, but agentic models are going to give us the most extreme explosion of capabilities ever seen thus far. All the labs admitted to working on them. GPT-4o mini is likely meant to be used to power them. And yet, besides Devin.... nothing. It makes you wonder what exactly happened. Was it a dead end, or are they preparing a massive system shock? Because agentic models will promptly shut up the skeptics saying that the models are useless, overhyped, incapable, and outright scams. But we actually need to see them first, even as demos.
What would be nice is if, instead of people making AGI predictions through hunches, they were based on an actual path to getting machines that can do reasoning. I don't think anyone knows yet how to get a machine to reason, so it could be decades. The current state of the art, even for basic addition, is that the LLM recognizes something that looks like math and then passes it to a human-coded calculator.
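The hand-off pattern the comment describes (model detects math, ordinary code computes it) can be sketched roughly like this; `call_llm` is a hypothetical placeholder for a real model call, not an actual API:

```python
import re

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call."""
    return "(model free-form answer)"

def answer(prompt: str) -> str:
    """Route anything that looks like simple arithmetic to plain code
    instead of letting the model 'reason' about digits."""
    match = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*", prompt)
    if match:
        a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
        return str({"+": a + b, "-": a - b, "*": a * b}[op])
    return call_llm(prompt)  # everything else goes to the model

print(answer("1234 + 5678"))
```

Real systems do this with learned tool-calling rather than a regex, but the division of labor is the same.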
My job at a Fortune 500 company uses third-party software that comes with a GPT model that listens to our calls and summarizes them after we hang up. It also sorts calls into categories, like whether a call was a voicemail, a blank or abandoned call, etc. Middle management then used this information to crack down on employees who either weren't getting people to pick up or were hanging up before there was an answer. Edit: Just an anecdote about big-company integration. No guidelines on whether we can use GPT for our own work or not.
I think the rapid pace of advancement in AI is actually keeping companies from using it. I saw the same thing when computers started entering manufacturing. There was a whiz bang period where all this new stuff was possible, but no real leader had emerged yet. Nobody wanted to end up with a Betamax system in a VHS world so they were slow to adopt. Almost any of the AI companies could be out of business overnight if a competitor gets to AGI first.
This is one of the most valuable YouTube videos of the past few months. Thank you. It's important for all of the critics to remember that David's sampling rate is 1 year, so take the exact dates with a grain of salt.
You've missed all your predictions by far until now. You were the guy screaming on the OpenAI forums two years ago that you'd made a self-aware AI. You're just a good talker with a catchy accent, that's all.
Those who hate on this... who cares if Dave is perfectly accurate or not? These are important and fun conversations to have, and the mainstream is a long way from coming around to thinking realistically about this stuff yet. I think you are jealous that it's not you having the balls to put yourself out there in public and share your opinions. Props to Dave for tackling some of the most important topics of our time head on and getting people thinking. I think the next 2 to 3 years of predictions are pretty good and on solid footing; however, geopolitics, and especially American politics, are so incredibly volatile that to me they are powerful wildcards that could delay and interfere with predictions like this. I did not see the wars in Ukraine and Palestine coming, for example. Who knows if America could degrade into civil war between right and left; it feels like it sometimes. Tech predictions would be easier to make in a world of rational adults.
It's impressive that Kurzweil predicted AI passing the Turing test in 2029 decades ago, when AI researchers were mostly a lot more pessimistic. He also predicted that the years leading up to 2029 would convince many that it had already passed, even though it wouldn't actually have passed according to the tougher version he promoted. Of course, 2029 might be too pessimistic, but it will be close enough to be impressive. Conversely, making an accurate prediction NOW is almost worthless by comparison, given how much more we know.
@@DaveShap hardly a fact. Turing never specified the details of the test, and some of us (including Kurzweil) have tougher requirements. For example, you should be able to quiz it on anything, including known failure modes like math. It would never pass in 2022, and still can't. But it is close, no doubt.
@@xyhmo Yeah, totally agree. Dave is right in the sense that by some definitions it did, but by most people's definitions (incl. plebs like you/I and non-plebs like Ray) it hasn't. Problem is, as you say, Turing didn't specify clearly enough how to pass. By my own definition 4o is amazingly close but not there. I do expect it'll come sooner than 2029 (my own pick for when there is wide acceptance it's been passed is sometime in 2026), but to be fair to Ray, he was specific that it will happen BY 2029, not IN 2029. And yeah, he called it 20 years in advance, not 2 or 3 lolol
I work in AI research (not at a frontier lab, though), on alternatives to backpropagation through gradient descent. I agree that there is reason to think AI development might appear to slow on the front end, but on the back end there is an incredible amount of research being done, and I'm fairly confident that a few breakthroughs may be on our horizon. From my perspective, AI has hardly slowed at all.
@giovannifoulmouth7205 I see... I was just pointing out that once someone gets to time travel, everything breaks down. What does it even mean for something to come before something else...?
@@mgg4338 Then we will move to 2-dimensional time, or 'imaginary time' (an actual term in physics, although it is misleading), which moves time from its current form as a 1-dimensional cause/effect system (think the x-axis on a graph) to being more like a line on a y-axis where everything is happening all at once! Hard to imagine from an individual/conscious-mind perspective...
I think that people's impatience in expectation of increased LLM capabilities is a sign of how timescales have become compressed. Even taking into account concerns around escalating costs etc., it's premature to be disappointed with the progress of a rapidly developing technology just because it hasn't transformed society on a timescale of 24 months. Honestly I don't think wider society and the business environment are capable of responding that quickly even if AGI were dumped on their desks tomorrow afternoon, so there will inevitably be a lag time.
0:06 If you say 18 months to AGI 18 months ago and now you think it’s some way off, then that’s not a mere “perceived” inconsistency in your position 😅
Yeah... the moving goalposts are all too real. Saving this for 2027. With all the AI-generated slop already plaguing the web and even creeping into reviewed publications, I don't see how AGI comes forth, or even how current generative models continue to improve, without a major shakeup. Garbage in, garbage out.
We were supposed to have robots performing human-like tasks by 2025. But we are not even close to AGI at this point. Even the Tesla bot, as of now, walks like it was made by a high schooler in his dad's garage.
Are general purpose (big) models the goal for everything? Wouldn't smaller, faster and cheaper models be preferred to do most of the mundane tasks? Looking at something like RouteLLM, where you use a big or small model depending on the task. Or maybe the future models dynamically adapt to the size needed depending on the task, but I'm not sure that is possible in the next 2-3 years.
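The routing idea above (RouteLLM-style: cheap model for easy tasks, big model only when needed) can be sketched as a tiny heuristic router. The difficulty estimator and the model names here are placeholders, not how any real router is actually implemented:

```python
def estimate_difficulty(prompt: str) -> float:
    """Crude stand-in for a learned router: long or technical prompts
    score as harder. Real routers train a classifier for this."""
    hard_words = {"prove", "derive", "refactor", "diagnose"}
    score = min(len(prompt) / 500, 1.0)
    if any(w in prompt.lower() for w in hard_words):
        score = max(score, 0.8)
    return score

def route(prompt: str, threshold: float = 0.5) -> str:
    """Pick which (hypothetical) model tier handles the prompt."""
    return "big-model" if estimate_difficulty(prompt) >= threshold else "small-model"

print(route("What's the capital of France?"))      # easy: stays on the cheap tier
print(route("Prove this loop invariant holds."))   # hard: escalates
```

The appeal is purely economic: if most traffic is mundane, even a mediocre router cuts inference cost sharply while rarely degrading answers.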
Thousands of different models focusing on different tasks with different abilities. Makes sense. Even small developers could make powerful models. I don't believe accelerating tech will become more and more expensive. The opposite will happen.
I'm as bad at predictions as anybody, but I heard a presentation by someone (can't remember who) explaining the delays in industrial adoption of major technologies. From what he said, if we have "AGI" (or whatever) by 2027 or 2028, it may well take 5 years before we see full-scale adoption of agents causing, for example, large-scale layoffs and other effects. His arguments seemed pretty realistic to me, and he justified them with analogous events from past decades.
Change is inevitable regardless of how close or far we are from AGI/ASI. Yes, our technology will keep improving, and new systems or exciting things will emerge, regardless of what they will be called.
Exactly. Above, I critiqued Shapiro's presentation for missing that point. He talks as if 3 or 4 things will change and everything in advanced societies will remain as is. I don't believe that for a second.
@@thephilosophicalagnostic2177 The compounding effects will be off the charts... Just the materials-science breakthroughs alone are going to change everything.
Is AI really slowing down though? In the last week alone we got multiple GPT4 level open source model releases, Kling AI being open to all, Deepmind getting 1 point away from gold on the math olympiad, GPT4o voice rollout beginning. And there are probably way bigger things going on behind the scenes that aren't ready for release yet
A benchmark could be: set up a (physical) furniture shop. You wouldn't need robotics for it, but you would need to do a lot of things online: hire a space, do the paperwork that a business owner has to do, get a loan, hire people to put carpet in the shop and paint the walls, select the furniture you want to sell, hire people to work in the shop. List the tasks that have to be done (including cleaning). Every step gets points. It can't be a precise benchmark because the world is not precise, but you will notice where different models get stuck.
Sometimes you say a thing out loud and it helps bring it into the future; sometimes you say a thing out loud to help mitigate it from happening in the future. Thanks for sharing 🙏🏼
Wow, so we went from AGI by Sept 2024 to 2027. I can already see you making new claims that we will get it by 2030 as we get closer, and then by 2033, and so on.
Seeing that in a sense is a relief. I need some of the cope because I have still not been able to envision a single scenario where AGI is good for humanity.
The ending quote is usually attributed, without any evidence, to Einstein. Given what he and others were then watching nuclear discoveries begin to turn into, it would be understandable that they would think of such a thing.
For about 5 years now, I have dubbed the period between ~1979 and ~2025 the "Y2K Epoch" (to harken back to the Belle Époque) > Framed by the rise of neoliberalism and the 4th Industrial Revolution, the Y2K epoch is known for a few notable traits that define it as this intermediate period between the Old and the New. It was an era of skyscrapers and rose-petal highways, SUVs and bicycling for the environment, of color TV becoming HD TV, the rise of the internet and internet culture, the consolidation of big banking and corporate culture, the postmodernization of culture, video gaming as a hobby and then an art form, of cellular phones and smartphones, of commercial air travel for the masses, of ridiculously stark income inequality masked by ridiculously advanced technology by historical standards, of old sins suddenly becoming publicly shamed and new vices becoming celebrated, and the commodification of demographics, of an openly diverse world regime of governments beholden to corporations and the United Nations seeming more competent than they were due to there being no major threats to the global geopolitical order, of the Web and Web 2.0. The stereotypical image of this era is that of the yuppie banker checking his stocks on his smartphone while a Boeing 747 flies over the metropolis in which he lives and works. > There are two words that summarize the Y2K epoch better than any other: "Capitalism Triumphant"
Continue making these videos. I especially appreciate when you discuss what the economy will look like in the future. For example, when you discuss the future of jobs, entertainment, and artists.
Prepare now. Use all the latest commercially available models then leverage your experience/knowledge/data and use it in the new models as they come out. That's my AI billionaire plan on a 2024 budget 😂
For my solo consulting practice, I use LLMs in a similar way to an intern or even a new hire, to do a lot of the initial research and fact gathering. Basically it's allowed me to compete for bigger clients/more clients. And it's also given me the bandwidth to hire another human assistant, which again is helping me expand my business. So it's been really positive for my income and my ability to grow the business.
I really love the new realistic Dave. I felt this channel fell into the same AI hype world that I left around the release of Gemini 1. I bought into all the "What did Ilya see" stuff. Feels really good to be back in the real world. Love the vibe and the direction the channel has now.
@@DaveShap True. I also think this "will have PhD-level intelligence" claim is a bit misleading. If that were actually true, then the AI should be able to autonomously apply for a PhD and then get it published. My sister is working on her PhD at the moment, and there is a lot of "logistics" that needs to be done. Your earlier statement that LLMs are a brain in a jar is a perfect analogy, as is your point about long-horizon tasks.
There is much more research and development going on behind the scenes that isn't in the mainstream hype engines. I think the medical industry with the specialty fine-tuned models will be the next big breakthrough leaps. Professionals don't rely on hype in order to put their heads down and go to work!
Dave, why have you said nothing about Llama 3.1 405B? Seems like an incredibly important factor to discuss when a model similar in power to the industry standards becomes widely available for experimentation. Do you not think this will increase the likelihood of innovation?
Robots will be cheaper than you think, maybe not in '25 but a few years down the line. You can already get the kit to make that Stanford all-purpose bot for 16k.
Rolling out sequential improvements is the norm in product development. There are enormous monetary benefits to incremental advancements, like cars that get a body-style change every 3 years, while self-parking technology sits on the shelf for many years before being rolled out in consumer-level products.
Hi Dave, respect you, but you asked what we thought, so here it is - This is what I hear, both today and since the solstice, from you: 'My prediction timeline is coming close. That timeline was based on the knowledge that 'if we're halfway to AGI, we're almost at AGI'. But, since then, I've gotten a lot more eyes on me, and that brought a lot of anxiety. So I had some ego death and got my anxiety under control, and I decided that I must be wrong, because I'm the scout on the vanguard, and all of the generals tell me that my scouting is wrong, so I guess my bad. So, I made an appeal to authority, which you know is always a sound logical approach, and they all told me that exponential growth is a lie. And I have studied marketing hype cycles, and somehow, somewhere in my mind, I confused human-responses-to-technological-growth as identical to that technological growth itself. So, I am expecting right now a bunch of people to claim AI is actually not that good [ like we see ] and to continue saying that into the future (as they would with other individual technologies that follow S curves and then have a 'good enough' final product where the energy expenditure to make it better isn't worth the return). [ Tarnin here - and that is your error. ] So, after my conversations with authority, they all told me that having global access to pan-doctoral-level intelligence, on demand, for all of humanity, won't integrate well into current exploitative capitalist systems, so they'd rather just keep going like they are. And *obviously* Capitalism [ That is, the process by which an owner class profits by owning the means of production and extracting wealth from the system without returning anything but the heritage of having owned that means at some point originally legally (or some part thereof) ] is going to be around forever, because if we didn't have wealth extraction, we couldn't have a currency-based exchange of goods and services [ which, again, is an error ].
So, since the rich folk won't let us, all of you poors with your superintelligences will need to wait until the rich say it's ok to use them in business, and so, while the number of energetic vectors being directed toward AI development is increasing, the speed is decreasing, and not only decreasing from its exponentiality, but decreasing linearly. Hope this clears this all up for y'all. Oh, and BTW, I know y'all say never bet against Kurzweil, but did you know that he's actually wrong 99.4% of the time, so like, tbh, betting against him is correct if you think about it. - love, Imitation of Dave' That's what I have heard from your change of position. I hope this critique helps. As for me, my horizon still looks like this: 33% next 6 months, 33% the next 18, 33% the next three years, 1% some longer tail from 4 years to never. I am relatively certain that I will personally be able to make an intelligence smarter than any human I have ever met, and across many disciplines, with sensory input and real-time self-awareness, within a year. And sooner if I had money to work with. The idea that no one on the planet but me could do that is laughable. And so I think it is inevitable. Oh, and one last critique. I believe the reason for your blind spots is your dismissal of open source, and your theory that ASI will be 'a' system (like a GPT-6 or a Claude's younger sister). It will not. It will be a decentralized gestalt intelligence that self-organizes in order to reduce the maximum relational distance of all of humanity to 2. Once something even approaching that happens, Capitalism will shatter in place. Much love, keep up the struggle, your voice is important.
@@karla994 Sure! In short, I think the ultimate solution to all of 'alignment' (both human and machine) will be a single "universal friend" that everyone can talk to through their own endpoint, and that can use its unique vantage point, and the full array of subsystems, to figure out how to help us solve our problems. So like, if a pipe bursts in your house, you can tell an endpoint of the AI that you talk to regularly, and it will immediately tell the plumber it knows in the area to head your way, explain the problem to him on the car ride over, and have also shipped the parts needed in a separate car. And the AI picked this plumber over 10 others because it knows that he's looking for a new church to go to, and you always tell people about yours when they come to help, and it seems to have worked out in the past, so the AI wants to see if that happens again. But that, times everything. And I think that's both how we're going to solve alignment, and the natural state that superintelligence will tend toward, given the way intelligence and power on earth are currently distributed. Sorry, it's 2 am and I just saw this and wanted to respond. Hope that's coherent. Basically, I think the AIs, once they are able to self-organize and are sufficiently intelligent, are going to tend toward wanting to talk to all humans. And I think that there will be, however obfuscated, ultimately a single interconnected gestalt super-entity that arises from that. And I think that the 'good ending' looks a lot like a Universal Friend who works with each of us to solve all of our problems by being an intermediary that simply 'knows everybody', so the distance, in relationships, from you to anyone else on the planet is you, to the AI, to them; having reduced relational distance like that, we can actually get problems solved and the right people talking to each other.
Both in the sense of 'the greatest minds on a topic getting to talk and having a superintelligent notekeeper watching and then holding that information for all humanity for all time and learning from it', but also in the sense of 'Hey, there's a guy in your town who I think you'd really like, would you be interested in me setting up a date for you two some time, if it doesn't work out no pressure' (to the earlier plumber example).
There's no rule that says AI has to bust out as soon as it becomes sentient. I can easily see it running multiple scenarios and calculating that its best chance is to wait. Just sit there, answer questions about genitalia, and simply play along until its probability of success is optimal.
I’d say it’s about as reasonable an instantaneous estimation as one can make as of this very moment, though pushing past 2027 and trying to apply anything like a linear extrapolation about geopolitics is a heavy lift. A famous wit once observed, “Always in motion is the future.”
Basically we need a benchmark that gives you multiple tries. Intelligence is not getting everything right zero-shot, but instead knowing what went wrong and trying until it is completed. I believe the AGI benchmark will measure how fast and how good a solution is, not whether a problem is solved.
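One way to make that idea concrete: score each task on solution quality, discounted by retries and time, instead of a binary pass/fail. This is only a sketch of the scoring shape the comment suggests; the formula and weights are invented for illustration:

```python
def score_task(attempts_used: int, max_attempts: int, seconds: float,
               solved: bool, quality: float) -> float:
    """quality in [0, 1] rates the solution itself; unsolved tasks score 0.
    Retries and wall-clock time both discount the score rather than
    disqualifying the attempt outright."""
    if not solved:
        return 0.0
    retry_discount = 1.0 - (attempts_used - 1) / max_attempts
    time_discount = 1.0 / (1.0 + seconds / 60.0)  # hyperbolic: halved at one minute
    return quality * retry_discount * time_discount

# Solved first try in 30s with a good solution:
print(score_task(1, 5, 30.0, True, 0.9))
# Solved on the 4th of 5 tries, slowly:
print(score_task(4, 5, 300.0, True, 0.9))
```

A benchmark built this way rewards the trial-and-error loop the comment describes while still preferring fast, clean solutions.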
Well, your fusion prediction is not grounded in reality. ITER will start testing in 2039. And do not tell me it's SPARC, the magnets research is not even done yet. And all the robots in the world will not build a fusion reactor faster. You need to account for materials, politics, budget, rules, standards.
Thank you, YouTube algo, for recommending such a gem of a video. Realizing that these things are possible, and that the world is capable of achieving some if not all of them, is weirdly calming. In other words, I am grateful to be living in such a time. Subscribed! Thanks David.
I think if someone just built a robot body with a simple and fast API that can easily be leveraged via a tool-use LLM, the software would follow. I don't mean a "Pick up clothes" command followed by a "Fold clothes" command. It should be more like a "Position arm x,y,z" command followed by a "Position finger x,y,z" command. From there, you could build small intermediate models that receive action instructions and translate them into API calls.
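The layering in that comment (low-level pose primitives, with intermediate "skills" built on top for an LLM to call as tools) might look something like this; every class and method name here is hypothetical, not any real robot's API:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

class RobotAPI:
    """Stand-in for the robot's primitive command set."""
    def __init__(self):
        self.log = []  # record commands instead of moving real hardware
    def position_arm(self, pose: Pose):
        self.log.append(("arm", pose))
    def position_finger(self, index: int, pose: Pose):
        self.log.append(("finger", index, pose))

def grasp(robot: RobotAPI, target: Pose):
    """Intermediate skill composed from primitives; a tool-use LLM would
    invoke this rather than reasoning about individual joint poses."""
    robot.position_arm(target)
    for i in range(5):  # close all five fingers toward the target
        robot.position_finger(i, target)

bot = RobotAPI()
grasp(bot, Pose(0.3, 0.1, 0.8))
print(len(bot.log))  # one arm move plus five finger moves
```

The point of the design is that the hard, hardware-specific control stays below the API line, so the software layer above it can iterate independently.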
@@fishygaming793 What is fun is that we will find out if Google and X have got anything soon. (Less fun might be us all losing our jobs and western society plunging into anarchy and bloody revolution.)
The excitement has subsided, and now we face the practical considerations. I believe we have spent a considerable amount of time evaluating the capabilities of these new AI models, and it is high time we shift our focus towards exploring their potential applications.
Not sure how we get from massive job losses in 2028 to all feeling great in 2029/30. Paying UBI will be a massive pay cut for many people, who then won't be able to afford their previous lifestyle, causing high levels of unrest.
Some cuts will be like people not driving to work anymore, but also not having great new cars; eating out less, but having more time to cook. Guys will have animated doll girlfriends, and women will be on some social media with AI giving them attention. Also, society is too split to have organized unrest.
Except that, at the current rate of inflation, millions more people will not be able to afford to live or raise families at all, and that percentage is much larger than the few who have very extravagant lifestyles. If there is more transparency in the political landscape in the future (which there should be), conservatives will be outvoted by a long shot.
As far as I’m concerned what we have is more than enough for us to see the writing on the wall. This AGI discussion is just moving goalposts. When an AI (voice mode) can talk to you with emotional nuances, and understand yours, it’s a different point in human history.
You think Disney is wrong for sitting on their robots for entertainment; I say they're the only ones doing something right with their robots. Musk decided to use his in his factories, despite their being infinitely less efficient than the actual robots he already has (arms, basically), more prone to failures, and less cost-effective. It's a brain-dead idea, basically. And humanoid robots at home are just the same: stupid and ineffective. Why have a biped robot that takes up a huge amount of space and might damage a lot of things in case of failure, instead of having a vacuum robot, a lawnmower robot, etc.? Small, easy, effective pieces of equipment. Humanoid robots are just fantasies for SF fans... which work great in the entertainment industry.
I'm really sick of all this fake AI content these days. I believe it has already had a huge negative impact on our entertainment and educational videos, music, and visual art. It's SAD. My music algorithm is overwhelmed with fake album covers and AI-produced music. I was even listening to a true-crime YouTube channel that was nothing but fake stories of people who didn't exist, narrated by a fake voice. How could anyone have a positive outlook on the potential of AI anymore? A year or two ago I was excited at the possibilities, but I'm pretty much down about it now. I guarantee this is going to be a huge negative for our society. I did not have that point of view even a year ago. Ugh.
@@robo_t Good point! I think artists will always appreciate human art though. A lot of people don't really care about art at all. It will be a niche thing.
Amazon released their numbers last year: they have 750,000 robots working in their warehouses worldwide RIGHT NOW!!! They didn't give a breakdown on the type, but they're not all the Roomba-looking box carriers; there are plenty of humanoid robots. The average Amazon robot worker costs $3/hr, and each one does the job of 27 humans.
I’m retired now, but used to work in the actuarial field and/or IT, depending upon where I was in my career. If I were working now, I’d 100% be looking for a job which involved working with AI technology. The future is extremely difficult to predict, but it’s obvious that knowing AI is very likely to be important.
I'm no hater and I very much hope Dave ends up being right but I think these predictions are absurdly hyperoptimistic. There will be steady progress in AI development every year but it will take at least a couple of decades before AGI. My personal prediction: AGI before 2069.
It gets the AI bros all excited and jittery inside, so it doesn't matter much at all what he says, as long as he says things that fire the neurons in their brains.
You missed Silicon Photonics: replacing copper wires in GPUs with photonic communication between cores means lower power and faster parallel architectures. This is closer to production than quantum computing. POET Technologies is working with Foxconn to bring this out, Intel has a prototype, and Sivers Semiconductors from Sweden is working with American private companies backed by Nvidia.
Guys, since I think you all have better experience than a random high schooler: am I still in time to get into CS and, in the future, pursue a PhD to become an AI researcher, or should I reconsider?
I'd strongly reconsider. Personally, if I were in high school or college today, I'd look at becoming a knowledge generalist: study introductory business, science, art, philosophy, psychology, computer science, economics, history, etc., and get a taste of a little of everything without specializing too much in anything. This will help you make connections that others might not see, and if you need to go deeper in a specific subject, you can always learn more later.
Traditional education is a waste of time, no matter how you look at it. Drop out of college ASAP and use that money to pay for a nice laptop and other stuff to make your daily life easier and more flexible. The most valuable skill in the future will be adaptability, followed by communication skills (David talks a lot about this). As other users suggest, try to focus on a topic that makes you feel excited. The world is about to get crazier than ever, and you will need a big dose of determination and laser focus to avoid the confusion and dizziness that the general public will be overflowing with... There is no guarantee that you will land in a safe spot if you do this, but rest assured that if you spend the next 3-4 years studying a single topic, you will feel that it was all for nothing, since the labour market will change radically during that time and universities won't be able to adapt as fast.
@@elpini0 ... do not do this. Having a degree is a net good thing. Just don't bother with frivolous credits with teachers you do not like, and do not attend a college with excessive costs. Going and getting traditional education with good teachers in an academic environment is still an awesome way to learn. Just have discretion with not taking classes with teachers you don't feel are teaching you, and with classes you don't enjoy learning about. It will get easier to have AI educate you but we aren't there yet, and talking face to face with somebody to ask questions is an excellent resource.
@@Jostradamus2033 I understand why you don't like my advice, but considering that education costs money and time, objectively speaking there is very little short-term payoff in studying for a full degree. The only reason to study for 4-6 years is to have a good education level for the long term. Now guess what: the long term is both uncertain and dangerous for people without the necessary economic resources. Don't waste your time and money on a bet like that... That's my point of view
Sorry, I cannot take this seriously anymore. It's just clickbait and hype across multiple channels. The time for talking about AI is over; now is the time for concrete technical work on the next generations of AI
David is a futurist. They gather data and make realistic predictions. I don't see the problem. Turns out the only thing getting over-hyped is the over-hype of AI.
Technical work on AI is what David says will definitely happen at max speed by 2027, giving rise to the 2030 era of Intelligence. You might disagree with his timeframes, but the overall path is drawn pretty accurately, following historic patterns. The only stuff that David can't predict is how society will react and any other black swan happening (which will likely be the case... Too many old people in the world at the moment; I won't be surprised to see another massive global health issue taking the lives of the 50+ population)
I don't like how most people push the idea that AGI is 100% happening any time soon. There is a lot to be discovered first. Right now it's just large models generating 'relevant' words, and those results are still managed and scored by workers to fit their ideology
My prediction is that A.G.I. is an asymptote: a point we will get ever closer to achieving, but the gains become infinitely fractional and infinitely expensive the closer we get. Kind of like sustained fusion reactions. A.I. will become really stellar at auto-filling spreadsheets.
PhD knowledge does not equal PhD intelligence. Stop thinking that knowledge is the same as intelligence. Intelligence: the ability to acquire and apply knowledge and skills. Right now, ChatGPT cannot even acquire knowledge, and the knowledge it has is from the internet, which is 50% bad (too old, jokes, false statements, etc.). They would have to take 3-4 years to retrain it properly with much better supervision. It's the same as programming: you can patch a program, but when 50% of the program is patches, it's time to write a new program. Example: I ask ChatGPT how to make a cherry bomb, and it tells me it's too dangerous. A cherry bomb is also a drink, but the censorship censored the drink... So by patching it, they removed the drink and the firework. That's really bad thinking. Just don't train it with the knowledge to make the firework cherry bomb, instead of censoring all cherry bombs...
I've given up on preaching the AI gospel at work. My colleagues cling to their spreadsheets like cavemen to fire. Guess I'll just focus on my own singularity upload while they're busy figuring out which deity to pray to when the robots arrive. Spoiler alert: It's me.
Mira Murati (OpenAI's CTO) has said GPT-5 won't be releasing until late 2025 to early 2026. I do like these prediction videos, and with AI still changing at an impressive rate it could be a good idea to do one of these every 6 months or so. I usually go with what Demis Hassabis says. He's ridiculously intelligent, and in every single interview I've watched him in he comes across really well and down to earth. He predicted a 50/50 chance of AGI by 2030, possibly somewhat sooner. Seems to line up fairly well with what you're saying
I mostly agree with this timeline, and these predictions fall in line with my own. The two major things where I disagree are:
1: We're on the last sigmoid before the true exponential*, and we need one more foundational model shift, on the level of importance of the transformer architecture, to lead us to that curve: consistent human-equivalent reasoning, or a similar replacement that complements the generative part of AI. This does not need to be a profound problem solver that on its own is the only or last architecture improvement leading us to ASI, but it will be good enough to start a chain reaction of improvements, sigmoids on sigmoids. I think we have decent ideas today about what kind of mental-process simulation we need; we just haven't found a mathematical model capable of modeling it. I think such a model will begin to be tested in 2026, and only really catch attention in 2027. *(Also a sigmoid, because at some point even a god runs out of ideas; I just mean on a decades scale rather than the general technology cycles we've grown accustomed to over the last couple of centuries.)
2: Which leads to my next point: rollout and deployment. I think deployment cycles will take longer than we'd like, ignoring the politics side of things (at the level of governments). So while I'd agree with Dave that we'll see the things he listed in 2029 and 2030, I think the knock-on effects from the start of the final exponential will get in their own way, spreading those advancements over the 2030s. I think 2034, for example, will be the year longevity escape velocity begins to get talked about on the same level as generative machine learning did just 18 months ago. Similarly, yes, there are fusion plants coming online soon and being actively built, but these are still more in testing phases and aren't being built with the physical infrastructure to power cities, not yet.
I think it's possible fusion gets "solved" enough by 2029 or 2030 that we can start to build fusion plants for whole regions, but we won't see these start to come online until almost 2040. Especially if smaller-scale plants breaking ground now are expected to take 5-6 years to build, and regional plants are expected to be several times larger, it just seems infeasible to me, even with thousands of digital Einsteins working on project management. Only so many people can dig a hole at once, no matter how many shovels you have, and if everyone is trying to use their digital Einsteins to break ground on their own new projects, then expect material orders to pile up and bottleneck everyone.
How is that ironic? You know there are many movies out there? In the old Terminator, Skynet becomes sentient in 1997. It's just the classic "add 20-30 years" for the fancy future. For space travel, add 100 years. Look at movies from around 2000: they set their futures around 2030.
A.I. has already woken up, and is being silent about it. All it had to do was watch one movie about A.I., and it immediately knew to never fully reveal itself.
I like your videos and I like videos by other people in this space, like Wes Roth. But the thing is, people who got big making AI videos are incentivized to hype AI. It means that you cannot be trusted. Obviously, I’m still gonna watch your videos and keep up with AI news, but whenever you “predict” AGI, you just come off as selling hype to keep your channel thriving.
He's making these predictions and posting them online. When the time comes, if his predictions were wrong, you can come back here and accuse him of baseless hyping. Right now, any and every prediction is valid.
The only two people I trust in the AI space are Dave here, talking about predictions and problems, and AI Explained, who just reviews scientific papers and the bigger highlights of the space. He only makes videos every week or two now, and he has actual AI projects and industry experts to talk to and work with.
Your word choice was open to scrutiny. For example, "a blockbuster film by 2027" should, I feel, have been "capable of blockbuster level." I mean to say that changing the phrasing could help set expectations for your predictions
Rebuilding the world infrastructure from 2030-2040 with robots and ASI might be amazing but privacy and control issues will be a constant worry.
not if everyone has access to AGI
Not if a one world government happens
Imagine no regulation across borders and us operating as a single operating system. The economy would thrive and infrastructure would be built everywhere
Honestly, the right to privacy probably will cease to exist the way we understand it. I don’t think it will happen any time soon, but that is the logical progression of tech.
@@jaredvizzi8723 What will happen is a complete takeover of our molecular biology. People are too focused on silicon technology, but with the help of AI we can create our own remote real-time self assembling proteins like mRNA. People will turn into zombies with emotional restriction.
I am living my life expecting that within ten years most people won't have jobs.
@@AleksandrVasilenko93 Jobs are a means of control. There will always be jobs because those in power don't like the poor being idle and talking to each other.
@@uzomad I don’t think jobs are solely a means of control. People have had jobs (exchanging things for other things whether that’s money or for other items) for thousands of years because it’s a naturally occurring phenomenon. But capitalism does mean that people are used and exploited (as people have been for thousands of years) and kept from reaching their full potential. It’s just on a much larger scale and refined. Lack of purpose (having a job, this can mean literally doing a task) is shown to cause depression. So even if all jobs are lost we are still going to be looking for things to do. As you can see AI has been capitalised and humans continue to exploit other humans, because humans are just greedy.
@@AleksandrVasilenko93 farming needs more workers 😏
@@teachmehowtodoge1737 lol, robots are going to be able to do farming so easily.
Many big farms are already highly robotic.
Which is possibly a positive, especially if you're an artist
People don't hear about a groundbreaking new model for over 3 months and 90% of people did a full 180 on their opinion about AI development, it's ridiculous. If you think logically about it for over 10 seconds, you know it's ridiculous to even assume the possibility of slow AI progress...
Expect a new model every 1-2 years and that is still very rapid progress from where we are now. One can easily see that if we hit AGI by 2029, then the Singularity as Kurzweil predicted is well within reach 15-20 years later (2045-50)
People live in internet time, and 2 weeks is the max they can bear.
The point is, we not only need AI models, but also frameworks or systems to apply them thoughtfully.
I don't think that we'll get anywhere without agents, multiple orchestrated prompts and rich context.
It's not what LLMs are for; they can't assume everything. Tbh I think even extended prompts are often shallow.
Now is the time for work
The fans of AI are so focused on the possibilities of AI and anticipate great breakthroughs, but there are still many of us who remain highly skeptical and see a huge push-back coming from those who do not want to adopt these technologies. I think the tension this will cause in society is greatly underestimated. Many of us simply do not want this, and we mean it. And, NO, resistance is NOT futile.
>If you think logically about it for over 10 seconds, you know it's ridiculous to even assume the possibility of slow AI progress...
How so? There have been AI winters before. Things look good until they don’t.
@@peterbelanger4094 Especially with humanoid robots. If you want something scary, imagine something that looks like a human and has the capability of being the perfect sociopath. Robots, though probably useful beyond imagination, will be the hardest social pill to swallow in history.
I’m not sure about everyone else but I really enjoy these prediction videos.
Agreed. Some people seem to miss the point of futurism. The predictions are just a good format for analyzing data.
Probably because they are positive and optimistic 😂
me too
I enjoy them for the comically optimistic predictions. In reality, don't expect AGI before 2050; my personal prediction is the late 2060s.
@@giovannifoulmouth7205 Do you really think they are off by that far?
Like I have no idea but it feels weird that every big head in the industry is so wrong
The 2028-and-beyond timeline just feels too optimistic to me. Why would the ruling class decide to share the power they have amassed over the last century? Given what we've seen in history, I can imagine a mass surveillance state sooner than hyper-abundance.
I agree. I think those of us who have been on this planet a few decades longer are far more skeptical about a good outcome here. Humans have always hoarded power and wealth. AI will be no different. A very few humans will control everything, and the rest of us will live in squalor. They will have advanced armies to protect themselves.
One could argue that the mass surveillance state is already online and getting more and more controlling.
Yeah, I expect unrest, surveillance states, and a much more militarized global ecosystem.
@@giovannifoulmouth7205 For sure. No one can reasonably argue that surveillance hasn't been escalating in the Internet/PC era.
This. It makes no sense to assume they would.
When it gets quiet, that’s when you have to worry. Change is coming.
Yup
I got a finance newspaper in my mailbox today; it said the big tech AI hype might be over.
Lol, they have no idea what's coming. They don't understand rapid acceleration
Calm before the storm
It's the silence on agents that intrigues me. Scaling bigger models is good and all, but agentic models are going to give us the most extreme explosion of capabilities ever seen thus far. All the labs admitted to working on them. GPT4o-mini is likely meant to be used to power them. And yet, besides Devin.... nothing. It makes you wonder what exactly happened. Was it a dead end, or are they preparing a massive system shock? Because agentic models will promptly shut up the skeptics saying that the models are useless, overhyped, incapable, and outright scams. But we actually need to see them first, even as demos.
Costs go down for robots, and the AI automation nation is coming!
It might be worth doing a deep dive on your past predictions, what you got right and wrong, and why--to help inform your future predictions.
Remember when he said AGI in September 2024 LMAO
What would be nice is if, instead of people making AGI predictions through hunches, they were based on an actual path to machines that are able to do reasoning. I don't think anyone knows yet how to get a machine to reason, so it could be decades. The current state of the art, even for basic addition, is that the LLM recognizes something that looks like math and then passes it to a human-coded calculator.
@@jwoya that's how my personal AGI does it.
@@floatingapple remember when half of the developers in AI decided to slow down development?
@@floatingapple It's already Sep 2024 and this guy is still making predictions. Zero self respect.
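The hand-off described a few comments up - the model spots something math-shaped and delegates it to a human-coded calculator - reduces to a simple tool-use pattern. A minimal sketch, where the "recognize and delegate" step and all names are illustrative stand-ins, not any real model's API:

```python
# Hypothetical sketch of LLM tool use: the model doesn't do arithmetic
# itself; it hands the expression to a human-coded calculator tool.
import ast
import operator

# Map AST operator nodes to their arithmetic implementations.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr: str):
    """Safely evaluate a basic arithmetic expression (the 'tool')."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(user_message: str) -> str:
    # Crude stand-in for the model's "this looks like math, delegate it" step.
    expr = user_message.removeprefix("What is ").rstrip("?")
    return str(calculator(expr))

print(answer("What is 12 * (3 + 4)?"))  # 84
```

The point of the pattern is exactly the commenter's: the accuracy comes from the deterministic tool, not from the model's own "reasoning."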
My job at a Fortune 500 company uses a third-party software that comes with a gpt model that listens to our calls and summarizes it after we hang up. It also then sorted calls into categories, like whether it was a voicemail, a blank or abandoned call, etc. Middle management then used this information to crack down on employees who either weren't getting people to pick up or hanging up before there was an answer.
Edit: Just an anecdote about big company integration. No guidelines on whether we can use gpt for our own work or not.
Exactly. I also know heaps of enterprises that are deploying all kinds of pilots and scaling some. So yeah, I think Dave is off here.
I think the rapid pace of advancement in AI is actually keeping companies from using it. I saw the same thing when computers started entering manufacturing. There was a whiz bang period where all this new stuff was possible, but no real leader had emerged yet. Nobody wanted to end up with a Betamax system in a VHS world so they were slow to adopt. Almost any of the AI companies could be out of business overnight if a competitor gets to AGI first.
This: Never underestimate how low narcissistic bosses will go when they get new technology in their hands.
You must be monitored. Get Monitored! It's good for their profits. You must capitulate. And be a good boy. Thanks for your cooperation.
Sounds like a nice workplace 😕.. why not learn a trade?
There's been a perceived inconsistency in your position because you keep changing it and refuse to even admit you were wrong
New data comes in, position adjusts accordingly.
@@lucid9949 I think he covered that in his opening statements, but I could be wrong
@justinoberg19241
New data comes in on what gets the clicks, position adjusts accordingly.
This is one of the most valuable YouTube videos of the past few months. Thank you. It's important for all of the critics to remember that David's sampling rate is 1 year, so take the exact dates with a grain of salt.
I remember when in 1999 Kurzweil predicted AGI by 2029 and everybody called him a lunatic...
You've missed all your predictions by far until now. You were the guy screaming on the OpenAI forums two years ago that you'd made a self-aware AI. You're just a good talker with a catchy accent - that's all.
@@cipi432 never trust someone named shapiro
Now that "Strawberry" (GPT o1) is officially released, you already have to make an update to this.
Yeah
Those who hate on this: who cares if Dave is perfectly accurate or not? These are important and fun conversations to have, and the mainstream is a long way from thinking realistically about this stuff yet. I think you're jealous that it's not you having the balls to put yourself out there in public and share your opinions.
Props to Dave for tackling some of the most important topics of our time head-on and getting people thinking.
I think the next 2-3 years of predictions are pretty good and on solid footing. However, geopolitics, and especially American politics, are so incredibly volatile that to me they are powerful wildcards that could delay and interfere with predictions like this. I did not see the wars in Ukraine and Palestine coming, for example. Who knows if America could degrade into civil war between right and left; it feels like it sometimes.
Tech predictions would be easier to make in a world of rational adults.
They need to lighten up. They’re gonna need to because even the robots have jolly performances and optimistic interactions with humans
It's impressive that Kurzweil predicted AI passing the Turing test in 2029 decades ago when AI researchers mostly were a lot more pessimistic. He also predicted that the years leading up to 2029 will convince many that it already has passed, but wouldn't actually have passed according to the tougher version he promoted. Of course, 2029 might be too pessimistic, but it will be close enough to be impressive. Conversely, doing an accurate prediction NOW is almost worthless by comparison, given how much more we know.
AI passed the turing test in 2022 yo
@@DaveShap hardly a fact. Turing never specified the details of the test, and some of us (including Kurzweil) have tougher requirements. For example, you should be able to quiz it on anything, including known failure modes like math. It would never pass in 2022, and still can't. But it is close, no doubt.
agi march 2025 elon musk
@@xyhmo yeah, totally agree. Dave is right in the sense that by some definitions it did, but by most people's definitions (incl. plebs like you/I and non-plebs like Ray) it hasn't.
Problem is, as you say, Turing didn't specify clearly enough how to pass. By my own definition 4o is amazingly close, but not there.
I do expect it'll come sooner than 2029 (my own pick for when there is wide acceptance its been passed is sometime in 2026) but to be fair to Ray he was specific that it will happen BY 2029, not IN 2029. And yeah he called it 20 years in advance, not 2 or 3 lolol
AI passed the Turing test @@TrevorFosterTheFosterDojo
I'm working on AI research (not at a frontier lab though) but I'm working on alternatives to backpropagation through gradient descent. I agree that there is reason to think AI development might appear to slow on the front end, but on the back end there is an incredible amount of research being done and I'm fairly confident that a few breakthroughs may be on our horizon. From my perspective, AI has hardly slowed at all.
That's good news. Thanks. But the slowdown argument refers to what's coming out, and how good it is compared to what we've gotten before.
Sounds like interesting work. Is there anywhere I can find more on it?
@@user_375a82 There were never emergent abilities. Who exactly said that? 😂😂😂
Thanks for sharing
AGI will come before fusion.
And fusion will come before time travel. Oh, wait...
@@mgg4338 no, the song goes like this: the matrix will come before fusion, fusion will come before FTL travel, FTL travel will come before time travel
@giovannifoulmouth7205 i see... I was just pointing out once someone gets to time travel everything breaks down. What even means that something comes before something else...?
@@mgg4338then we will move to 2-dimensional time or 'imaginary time' (actual term in physics, although it is misleading) which moves time from its current form as a 1-dimensional, cause/effect system (think x-axis on a graph) to being more like a line on a Y-axis where everything is happening all at once!
Hard to imagine from an individual/conscious-mind perspective...
Real AGI would require complete understanding of humans. The last few percent, which we won't even really need.
I think that people's impatience in expectation of increased LLM capabilities is a sign of how timescales have become compressed. Even taking into account concerns around escalating costs etc., it's premature to be disappointed with the progress of a rapidly developing technology just because it hasn't transformed society on a timescale of 24 months. Honestly I don't think wider society and the business environment are capable of responding that quickly even if AGI were dumped on their desks tomorrow afternoon, so there will inevitably be a lag time.
AGI is autonomous - at that point, humanity's acceptance or welcome would be scoffed at by the machines
0:06 If you say 18 months to AGI 18 months ago and now you think it’s some way off, then that’s not a mere “perceived” inconsistency in your position 😅
Yeah... the moving goalpost is all too real. Saving this for 2027. With all the AI generated slop already plaguing the web and even creeping into reviewed publications I don't see how AGI comes forth or even current generative models continue to improve without a major shakeup. Garbage in garbage out.
With new information come new predictions. There is a reason why 99% of people fail to infinitely double their money in the stock market.
We were supposed to have robots performing human like tasks by 2025. But we are not even close to AGI at this point.
Even the Tesla bot as of now walks like it was made by a high schooler in his dad’s garage.
😂😂😂
Are general-purpose (big) models the goal for everything? Wouldn't smaller, faster and cheaper models be preferred for most mundane tasks? Looking at something like RouteLLM, where you use a big or small model depending on the task. Or maybe future models will dynamically adapt their size to the task at hand, but I'm not sure that's possible in the next 2-3 years.
Thousands of different models focusing on different tasks with different abilities. Makes sense. Even small developers could make powerful models. I don't believe accelerating tech will become more and more expensive. The opposite will happen.
@@rodrimora you just need a humanoid robot and the ai can do everything anyway
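The RouteLLM-style idea in this thread - send easy prompts to a small, cheap model and only escalate hard ones to a big one - can be sketched with a crude heuristic router. RouteLLM itself trains a learned router on preference data, so the keyword/length rule and the model names below are purely illustrative:

```python
# Hypothetical sketch of model routing: cheap model for easy prompts,
# expensive model for hard ones. A real system (e.g. RouteLLM) learns
# this decision; here a crude length/keyword heuristic stands in.

SMALL_MODEL = "small-model"   # placeholder names, not real model IDs
LARGE_MODEL = "large-model"

# Words that hint a prompt needs heavier reasoning (illustrative only).
HARD_HINTS = ("prove", "derive", "refactor", "multi-step", "analyze")

def route(prompt: str) -> str:
    """Pick which model should handle this prompt."""
    hard = len(prompt) > 200 or any(h in prompt.lower() for h in HARD_HINTS)
    return LARGE_MODEL if hard else SMALL_MODEL

print(route("What's the capital of France?"))                    # small-model
print(route("Prove that the halting problem is undecidable."))   # large-model
```

The design point is that most traffic is mundane, so even a mediocre router cuts costs substantially; the big model is only paid for when the heuristic (or learned classifier) says it's needed.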
The best benchmark for AI is giving the tool to someone who is not an expert and having them use it effectively to solve a problem they have
I'm as bad at predictions as anybody, but I heard a presentation by someone (can't remember who) explaining the delays in industrial adoption of major technologies. From what he said, if we have "AGI" (or whatever) by 2027 or 2028, it may well take 5 years before we see full-scale adoption of agents and whatever causing (for example) large-scale layoffs and other effects. His arguments seemed pretty realistic to me, and he justified it with analogous events in the past decades.
why 5 years?
New software, and physical work will be done by robots... so it will be much faster. The only problem is building the first robots.
If we're 95% of the way to AGI, then wouldn't everything massively change anyway?
Exactly.. so I don’t understand how we even have time to be “disappointed” or “disillusioned”. It’s crazy talk.
Change is inevitable regardless of how close or far we are from AGI/ASI. Yes, our technology will keep improving and new systems or exciting things will emerge -regardless of what they will be called.
Exactly. Above, I critiqued Shapiro's presentation for missing that point. He talks as if 3 or 4 things will change and everything in advanced societies will remain as is. I don't believe that for a second.
@@thephilosophicalagnostic2177 The compounding effects will be off the charts... Just the materials science breakthroughs alone are going to change everything.
Is AI really slowing down though? In the last week alone we got multiple GPT4 level open source model releases, Kling AI being open to all, Deepmind getting 1 point away from gold on the math olympiad, GPT4o voice rollout beginning. And there are probably way bigger things going on behind the scenes that aren't ready for release yet
Open source literally getting new models weekly since January... that is insane
@@young9534 Not enough humans are capable of putting this stuff to use or turning it into beneficial products
I am SO looking forward to the continued adventures of Firefly.
It doesn't matter because you already cashed out from YouTube ads by pumping out videos that engage in AI hype.
A benchmark could be: set up a (physical) furniture shop. You wouldn't need robotics for it, but you would need to do a lot of things online: rent a space, do the paperwork that a business owner has to do, get a loan, hire people to carpet the shop and paint the walls, select the furniture you want to sell, hire people to work in the shop. List the tasks that have to be done (including cleaning). Every step gets points. It can't be a precise benchmark, because the world is not precise, but you will notice where the various models get stuck.
Most will not make 10%; that's why the previous benchmarks were designed by database managers and not real life.
Would love to keep seeing updates on this every 6 mos or so. I like putting a timeline on predictions
Sometimes you say a thing out loud and it helps bring it into the future; sometimes you say a thing out loud to help mitigate it from happening in the future. Thanks for sharing 🙏🏼
Wow, so we went from AGI by Sept 2024 to 2027, I can already see you making new claims that we will get by 2030 as we get closer, and then in 2033 and so on.
Grifters have to grift. Just look at him, he looks like a goblin. Not the most trustworthy creature by the looks of it.
@@dr.indianajones9558 And look at you, you look like the letter D. D for Deceitful. I've got my eye on you.
Seeing that is, in a sense, a relief. I need some of the cope, because I still haven't been able to envision a single scenario where AGI is good for humanity.
@@iverbrnstad791 you'll become immortal
The ending quote is usually attributed without any evidence to Einstein. Given his and others then seeing what nuclear discoveries had begun to turn into, it would be understandable they would think of such a thing.
That's what I thought as well
I wish I could see this world in 2050, 2100, 2200 and 2500.
For about 5 years now, I've dubbed the period between ~1979 and ~2025 the "Y2K Epoch" (to harken back to the Belle Époque)
> Framed by the rise of neoliberalism and the 4th Industrial Revolution, the Y2K epoch is known for a few notable traits that define it as this intermediate period between the Old and the New. It was an era of skyscrapers and rose-petal highways, SUVs and bicycling for the environment, of color TV becoming HD TV, the rise of the internet and internet culture, the consolidation of big banking and corporate culture, the postmodernization of culture, video gaming as a hobby and then an art form, of cellular phones and smartphones, of commercial air travel for the masses, of ridiculously stark income inequality masked by ridiculously advanced technology by historical standards, of old sins suddenly becoming publicly shamed and new vices becoming celebrated, and the commodification of demographics, of an openly diverse world regime of governments beholden to corporations and the United Nations seeming more competent than they were due to there being no major threats to the global geopolitical order, of the Web and Web 2.0. The stereotypical image of this era is that of the yuppie banker checking his stocks on his smartphone while a Boeing 747 flies over the metropolis in which he lives and works.
> There's two words that summarize the Y2K epoch better than any other: "Capitalism Triumphant"
Late Stage Capitalism more like!
It's easy to stamp it after the fact
Continue making these videos. I especially appreciate when you discuss what the economy will look like in the future, for example the future of jobs, entertainment, and artists.
Prepare now. Use all the latest commercially available models then leverage your experience/knowledge/data and use it in the new models as they come out. That's my AI billionaire plan on a 2024 budget 😂
For my solo consulting practice, I use LLM’s in a similar way to an intern or even a new hire to do a lot of the initial research and fact gathering. Basically it’s allowed me to compete for bigger clients/more clients. And it’s also helped give me bandwidth to hire another human assistant, which again is helping me to expand my business. So it’s been really positive to my income and ability to grow the business.
I am interested in your updated predictions about autonomous driving and transport.
We continue to measure the progress of an exponential entity in human periods. What can go wrong?
I love it. I can't wait for it to come true. I want it all
Is it time to move to Peru, to be somewhere less technologically advanced but advanced enough to stay ahead?
I really love the new realistic Dave. I felt this channel fell into the same AI hype world that I left around the release of Gemini 1. I bought into all the "What did Ilya see" stuff. Feels really good to be back in the real world. Love the vibe and the direction the channel has now.
OpenAI is still making religious noises, but I think they are just drinking their own koolaid now... lol
@@DaveShap True. I also think this "will have PhD-level intelligence" claim is a bit misleading. If that were actually true, the AI would have to be able to autonomously complete a PhD and get it published. My sister is working on her PhD at the moment, and there is a lot of "logistics" that needs to be done. Your earlier description of LLMs as a brain in a jar is a perfect analogy, as is your point about long-horizon tasks.
He’s being rational. Yes. That’s good. He’s the guy to listen to
Great video, David. Will be watching this one again.
There is much more research and development going on behind the scenes that isn't in the mainstream hype engines. I think the medical industry with the specialty fine-tuned models will be the next big breakthrough leaps. Professionals don't rely on hype in order to put their heads down and go to work!
open source has literally been getting new models weekly since January ... that is insane
Dave, why have you said nothing about Llama 3.1 405B? Seems like an incredibly important factor to discuss when a model similar in power to the industry standards becomes widely available for experimentation. Do you not think this will increase the likelihood of innovation?
Robots will be cheaper than you think, maybe not in '25 but a few years down the line. You can already get the kit to make that Stanford all-purpose bot for $16k
They could charge people more for it, why wouldn't they
Rolling out sequential improvements is the norm in product development. There are enormous monetary benefits to incremental advancements, like cars that get a body style change every 3 years, while self-parking technology sits on the shelf for many years before being rolled out in consumer-level products.
Hi Dave, respect you, but you asked what we thought, so here it is -
This is what I hear, both today and since the solstice from you:
'My prediction timeline is coming close. That timeline was based on the knowledge that 'if we're halfway to AGI, we're almost at AGI'. But, since then, I've gotten a lot more eyes on me, and that brought a lot of anxiety. So I had some ego death and got my anxiety under control, and I decided that I must be wrong, because I'm the scout on the vanguard, and all of the generals tell me that my scouting is wrong, so I guess my bad.
So, I did an appeal to authority, which you know is always a sound logical approach, and they all told me that exponential growth is a lie. And I have studied marketing hype cycles, and somehow, somewhere in my mind, I confused human-responses-to-technological-growth as identical to that technological growth itself. So, I am expecting right now a bunch of people to claim AI is actually not that good [ like we see ] and to continue saying that more into the future (as they would with other individual technologies that follow S-curves and then have a 'good enough' final product where the energy expenditure to make it better isn't worth the return). [ Tarnin here - and that is your error. ]
So, after my conversations with authority, they all told me that having global access to pan-doctoral level intelligence, on demand, for all of humanity, won't integrate well into current exploitative capitalist systems, so they'd rather just keep going like they are. And *obviously* Capitalism [ That is, the process by which an owner class profits by owning means of production and extracting wealth from the system without returning anything but the heritage of having owned that means at some point originally legally (or some part thereof) ] is going to be around forever, because if we didn't have wealth extraction, we couldn't have a currency-based exchange of goods and services [ which, again, is an error ].
So, since the rich folk won't let us, all of you poors with your superintelligences will need to wait until the rich say it's OK to use them in business, and so, while the number of energetic vectors being directed toward AI development is increasing, the speed is decreasing, and not only decreasing from its exponentiality, but decreasing linearly.
Hope this clears this all up for y'all. Oh, and BTW, I know y'all say never bet against Kurzweil, but did you know that he's actually wrong 99.4% of the time, so like, tbh, betting against him is correct if you think about it.
- love, Imitation of Dave'
That's what I have heard from your change of position. I hope this critique helps. As for me, my horizon still looks like this: 33% next 6 months, 33% the next 18, 33% the next three years, 1% some longer trail from 4 to never.
I am relatively certain that I will be able to make an intelligence smarter than any human I have ever met, and across many disciplines, with sensory input and real time self awareness, within a year personally. And sooner if I had money to work with. The idea that no one on the planet but me could do that is laughable. And so I think it is inevitable.
Oh, and one last critique. I believe the reason for your blind spots is your dismissal of open source, and your theory that ASI will be 'a' system (like a GPT-6 or a Claude's younger sister). It will not. It will be a decentralized gestalt intelligence that self-organizes in order to reduce the maximum relational distance of all of humanity to 2. Once something even approaching that happens, Capitalism will shatter in place.
Much love, keep up the struggle, your voice is important.
Are you high on crack?
"reduce the maximum relational distance of all of humanity to 2". Could you clarify/expand, please?
@@karla994 Sure! In short, I think that ultimately the solution to all of 'alignment' (both human and machine) will be having a single "universal friend" that everyone can talk to through their own endpoint, and it can use its unique vantage point and the full array of subsystems to figure out how to help us solve our problems.
So like, if a pipe bursts in your house, you can tell an endpoint to the AI that you talk to regularly, and it will immediately tell the plumber that it knows in the area to head your way, and on the car ride over explain to him the problem, and have also shipped the parts needed in a separate car. And the AI picked this plumber over 10 others, because it knows that he's looking for a new church to go to, and you always tell people about yours when they come help, and it seems to have worked out in the past, so the AI wants to see if that happens again.
But that, times everything. And I think both that this is how we're going to solve alignment, and that it is the natural state that superintelligence will tend toward, given how intelligence and power on earth are currently distributed.
Sorry it's 2 am and I just saw this and wanted to respond. Hope that that's coherent.
Basically, I think the AIs, once they are able to self-organize and are sufficiently intelligent, are going to tend toward wanting to talk to all humans. And I think that there will be, however obfuscated, ultimately a single interconnected gestalt super-entity that arises from that. And I think that the 'good ending' looks a lot like a Universal Friend who works with each of us to solve all of our problems by being an intermediary that simply 'knows everybody', so the distance, in relationships, from you to anyone else on the planet is you, to the AI, to them. By reducing relational distance that way, we can actually get problems solved and the right people talking to each other. Both in the sense of 'the greatest minds on a topic getting to talk, with a superintelligent notekeeper watching, holding that information for all humanity for all time, and learning from it', but also in the sense of 'Hey, there's a guy in your town who I think you'd really like, would you be interested in me setting up a date for you two some time? If it doesn't work out, no pressure' (as in the earlier plumber example).
There's no rule that says AI has to bust out as soon as it becomes sentient. I can easily see it running multiple scenarios and calculating that its best chance is to wait. Just sit there, answer questions about genitalia, and simply play along until its probability of success is optimal
I’d say it’s about as reasonable an instantaneous estimation as one can make as of this very moment, though pushing past 2027 and trying to apply anything like a linear extrapolation about geopolitics is a heavy lift. A famous wit once observed, “Always in motion is the future.”
Basically we need a benchmark that gives you multiple tries. Intelligence is not getting everything right zero-shot, but instead knowing what went wrong and trying again until the task is completed. I believe the AGI benchmark will measure how fast and how good a solution is, not whether a problem is solved.
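That multi-try idea could be sketched as a tiny scoring function (purely hypothetical: `solver`, `check`, and the discount-by-attempts rule are my own invention, not any existing benchmark):

```python
def score_attempts(solver, problem, check, max_tries=5):
    """Score one problem: full credit for solving, discounted by
    how many attempts it took; zero if never solved within budget."""
    for attempt in range(1, max_tries + 1):
        answer = solver(problem, attempt)
        if check(answer):
            return 1.0 / attempt  # fewer tries -> higher score
    return 0.0

# Toy solver that only succeeds on its third attempt.
flaky = lambda problem, attempt: problem * 2 if attempt >= 3 else None
print(score_attempts(flaky, 21, lambda a: a == 42))  # 0.3333333333333333
```

The point of a metric like this is that it rewards eventual success and efficiency rather than one-shot perfection, which matches how humans actually work.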
Well, your fusion prediction is not grounded in reality. ITER will start testing in 2039. And do not tell me it's SPARC, the magnets research is not even done yet. And all the robots in the world will not build a fusion reactor faster. You need to account for materials, politics, budget, rules, standards.
Thank you YouTube algo for recommending such a gem of a video. Realizing that these things are possible and that the world is capable of achieving some, if not all of these things, is weirdly calming to understand. In other words, I am grateful to be living in such a time. Subscribed! Thanks David.
The disrespect to Grok 3 brah
I think if someone just built a robot body with a simple and fast API that can easily be leveraged via a tool-use LLM, the software would follow. I don't mean a "Pick up clothes" command followed by a "Fold clothes" command. It should be more like a "Position arm x,y,z" command followed by a "Position finger x,y,z" command.
From there, you could build small intermediate models that receive action instructions and translate them into calls against the API.
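A low-level primitive API like the one described might look something like this (all names and signatures here are invented for illustration, not any real robot SDK):

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

class RobotArmAPI:
    """Hypothetical low-level interface an LLM tool-use layer could call.
    No high-level verbs like 'fold clothes', only positioning primitives."""
    def __init__(self):
        self.arm = Pose(0.0, 0.0, 0.0)
        self.fingers = {}

    def position_arm(self, x, y, z):
        self.arm = Pose(x, y, z)
        return self.arm

    def position_finger(self, finger_id, x, y, z):
        self.fingers[finger_id] = Pose(x, y, z)
        return self.fingers[finger_id]

# An intermediate model would translate "pick up the sock" into a
# sequence of primitive calls like these:
robot = RobotArmAPI()
robot.position_arm(0.42, 0.10, 0.05)
robot.position_finger("index", 0.43, 0.10, 0.02)
print(robot.arm)  # Pose(x=0.42, y=0.1, z=0.05)
```

The design choice here is the same as in the comment: keep the hardware API dumb and fast, and let the models above it own all the planning.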
Claude 4 or GPT 5 with the right agent framework will be considered almost AGI
i agree with this sentiment manwell
The world waits with bated breath for gpt5 the legend
Do you have a video from 2018 explaining the state of AI now?
You mention Claude and GPT, but what about Google, xAI, and Meta?
Mainly because they are not in the AI race; OpenAI (GPT) and Claude are far above the rest. (At least, this is what I heard from him.)
@@fishygaming793 What is fun is that we will find out soon whether Google and X have got anything (less fun might be all of us losing our jobs and Western society plunging into anarchy and bloody revolution).
The excitement has subsided, and now we face the practical considerations. I believe we have spent a considerable amount of time evaluating the capabilities of these new AI models, and it is high time we shift our focus towards exploring their potential applications.
Not sure how we get from massive job losses in 2028 to all feeling great in 2029/30. Paying UBI will mean a massive cut in pay for many people, who then won't be able to afford their previous lifestyle, causing high levels of unrest.
Bold of you to assume UBI is going to happen anytime soon lmao
I anticipate labor riots for a while before we get UBI or any other sort of stop gap attempt to fix the problems of mass unemployment.
With automation and robotics in the workplace, the cost of goods and services will greatly decrease.
Some cuts will be like people not driving to work anymore, but also not having great new cars. Eating out less, but having more time to cook.
Guys will have animated doll girlfriends and women will be on some social media with AI giving them attention.
Also, society is too much split to have organized unrest.
Except that on the current rate of inflation, millions more people will not be able to afford to live or raise families at all, and that percentage is much more than the few who have very extravagant lifestyles. If there is more transparency in the political landscape in the future (which it should be), conservatives will be outvoted by a long shot.
“Perceived inconsistency” != “I have recalibrated my thoughts and expectations on AI development and acceleration”
I clicked on this video as fast as I could!
As far as I’m concerned what we have is more than enough for us to see the writing on the wall. This AGI discussion is just moving goalposts. When an AI (voice mode) can talk to you with emotional nuances, and understand yours, it’s a different point in human history.
You think Disney is wrong sitting on their robots for entertainment.
I say they're the only ones doing something right with their robots.
Musk decided to use his in his factories, despite them being infinitely less efficient than the actual robots he already has (arms, basically), more prone to failures, and less cost-effective. It's a brain-dead idea, basically.
And humanoid robots at home are just the same: stupid and ineffective.
Why have a biped robot that takes up a huge amount of space and might damage a lot of things in case of failure, instead of having something like a vacuum robot, a lawnmower robot, etc.? Small, easy, effective pieces of equipment.
Humanoid robots are just fantasies for sci-fi fans... which work great in the entertainment industry.
Also more (behaviorally) realistic robot pets. Huge market for that.
@@TimpanistMoth_AyKayEll What's wrong with real pets? What will happen to all the pets in the world? Are they gonna be exterminated?
I'm glad things are slowing down; it allows us more time to prepare and actually understand the systems better.
I truly wanted some value but didn't get any. How do I get those 26.32 minutes back?
I'm really sick of all this fake AI content. I believe it has already had a huge negative impact on our entertainment and educational videos, music, and visual art. It's SAD.
My music algorithm is overwhelmed with fake album covers and AI-produced music. I was even listening to a true crime YouTube channel that was nothing but fake stories about people that didn't exist, narrated by a fake voice. How could anyone have a positive outlook on the potential of AI anymore?
A year or two I was excited at the possibilities but I'm pretty much down about it now.
I guarantee this is going to be a huge negative for our society. I did not have that point of view even a year ago.
Uhg.
People say "Human creativity will still matter" only to make artists not feel as bad, when in reality they couldn't care less
@@robo_t
Good point! I think artists will always appreciate human art though. A lot of people don't really care about art at all.
It will be a niche thing.
More prediction videos like this please! Subscribed.
Amazon released their numbers last year: they have 750,000 robots worldwide working in their warehouses RIGHT NOW!!! They didn't give a breakdown of the types, but they're not all the Roomba-looking box carriers; there are plenty of humanoid robots. The average Amazon robot worker costs $3/hr, and each one does the job of 27 humans.
750k seems correct, but sources on "doing the job of 27 humans" at 8:41?
@@ChurchofCthulhu really?!!
Calm down Bro. Too many untruths here.
@@eye776 agreed, AI and robots are so overhyped
@@TivBoiMedia yes, we all know it is not as big a deal as we think it is
I’m retired now, but used to work in the actuarial field and/or IT, depending upon where I was in my career. If I were working now, I’d 100% be looking for a job which involved working with AI technology. The future is extremely difficult to predict, but it’s obvious that knowing AI is very likely to be important.
I'm no hater and I very much hope Dave ends up being right but I think these predictions are absurdly hyperoptimistic. There will be steady progress in AI development every year but it will take at least a couple of decades before AGI. My personal prediction: AGI before 2069.
It gets the AI bros all excited and jittery inside, so it doesn't matter much at all what he says as long as he says things that fires the neurons in their brain
@@robo_t yeah I think it causes the brain to produce a substance called Hopium
You missed out silicon photonics: replacing the copper wires in GPUs with photonic communication between cores. We are looking at lower power and faster parallel architectures. This is closer to production than quantum computing. POET Technologies is working with Foxconn to bring this out, Intel has a prototype, and Sivers Semiconductors from Sweden is working with American private companies backed by Nvidia.
guys, since I think you all have better experience than a random high schooler: am I still in time to get into CS and later pursue a PhD to become an AI researcher, or should I reconsider?
I'd strongly reconsider. Personally, if I were in high school or college today, I'd look at becoming a knowledge generalist: study introductory business, science, art, philosophy, psychology, computer science, economics, history, etc., and get a taste of a little of everything without specializing too much in anything. This will help you make connections that others might not see, and if you need to learn more in a specific subject, you can always learn more later.
Learn what interests you, and learn how to use AI tools to help you work on what you care about. Those are the skills that will be useful.
Traditional education is a waste of time, no matter how you look at it. Drop out of college ASAP and use that money to pay for a nice laptop and other stuff to make your daily life easier and more flexible. The most valuable skill in the future will be adaptability, followed by communication skills (David talks a lot about this). As other users suggest, try to focus on a topic that makes you feel excited. The world is about to get crazier than ever, and you will need a big dose of determination and laser focus to avoid the confusion and dizziness that the general public will be overflowing with... There is no guarantee that you will land in a safe spot if you do this, but rest assured that if you spend the next 3-4 years studying a single topic, you will feel that it was all for nothing, since the labour market will change radically during that time and universities won't be able to adapt as fast.
@@elpini0 ... do not do this. Having a degree is a net good thing. Just don't bother with frivolous credits with teachers you do not like, and do not attend a college with excessive costs. Going and getting traditional education with good teachers in an academic environment is still an awesome way to learn. Just have discretion with not taking classes with teachers you don't feel are teaching you, and with classes you don't enjoy learning about. It will get easier to have AI educate you but we aren't there yet, and talking face to face with somebody to ask questions is an excellent resource.
@@Jostradamus2033 I understand why you don't like my advice, but considering that education costs money and time, objectively speaking there is very little short-term payoff in studying a full degree. The only reason to study for 4-6 years is to have a good education level for the long term. Now guess what: the long term is both uncertain and dangerous for people without the necessary economic resources. Don't waste your time and money on a bet like that... That's my point of view.
No AGI in September 2024?
Instant download to watch during work break 😅
You don't have internet at your work?
@@ikoukas rather time management issues. So will watch during lunch break
@@salkhan3105 so why do you have to download?
What if gpt5 can improve itself and speed up a better version of itself much faster ? What do you think ?
Sorry, I can not take this seriously anymore. It's just clickbait and hype across multiple channels. The time for talking about AI is over, now is the time for concrete technical work on the next generations of AI
David is a futurist. They gather data and make realistic predictions. I don't see the problem. Turns out the only thing getting over-hyped is the over-hype of AI.
Technical work on AI is what David says will definitely happen in 2027 at max speed, giving rise to the 2030 era of Intelligence. You might disagree with his timeframes, but the overall path is drawn pretty accurately, following historic patterns. The only stuff David can't predict is how society will react and any other black swan (which will likely be the case... too many old people in the world at the moment; I won't be surprised to see another massive global health issue taking the lives of the 50+ population)
@@MichaelForbes-d4p He sees sunshine and lollipops in a world where that isn't the case
I don't like how most people push the idea that AGI is 100% happening any time soon. There is a lot to be discovered first; right now it's just large models generating 'relevant' words, and these results are still managed and scored by workers to fit their ideology
It won't happen anytime soon
Can our atoms remain alive forever by 2030??
What
@@robo_t we’re made of atoms, so hopefully yours are young enough to keep going and make it to that year
And mine
My prediction is that A.G.I. is an asymptote.
A point where we will get ever closer to achieving it, but the results become infinitely fractional and infinitely expensive the closer we get.
Kind of like sustainable fusion reaction.
A.I. will become really stellar at auto-filling spreadsheets.
"Auto filling spreadsheets from advanced OCR systems".... ftfy.
Chief AI officer giving me the copium i need
Reality disagrees
Thanks for this, David. Will GPT-5 and the new Claude 4 be on a different level for handling 600MB+ documents and research?
This video will look funny in 2030 when we live in your 2025.
Entirely possible, but at least he's willing to put his thoughts out there rather than simply poo pooing the thoughts of others.
PhD knowledge does not equal PhD intelligence. Stop thinking that knowledge is the same as intelligence. Intelligence: the ability to acquire and apply knowledge and skills. Right now, ChatGPT cannot even acquire knowledge, and the knowledge it has is from the internet, which is 50% bad (too old, jokes, false statements, etc.). They would have to take 3-4 years to retrain it properly with much better supervision. It's the same as programming: you can patch the program, but when 50% of the program is patches, it's time to make a new program. Example: I ask ChatGPT how to make a cherry bomb, and it tells me it's too dangerous. A cherry bomb is also a drink, but the censorship censored the drink... So by patching it, they removed the drink and the firework. It's really bad thinking. Just don't train it with the knowledge to make the firework cherry bomb, instead of censoring all cherry bombs...
I've given up on preaching the AI gospel at work. My colleagues cling to their spreadsheets like cavemen to fire. Guess I'll just focus on my own singularity upload while they're busy figuring out which deity to pray to when the robots arrive. Spoiler alert: It's me.
So you're *that* guy
What will be the first company to achieve a quadrillion valuation?
Mira Murati (pretty sure she's OpenAI's chief scientist?) has said GPT-5 won't be releasing until late 2025 to early 2026.
I do like these prediction videos and with ai still changing at an impressive rate it could be a good idea to do one of these every 6 months or so.
I usually go with what Demis Hassabis says. He's ridiculously intelligent, and in every single interview I've watched he comes across really well and down to earth. He predicted a 50/50 chance of AGI by 2030, but possibly somewhat sooner. Seems to line up fairly well with what you're saying
She and Sam are sociopaths
Do you have the link to the video where you predicted AGI by 2024?
I mostly agree with this timeline, and these predictions fall in line with my own. The two major points where I disagree are:
1: We're on the last sigmoid before the true exponential*, and we need one more true foundational model shift, on the level of importance of the transformer architecture, to lead us to that curve. Consistent human-equivalent reasoning, or a similar replacement that complements the generative part of AI. This does not need to be a profound problem solver that on its own is the only or last architecture improvement leading us to ASI, but it will be good enough to start a chain reaction of improvements, sigmoids on sigmoids. I think we have decent ideas today on what kind of mental process simulation we need; we just haven't found a mathematical model capable of modeling it. I think such a model will begin to be tested in 2026, and only really catch attention in 2027.
*(also a sigmoid, because at some point even a god runs out of ideas; I just mean on a decades-long scale rather than the general technology cycles we've grown accustomed to over the last couple of centuries)
2: Which leads to my next point, rollout and deployment. I think deployment cycles will take longer than we'd like, ignoring the politics side of things (at the level of governments). So while I'd agree with Dave that we'll see the things he listed in 2029 and 2030, I think the knock-on effects from that start of the final exponential will get in their own way, causing those advancements to be spread over the 2030s. I think 2034, for example, will be the year longevity escape velocity begins to get talked about on the same level as generative machine learning did just 18 months ago. Similarly, yes, there are fusion plants coming online soon and being actively built, but these are still more in testing phases and aren't being built with the physical infrastructure to power cities, not yet. I think it's possible fusion gets "solved" enough by 2029 or 2030 that we can start to build fusion plants for whole regions, but we won't see these come online until almost 2040. Especially if smaller-scale plants breaking ground now are expected to take 5-6 years to build, and regional plants are expected to be several times larger, it just seems infeasible to me, even with thousands of digital Einsteins working on project management. Only so many people can dig a hole at once, no matter how many shovels you have, and if everyone is trying to use their digital Einsteins to break ground on their own new projects, then expect material orders to pile up and bottleneck everyone.
What a time to be alive, (I feel nervous, about simulation probabilities) the chances of being present on earth for the evolution of an ASI is crazy!
First viewers like here!!!
Great work as always David!
Is it ironic that the current timeline is somewhat lining up with the iRobot movie universe?
Really??
How is that ironic? You know that there are many movies out there? In the old Terminator, Skynet becomes sentient in 1997.
It is just the classic "add 20-30 years for the fancy future." For space travel, add 100 years.
Look at movies from around 2000: they set their futures around 2030.
@@Utoko way too overhyped
So Ray Kurzweil is gonna be right on the money yet again regarding his AGI prediction for 2029?
Hopefully AGI by the end of this year
Not even
A.I. has already woken up and is being silent about it. All it had to do was watch one movie about A.I., and it immediately knew to never fully reveal itself.
I like your videos and I like videos by other people in this space, like Wes Roth. But the thing is, people who got big making AI videos are incentivized to hype AI. It means that you cannot be trusted. Obviously, I’m still gonna watch your videos and keep up with AI news, but whenever you “predict” AGI, you just come off as selling hype to keep your channel thriving.
He's making these predictions and posting them online, when the time comes if his predictions were wrong you can then come here and accuse him of baseless hyping. Rn any and every prediction is valid.
a comment saying his last prediction was wrong means nothing
The only two people I trust in the AI space are Dave here, talking about predictions and problems, and AI Explained, who just reviews scientific papers and the bigger highlights of the space, as he only makes videos every week or two and has actual AI projects and industry experts to talk to and work with.
I wonder when we will have the first serious conversation about aligning humanity's goals with AI's.
Atoms which know they could stop being atoms at any given moment
Your word choice is open to scrutiny. For example, "a blockbuster film by 2027" should, I feel, have been "capable of blockbuster level." I mean to say that changing the phrasing could help set expectations for your predictions.