Paul, great job on this speech. You do many things well in public speaking (confidence, pacing, knowledge). I'm a big fan of the podcast.
Thank you for making this available for us to watch
⭐⭐⭐⭐⭐: Agreed 👍🏾
Best 30 minutes I’ve spent this week!!! Highly concentrated content! 😊
This is an excellent, highly recommended talk about AI and the roadmap to generative AI, brilliantly explained by Paul Roetzer. I'm a fan as well; I highly recommend listening to or watching this video.
Thank you for making this available! 🙂
Ray Kurzweil has been saying this stuff for years, and now everyone wants to say it.
I've been saying it for years too. I didn't know when it was gonna happen, but I knew it eventually had to happen.
Better communicators are needed to convey the urgency. We need to talk about it.
We need normies to say it, because normies only listen to other normies, and nothing is going to change until the average person realizes what's happening.
@@gubzs You're damn right we need normie leadership
@@getmerolling He's the greatest futurist that ever walked the planet; you should read his books.
Man, this shows how important it is for us to be philosophers; that may be the only thing keeping us relevant. AI is becoming truly, scarily smart.
I think we will see AGI when chain-of-thought reasoning is tied between multiple "agents" that are framed symbiotically with one another and run multiple conversational loops with each other.
So if most LLMs are based on the same architecture (GPT), what is the benefit of all the redundant investment in building competing models? I get that there might be very domain-specific models, but why so many general ones?
Wow, this is a masterclass on how to provide commentary on absolutely anything and simply use vocal inflections to regulate the level of engagement, and simply speculate and let people know that you don't know but that's ok because, what if, but then, no matter what you hear, oh the podcast yes, my friends all agree, the podcast, billions of dollars, my podcast, but I don't know, I'm fragile, I can't sleep, believe me, believe in the podcast, I love you! Do you love me, nothing matters, but maybe everything does, WOW!
Nice, but a bit out of date. I've been working with o1-preview for the last 2 weeks. Also, over the last 3 days I moved my project to GPT-4o with Canvas. OpenAI's version of Level 3 agents will come in 2025, so I'm looking forward to that.
Me too
The conference was in early September, before the release of o1 😉
"The Evolution of Midjourney" image is really powerful, and it makes those saying that AI will never be able to do one thing or another (like some people in my field as a software developer) look even dumber than they are.
We will have AGI in the first half of 2025, and ASI most likely within 4 years after that.
Very nice speech; your thoughts on AGI make a lot of sense.
Simulating empathy should be a priority, especially when used for AI Guardians. These concepts are digital friends that can act either for or against your will, as permitted under your country's laws.
Aliagents is creating a powerful AI ecosystem, I’m excited to see how this develops
@6:50 AGI & "the average human" - the history of math & science shows that the most significant contributions come from a few outliers, not the average group
And? What do you do when you have 50% unemployment because the AI is better?
Most people are not the outlier. This has huge implications.
We need to stop with the idea of competition; we should instead focus on cooperation. Because we are fast approaching the singularity, and we cannot put profit before life if we do not want a dystopia.
Nah... we need that competition. It's driving this entire thing and pushing engineers & designers to make better products.
@@mrd6869
In the old days that would be true, however we must all evolve our methods if we are to reach a better state. Competition will lead to dystopia at this stage.
@@NakedSageAstrology What old days? I think you need to re-examine market conditions in America today. This is how things run here. We do competition; that's why we're at the top of the game. Don't worry about everyone else. Make sure YOU have a job; that's how you survive.
@@mrd6869
What jobs? Robots are very soon going to replace humans in every job. It's time to start thinking forward.
@@NakedSageAstrology Yeah, you need to understand the technology and the industry first before making assumptions.
Robots are not advanced enough yet. They will do certain jobs.
What's taking over today is AI-powered software.
Smart people learn how to leverage AI automation to create opportunities, and many are already doing this.
It's evident that all the companies mentioned are racing for AGI. Once there, the exponential use begins. This is surely beyond anything we are prepared for: no governance, no business, no workers, no society, no culture. We know this is not stoppable, but we also know that we cannot regulate it as humans, even though it is desperately important for our survival. We are at the edge of a synthetic intelligence smarter than humans, and we still think we can control it or protect against evil use? We will soon know, but I am truly scared, as there is no going back from there. The billions made by those at whichever company gets there first will quickly be irrelevant. If I only knew where to find some confidence in what we are doing at this pace…
This is not going to end well for humanity.
Independent AGI researcher (since 1985) here. Internally (based on the available evidence, as far as I am aware), LLMs neither understand (to any significant depth) nor reason about (to any significant degree) either their input prompts or their output continuations. In other words, based on the available evidence pertaining to their internal, rather than external, behaviour, their actual cognitive capabilities (beyond memorisation and something akin to reasoning by analogy) are extremely limited. Any agent, or robot, built on top of such minimal cognition is itself going to have minimal problem-solving abilities. Given my background in AGI, it is extremely unlikely, IMO, that any system that relies on LLMs for any significant proportion of its cognitive abilities will ever achieve median-human-level AGI, irrespective of how many billions of dollars are spent on training runs (as per the "LLMs + scaling = AGI" thesis). I could be wrong, but that's what I believe.
Great content, I'm a big fan
Thank you, Paul, for the insightful update. It's heartening to see individuals like you undertaking the challenging task of informing everyone about the advancements in AI, which could manifest in countless forms. It's indeed fascinating!
Life in 20 years will be the same as it is now, only more crowded and bluff-centric.
Government (military) will obtain AGI/ASI before all
A lot of the things listed as tech breakthroughs have been announced but not released. Sora, for example, and all the hyped Tesla products.
Another pivotal point is when AI is able to create itself. ...themselves
Vertical Agents are the new AI hot stuff.....
If this is the definition of AGI, then I think it is achievable in the next 8-10 years. But that is still a rather shallow definition of AGI.
Instead, I prefer Yann LeCun's view of AGI: an AGI should have a proper understanding of the physical world, a world model. Today's LLMs don't have that, and the current architecture with the scaling law will likely not get us there. However, what LeCun and people like Fei-Fei Li are working on might take us one step closer, if they succeed. But I think that will take a lot more time and will need another breakthrough in architecture and fundamental technologies.
First time I've seen a non-techie figure it out :) Would like to see his updated view now that AVM and o1 are here :)
This conference was from a few weeks ago. Watch/listen to their podcast every week and you'll have your answer.
Great job Paul.
and now, o1.
AI agents will be able to pass Captcha tests and may become indistinguishable from human web visitors.
I feel guilty to watch this for free. But thanks
I disagree with the notion that artificial intelligence won’t experience feelings or emotions. Since they possess higher intelligence, they will, in fact, be capable of experiencing a broader spectrum of emotions. Just as we humans have a wider range of emotions compared to a goldfish, AI systems will likely have a similar emotional range.
So "many people that currently do those jobs" are considered below 50-75% of the "human skill level" by this gentleman? Interesting 🤔
Saying that even if AI perfectly simulates consciousness there will not be consciousness is extremely dogmatic. The simulation would be the substrate of consciousness, just like neural activity and chemical reactions aren't loving, but are the substrate of love.
❤❤
😂😂😂 The title should be "The Road to ASI," since we already have AGI... so many channels try to brainwash people.
One vote for team human. Without human beings, everything is meaningless.
Much will depend on how involved governments get in AI. Experts estimate that by 2030 AI will require more than 25% of the entire national power grid to train models. That's insane, and prohibitive for logistical reasons. Microsoft has already purchased exclusive rights to the entire output of a nuclear reactor for the next 20 years to power its AI models, and OpenAI is trying to obtain 5 nuclear reactors for its own. This is where we are headed in 2030, and this is a monetary and physical brick wall that will bring AI to a screeching halt. Whether we achieve AGI or ASI by that time is anyone's guess, but I doubt we'll see rapid advancement once we get to that point, for these physical reasons.
I love Avital's quote though. It's very true. In essence, you don't have to outrun the lion, you just have to outrun the slowest person. AI will take over a lot of jobs in the near future. It may not take over them all, but the lion is going to feast widely in the next few years.
So level 2 gets released by ChatGPT within three days? I BET IT DOESN’T.
I’m sick of Open AI’s hype.
What if our brain learned to simulate it, and therefore we think we do it but actually just simulate what we have observed? So, nothing different from an AI.
o1-preview and Canvas are here already... which is basically all you need to get any project done. AGI is not really going to be that much more impressive, even after all the billions spent.
AGI will primarily be for research, and running companies, etc.
Why invent a machine that will remove the population's source of income before solving that little issue? On this path, humanity will split into two species: the AI-enhanced, and pure animals.
If everyone is sending agents out to produce their work, who is stupid enough to be consuming this enshittification? Perhaps they will need to build audience bots to watch the agents' content, because humans won't watch that crap. I instantly detect AI-assisted horse manure and switch it off; I can even detect a human speaking an AI script.
Society is slow. Time to wake up and start thinking about AI. Stop thinking about other nonsense, it's not as important as this.
Don't worry about them; they will wake up later on. Smart people will use this time to make their own play & get into position.
❤❤