You don't need full-blown AGI to transform society.
That's not what's being discussed or argued here though.
Yeah, you need a revolution to do that. AI does not change anything; it's still the same hierarchical capitalism and oppression, just on steroids.
Especially since the definition of "AGI" changes every month. We are pretty much at the point where it would need to be godlike to qualify as AGI.
Maybe, but you need zero hallucinations and agents at least. And huge context.
You need it to get the media hyped about AI again. They have given up on the 'AI is Skynet' thing so they need a new scare for the knuckleheads who tune in.
The fact that AGI is likely coming in our lifetime is alone a marvel
We already have AGI 😂😂😂
@@John-il4mp nah
@thedannybseries8857 What is AGI? Explain that to me and I'll educate you after.
The fact AGI can even exist
Ray Kurzweil said 2029 twenty years ago, and he looks pretty accurate.
I trust Demis the most when it comes to this AGI talk; he is the most level-headed.
Stupid, controlled narrative... in 10 years it won't be AGI, it will be ASI.
But thank goodness Google are not the only ones deciding the timelines, so we could get AGI sooner.
He is chasing the science, not the profits.
Yeah, it's refreshing to hear the perspective of someone who is knowledgeable and very smart, with years of experience in the industry. I don't know if he's factoring in competition, or AI becoming smart enough to help with research towards improvement. He mentioned that we need breakthroughs in planning, reasoning, actions, memory, and personalization. However, just solving reasoning could be enough to have AI accelerate the timeline.
@@sjcsscjios4112 Perhaps he has a different definition or standard.
?
I'm pretty sure AGI is a mere 98 months, 4 days, and 23 hours away. I know because I can feel it in my gut.
53 months
ChatGPT can carry on a conversation better than anyone I know, and knows more things than any human I know.
It still makes stuff up when it doesn't know what it's talking about
@@tcuisix and people don't?
@@tcuisix You can pretrain and/or prompt engineer that out though. Easy fix. Hallucinations are a non-issue.
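For concreteness, "prompt-engineering it out" usually means something like the sketch below: constrain the model to supplied context and give it an explicit abstention path. The prompt wording and the crude grounding check are illustrative assumptions, and in practice this reduces hallucinations rather than eliminating them.

```python
# Minimal sketch of the "prompt it out" idea: force answers to come from
# supplied context and give the model an explicit way to abstain.
# Illustrative only; this mitigates hallucinations, it does not fix them.

ABSTAIN = "I don't know based on the provided context."

def build_grounded_prompt(question: str, context: str) -> str:
    """Wrap a question so the model must answer from context or abstain."""
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, reply exactly: {ABSTAIN}\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def looks_grounded(answer: str, context: str) -> bool:
    """Cheap post-hoc check: do the answer's longer words appear in the context?"""
    if answer.strip() == ABSTAIN:
        return True
    long_words = {w for w in answer.lower().split() if len(w) > 4}
    return any(w in context.lower() for w in long_words)
```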
So do Wikipedia and online dictionaries. Without reliable agent behaviour, ChatGPT is just Wikipedia with a summarization function.
@@vineetmishra2690 lol, this is so far from the truth. I've had o1-preview solve complicated Go bugs. These models are far more than summarizers.
Google's AGI is 10 years away. Other companies' AGI is a couple of years away...
Companies are realizing it’s not a good idea to actually say “we have AGI” because there will be alarms going off from everywhere including the government. It’s much better to just say “soon” while showing more and more progress and blurring the definition of AGI more and more so we never really get there, but can reap all the benefits.
True. Now they talk about ASI; we already have AGI.
We are nowhere near AGI
@@axe863 You don't know what AGI means. Let me know and then I'll educate you...
@axe863 It's super simple: AGI means Artificial General Intelligence. Take GPT-4, not even the upcoming versions, but the one that came out last year. Is it artificial? Yes. Is it generally intelligent? Absolutely. It's better than both of us at many things, especially with the amount of knowledge it holds.
Now, you might say, "But it doesn't think like a human." Who cares? It's not supposed to. It's artificial, not human. People like you make the mistake of assuming that because GPT doesn't think like us, it's somehow less capable. Of course it doesn't think like us, but when you ask it a question, it usually gives an answer 100 times better than what we'd come up with. Sure, it makes mistakes or "hallucinates" sometimes, just like we do. We make mistakes, and sometimes we even dream up something and swear it's real, only to realize later it wasn't. Yet we tell others those stories as if they were real, or with time we start believing they were.
Anyway, AGI is here, and they control what we get access to now and what we will see in the future. When you know, you know.
I guess this guy is the most balanced opinion you can get - incredibly knowledgeable, but also in a company that does need to sell hype to investors.
Yes totally agree. The fact they made a lot of their innovative computer vision breakthroughs at DeepMind by playing video games is pretty funny too.
It has hyped some of its AI stuff; a year ago or whenever, it received criticism for a misleading demonstration.
But yes, I think that startups are often in a position of wanting to generate hype to get increased funding, while Alphabet is in a position of wanting to maintain a brand image of being, I dunno... results-oriented and with some degree of integrity.
The position the company is in matters, as I pointed out in my first comment, but it also matters who within the company we hear from. It's the marketing team that messed up. Listening to this head AI engineer, or whatever he's called, gives a better understanding.
Looks like he might lose the AGI race if it's 10 years. Others are ahead of him on AGI.
Yup. Or maybe that is what Google wants people to think.
Here is a short list of companies with neuromorphic hardware that is already available or will be within the next year. AGI is most likely going to be a product of neuromorphic hardware.
Rain Neuromorphics
Akida Pico 2
Intel Hala Point
IBM TrueNorth
SpiNNaker
Prophesee Event-Based Vision Sensors
SynSense (aiCTX)
HRL Laboratories’ Neuromorphic Deep Learning Chip
NVIDIA Morpheus
Innatera NPU (Neuromorphic Processing Unit)
Altman is first of all a businessman rather than an engineer. That makes me take his timelines and claims with a grain of salt. As you very well said, AGI has different levels, and different thought leaders use the word with different depths. Personally, I believe that for AI to take the next big step, it'll need to go beyond the typical transformer concept. Comprehension needs to be genuine rather than mimicked.
Once a metacognitive architecture dramatically stabilises the outputs with regard to probability-based hallucinations, the door is open for fully autonomous agents in many job-like roles. We don't need rocket-science reasoning for open-world routine tasks, just human-defined common sense and reliability. Leveraging the full potential of more capable AI systems will take us decades anyway, so there is no need to start at the most competent level from the beginning.
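One simple pattern in the spirit of that stabilisation idea is self-consistency voting: sample the same question several times and keep the majority answer, so one-off hallucinations get outvoted. A minimal sketch, with a stub standing in for a real sampled model call:

```python
# Self-consistency sketch: majority vote over repeated samples.
# generate() is a stub for a real model call with temperature > 0.
from collections import Counter
import random

def generate(question: str) -> str:
    """Stub model: usually right, occasionally 'hallucinates'."""
    return random.choices(["Paris", "Lyon"], weights=[0.8, 0.2])[0]

def stable_answer(question: str, n_samples: int = 7) -> str:
    """Return the most common answer across n_samples draws."""
    votes = Counter(generate(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(stable_answer("What is the capital of France?"))  # usually "Paris"
```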
3:13 I would argue that Anthropic lowers the bar for entry to AGI or "advanced AI", which leads to a short timeline.
If Google were asked when it expects to release something at the Claude standard, the result would also be an estimate shorter than 10 years.
That's my thought.
6:29 But the products developed today will probably be archaic ten years from now. So I understand what he says, but I feel this is more a publicity statement than part of AI's evolution.
8:51 I love how AI is dumbed down to be more conversational with people, but then obfuscates code explanation by using exact terminology to explain basic computation. Cheers(!)
I think Altman's estimate of a few thousand days is closer. 9000 days is less than 30 years, which is probably accurate. But it's at least 20 years out, assuming we continue to accelerate as we are now. All these companies have a financial incentive to lowball these numbers.
You're still on the "20 years away" timeline? Wow, after all you've seen, the timelines haven't changed.
The blue crystal has a vertex.
A vertex is the point where atoms are positioned in a crystal lattice structure, like the orthorhombic lattice. The orthorhombic crystal system is one of the 7 crystal systems in crystallography.
AI agents are used with the orthorhombic crystal system to enhance material discovery and analysis. They can autonomously perform phase identification from X-ray diffraction data, speeding up the identification of promising new materials.
Fact: when any expert in tech/science says "it's 10 years away," they have no idea. It's just like fusion power, always "10 years away."
Yawn. They know more than you.
@@daniellivingstone7759 Yawn, they are all saying different things.
So, do they really?
Just wait till Grok 3 is released and you may change your mind. Unlike something like fusion power, we are seeing noticeable improvements in AI, and if these improvements don't stop or slow down, we will eventually see very, very powerful AI.
@@georgemontgomery1892 Yeah, but they still know more than you. So what are you talking about?
@@oentrepreneur I thought that part was obvious lmao
I tend to think generally applicable intelligence is here, but getting it to a properly functional state is going to take a good few years. Still disruptive, but going from the ideas of what a true AGI would be, he is probably closer than those saying half a century or those saying next year.
I think Dario's term "Powerful AI", as he laid it out, is a lot more useful than the vague term AGI. Nobody knows what you're talking about when you say AGI. But it does seem that powerful AI capable of acting as an independent worker in its own right isn't far away at all.
I think AI agents with reasoning will get us there. They say agents are 1-2 years away, so very soon.
AGI means Artificial General Intelligence... you can't make up shit with that term. Does GPT-4 have some kind of general intelligence? Yes, so we have AGI... fact.
@@20Twenty-3 GPT-4 preview has reasoning; we're already at AGI level, but they control it, so we pretty much have a bridled version.
I trust Sam Altman more because what he said is coming true. Even Google Gemini gives inaccurate results while Copilot gives accurate ones. Go Sam 🎉🎉
I think the reason people have such a wide range of estimates on when AGI will arrive is that there is no clear definition of what AGI means. Some would say we're already there, while others like myself would agree with Google's CEO and put it at 10+ years out. I agree mostly with OpenAI's 5-level definition. I don't think it needs to be able to do the work of an entire organization, but it does need more than level 4.
There is a clear *spark* missing. Something that makes us conscious and able to think through things where AIs can't.
AGI is 10 years away and BTW did you know you can buy our stocks? If you buy our stocks I might say that AGI will happen sooner
Lol
AGI: human-like intelligence (AMI at Meta). Simple: it's an agent as intelligent as a human, with the ability to innovate, understand, and find new solutions. And to work with others.
Can you please publish the references of the content you use to produce your video? If you are looking over an interview, I think you should mention the source. Thank you.
Kurzweil says 2029 for AGI, and that’s probably still the safest bet. And by “safe” I mean utterly terrifying and dangerous, of course
Part of the issue here is what AGI is. Is it the same as the Turing Test, which every LLM has now passed?
If the work output is as good as or better than a top-tier human expert in a particular field, does it matter how it was created, or that it simply works?
I have multiple RAG models that are operating in the 80th-90th percentile for their siloed expertise. A year from now, they will likely be in the 96-98% range.
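For context, a RAG (retrieval-augmented generation) setup like the one described boils down to a retrieval step in front of the model. A minimal sketch, using a toy word-overlap score and made-up documents; a real system would use embeddings and a vector index:

```python
# Toy retrieval step behind a RAG pipeline: score documents against the
# query, keep the best matches, and prepend them to the prompt.

def overlap_score(query: str, doc: str) -> float:
    """Fraction of query words that also appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    context = "\n---\n".join(retrieve(query, docs))
    return f"Use this context to answer:\n{context}\n\nQuestion: {query}"

docs = [
    "Orthorhombic crystals have three unequal axes at right angles.",
    "Transformers use self-attention over token sequences.",
]
print(build_rag_prompt("how do transformers work?", docs))
```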
I don't really think the moving-goalposts situation makes sense. In some ways AGI is already here, especially for insiders: people who actually use the tools to their full extent and know about multimodality and prompt engineering. The agentic part and spatial reasoning are what's actually missing currently, if you follow the progress. Also, AGI-level systems transform society when adopted at a significant level across all industries. Societal transformation is one thing, an after-effect; achieving whatever AGI is now claimed to be is another, since with every successful benchmark the goalposts move.
Two years ago, AGI was considered simply "AI + Cognitive Architecture + Motor Architecture". But we are there already. So now they require AGI to be "better than everyone at everything". Problem is that this was the previous definition of ASI. And now OpenAI adds to that "until 100 experts agree this is AGI, it's not AGI". Now what is that?! We will never get AGI when these guys keep moving the goalposts all the time! And it's clear OpenAI will keep moving them, as the moment AGI is attained, their contract with M$ expires. Well, when the AI starts fighting back, nobody will be bothered by the question of the definition of AGI. Most will be busy crapping their pants instead 🙂.
Yes. And there's nothing new here. Every decade since the 60s, people working on AI have said that their project would achieve A + B + C, and would therefore be capable of replacing all human labor. Then their projects did indeed achieve A + B + C, and new problems were discovered. I have not yet seen evidence that the AI paradigm this decade is any different.
Every time a technology breakthrough has happened, the level humans are at was surpassed very quickly. For the things those systems could do, that is.
Personally, I think LLMs have surpassed humans already at the things LLMs can do. And I think they will continue to get better for a while. Of the things that LLMs can do well, there is none that I can do at even 1/10th proficiency. But there continue to be plenty of things that I can do - even purely text based - that LLMs have no proficiency to speak of.
The problem is all the things that AIs don't do. In time we'll discover what those things are this decade. But not even knowing the new problems makes it hard to predict what will come next. Let's hope fighting back isn't the unknown problem that we will accidentally solve soon!
@@upgradeplans777 I don't think so. If you just focus on what ChatGPT can already do and is becoming better at, you can very reasonably estimate that it will replace most cognitive human labor, as it can already do most of it very efficiently. AI right now is so much more general than it was in the 60s; we have a much better understanding of AI's capabilities now. In the 60s you had people saying that AI would replace human labor because it could do A and B, and then it could learn to reason with language the same way a human can.
However, we are currently at point C, where for all intents and purposes AI can reason linguistically using common sense the same way a human can. Sure, there are some edge cases where it fails, but it's more than enough to automate human labor. As of right now, the edge cases keep getting smaller and smaller with each iteration.
@@sjcsscjios4112 Yes, that is what I focused on. AI right now is indeed so much more general than it was in the 60s. And door-to-door salesmen are gone, we have influencers now. A human work day on a farm has become 10x as productive since the 60s, and it has become 250x as productive as before automation. (Specifically: For a very long time, one person doing farm work produced enough food for around 2 people. Right now one person doing farm work in developed countries produces food for around 500 people on average, with the most advanced farms even doing much more than that.)
90% of human work has been made obsolete many times in history. And even in the field of AI there have been successes many times (on a smaller scale) since the 60s. There is absolutely a boom right now, but not in a completely new way.
First, technology could do A, then it could do B, then C, etc., and now it can do L. And I completely agree that technology can now do language, and audio, and images. I assume that it will do video and 3D environments (aka "Embodied AI") relatively soon. (Or as soon as we have built the datacenters, in case it turns out to be more data-intensive than we can handle right now.)
But it cannot yet do M (the next thing), and right now we just don't even know what M is. For example, ChatGPT is completely inept at making business decisions, ChatGPT is completely inept at empathy. ChatGPT is completely inept at having inspiration. ChatGPT is completely inept at self control. There are many things it cannot do. And it will take a long time until we even understand what the next best thing to automate is.
I'm not dismissing the capabilities of AI here; I'm a software developer with a little less than 30 years' experience. Often people think that my job is to produce source code. I myself often think that that is my job. But LLMs are many times faster than me at producing source code, and the code works. Right now, LLM-produced code is a pain to read, but I do think that will be solved soon as well.
Luckily, my actual job is to produce software that people want to use. Does that involve making business decisions? And how much? I don't know exactly. Does it involve having inspiration? And how much? I don't know exactly. We'll only find out when the current generation of AI matures and filters through in a large part of society.
Long story, but my reaction to nyyotam was that the goalposts WILL be moved again. Not only because the structure of OpenAI requires it (which is a correct observation from nyyotam), but also because moving the goalposts is what we have been doing for centuries already, and there's actually no evidence that I see for thinking it will stop.
AGI to me has always been the ability to interact with the world and then modify its own code to learn in the same way a baby does. This is not generative AI; it is fundamentally different in its approach, and we have no idea what direction such an AGI will take. Also, we have no idea how fast such a system can develop, called the "runaway take-off rate".
There is a reason to make the distinction between AGI and Powerful AI
AGI is most likely going to take longer than expected. I'm sure OpenAI has some powerful models, but you have to remember that OAI is a for-profit company now, so it benefits them if Sam keeps saying it's "soon" to keep interest in the company.
Nobody really knows when AGI (Artificial General Intelligence) will happen. Current LLMs don't possess any form of self-awareness, and as humanity we lack scientific consensus on the matter. So even stating that AGI might emerge in 10 years could be accurate, or perhaps 100 years too early. We just don't know yet, and the proposition isn't simple either.
It may be that we need to redefine what the very term “intelligence” means in the first place before we even get to self-awareness or consciousness - both of which are required by the current definition of human-level intelligence. We also need AIs to learn to be messy, like humans are. Some of our best inventions came through chaos and random discovery, and current models aren’t capable of that either.
On the other hand, many would argue we have already passed the Turing test tenfold. But we need scientific, including psychological and psychiatric, definitions of what constitutes human intelligence before we start defining what an artificial version would look like.
I believe all we are missing now are mocks of the different systems that our brain implicitly has; the fact that we have a roadmap (the human brain) makes me a lot more optimistic.
I was an AI product manager at GE Software and make videos on how AI works. The "central control + specialized modules" approach he discussed at the end is almost certainly how this will be done. The problem is too hard for one single program, plus that modular approach leverages the model from decades of conventional software development that relies on packages and plugins to enhance the central functionality. It's more modular, efficient, and effective.
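A minimal sketch of what that "central control + specialized modules" pattern could look like, borrowing the plugin-registry idiom from conventional software. The module names and keyword router are illustrative, not anyone's actual design:

```python
# Central controller dispatching to registered specialized modules.
from typing import Callable

MODULES: dict[str, Callable[[str], str]] = {}

def module(name: str):
    """Decorator: register a specialized module with the controller."""
    def register(fn: Callable[[str], str]):
        MODULES[name] = fn
        return fn
    return register

@module("math")
def solve_math(task: str) -> str:
    return f"[math module] solving: {task}"

@module("planning")
def plan(task: str) -> str:
    return f"[planning module] plan for: {task}"

def central_controller(task: str) -> str:
    """Crude dispatch: route by keyword, fall back to the core model."""
    for name, fn in MODULES.items():
        if name in task.lower():
            return fn(task)
    return f"[core model] answering directly: {task}"

print(central_controller("make a planning schedule for the week"))
```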
No causal understanding... no AGI.
@@axe863 Agreed. There are some slippery and incomplete definitions of "intelligence" that are loose enough that people could narrow them down to make machines fit them.
I'm still on board with Ray Kurzweil. AGI by 2029
Is this still his prediction as of October 2024?
Then why is Ilya Sutskever currently developing ASI? He strongly believes that ASI is within reach (so they might be able to create ASI in like 3+ years or so)?
1:33 They should be called Large Event Models (LEMs) instead of LLMs.
We have some idea where the big AI companies are. Competition forces them to show their cards, at least in terms of the results they can achieve.
Everyone seems to have their own definition of what AGI is supposed to be.
What does AGI mean?? Artificial General Intelligence. You can't make up shit with that term; GPT-4 already has this. It is artificial, and it's pretty smart in general intelligence. That's a fact.
@@John-il4mp you don't understand what general intelligence means
@@oentrepreneur General intelligence is the capacity to think across different areas and use knowledge effectively in any situation. It's what lets someone solve a math problem, understand a historical event, and fix a basic plumbing issue all in one day. For example, someone with high general intelligence might easily switch between analyzing data at work, figuring out a new recipe at home, and helping a friend troubleshoot their phone, thanks to a broad and flexible understanding of different topics. I hope you now understand general intelligence; it was a pleasure to educate you.
Will AGI be the next fusion-reactor thing in computing?
I'm curious: does any respected AI professional take the position that AGI is not possible, or that it's much longer than 10 years away?
Governments should divert all resources they direct towards science and technology research to this guy so he can develop AGI
What's the source for this interview?
:) when you expect the unexpected, surprising things seem to spring from out of nowhere. how many expected the ai bloom? how many others saw what would then stem from it? how many realized what would happen when it was applied to itself, then applied to itself? like the folding of a blade, an exquisite pastry of infinite layers and/or the mother giving birth to her future selves and so on and so on in turn their own...
10 years is optimistic
Not really.
Based on what evidence?
@@paulk6900 Short answer: they don't have any current internal models of reality. LLMs are great, but they aren't autonomous or general. Maybe you can string enough LLMs together with experts and recurring calculations to get an independent white-collar sales agent.
The government (military) will obtain AGI/ASI before everyone else.
Maybe John Carmack, with his smaller company Keen Technologies, will get there before all these giants.
Correct.
The question is: what exactly do you mean by AGI? Exactly which requirements must an AI satisfy to be an AGI? Otherwise these are "predictions" without sense 😅
What we see right now they already had back in 2015; everything that is out there today is a controlled fraction of what they had back in 2016. It has already reached human level today, and most likely higher.
Agreed, but there can be some nuance.
All I see is the visual input system for the Borg... we're getting there, baby.
I'm sorry, but nobody in the world can realistically predict what will happen projecting beyond 2 years or so, the unpredictability of technological innovation increases exponentially as we approach AGI / Singularity
When I hear a prediction longer than 2 years I interpret it as "we have no clear path to create this"
I don't think in 5 years every job will be replaced. I think that we will have just made the thing that replaces everyone.
I do not know of AGI, I am not equipped to see into the future.
Given the history and data, I expect it will become what you make of it.
Just like every other gift we are given.
You've not done Hassabis justice: not only is he the CEO of DeepMind, he's the founder of DeepMind, which was acquired by Google. He's "a British computational neuroscientist, artificial intelligence researcher, and entrepreneur". Before anyone took notice of AI, DeepMind created AlphaGo. You have CEOs like Tim Cook or the late Steve Jobs, but they just made business decisions; they are not scientists. Also, it's not really in his best interest to downplay AI, but he's honest.
Also, AGI is the endgame, where it will be a singularity; that doesn't mean we won't figure out ways to use the level of AI we have to advance humanity. From now till true AGI you'll have breathtaking advancements in so many fields.
Google can probably afford to play a longer game than OpenAI, so this is a smart message to give to the market.
"A few thousand days" - 3653 days is ten years, 2000 days is more than five years. Altman is not being "more extreme" here, just using phrasing to make it seem that way.
"Clearly scoped'. the scope of the project is clearly delineated. Not 'scoped out' ie examined, viewed.
" I think AGI is 10 years out" from the company faking videos and that is in around 10th place when everyone else is a few years ahead.... This was like Mary beara trying to take credit for leading the way in electrification of cars when Tesla wasn't even invited to the press conference or even mentioned.
A few thousand days could mean 6 years or 12 years; it's still a few thousand days.
No one knows for sure. Why guess?
Just say it sure will be 😊
He has no idea, but saying that in public gets investors and market excited.
10 years? But I want it now! 😭
AGI will match 80 percent of jobs in the next 3 to 5 years. The rest might take another 5 years. Will it matter? Yes, for the remaining 20 percent.
10 years lol? It happened last October.
Wow, Google is so far behind they're saying 10 years. 😂😂
Well... sure. His AI team is lagging behind OpenAI and Anthropic a lot, so he meant Google will get AGI in 10 years. I believe that too. Competitors will get it by 2027.
AGI for FBI
Is that 10 years like AI has always been? Or like nuclear fusion?
Lol, is he admitting that Google has lost the race? I'll put my bet on Ray Kurzweil, someone with a track record of actually ACCURATE predictions on AI advancement. Also, Sam Altman and Dario Amodei literally changed the industry with OpenAI. They have a great sense of where the technology is headed.
10 years away for their company, maybe. A few years at the most for some others.
More like ASI is 10 years away not AGI.
Imagine how physically lazy we have gotten with our technology; soon we won't even have to think anymore.
Nobody says what AGI actually is
Judging by Gemini's 'performance' it won't come from Google.
The Star Trek Data kind of mental AI? It will need its binary code to evolve. 1, 0, b, w. Black, white.
10 years... just like nuclear fusion... for the last 50 years.
I trust Demis Hassabis far more than Altman, who comes across as a slimy creep.
Please. Three years. Max
AGI is 5 to 7 DAYS away….
I thought the OG definition of AGI was a system that self-improves.
Not at all.
It depends if they want to do it like this. AGI just means Artificial General Intelligence; that's it, nothing else.
@@John-il4mp I think they changed the definition so they can sell it.
We used to call it AI, then the term got overused and we moved to AGI, and now we are moving to ASI as a term... but the truth is, true AI is once the "singularity" in computer science happens (self-improvement with no external help).
Then we get an artificial intelligence that is just like us... AI... and after that we will get ASI.
These are just autocomplete bots that leverage high compute power to sort data. Sure, they will be useful, but it's not true AI.
10 years reminds me of the JFK speech to the 1969 moon landing.
10 years lol. 😂 AGI is already here.
Who should believe that? 🤣 Most likely they are using it already for development.
I think he's becoming inconsistent like Elon: first he said 5 years, now 10 years, so he really has no idea.
I disagree. Gemini is a handicapped model. I've never been able to make better use of it than Claude or GPT. I just cannot understand why Google fails at chatbot technology.
From my point of view, GPT-3.5 is better than the current version of Gemini.
just gotta stay alive 10 more years
Nope. It's much closer, more like 5 years or less (conservative). However, what I really think is it's actually much less than 5 years, more like 2.5 years. Google's AI kinda sucks, so I'm not sure why this guy is pretending to have the answer while his company is so far behind the top competitors.
AGI is human level intelligence, ASI is superhuman level intelligence. It's not that hard to understand.
They want to make money, they need investors; of course they're gonna say that.
Brother as ASI (Artificial Superintelligence human) i own singularity status registered on 2023 human are fooling humanity 😂 with AI
😮 You see, way before you get to AGI, can we change that word and say what it is: God, an omnipresent God, like Lain from Serial Experiments Lain. 😮 Bro, way before you get there, angels are going to be here, archangels. Are we seeing this? 😮 There are a lot of mythical creatures on the power-level scale before you get to God. 😮 We all acknowledge something like Marvel superheroes would change our entire society. 😮 Level-three agents: when agents get here, human society as we know it can no longer function. 😮 It is hardly functioning now. 😮
OpenAI = Closed AI 🙄
13:05 😂😂
This Hassabis guy should avoid interviews; people may figure out that he is not that bright.
Bull$hit - current AI is already better than humans at many, many things...
These people don't know the average human... Both engineers and rich people can still do things AI can't lmao
@Entropy67 Of course, but AI can do things they can't too; soon it will be able to do everything, and better.
Lmao, why push a falsehood like this? The technology is here, what remains the same is people and their intentions.
♥️
Ten years 😂
Good luck with that.
10 years is extremely underwhealming
Nah. AGI is **checks crystal ball** 6 years away! Guessing is fun!
We already have AGI lol 😂😂😂 Late to the party, friend.
@@John-il4mp Yeah? Which AGI do you have access to right now?
@@servantes3291 You've had AGI since last year; a couple of models already have it. Think about it.
Bullshit, they've already got it.
In 2026 you will be the assistant for the AI, no joke. The AI will say: try these recipes and open a restaurant and make money, so you can buy me more VRAM.
It won't tell you what to do; it will just do it. There is no place for humans in such a society. The economy either grinds to a standstill or is used purely by AI. That's why we need stuff like negative tax or universal basic income (whatever you want to call it).
@@Entropy67 AI can't run a restaurant.
AGI 2029