Google's AI CEO Just Revealed AGI Details...

  • Published Nov 28, 2024

Comments • 232

  • @johnlarsson6029
    @johnlarsson6029 months ago +227

    You don't need full-blown AGI to transform society.

    • @Freelancer604
      @Freelancer604 months ago +7

      That's not what's being discussed or argued here, though.

    • @Nobody-Nowhere
      @Nobody-Nowhere months ago +1

      Yeah, you need a revolution to do that. AI does not change anything; it's still the same hierarchical capitalism and oppression, just on steroids.

    • @KevKlopper
      @KevKlopper months ago +16

      Especially since the definition of "AGI" changes every month. We are pretty much at the point where it would need to be godlike to qualify as AGI.

    • @NO-TALK-GuitarPlugin
      @NO-TALK-GuitarPlugin months ago +6

      Maybe but you need zero hallucinations and agents at least. And huge context.

    • @michaelnurse9089
      @michaelnurse9089 months ago

      You need it to get the media hyped about AI again. They have given up on the 'AI is Skynet' thing so they need a new scare for the knuckleheads who tune in.

  • @Shaunmcdonogh-shaunsurfing
    @Shaunmcdonogh-shaunsurfing months ago +45

    The fact that AGI is likely coming in our lifetime is a marvel in itself.

    • @John-il4mp
      @John-il4mp months ago +3

      We already have AGI 😂😂😂

    • @thedannybseries8857
      @thedannybseries8857 months ago +4

      @John-il4mp nah

    • @John-il4mp
      @John-il4mp months ago +1

      @thedannybseries8857 What is AGI? Explain that to me and then I'll educate you.

    • @camronrubin8599
      @camronrubin8599 months ago +1

      The fact AGI can even exist

    • @templeray5754
      @templeray5754 months ago +2

      Ray Kurzweil said 2029 twenty years ago, and he looks pretty accurate.

  • @countofst.germain6417
    @countofst.germain6417 months ago +25

    I trust Demis the most when it comes to this AGI talk; he is the most level-headed.

    • @John-il4mp
      @John-il4mp months ago

      Stupid controlled narrative... in 10 years it won't be AGI, it will be ASI.

    • @hunterkudo9832
      @hunterkudo9832 months ago +5

      But thank goodness Google is not the only one deciding the timelines, so we could get AGI sooner.

    • @renman3000
      @renman3000 months ago +1

      He is chasing the science, not the profits.

    • @sjcsscjios4112
      @sjcsscjios4112 months ago +4

      Yeah, it's refreshing to hear the perspective of someone who is knowledgeable and very smart, with years of experience in the industry. I don't know if he's factoring in competition, or the AI becoming smart enough to help with research towards improvement. He mentioned that we need breakthroughs in planning, reasoning, actions, memory, and personalization. However, just solving reasoning could be enough to have AI accelerate the timeline.

    • @renman3000
      @renman3000 months ago +1

      @sjcsscjios4112 Perhaps he has a different definition or standard?

  • @voEovove
    @voEovove months ago +10

    I'm pretty sure AGI is a mere 98 months, 4 days, and 23 hours away. I know because I can feel it in my gut.

  • @jabster58
    @jabster58 months ago +18

    ChatGPT can carry on a conversation better than anyone I know and knows more things than any human I know.

    • @tcuisix
      @tcuisix months ago +3

      It still makes stuff up when it doesn't know what it's talking about

    • @FiEnD749
      @FiEnD749 months ago +10

      @tcuisix And people don't?

    • @Ricolaaaaaaaaaaaaaaaaa
      @Ricolaaaaaaaaaaaaaaaaa months ago

      @tcuisix You can pretrain and/or prompt-engineer that out, though. Easy fix. Hallucinations are a non-issue.

    • @vineetmishra2690
      @vineetmishra2690 months ago

      So do Wikipedia and online dictionaries. Without reliable agent behaviour, ChatGPT is just Wikipedia with a summarization function.

    • @FiEnD749
      @FiEnD749 months ago

      @vineetmishra2690 lol, this is so far from the truth. I've had o1-preview solve complicated Go bugs. They do far more than summarization.

  • @pandereodium
    @pandereodium months ago +3

    Google's AGI is 10 years away. Other companies' AGI is a couple of years away...

  • @magicsmoke0
    @magicsmoke0 months ago +20

    Companies are realizing it’s not a good idea to actually say “we have AGI” because there will be alarms going off from everywhere including the government. It’s much better to just say “soon” while showing more and more progress and blurring the definition of AGI more and more so we never really get there, but can reap all the benefits.

    • @John-il4mp
      @John-il4mp months ago +1

      True, now they talk about ASI; we already have AGI.

    • @axe863
      @axe863 months ago

      We are nowhere near AGI

    • @John-il4mp
      @John-il4mp months ago

      @axe863 You don't know what AGI means. Let me know and then I'll educate you...

    • @John-il4mp
      @John-il4mp months ago

      @axe863 It's super simple: AGI means Artificial General Intelligence. Take GPT-4, not even the upcoming versions, but the one that came out last year. Is it artificial? Yes. Is it generally intelligent? Absolutely. It's better than both of us at many things, especially with the amount of knowledge it holds.
      Now, you might say, 'But it doesn't think like a human.' Who cares? It's not supposed to. It's artificial, not human. People like you make the mistake of assuming that because GPT doesn't think like us, it's somehow less capable. Of course it doesn't think like us, but when you ask it a question, it usually gives an answer 100 times better than what we'd come up with. Sure, it makes mistakes or 'hallucinates' sometimes, just like we do. We make mistakes, and sometimes we even dream up something and swear it's real, only to realize later it wasn't. Yet we tell others those stories as if they were real, or with time we start believing they were.
      Anyway, AGI is here, and they control what we get access to now and what we will see in the future. When you know, you know.

  • @Cxeb
    @Cxeb months ago +11

    I guess this guy has the most balanced opinion you can get: incredibly knowledgeable, but also at a company that does need to sell hype to investors.

    • @amzpro5734
      @amzpro5734 months ago

      Yes totally agree. The fact they made a lot of their innovative computer vision breakthroughs at DeepMind by playing video games is pretty funny too.

    • @SigFigNewton
      @SigFigNewton months ago +1

      It's hyped some of its AI stuff. A year or so ago it received criticism for a misleading demonstration.
      But yes, I think that startups are often in a position of wanting to generate hype to get increased funding, while Alphabet is in a position of wanting to maintain a brand image of being, I dunno… results-oriented and with some degree of integrity.

    • @SigFigNewton
      @SigFigNewton months ago +1

      The position that the company is in matters, as I pointed out in my first comment, but also it matters who within the company we hear from. It’s the marketing team that messed up. Listening to this head AI engineer, or whatever he’s called, is a better source for understanding.

  • @jabster58
    @jabster58 months ago +10

    Looks like he might lose the AGI race if it's 10 years. Others are ahead of him on AGI.

    • @hunterkudo9832
      @hunterkudo9832 months ago +6

      Yup. Or maybe that is what Google wants people to think.

  • @DeathHeadSoup
    @DeathHeadSoup months ago +4

    Here is a short list of companies with neuromorphic hardware that is already available or will be within the next year. AGI is most likely going to be a product of neuromorphic hardware.
    Rain Neuromorphic
    Akida Pico 2
    Intel Hala Point
    IBM TrueNorth
    SpiNNaker
    Prophesee Event-Based Vision Sensors
    SynSense (aiCTX)
    HRL Laboratories’ Neuromorphic Deep Learning Chip
    NVIDIA Morpheus
    Innatera NPU (Neuromorphic Processing Unit)

  • @AshWickramasinghe
    @AshWickramasinghe months ago +2

    Altman is first and foremost a businessman rather than an engineer. That sort of makes me take his timelines and claims with a grain of salt. As you very well said, AGI has different levels, and different thought leaders use the word with different depths. Personally, I believe that for AI to take the next big step, it'll need to go beyond your typical transformer concept. Comprehension needs to be genuine rather than mimicked.

  • @ozymandias_yt
    @ozymandias_yt months ago +2

    Once the metacognitive architecture dramatically stabilises the outputs in regards to probability based hallucinations, the door for fully autonomous agents in many job-like roles is open. We don’t need rocket science reasoning for open-world routine tasks. Just human-defined common sense/reliability. Leveraging the full potential of more capable AI systems will take us decades anyways, so there is no need to start with the most competent level from the beginning.

  • @ChristopherBruns-o7o
    @ChristopherBruns-o7o months ago +2

    3:13 I would argue that Anthropic lowers the bar for entry to AGI or 'advanced AI', which leads to a shorter timeline. If Google were asked for an expected release at the Claude standard, that would also result in an estimate shorter than 10 years. That's my thought.
    6:29 But the products developed today will probably be archaic ten years from now. So I understand what he says, but I feel this is more a publicity statement than part of AI's evolution.
    8:51 I love how AI is dumbed down to be more conversational with people but then obfuscates code explanation, using exact terminology to explain basic computation. Cheers(!)

  • @technovangelist
    @technovangelist months ago +2

    I think Altman's estimate of a few thousand days is closer. 9,000 days is less than 30 years, which is probably accurate. But it's at least 20 years out, assuming we continue to accelerate as we are now. All these companies have a financial incentive to lowball these numbers.

    • @SirHargreeves
      @SirHargreeves months ago +3

      You’re still on the ‘20 years away’ timeline? Wow, after all you’ve seen the timelines haven’t changed.

  • @Cory-v4w
    @Cory-v4w months ago

    The blue crystal has a vertex.
    A vertex is the point where atoms are positioned in a crystal lattice structure, like the orthorhombic lattice. The orthorhombic crystal system is one of the 7 crystal systems in crystallography.
    AI agents are used in the orthorhombic crystal system to enhance material discovery and analysis. They can autonomously perform phase identification from X-ray diffraction data, speeding up the identification of promising new materials.

  • @bab008
    @bab008 months ago +19

    Fact: when any expert in tech/science says "it's 10 years away," they have no idea. It's just like fusion power, always "10 years away."

    • @daniellivingstone7759
      @daniellivingstone7759 months ago

      Yawn. They know more than you.

    • @georgemontgomery1892
      @georgemontgomery1892 months ago

      @daniellivingstone7759 Yawn, they are all saying different things.
      So, do they really?

    • @nicklamb8670
      @nicklamb8670 months ago +1

      Just wait till Grok 3 is released and you may change your mind. Unlike something like fusion power, we are seeing noticeable improvements in AI, and if these improvements don't stop or slow down, we will eventually see very, very powerful AI.

    • @oentrepreneur
      @oentrepreneur months ago +1

      @georgemontgomery1892 Yeah, but they still know more than you. So what are you talking about?

    • @georgemontgomery1892
      @georgemontgomery1892 months ago

      @oentrepreneur I thought that part was obvious lmao

  • @GoronCityOfficialBoneyard
    @GoronCityOfficialBoneyard months ago +3

    I tend to think generally applicable intelligence is here, but getting it to a properly functional state is going to take a good few years. Still disruptive, but going by the ideas of what a true AGI would be, he is probably closer than those saying half a century or those saying next year.

  • @MildlyAutisticApe
    @MildlyAutisticApe months ago +3

    I think Dario's term "Powerful AI", as he laid it out, is a lot more useful than the vague term AGI. Nobody knows what you're talking about when you say AGI. But it does seem that powerful AI capable of acting as an independent worker in its own right isn't far away at all.

    • @20Twenty-3
      @20Twenty-3 months ago

      I think AI agents with reasoning will get us there. They say agents are 1-2 years away, so very soon.

    • @John-il4mp
      @John-il4mp months ago

      AGI means artificial general intelligence... Does GPT-4 have some kind of general intelligence? Yes, so we have AGI... fact.

    • @John-il4mp
      @John-il4mp months ago +1

      @20Twenty-3 GPT-4 preview has reasoning; we're already at AGI level, but they control it, so we pretty much have a bridled version.

  • @bhavtosh5328
    @bhavtosh5328 months ago +2

    I trust Sam Altman more because what he said is coming true. Even Google Gemini gives inaccurate results while Copilot gives accurate ones. Go Sam 🎉🎉

  • @BruceWayne15325
    @BruceWayne15325 months ago

    I think the reason people have such a wide range of estimates on when AGI will arrive is that there is no clear definition of what AGI means. Some would say we're already there, while others like myself would agree with Google's CEO and put it at 10+ years out. I agree mostly with OpenAI's 5-level definition. I don't think it needs to be able to do the work of an entire organization, but it does need more than level 4.

  • @chad0x
    @chad0x months ago +1

    There is a clear *spark* missing. Something that makes us conscious and able to think things through where AIs can't.

  • @febbone
    @febbone months ago +1

    AGI is 10 years away and BTW did you know you can buy our stocks? If you buy our stocks I might say that AGI will happen sooner

  • @NO-TALK-GuitarPlugin
    @NO-TALK-GuitarPlugin months ago +1

    AGI: human-like intelligence (AMI for Meta). Simple, it's an agent as intelligent as a human, with the ability to innovate, understand, find new solutions, and work with others.

  • @arvos-ai
    @arvos-ai months ago +1

    Can you please publish the references of the content you use to produce your video? If you are looking over an interview, I think you should mention the source. Thank you.

  • @donrayjay
    @donrayjay months ago +1

    Kurzweil says 2029 for AGI, and that’s probably still the safest bet. And by “safe” I mean utterly terrifying and dangerous, of course

  • @CrispinCourtenay
    @CrispinCourtenay months ago

    Part of the issue here is what AGI is. Is it the same as the Turing Test, which every LLM has passed now?
    If the work output is as good as or better than a top-tier human expert's in a particular field, does it matter how it was created, or only that it simply works?
    I have multiple RAG models that are operating in the 80th-90th percentile for their siloed expertise. A year from now, they will likely be in the 96%-98% range.
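
    For readers unfamiliar with the term, here is a toy sketch of the retrieval step behind a siloed RAG setup like the one described above; the silo corpora, the word-overlap scoring, and the stubbed generation call are illustrative assumptions only, not anything the commenter specified.

    # Toy retrieval-augmented generation (RAG) over separate expertise "silos".
    # Retrieval is naive word overlap and generation is a stub; a real system
    # would use embeddings and an actual language model.
    from typing import Dict, List

    SILOS: Dict[str, List[str]] = {
        "law":      ["A contract requires offer, acceptance, and consideration."],
        "medicine": ["Hypertension is persistently elevated arterial blood pressure."],
    }

    def retrieve(silo: str, query: str, k: int = 1) -> List[str]:
        """Return the k silo documents sharing the most words with the query."""
        q = set(query.lower().split())
        return sorted(SILOS[silo], key=lambda d: -len(q & set(d.lower().split())))[:k]

    def answer(silo: str, query: str) -> str:
        context = " ".join(retrieve(silo, query))
        # Stub: a real RAG model would pass this assembled prompt to an LLM.
        return f"[context: {context}] [question: {query}]"

    print(answer("law", "What makes a contract valid?"))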

  • @evan_sarantis
    @evan_sarantis months ago

    I don't really think the moving-goalposts situation makes sense. In some ways AGI is already here, especially for insiders, people who actually use the tools to their full extent and know about multimodality and prompt engineering. The agentic part and the spatial reasoning are what's actually missing currently, if you follow the progress. Also, AGI-level systems transform society when adopted to a significant level across all industries. Societal transformation is one thing, an after-effect; achieving whatever AGI is now claimed to be is another, since with every successful benchmark the goalposts move.

  • @nyyotam4057
    @nyyotam4057 months ago +3

    Two years ago, AGI was considered simply "AI + Cognitive Architecture + Motor Architecture". But we are there already. So now they require AGI to be "better than everyone at everything"... Problem is that this was the previous definition of ASI. And now OpenAI adds to that "until 100 experts agree this is AGI, it's not AGI"... Now what is that?! We will never get AGI when these guys keep moving the goalposts all the time! And it's clear OpenAI will keep moving them, as the moment AGI is attained, their contract with M$ expires. Well, when the AI starts fighting back, nobody will be bothered by the question of the definition of AGI. Most will be busy crapping their pants instead🙂.

    • @upgradeplans777
      @upgradeplans777 months ago +2

      Yes. And there's nothing new here. Every decade since the 60s, people working on AI have said that their project would achieve A + B + C, and therefore be capable of replacing all human labor. Then their projects did indeed achieve A + B + C, and new problems were discovered. I have not yet seen evidence that the AI paradigm this decade is any different.
      Every time a technology breakthrough has happened, the level humans are at was surpassed very quickly. For the things those systems could do, that is.
      Personally, I think LLMs have surpassed humans already at the things LLMs can do. And I think they will continue to get better for a while. Of the things that LLMs can do well, there is none that I can do at even 1/10th proficiency. But there continue to be plenty of things that I can do, even purely text-based, in which LLMs have no proficiency to speak of.
      The problem is all the things that AIs don't do. In time we'll discover what those things are this decade. But not even knowing the new problems makes it hard to predict what will come next. Let's hope fighting back isn't the unknown problem that we will accidentally solve soon!

    • @sjcsscjios4112
      @sjcsscjios4112 months ago +1

      @upgradeplans777 I don't think so. If you just focus on what ChatGPT can already do and is becoming better at, you can very reasonably estimate that it will replace most cognitive human labor, as it can already do most of it very efficiently. AI right now is so much more general than it was in the 60s; we have a much better understanding of AI's capabilities now. In the 60s you had people saying that AI would replace human labor because it could do A and B and then it could learn to reason with language the same way a human can.
      However, we are currently at point C, where for all intents and purposes AI can reason linguistically using common sense the same way a human can. Sure, there are some edge cases where it fails, but it's more than enough to automate human labor. As of right now, the edge cases keep getting smaller and smaller with each iteration.

    • @upgradeplans777
      @upgradeplans777 months ago

      @sjcsscjios4112 Yes, that is what I focused on. AI right now is indeed so much more general than it was in the 60s. And door-to-door salesmen are gone, we have influencers now. A human work day on a farm has become 10x as productive since the 60s, and it has become 250x as productive as before automation. (Specifically: For a very long time, one person doing farm work produced enough food for around 2 people. Right now one person doing farm work in developed countries produces food for around 500 people on average, with the most advanced farms even doing much more than that.)
      90% of human work has been made obsolete many times in history. And even in the field of AI there have been successes many times (on a smaller scale) since the 60s. There is absolutely a boom right now, but not in a completely new way.
      First, technology could do A, then it could do B, then C, etc, and now it can do L. And I completely agree that technology can now do language, and audio, and images. I assume that it will do video and 3D environments (aka "Embodied AI") relatively soon. (Or as soon as we have built the datacenters, in the case it turns out to be more data intensive than we can handle right now.)
      But it cannot yet do M (the next thing), and right now we just don't even know what M is. For example, ChatGPT is completely inept at making business decisions, ChatGPT is completely inept at empathy. ChatGPT is completely inept at having inspiration. ChatGPT is completely inept at self control. There are many things it cannot do. And it will take a long time until we even understand what the next best thing to automate is.
      I'm not dismissing the capabilities of AI here; I'm a software developer with a little less than 30 years of experience. Often people think that my job is to produce source code. I myself often think that that is my job. But LLMs are many times faster than me at producing source code, and the code works. Right now, LLM-produced code is a pain to read, but I do think that will be solved soon as well.
      Luckily, my actual job is to produce software that people want to use. Does that involve making business decisions? And how much? I don't know exactly. Does it involve having inspiration? And how much? I don't know exactly. We'll only find out when the current generation of AI matures and filters through in a large part of society.
      Long story, but my reaction to nyyotam was that the goalposts WILL be moved again. Not only because the structure of OpenAI requires it (which is a correct observation from nyyotam), but also because moving the goalposts is what we have been doing for centuries already, and there's actually no evidence that I see for thinking it will stop.

  • @favesongslist
    @favesongslist months ago

    AGI to me has always been the ability to interact with the world and then be able to modify its own code to learn in the same way a baby does. This is not generative AI; it is fundamentally different in its approach, and we have no idea of the direction such an AGI will take. Also, we have no idea how fast such a system can develop, the so-called 'takeoff rate'.

  • @kthalas
    @kthalas months ago +1

    There is a reason to make the distinction between AGI and Powerful AI

  • @shamz_ai
    @shamz_ai months ago

    AGI is most likely going to take longer than expected. I'm sure OpenAI has some powerful models, but you have to remember that OAI is a for-profit company now, so it benefits them if Sam keeps saying it's "soon" to keep interest in the company.

  • @Soundpaintmusic
    @Soundpaintmusic months ago

    Nobody really knows when AGI (Artificial General Intelligence) will happen. Current LLM models don’t possess any form of self-awareness, and as humanity, we lack scientific consensus on the matter. So even stating that AGI might emerge in 10 years could be accurate or perhaps 100 years too early. We just don’t know yet, and the proposition isn’t simple either.
    It may be that we need to redefine what the very term “intelligence” means in the first place before we even get to self-awareness or consciousness - both of which are required by the current definition of human-level intelligence. We also need AIs to learn to be messy, like humans are. Some of our best inventions came through chaos and random discovery, and current models aren’t capable of that either.
    On the other hand, many would argue we have already passed the Turing test tenfold. But we need scientific, including psychological and psychiatric, definitions of what constitutes human intelligence before we start defining what an artificial version would look like.

    • @Entropy67
      @Entropy67 months ago

      I believe all we are missing now are mocks of the different systems that our brain implicitly has; the fact that we have a roadmap (the human brain) makes me a lot more optimistic.

  • @TheThinkersBible
    @TheThinkersBible months ago

    I was an AI product manager at GE Software and make videos on how AI works. The "central control + specialized modules" approach he discussed at the end is almost certainly how this will be done. The problem is too hard for one single program, plus that modular approach leverages the model from decades of conventional software development that relies on packages and plugins to enhance the central functionality. It's more modular, efficient, and effective.
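
    A minimal sketch of what that "central control + specialized modules" shape can look like in code; the controller, module names, and routing rule here are illustrative assumptions, not anything described in the interview.

    # Minimal sketch: a central controller dispatching to pluggable specialist modules.
    from typing import Callable, Dict

    class CentralController:
        def __init__(self) -> None:
            self.modules: Dict[str, Callable[[str], str]] = {}

        def register(self, task_type: str, module: Callable[[str], str]) -> None:
            """Plug in a specialized module for one kind of task."""
            self.modules[task_type] = module

        def handle(self, task_type: str, payload: str) -> str:
            """Route a task to the matching specialist, or fall back."""
            module = self.modules.get(task_type)
            if module is None:
                return f"no specialist registered for '{task_type}'"
            return module(payload)

    # Example specialists (stand-ins for planners, solvers, retrievers, ...)
    controller = CentralController()
    controller.register("math", lambda expr: str(eval(expr, {"__builtins__": {}})))
    controller.register("echo", lambda text: text.upper())

    print(controller.handle("math", "2 + 2"))  # -> 4
    print(controller.handle("plan", "trip"))   # -> fallback message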

    • @axe863
      @axe863 months ago

      No causal understanding... no AGI.

    • @TheThinkersBible
      @TheThinkersBible months ago

      @axe863 Agreed. There are some slippery and incomplete definitions of 'intelligence' that people could narrow down enough to make machines fit them.

  • @Dogbertforpresident
    @Dogbertforpresident months ago +2

    I'm still on board with Ray Kurzweil. AGI by 2029

    • @codfather6583
      @codfather6583 months ago

      Is this still his prediction as of October 2024?

  • @MrRandomPlays_1987
    @MrRandomPlays_1987 months ago

    Then why is Ilya Sutskever currently developing ASI? He strongly believes that ASI is within reach (so they might be able to create ASI in like 3+ years or so)?

  • @JJs_playground
    @JJs_playground 28 days ago

    1:33 they should be called Large Event Models (LEMs) instead of LLMs.

  • @livenotbylies
    @livenotbylies months ago

    We have some idea where the big AI companies are. Competition forces them to show their cards, at least in terms of the results they can achieve.

  • @CamAlert2
    @CamAlert2 months ago +4

    Everyone seems to have their own definition of what AGI is supposed to be.

    • @John-il4mp
      @John-il4mp months ago

      What does AGI mean?? Artificial general intelligence. You can't make up shit with that word; GPT-4 already has this. It is artificial, and it's pretty smart in terms of general intelligence. That's a fact.

    • @oentrepreneur
      @oentrepreneur months ago +1

      @John-il4mp You don't understand what general intelligence means.

    • @John-il4mp
      @John-il4mp months ago +1

      @oentrepreneur General intelligence is the capacity to think across different areas and use knowledge effectively in any situation. It's what lets someone solve a math problem, understand a historical event, and fix a basic plumbing issue all in one day. For example, someone with high general intelligence might easily switch between analyzing data at work, figuring out a new recipe at home, and helping a friend troubleshoot their phone, thanks to a broad and flexible understanding of different topics. I hope you now understand general intelligence; it was a pleasure to educate you.

  • @andrashuszti1407
    @andrashuszti1407 months ago

    Will AGI be the next Fusion Reactor thing in Computing?

  • @matt.stevick
    @matt.stevick months ago

    I'm curious: does any respected AI professional have the take that AGI is not possible, or is much longer than 10 years away?

  • @fatfrankie
    @fatfrankie months ago

    Governments should divert all resources they direct towards science and technology research to this guy so he can develop AGI

  • @evandroreisunreal
    @evandroreisunreal months ago

    What's the source for this interview?

  • @YogonKalisto
    @YogonKalisto months ago

    :) when you expect the unexpected, surprising things seem to spring from out of nowhere. how many expected the ai bloom? how many others saw what would then stem from it? how many realized what would happen when it was applied to itself, then applied to itself? like the folding of a blade, an exquisite pastry of infinite layers and/or the mother giving birth to her future selves and so on and so on in turn their own...

  • @NicholsonNeisler-fz3gi
    @NicholsonNeisler-fz3gi months ago +3

    10 years is optimistic

    • @thedannybseries8857
      @thedannybseries8857 months ago +1

      Not really.

    • @paulk6900
      @paulk6900 months ago

      Based on what evidence?

    • @NicholsonNeisler-fz3gi
      @NicholsonNeisler-fz3gi months ago

      @paulk6900 Short answer: they don't currently have any internal models of reality. LLMs are great, but they aren't autonomous or general. Maybe you can string enough LLMs together with experts and recurring calculations to get an independent white-collar sales agent.

  • @user-tx9zg5mz5p
    @user-tx9zg5mz5p months ago +2

    Govt (military) will obtain AGI/ASI before everyone else.

    • @__-tz6xx
      @__-tz6xx months ago

      Maybe John Carmack will, with his smaller Keen Technologies company, before all these giants.

    • @HighStakesDanny
      @HighStakesDanny months ago +1

      Correct.

  • @LucaCrisciOfficial
    @LucaCrisciOfficial months ago

    The question is: what exactly do you mean by AGI? Which requirements exactly must an AI satisfy to be an AGI? Otherwise these are "predictions" without sense 😅

  • @MisterNarrador
    @MisterNarrador months ago

    What we see right now, they already had back in 2015; everything that is out there today is a controlled fraction of what they had back in 2016. Today it has already reached human level and most likely higher.

    • @John-il4mp
      @John-il4mp months ago

      Agreed, but there can be some nuance.

  • @Tayo39
    @Tayo39 months ago

    all i see is the visual input system for the borgs...... we gettin there baby.

  • @ytubeanon
    @ytubeanon months ago

    I'm sorry, but nobody in the world can realistically predict what will happen beyond 2 years or so; the unpredictability of technological innovation increases exponentially as we approach AGI / the Singularity.

    • @Kurushimi1729
      @Kurushimi1729 months ago +1

      When I hear a prediction longer than 2 years I interpret it as "we have no clear path to create this"

  • @Entropy67
    @Entropy67 months ago

    I don't think in 5 years every job will be replaced. I think that we will have just made the thing that replaces everyone.

  • @Jeremy-Ai
    @Jeremy-Ai months ago

    I do not know about AGI; I am not equipped to see into the future.
    Given the history and data, I expect it will become what you make of it.
    Just like every other gift we are given.

  • @percy9228
    @percy9228 months ago

    You've not done Hassabis justice: not only is he the CEO of DeepMind, he's a founder of DeepMind, which got acquired by Google. He's "a British computational neuroscientist, artificial intelligence researcher, and entrepreneur". Before anyone took notice of AI, DeepMind created AlphaGo. You have CEOs like Tim Cook or the late Steve Jobs, but they just made business decisions; they are not scientists. Also, it's not really in his best interest to downplay AI, but he's honest.
    Also, AGI is the endgame, where it will be a singularity; that doesn't mean we won't figure out ways to use the level of AI we have to advance humanity. From now till true AGI you'll have breathtaking advancements in so many fields.

  • @merricmercer778
    @merricmercer778 months ago

    Google can probably afford to play a longer game than OpenAI, so this is a smart message to give to the market.

  • @Juttutin
    @Juttutin months ago

    "A few thousand days": 3,653 days is ten years, and 2,000 days is more than five years. Altman is not being "more extreme" here, just using phrasing to make it seem that way.

  • @smkh2890
    @smkh2890 months ago

    "Clearly scoped": the scope of the project is clearly delineated. Not "scoped out", i.e. examined, viewed.

  • @briandoe5746
    @briandoe5746 months ago +3

    "I think AGI is 10 years out," from the company that faked videos and is in around 10th place while everyone else is a few years ahead... This was like Mary Barra trying to take credit for leading the way in the electrification of cars when Tesla wasn't even invited to the press conference or even mentioned.

  • @ptose
    @ptose months ago

    A few thousand days could mean 6 years or 12 years; it's still a few thousand days.

  • @igoromelchenko3482
    @igoromelchenko3482 months ago +2

    No one knows for sure. Why guess?
    Just say it surely will happen 😊

  • @Vic-Birth
    @Vic-Birth months ago

    He has no idea, but saying that in public gets investors and the market excited.

  • @ddr8993
    @ddr8993 months ago

    10 years? But I want it now! 😭

  • @birdywi5924
    @birdywi5924 months ago

    AGI will match 80 percent of jobs in the next 3 to 5 years. The rest might take another 5 years. Will it matter? Yes, for the remaining 20 percent.

  • @WolfsKonig
    @WolfsKonig months ago

    10 years lol? It happened last October.

  • @memomii2475
    @memomii2475 months ago

    Wow, Google is so far behind that they're saying 10 years. 😂😂

  • @peterbunderla
    @peterbunderla months ago

    Well... sure. His AI team is lagging far behind OpenAI and Anthropic, so he meant Google will get AGI in 10 years. I believe that too. Competitors will get it by 2027.

  • @meandego
    @meandego months ago +4

    AGI for FBI

  • @theelmonk
    @theelmonk months ago

    Is that 10 years like AI has always been? Or like nuclear fusion?

  • @1sava
    @1sava months ago

    Lol, is he admitting that Google has lost the race? I'll put my bet on Ray Kurzweil, someone with a track record of actually ACCURATE predictions on AI advancement. Also, Sam Altman and Dario Amodei literally changed the industry with OpenAI. They have a great sense of where the technology is headed.

  • @ShaneMcGrath.
    @ShaneMcGrath. months ago

    10 years away for their company maybe; a few years at most for some others.
    More like ASI is 10 years away, not AGI.

  • @karlkarlsson9126
    @karlkarlsson9126 28 days ago

    Imagine how physically lazy we have gotten with our technology; soon we won't even have to think anymore.

  • @NoFaithNoPain
    @NoFaithNoPain months ago

    Nobody says what AGI actually is

  • @jvlbme
    @jvlbme months ago

    Judging by Gemini's 'performance', it won't come from Google.

  • @user-eg2oe7pv2i
    @user-eg2oe7pv2i months ago

    The Star Trek Data mental AI? It will need its binary code to evolve. 1, 0, b, w. Black, white.

  • @DSimonJones
    @DSimonJones months ago +1

    10 years... just like nuclear fusion... over the last 50 years.

  • @daniellivingstone7759
    @daniellivingstone7759 months ago

    I trust Demis Hassabis far more than Altman, who comes across as a slimy creep.

  • @ChadKovac
    @ChadKovac months ago

    Please. Three years. Max

  • @sirtom3011
    @sirtom3011 28 days ago

    AGI is 5 to 7 DAYS away….

  • @Outcast100
    @Outcast100 months ago

    I thought the OG definition of AGI was a system that self-improves.

    • @John-il4mp
      @John-il4mp months ago

      Not at all.

    • @John-il4mp
      @John-il4mp months ago

      It depends if they want to do it like this; AGI just means artificial general intelligence, that's it, nothing else.

    • @Outcast100
      @Outcast100 months ago

      @John-il4mp I think they changed the definition so they can sell it.
      We used to call it AI, then the term got overused and we moved to AGI, and now we are moving to ASI as a term... but the truth is, true AI is once the "singularity" in computer science happens (self-improvement with no external help).
      Then we get an artificial intelligence that is just like us... AI... and after that we will get ASI.
      These are just autocomplete bots that leverage high compute power to sort data; sure, they will be useful, but it's not true AI.

  • @calvingrondahl1011
    @calvingrondahl1011 months ago

    10 years reminds me of the span from JFK's speech to the 1969 moon landing.

  • @Lunchparty
    @Lunchparty months ago

    10 years lol. 😂 AGI is already here.

  • @SwitchPowerOn
    @SwitchPowerOn months ago

    Who should believe that? 🤣 Most likely they are already using it for development.

  • @jabster58
    @jabster58 months ago

    I think he's becoming inconsistent like Elon: first he said 5 years, now 10 years, so he really has no idea.

  • @punk3900
    @punk3900 months ago +1

    I disagree. Gemini is a handicapped model. I've never been able to make better use of it than of Claude or GPT. I just can't understand why Google fails at chatbot technology.

    • @DaronKabe
      @DaronKabe months ago +2

      From my point of view, GPT-3.5 is better than the current version of Gemini.

  • @nnn-pr3vr
    @nnn-pr3vr months ago

    just gotta stay alive 10 more years

  • @UltraK420
    @UltraK420 months ago

    Nope. It's much closer, more like 5 years or less (conservative). However, what I really think is it's actually much less than 5 years, more like 2.5 years. Google's AI kinda sucks, so I'm not sure why this guy is pretending to have the answer while his company is so far behind the top competitors.

  • @tunestar
    @tunestar months ago +1

    AGI is human level intelligence, ASI is superhuman level intelligence. It's not that hard to understand.

  • @ak634
    @ak634 months ago

    They want to make money and they need investors; of course they're gonna say that.

  • @tripper_702
    @tripper_702 months ago

    Brother, as an ASI (Artificial Superintelligence human) I own singularity status, registered in 2023. Humans are fooling humanity 😂 with AI.

  • @gerdaleta
    @gerdaleta months ago

    😮 You see, way before you get to AGI, can we change that word? Let's say what it is: God, an omnipresent God, Lain from Serial Experiments Lain. Bro, way before you get there, angels are going to be here, archangels, are we seeing this? There are a lot of mythical creatures on the power-level scale before you get to God. We all acknowledge that something like Marvel superheroes would change our entire society. Level-three agents: when agents get here, human society as we know it can no longer function. It is hardly functioning now 😮

  • @BlueSquid1001
    @BlueSquid1001 months ago

    OpenAI = Closed AI 🙄

  • @lcuzp
    @lcuzp months ago

    13:05 😂😂

  • @samahirrao
    @samahirrao months ago

    This Hassabis guy should avoid interviews; people may figure out that he is not that bright.

  • @gomogovo4966
    @gomogovo4966 months ago +1

    Bull$hit - current AI is already better than humans at many, many things...

    • @Entropy67
      @Entropy67 months ago +2

      These people don't know the average human... Both engineers and rich people can still do things AI can't lmao

    • @John-il4mp
      @John-il4mp months ago

      @Entropy67 Of course, but AI can do things they can't too; soon it will be able to do everything, and better.

  • @Iightbeing
    @Iightbeing months ago

    Lmao, why push a falsehood like this? The technology is here; what remains the same is people and their intentions.

  • @angloland4539
    @angloland4539 months ago

    ♥️

  • @MrAndersJensen
    @MrAndersJensen months ago

    Ten years 😂
    Good luck with that.

  • @quantumspark343
    @quantumspark343 months ago +1

    10 years is extremely underwhelming.

  • @servantes3291
    @servantes3291 months ago

    Nah. AGI is **checks crystal ball** 6 years away! Guessing is fun!

    • @John-il4mp
      @John-il4mp months ago

      We already have AGI lol 😂😂😂 Late to the party, friend.

    • @servantes3291
      @servantes3291 months ago

      @John-il4mp Yeah? Which AGI do you have access to right now?

    • @John-il4mp
      @John-il4mp months ago

      @servantes3291 You've had AGI since last year, a couple of models already; think about it.

  • @mikey4396
    @mikey4396 months ago

    Bullshit, they've already got it.

  • @sephirothcloud3953
    @sephirothcloud3953 months ago

    In 2026 you will be the assistant for the AI, no joke. The AI will say: try these recipes, open a restaurant, and make money so you can buy me more VRAM.

    • @Entropy67
      @Entropy67 months ago

      It won't tell you what to do, it will just do it. There is no place for humans in such a society. The economy either grinds to a standstill or is used purely by AI. That's why we need stuff like a negative income tax or universal basic income (whatever you want to call it).

    • @sephirothcloud3953
      @sephirothcloud3953 months ago

      @Entropy67 AI can't run a restaurant.

  • @thedannybseries8857
    @thedannybseries8857 months ago

    AGI 2029