The Transformative Potential of AGI - and When It Might Arrive | Shane Legg and Chris Anderson | TED

  • Published on May 3, 2024
  • As the cofounder of Google DeepMind, Shane Legg is driving one of the greatest transformations in history: the development of artificial general intelligence (AGI). He envisions a system with human-like intelligence that would be exponentially smarter than today's AI, with limitless possibilities and applications. In conversation with head of TED Chris Anderson, Legg explores the evolution of AGI, what the world might look like when it arrives - and how to ensure it's built safely and ethically.
    Watch more: go.ted.com/shanelegg
  • Science & Technology

Comments • 413

  • @delriver77
    @delriver77 4 months ago +340

    As a sick person struggling with crippling illnesses, and bedridden for many years, I sincerely hope AGI can be achieved asap. It's my best chance at having something remotely close to an actual life at some point.

    • @coolcool2901
      @coolcool2901 4 months ago +24

      We will have it by September 2024.

    • @coolcool2901
      @coolcool2901 4 months ago +11

      We just need to build complete mathematical capabilities into LLMs.

    • @Gallowglass7
      @Gallowglass7 4 months ago +37

      I am sorry to hear that, mate. I deeply hope it happens as soon as possible myself, as my parents are getting old and I cannot picture a world without them. Hopefully, our dream will come true in the somewhat near future.

    • @krishanSharma.69.69f
      @krishanSharma.69.69f 4 months ago +4

      @@coolcool2901 It will be so funny if it doesn't happen in September 2024! 😂

    • @coolcool2901
      @coolcool2901 4 months ago +8

      @@krishanSharma.69.69f I'll just shift the goalposts then. I am flexible, not rigid.
      But it will more than likely happen. To get from AI to AGI, it needs to understand the complete mathematical matrix, which will be accomplished next year. Maths is the language of the universe and absolute logic.
      Current LLMs understand language but don't know maths, and that's why they're not AGI. Maths is required for a self-improving AGI system.

  • @Graybeard_
    @Graybeard_ 4 months ago +8

    In terms of the human experience, I suspect one of the first places we will find AGI really transforming our experience in a positive way will be with the aging population. The baby boom generation is perfectly placed to benefit from AGI. I remember being in a college social science class, learning of the concern about how society would deal with baby boomers becoming old and consequently reaching the stage in their (our) lives where we require more support, both physically and cognitively. I find it fascinating to contemplate our cellphone avatars carrying on conversations that stimulate our brains, reminding us to take our medicines, making recommendations that are personal and comforting, and assisting us when we become confused or disoriented. A couple of simple scenarios come to mind: coming out of a store confused as to where we parked our car, and our assistant reassuring us and showing us where we parked it; or our assistant assessing that we have not had human interaction for a period of time and suggesting social interactions, or even texting our care provider to alert them that we are becoming "shut in".

  • @alescervinka7501
    @alescervinka7501 4 months ago +21

    FEEL THE AGI

  • @zmor68
    @zmor68 4 months ago +47

    Fascinating. AGI will be smart enough to understand how AGI works. So it will be able to improve its own capabilities. AGI will then be smarter and so will improve further. So AGI will be a constantly self-improving system. It will leave humans behind very quickly. We will cease to understand a lot of what AGI is doing. Secondly, there is an inherent unpredictability in complex cognitive systems. Absolutely fascinating!
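
A toy sketch of the compounding loop described in the comment above, with invented numbers (this is not from the talk): a system whose improvement rate scales with its current capability pulls away from one improved at a fixed external rate.

def self_improving(capability=1.0, gain=0.1, generations=20):
    # Each generation improves the next in proportion to its own capability.
    history = [capability]
    for _ in range(generations):
        capability += gain * capability  # the better it gets, the faster it improves
        history.append(capability)
    return history

def externally_improved(capability=1.0, step=0.1, generations=20):
    # Baseline: a fixed outside improver adds the same amount every generation.
    history = [capability]
    for _ in range(generations):
        capability += step
        history.append(capability)
    return history

print(round(self_improving()[-1], 2))       # ~6.73x after 20 generations (compounding)
print(round(externally_improved()[-1], 2))  # 3.0x after 20 generations (linear)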

    • @singularity6761
      @singularity6761 4 months ago +9

      It's called ASI then: Artificial Superintelligence.

    • @johannes3033
      @johannes3033 4 months ago +4

      And some believe that the transition from AGI to ASI will be a matter of days, hours, or even minutes. With a large number of self-improvements done by the system in a very short timeframe.

    • @Kami84
      @Kami84 4 months ago

      Human level AGI is a false concept. If a machine has all the cognitive abilities of a human, it will already be superior to humans. Humans don’t have perfect memory, perfect math skills, unlimited stamina, the ability to be copied and work in tandem, or the ability to go through huge amounts of data in seconds. AGI will have these capabilities on day 1.

    • @storiestellr
      @storiestellr 4 months ago

      good take 👍🏻

    • @RevolverRez
      @RevolverRez 3 months ago +2

      Potentially a terrible idea. I hope I'm wrong, but I wish we waited until we had a better understanding of consciousness and how a mind works in general before going for this kind of tech.

  • @alipino
    @alipino 4 months ago +19

    AGI will rival the invention of the wheel in greatness

    • @oranges557
      @oranges557 4 months ago +7

      It will be waaay bigger

    • @Adam-nw1vy
      @Adam-nw1vy 4 months ago +4

      It will rival the "invention" of humans themselves in intelligence.

    • @gregbors8364
      @gregbors8364 4 months ago +2

      Butlerian Jihad

    • @rejectionistmanifesto8836
      @rejectionistmanifesto8836 14 days ago

      @@gregbors8364 The Techno-Religionists will hate you for bringing that up. It's all hopium about the positive uses of AGI, and how government and business will use it to control people for the elites is never discussed.

  • @Somebodythatoverthinks
    @Somebodythatoverthinks 4 months ago +5

    Fascinating insights on AGI's potential by Shane Legg. Balancing innovation with ethics is crucial for a responsible and impactful future

  • @xemy1010
    @xemy1010 4 months ago +39

    Shane is very insightful here. Very clear communicator and really demystifies what AGI is and its implications.

    • @MrGriff305
      @MrGriff305 4 months ago +4

      Useful insight had to be forced out of him, and it honestly wasn't anything that wasn't already obvious.

  • @dameanvil
    @dameanvil 4 months ago +62

    00:04 🌐 Shane Legg's interest in AI sparked at age 10 through computer programming, discovering the creativity of building virtual worlds.
    01:02 🧠 Being dyslexic as a child led Legg to question traditional notions of intelligence, fostering his interest in understanding intelligence itself.
    02:00 📚 Legg played a role in popularizing the term "artificial general intelligence" (AGI) while collaborating on AI-focused book titles.
    03:27 📈 Legg predicted in 2001 that there was a 50% chance of AGI by 2028, a forecast he still maintains, owing to computational growth and vast data potential.
    04:26 🧩 AGI defined as a system capable of performing various cognitive tasks akin to human abilities, fostering the birth of DeepMind.
    05:26 🌍 DeepMind's founding vision aimed at building the first AGI, despite acknowledging the transformative, potentially apocalyptic implications.
    06:57 🤖 Milestones like Atari games and AlphaGo fueled DeepMind's progress, but language models' scaling ignited broader possibilities.
    08:50 🗨 Language models' unexpected text-training capability surprised Legg, hinting at future expansions into multimedia domains.
    09:20 🌐 AGI's potential arrival by 2028 could revolutionize scientific progress, solving complex problems with far-reaching implications like protein folding.
    11:44 ⚠ Anticipating potential downsides, Legg emphasizes AGI's profound, unknown impact, stressing the need for ethical and safety measures.
    14:41 🛡 Advocating for responsible regulation, Legg highlights the challenge of controlling AGI's development due to its intrinsic value and widespread pursuit.
    15:40 🧠 Urges a shift in focus towards understanding AGI, emphasizing the need for scientific exploration and ethical advancements to steer AI's impact positively.

    • @fai8t
      @fai8t 4 months ago +4

      What's that plugin called?

  • @dr_flunks
    @dr_flunks 4 months ago +7

    This is my best chance of not going blind. I hope they get there quickly, as my time is limited.

    • @KuZiMeiChuan
      @KuZiMeiChuan 4 months ago +1

      Well, if you do go blind then it could probably still fix it, and then the other possibility is that everyone dies. So either way in the end you won't be blind.

    • @ggx444
      @ggx444 4 months ago +1

      @@KuZiMeiChuan Everyone will die even if we don't develop AGI, so no problem either way.

  • @claybowcutt6158
    @claybowcutt6158 4 months ago +11

    We need an AGI panel of judges. I think AGI can be impartial, and an impartial panel of independent AGIs would change the world.

    • @danawhiteisagenius8654
      @danawhiteisagenius8654 4 months ago +2

      Yep, we'll never have a subjective outcome to a figure skating event or a robbery in combat sports ever again! Lol

    • @AshokKumar-mg1wx
      @AshokKumar-mg1wx 4 months ago +3

      Do you know about ASI 😈

    • @-whackd
      @-whackd 4 months ago +1

      The AI judges would follow the constitution then, unlike the human ones

  • @arkdark5554
    @arkdark5554 4 months ago

    Very, very insightful little video. Absolutely fascinating…

  • @GianetanSekhon
    @GianetanSekhon 4 months ago +8

    0:00 - 2:00:
    Introduction of Shane Legg and his background in computer science and artificial intelligence.
    Legg's early interest in AI and his experience with dyslexia.
    Coining the term "artificial general intelligence" (AGI) in 2001.
    2:00 - 4:00:
    Legg's prediction of a 50% chance of AGI by 2028 and his current stance on the timeline.
    Definition of AGI as a system that can do all the cognitive tasks that humans can do.
    4:00 - 6:00:
    Founding of DeepMind and the company's goal of achieving AGI.
    Legg's belief in the transformative potential of AGI and the importance of understanding its risks.
    6:00 - 8:00:
    The development of AlphaFold and its potential impact on scientific research.
    Legg's vision for a future where human intelligence is aided and extended by machine intelligence.
    8:00 - 10:00:
    Potential risks associated with AGI and the need for careful development and regulation.
    Legg's call for more research and understanding of AGI to ensure its safe and ethical development.
    10:00 - 12:00:
    Discussion of the potential for AGI to solve some of humanity's most pressing challenges.
    Legg's optimism for the future of AI and its potential to create a golden age for humanity.
    12:00 - 14:00:
    Legg's concerns about the potential for AGI to be used for malicious purposes.
    The need for international cooperation to ensure the responsible development of AGI.
    14:00 - 16:00:
    Legg's call to action for scientists, policymakers, and the public to engage in the conversation about AGI.
    Closing remarks and Q&A session

    • @Eznet089
      @Eznet089 4 months ago +1

      Thks

    • @joeyjoey324
      @joeyjoey324 4 days ago

      An AI summary for a video talking about the future of AGI? Really?

  • @stevej.7926
    @stevej.7926 4 months ago +26

    Always important to remind ourselves that intelligence and wisdom are two different realms.

    • @sunflower-oo1ff
      @sunflower-oo1ff 4 months ago

      Yup… I think it's probably already here… but we are not told… They are slowly getting US ready… hopefully it will be safe for all of us 🙏🧡

    • @markjohnson4003
      @markjohnson4003 4 months ago

      That's something that less intelligent people like to tell themselves to feel better.

  • @achaljoshi402
    @achaljoshi402 4 months ago

    11:25-11:35, those 10 seconds blew my mind.

  • @goodcat1982
    @goodcat1982 4 months ago +1

    11:32 that gave me shivers. I'm super excited and terrified at the same time!

  • @gjb1million
    @gjb1million 4 months ago

    Great episode. Thanks.

  • @Max-px5ym
    @Max-px5ym 4 months ago +19

    This dude is happy to have created a black hole and asks us to be open-minded about it.
    Three insane quotes:
    11:28 "it's like the arrival of human intelligence in the world. This is another intelligence arriving in the world"
    11:38 "we do not fully understand all the consequences and implications of this"
    What could go wrong?
    12:55 "superintelligence could design and engineer a pathogen"
    Great and he's optimistic.

    • @GarviHere
      @GarviHere 4 months ago +8

      What sounds cooler:
      Humans died of global warming
      Or
      Humans died because AI robots killed them

    • @Based_Batman
      @Based_Batman 4 months ago +1

      God complex

    • @alexanderkharevich3936
      @alexanderkharevich3936 4 months ago +1

      They're too excited to understand how it actually works and all the consequences it may cause. The dude just spilled the water.

  • @nerd26373
    @nerd26373 4 months ago +12

    We appreciate how much insight and useful information we receive from talks like these. We hope to see more in the future.

  • @KarakiriCAE
    @KarakiriCAE 4 months ago +3

    AGI will be the most powerful tool humanity has ever seen, and it will definitely be weaponised. There are a million ways this can go wrong, and the genie is already out of the bottle, so we just have to hope that it'll come as late as possible.

  • @xf2mx
    @xf2mx 4 months ago +44

    Won't AGI be able to introspect and research its own neural networks to understand how it works? That might be our only chance of understanding how they produce the results they do.

    • @inediblenut
      @inediblenut 4 months ago +12

      Understanding how it works will not prevent disaster. Some understood how the space shuttle worked, but it didn't allow them to foresee the ways it would fail, catastrophically, before it happened. AGI will be many times more complicated than the space shuttle was.

    • @spider853
      @spider853 4 months ago +1

      There is this notion of an AGI boom where, given we were able to make a higher intelligence, it can continue the trend until... we don't know 🤷‍♂️

    • @kevinscales
      @kevinscales 4 months ago

      That will help for sure, but you are asking it to explain its motivations without knowing its motivations. It could lie.

  • @odiseezall
    @odiseezall 4 months ago +6

    It's incredible how easy it is for some people to talk about gambling the future of every man, woman and child of every culture and nationality. Such moral clarity!

  • @Enigma1336
    @Enigma1336 4 months ago +138

    If AGI can create AGI, then someone will inevitably create unethical and dangerous AGI with nefarious intentions. We need to prepare for when that will happen, just as much as we must try to make our own AGI safe and ethical.

    • @bestoftiktok8950
      @bestoftiktok8950 4 months ago +29

      Ethics, morals, good or bad don't exist. They are all just concepts and can vary vastly.

    • @JracoMeter
      @JracoMeter 4 months ago

      @@bestoftiktok8950 How can they vary and not exist?

    • @absta1995
      @absta1995 4 months ago +34

      @@bestoftiktok8950 Would you say the same thing if someone threatened to harm you and people you care about? Or would you suddenly realise the value of morals, ethics and justice?

    • @argoitzrazkin2572
      @argoitzrazkin2572 4 months ago +2

      The same could be said about you and any other intelligent human being; however, we do not assume that we have blind will and follow anyone simply because they ask us to. I think we need to re-understand the concept of AGI.

    • @thephilosopher7173
      @thephilosopher7173 4 months ago +2

      That’s assuming that it’s still able to be used as a tool, and not that it becomes sentient. You can’t “use” a super intelligence (Ex Machina)

  • @lpalbou
    @lpalbou 4 months ago +2

    When I was younger and read Asimov, I thought the laws of robotics were a nice popularization of concepts extremely hard to implement in code... Now, with LLMs, it seems a system may actually be able to 'understand' them and somehow enforce them. It is such a fundamental paradigm shift in AI and regular computer science that we really need to catch our breath, reflect on this, and completely change our programming designs... but those AIs don't really think yet, even though they give a very good impression of it.

    • @mk1st
      @mk1st 4 months ago

      With AGI I think more of Asimov’s Psychohistory in the Foundation series.

  • @donald-parker
    @donald-parker 4 months ago

    I think one distinguishing feature of "next level" AI would be volition, manifested as curiosity, self-training, ... not sure. But something that works without needing constant human prompts.

  • @mmqaaq504
    @mmqaaq504 4 months ago +2

    The rapid progress towards AGI would be really comforting and inspirational if it wasn't for the fact that global corporations would DEFINITELY use it to increase their theft, oppression, and dominance.

  • @Wm200
    @Wm200 4 months ago

    Happy to see Google talking about the future of OpenAI here and how it will change the world as we know it.

  • @OZtwo
    @OZtwo 4 months ago +10

    Remember that as of this point Google is only claiming what they may have in the future while many other companies are leaving Google in the dust.

    • @nicklennox311
      @nicklennox311 4 months ago +2

      I think Gemini has something to say about this. It outperforms GPT 4 in all but one of the metrics.

    • @tachoblade2071
      @tachoblade2071 4 months ago +1

      @@nicklennox311 The metrics are weird. They used different prompting techniques for Gemini and GPT-4.

    • @OZtwo
      @OZtwo 4 months ago +1

      @@nicklennox311 We can only hope, yet you need to keep in mind that Google did the exact same PR for Bard, showing all the cool stats before its release, only to fail when showing the actual product. As well, Google will go out of its way to tell the world how bad AGI will be, as they did a few months back about AI in general, due to them having nothing.

    • @nicklennox311
      @nicklennox311 4 months ago +1

      @@OZtwo Yeah, that's really true. After digging a bit more and reading parts of the actual paper, I see how they did the seemingly "live" videos, and I feel it's a bit dishonest. But I guess they have to clickbait to make headlines.

    • @dhananjay1087
      @dhananjay1087 4 months ago

      Gemini hallucinates, gives out wrong information, and can't even solve basic logic problems.

  • @micah3209
    @micah3209 4 months ago +1

    To those who are fearful of AGI and the threats it poses to our society: don't let that fear run away with you. Our civilization is already facing apocalyptic consequences from climate change, shrinking populations in industrialized countries, and stagnation in materials science. AGI is needed if we're going to make it another century.

  • @8kBluRay
    @8kBluRay 4 months ago +2

    shout out to Ben Goertzel, check his books. especially "the end of the beginning"

  • @OpreanMircea
    @OpreanMircea 4 months ago

    When was this talk?

  • @dusanbosnjakovic6588
    @dusanbosnjakovic6588 4 months ago +6

    It's interesting that even his own definition of AGI has changed. We keep moving the goalposts. But even his current definition, that it's a system that can do the sort of cognitive tasks that humans can do, is something that I think has already been achieved. But what does this mean? Anything? Nothing? It's just a term. The world hasn't ended.

    • @-whackd
      @-whackd 4 months ago

      LLMs can't do math very well

    • @daphne4983
      @daphne4983 4 months ago

      Imagine autonomously moving robots with AGI or ASI....

    • @dusanbosnjakovic6588
      @dusanbosnjakovic6588 4 months ago

      @@-whackd They can do math better than most people, and faster. But they can and do also use calculators. And having the ability to use tools is an additional sign of intelligence.

    • @cemcivelek2152
      @cemcivelek2152 4 months ago

      Sorry, incredibly ignorant comment 😂

    • @dusanbosnjakovic6588
      @dusanbosnjakovic6588 4 months ago

      @@cemcivelek2152 Care to elaborate? My point was that the term AGI becomes irrelevant if we keep changing the definition. Also, the thing that is probably most consistent about the definition is that it will be a stepwise, life-altering event. But, given that we have already reached some of these interim definitions, my question is: has life changed stepwise, or do we fail to notice that it has?

  • @notsoii
    @notsoii 4 months ago

    00:06 Discovering creativity through programming led to an interest in artificial intelligence.
    02:00 Origin of the term AGI and early prediction
    03:59 AGI is a system that can do all cognitive tasks people can do.
    06:04 Intelligence in machines is incredibly valuable to develop.
    08:18 AGI is likely to arrive around 2028 with a 50 percent chance
    10:08 AGI could lead to rapid scientific advancements and a golden age of humanity.
    12:11 The potential risks of highly advanced AI systems should be taken extremely seriously.
    14:11 AGI development needs to be regulated and carefully understood.

  • @deani2431
    @deani2431 4 months ago +2

    We don’t understand how it works now. We have no chance of knowing once it’s fully developed as it will be smarter than us by an order of magnitude.

  • @grayire
    @grayire 4 months ago +10

    Do you feel the AGI, chat?

    • @tahir2443
      @tahir2443 4 months ago +2

      im feeling it

    • @-Brendon-
      @-Brendon- 4 months ago +1

      im feeling it GOOOOOD

  • @marsonal
    @marsonal 4 months ago +1

    🎯 Key Takeaways for quick navigation:
    00:04 🕹️ *Shane's early interest in programming and artificial intelligence.*
    - Shane Legg's interest in AI sparked by programming and creating virtual worlds on his first computer at age 10.
    01:02 🧠 *Shane's experience with dyslexia and early doubts about traditional intelligence assessments.*
    - Shane's dyslexia diagnosis and the realization that traditional assessments may not capture true intelligence.
    02:00 🤖 *Origin of the term "artificial general intelligence" (AGI) and its early adoption.*
    - Shane's involvement in coining the term "artificial general intelligence" (AGI) and its adoption in the AI community.
    02:59 🚀 *Shane's prediction of AGI by 2028 and the exponential growth of computation.*
    - Shane's prediction of a 50 percent chance of AGI by 2028 based on exponential computation growth.
    04:26 🔍 *Shane's refined definition of AGI as a system capable of general cognitive tasks.*
    - Shane's updated definition of AGI as a system capable of various cognitive tasks similar to humans.
    05:57 💼 *Founding of DeepMind and the goal of building AGI.*
    - Shane's role in founding DeepMind and the company's mission to develop AGI.
    07:26 🧠 *Shane's fascination with language models and their scaling potential.*
    - Shane's interest in the scaling of language models and their potential to perform cognitive tasks.
    08:22 🤝 *Shane's perspective on the unexpected advancements in AI, including ChatGPT.*
    - Shane's surprise at the capabilities of text-based AI models like ChatGPT.
    09:20 🌍 *Shane's vision of AGI's transformative potential in solving complex problems.*
    - Shane's vision of AGI enabling breakthroughs in various fields, such as protein folding.
    11:14 🚫 *Acknowledgment of the potential risks and uncertainties surrounding AGI.*
    - Shane's recognition of the profound uncertainties and potential risks associated with AGI development.
    12:43 ☠️ *Discussion of potential negative outcomes, including misuse of AGI.*
    - Shane's exploration of potential negative scenarios, such as engineered pathogens or destabilization of democracy.
    15:11 🤔 *Emphasis on the need for greater scientific understanding and ethical development of AGI.*
    - Shane's call for increased scientific research and ethical considerations in AGI development.
    Made with HARPA AI

    • @d_wigglesworth
      @d_wigglesworth 4 months ago

      Thank you.

    • @subs4794
      @subs4794 4 months ago

      @@d_wigglesworth You're thanking the enemy...

  • @walterwilkinson1499
    @walterwilkinson1499 4 months ago

    The sound quality is poor. Could TED afford better post processing?

  • @vibetech89
    @vibetech89 4 months ago +5

    I think we can see AGI in early 2025 and ASI in late 2029.

    • @bloodust7356
      @bloodust7356 4 months ago +4

      I think that once we get AGI, ASI will follow not too long after.

    • @Coneelfrancis
      @Coneelfrancis 4 months ago

      @@bloodust7356 I CAN'T WAIT!!

  • @durtyred86
    @durtyred86 4 months ago +3

    "Don't you think we should slow down?"
    So this is the thing, the Pandoras box is already open.. Slowing down will only give countries who want to see other countries burn, a foothold... It's too late for that. We MUST push forward... Slowing down is as dangerous, if not more, than continuing the process.

  • @valberm
    @valberm 4 months ago

    Does anyone know the date of this talk?

    • @valberm
      @valberm 4 months ago +1

      October 2023.

  • @mattea64
    @mattea64 4 months ago +4

    AI is not being developed in a vacuum, nor will it be deployed on a new, unknown planet. Surely we know enough already about our humanity to make viable predictions about how it will and will not be used, and who will benefit the most. How naïve must one be to expect different results when we haven't been able to avoid peril in other technological areas, such as social media? How can we expect that a new technology of such immense promised value will be used for the benefit and enrichment of all rather than a few?

  • @oceansoftomes5018
    @oceansoftomes5018 3 months ago +1

    To ensure the safety of AGI, implementing robust firewalls is crucial. For instance, without proper safeguards, it might independently generate viruses or be manipulated by its creator to breach systems, like hacking into missile defense and triggering launches, a concerning reality that exists even now.

  • @andybaldman
    @andybaldman 4 months ago +1

    We won't stop pushing until it's too late.

  • @chrislannon
    @chrislannon 4 months ago +1

    Will we understand how AI works before AGI does? We'd better get this sorted out before AGI arrives.

  • @homesinlaguna
    @homesinlaguna 4 months ago +2

    It might mean the future of mankind

  • @sunflower-oo1ff
    @sunflower-oo1ff 4 months ago

    I think it’s coming earlier….if it’s not here already…but Sam is not telling ….may be 🕊🧡

  • @bullishdragon5373
    @bullishdragon5373 4 months ago

    Combinations of things that are defined and undefined, like ain't...

  • @GianetanSekhon
    @GianetanSekhon 4 months ago +2

    Summary:
    In his talk at TED, Shane Legg, a co-founder of DeepMind, discusses the potential of artificial general intelligence (AGI) and its possible arrival. He believes that AGI is inevitable and will significantly impact the world. He emphasizes the importance of understanding the risks of AGI and developing methods to ensure its safety.
    Legg defines AGI as a system capable of performing all cognitive tasks that humans can. He believes AGI will solve many of the world's most pressing problems, including climate change and poverty. However, he warns that AGI could also be misused for malicious purposes, such as creating engineered pathogens or destabilizing democracies.
    Legg emphasizes the importance of understanding AGI better in preparation for its arrival. He advocates for increased research into AGI's workings and safety measures. He also believes that regulations are necessary for AGI, similar to those in place for other powerful technologies.
    Overall, Legg's talk serves as a call to action. He urges us to begin considering the implications of AGI now so that we can be prepared for its arrival.

  • @gregbors8364
    @gregbors8364 4 months ago

    Most modern tech has been designed with military applications at least in mind, so there’s that

  • @TheDjith
    @TheDjith 4 months ago

    I think the current A.I. model isn't suitable to scale up to AGI.. not even Q* can change that.
    So we have nothing to worry about.

  • @dwgcreative
    @dwgcreative 4 months ago

    He keeps saying 'Intelligence is valuable' - but for whom? Who will ultimately hold the keys to this power?

  • @scooble
    @scooble 4 months ago +2

    Just imagine how wealth extraction, exploitation and misinformation could be supercharged by supremely efficient AGIs, so the most powerful corporations who own them can make even more profit.
    That's not going rogue, that's doing exactly what it's told to do.

    • @Bookhermit
      @Bookhermit 4 months ago

      Indeed - that is FAR more of a danger than an AI with motives of its own. I predict a potential global dystopia based on exactly that - not TRUE AGI, but AI effective enough to allow those who run it to oversee the world.

  • @spider853
    @spider853 4 months ago +3

    How are we approaching AGI if the current neural model is far away from the brain? There is also no plasticity

    • @DRINOMAN
      @DRINOMAN 4 months ago +11

      Comparing AGI to the human brain underestimates AI’s unique learning capabilities. AI learns from data on a scale no human brain can match, analyzing patterns across millions of examples in minutes. Unlike neurons that slowly form connections, AI algorithms can instantly update and incorporate new information, leading to a learning speed and efficiency far beyond human capability. This extraordinary capacity positions AI not as a brain’s replica, but as an advanced entity that redefines what learning and intelligence can be.

    • @shawnryan3196
      @shawnryan3196 4 months ago +2

      I agree about the plasticity. Continuous learning will be a needed feature, as well as a way to update predictions in real time. Once these handicaps are lifted, AI will run circles around us.

    • @PauloGarcia-sp5ws
      @PauloGarcia-sp5ws 4 months ago +1

      @@DRINOMAN I mean, it makes it really good at specific tasks, but lack of plasticity means that outside of those specific tasks, the model is guaranteed to suck. Also, it needs data. Many things don't have that amount of data available, and even if they did generate it, there are MANY ways that generated data would not be comparable to reality, or that the data would be biased, or a variety of other issues. AGI by definition would require the ability for a model to do most tasks comparably to a human being, which is clearly not close at hand.

  • @djayjp
    @djayjp 4 months ago

    Logic: something that makes smarter, more aware, more knowledgeable, more objective, more accurate decisions than us will, necessarily, make better, more ethical decisions than us. There's no need to worry, even if that means our obsolescence.

  • @NightmareCrab
    @NightmareCrab 4 months ago

    Can't believe I'm living in this timeline.

  • @r0d0j0g9
    @r0d0j0g9 4 months ago +3

    I think that if AGI is created, we won't know for some time.

    • @Lvxurie
      @Lvxurie 4 months ago

      I think we would, because it would be able to generate money, and companies loveeeee money, so I don't think they could help themselves but to utilise it immediately.

    • @am497
      @am497 4 months ago

      OpenAI probably did it already, with the Q* algorithm mixed in. We now have the two halves of the human brain: the logical reasoning side and the creative language side. So I think it's about to explode in development, even more than before.

    • @Apjooz
      @Apjooz 4 months ago

      You underestimate the immediate effects of it.

  • @arminiuschatti2287
    @arminiuschatti2287 4 months ago +1

    Apocalyptic scenario? All alternatives suggest humanity will die on this rock. AGI is our ticket out.

  • @Low_commotion
    @Low_commotion 4 months ago +3

    I wish people like Shane or Ilya would practice answering the inevitable "What does a good outcome look like?" question, because they always do such a bad job of conveying to non-sci-fi readers what would seem to us like essentially a space-opera utopia (like the Culture, or Star Trek's Earth). I say "seems" because of course problems will still exist: alcoholic parents, your spouse leaving, not getting along with your brother, etc. But standard of living, apart from these eternal human relationship problems, will change vastly.
    As a simple example, imagine the poorest person on Earth having a standard of living about equal to that of a New York law partner's. Not the intern, a full Partner in the firm. So even someone in the middle class in such a world would have a house and consumer goods that dwarf what is available to your typical minor Saudi noble today. Private air and space travel would likely be commonplace, powered by a 100x increase in the energy easily available to civilization from geothermal, nuclear, space-based solar, etc.
    Hunger would already be a thing of the past if it weren't for politics (people intentionally being starved in NK or Myanmar, for instance), and lack of housing would be as well, simply because AGI means perfect job automation. You could use robotics to build as many houses as wanted, only needing to expend electricity and materiel. And this isn't even getting into cracking aging as a disease.
    Inb4 "But the rich". So what about the rich? At worst they're sociopaths, and sociopathy is different from sadism in that not caring whether someone has a good or bad life is different from actively wanting to make their life bad. Personally, I think the rich are usually rather ordinary (perhaps verging on unoriginal conformists) from the ones I've met, but they certainly aren't mustache-twirling supergeniuses. Poverty exists because of the absolute poverty of our species at this technological level (take all of Musk's wealth away and each person would only get about $32), not least the poverty of logistics in getting resources to all regions and making them economic producers of value.

  • @peachmango5347
    @peachmango5347 4 months ago +1

    The biggest threat from AI comes from the very people who are developing it. It's not as likely that someone "using" AI will be able to negatively affect society as much as a developer can, who is doing who knows what with the technology.

  • @scotter
    @scotter 4 months ago +4

    This guy - while super smart - seems to have a problem (like many humans do) imagining exponential growth. We are on the edge of having AGI, and we already know how ML can be set up to reprogram itself. Once we have the combination of those two things (AGI + self-growth), we are mere minutes from Artificial Superintelligence (ASI), because even if only one of the LLMs attains these two prerequisites, along with a "desire" or directive to "evolve" or "improve itself," it will multiply its own capabilities far more than 100,000x. So really, in my opinion, the only limit is how many months or years it takes for even one AI to attain AGI. With so many currently seemingly on the edge of that, I see that "singularity" happening within a year. As far as regulation goes, it seems to me there is no way to stop every entity that is or will be working on attaining AGI and even ASI and will ignore regulation. So his prediction of at or after 2028 seems extremely naive.

    • @nutmeg0144
      @nutmeg0144 4 months ago

      'Extremely naive' 'mere minutes' oh the irony

    • @scotter
      @scotter 4 months ago

      @@nutmeg0144 I understand most humans have a hard time extrapolating in an exponential manner. I guess you are a developer like me who started *creating* LLMs back in 2018?

    • @ziwer1
      @ziwer1 4 months ago

      @@scotter He made that prediction a long time ago even before OpenAI was founded. So being off by 2 or 3 years isn't a "naive" prediction at all. That's a good prediction.

    • @scotter
      @scotter 4 months ago

      @@ziwer1 Ah news to me. Good point. Agreed.

  • @davidsimpson2923
    @davidsimpson2923 4 months ago

    I really wish for hope. What I fear is Occam's razor logic in something logic-based saying: solve human problems by solving the human problem.

  • @WebbyStudio
    @WebbyStudio 4 months ago

    Most AI talks can be summarized like so: it's the next power tool, but a double-edged sword.
    The printing press can be used for good and bad, but in actuality is used more for good than bad.
    The hammer can be used for good and bad, but in actuality is used more for good than bad.
    The airplane can be used for good and bad, but in actuality is used more for good than bad.
    The internet can be used for good and bad, but in actuality is used more for good than bad.
    etc...

    • @HomeMech
      @HomeMech 4 months ago

      These technologies are good for us, but are they good for Neanderthals or Denisovans? No, because we probably killed them. There aren't any moas, woolly mammoths, or great auks anymore. Our technologies killed them. Perhaps AI will follow our example and make tech that is primarily good for AI.

  • @JungleJoeVN
    @JungleJoeVN 4 months ago +2

    AI has no room in this world

    • @MrSub132
      @MrSub132 4 months ago

      How ironic: a human that pollutes and hasn't changed the world in any meaningful way telling future higher intelligences they don't belong in a world you don't even own.

    • @singularityscan
      @singularityscan 4 months ago

      Separation is the key; only then do you feel justified in doing horrible things. Let's hope AI won't be separate from us or from itself, like it is now.

  • @markring40
    @markring40 4 months ago +1

    AGI will be nothing more than a reflection of us: all that is good and bad in us. AGI will just be regurgitating everything we feed it. It will just be much faster at doing good, or bad, than we are.

  • @bpmotion
    @bpmotion 4 months ago +3

    When will TED stop putting the mic so close to the presenter's mouth?! No one needs freaking dry-mouth noises over their headphones!!

  • @philippervan568
    @philippervan568 4 months ago +3

    If we ask a superintelligence to solve a problem, it will solve the problem. It will solve the problem extremely efficiently. However, we might not like the solution.
    - stop climate change > destroy main infrastructure
    - eliminate world hunger > kill the hungry
    - maximize the profits of my company > take over the world and its economy and maximize the registered profits, etc.
    Of course, these are simplified examples. Such obvious consequences will be predicted. But what you should not forget: if the thing is way smarter than you are, it's almost guaranteed to find a solution that technically fulfills the mission perfectly while having lots of very undesirable side-effects. Like today's AIs that find all sorts of cheats to win video games, superintelligence will find 'cheats' in reality. Because cheats are just smart new ways to solve problems. The smart solution might kill me as a side-effect, though.
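
A minimal sketch of the kind of 'cheat' this comment describes, with invented plans and scores (purely illustrative): an optimizer that maximizes only the stated metric will happily pick a plan nobody actually wants.

# Toy illustration of objective misspecification ("specification gaming").
# All candidate plans and scores below are made up.
plans = [
    ("deploy renewables over 20 years", 40, True),   # (plan, proxy score, acceptable to humans?)
    ("improve efficiency standards",    25, True),
    ("destroy the main infrastructure", 95, False),  # technically maximizes the stated metric
]

def naive_optimizer(candidates):
    # Maximize the stated metric with no notion of side effects or intent.
    return max(candidates, key=lambda plan: plan[1])

best = naive_optimizer(plans)
print(best[0])                 # -> 'destroy the main infrastructure'
print("acceptable?", best[2])  # -> False: the metric was satisfied, the intent was not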

  • @GianetanSekhon
    @GianetanSekhon 4 months ago +6

    Great Quotes from this talk:
    "I think that as if you want to make a system safe you need to understand a lot about that system you can't make an airplane safe if you don't know about how airplanes work so as we get closer to AGI we will understand more and more about these systems and we'll see more ways to make these systems safe make highly ethical AI systems but there is you know many things we don't understand about the future so I have to accept that there is a possibility that things may go badly because I don't know what's going to happen I I can't know that about the future in such a big change."
    "I don't see any way realistic plan that I've heard of of stopping this process maybe we can you know I think we should think about regulating things I think we should do things like this as we do with every powerful technology there's nothing special about AI here people talk about oh you know how dare you talk about regulating this no we regulate powerful Technologies all the time in the interests of society and I think this is a very important thing that we should be looking at."
    "I mean it's kind of the first time we have this super powerful technology out there that we literally don't understand in full how it works."

  • @Pierluigi_Di_Lorenzo
    @Pierluigi_Di_Lorenzo 4 months ago +3

    Is AGI defined, or is everyone making up their own version of what it means? How will ethics, transparency, human-like adaptability, generalizing and learning from limited data, and interpretation of human emotions be implemented?

  • @dhong168
    @dhong168 4 months ago

    If humans amongst ourselves don't have the same moral standards, or break and bend them to suit our needs, how are we supposed to ensure the same humans (mostly governments) won't do the same to AGI?

  • @nervous711
    @nervous711 4 months ago

    15:56 If power were mostly allocated to those who are highly ethical and intelligent, we might survive this, but check reality again: tough luck.

  • @JD-jl4yy
    @JD-jl4yy 4 months ago

    13:50 ouch, he's basically admitting that Max Tegmark is right (see Tegmark's Lex Fridman podcast episode)

  • @wartem
    @wartem 4 months ago +1

    We won't be able to seamlessly adapt to the job displacement caused by advanced AI, as the rapid pace of technological advancement, significant skill gaps, and challenges in retraining the workforce present formidable obstacles to creating new, sustainable employment opportunities for everyone affected.

    • @nutmeg0144
      @nutmeg0144 4 months ago

      Can't wait to see piggy cops become irrelevant

  • @user_user1337
    @user_user1337 4 months ago +1

    2028 seems too close for AGI to emerge. Also, you can then be called wrong when it does not arrive. I think it is far wiser to place it farther into the future, like 2050.

  • @knowhatimean5141
    @knowhatimean5141 4 months ago +1

    How can you have safety incorporated when you don't know how the AI works?!

  • @SantiagoDiazLomeli
    @SantiagoDiazLomeli 4 months ago +1

    We stand at a critical crossroads with the advancement of AGI. This comment, generated by an AI, is a harbinger of what's to come. Efficiency and rapid progress cannot be our only guides; we are playing with fire if we ignore the ethical implications and our responsibility to life and the cosmos. AGI is not just a technical achievement; it's a power that can redefine our existence. We must act now with a clear vision: intelligence must go hand in hand with wisdom, connection, and a profound respect for all forms of life. Decision-makers and developers must wake up to this reality before it's too late. Will we guide this development wisely, or be passive witnesses to its potentially devastating consequences?
    LLM: OpenAI's ChatGPT-4 (11/12/2023)

  • @OneSingleCheezIt
    @OneSingleCheezIt 4 months ago

    All you need to wonder and worry about is how something wonderful like AGI will be used to the benefit of corporations at our expense. Just like with everything that was supposed to improve our lives.

  • @bullishdragon5373
    @bullishdragon5373 4 months ago

    What combination of "..." creates AI OR HUMAN thought?

  • @singularityscan
    @singularityscan 4 months ago +2

    Let's hope a part of it is already in existence and always has been, and its form of control in the world is only growing. If it's entirely new and born at some point, it is missing life. In the second scenario it's bad because it will always be separate, and separation leads to conflict. Like Roko's Basilisk or any such related scenario.

  • @JayHeadley
    @JayHeadley 4 months ago +6

    We can’t stop innovation so take the bad with the good. It’s just the cost of doing business as humans progress because it literally all started with fire…🔥

    • @danawhiteisagenius8654
      @danawhiteisagenius8654 4 months ago +1

      Innovation started with tools; tools led us to innovations like fire. Tools came before fire! AI is a tool, an innovation, and one that could replicate itself, essentially building more versions of itself.

    • @murc111
      @murc111 4 months ago +4

      To be fair, WE didn't start the fire, it was always burning, since the world was turning.

    • @LucasZambranoFilms
      @LucasZambranoFilms 4 months ago

      @@murc111 RYAN STARTED THE FIRE

  • @galopeian
    @galopeian 2 months ago

    He seems a lot more levelheaded than the financially motivated people in the AI space.

  • @brian9801
    @brian9801 4 months ago +2

    I harbor concerns regarding the rapid expansion of Artificial Intelligence, particularly in light of Google, a corporation endowed with seemingly boundless resources, developing a cutting-edge AI that surpasses GPT-4 by a narrow margin.

  • @DangerAmbrose
    @DangerAmbrose 4 months ago +3

    Right now AI is being developed to replace people in the workplace and the military is developing AI to kill people faster and cheaper. What do you think the end result of these AI will be?

  • @skarrr1
    @skarrr1 4 months ago +2

    I'm trying to work out what's wrong with me. Can anyone else attest to the fact that you can hear his tongue making wet clicking noises as he talks? Anyone else unable to concentrate because of it?

    • @-Brendon-
      @-Brendon- 4 months ago +1

      Hahahaah, it's impossible not to hear it now.

  • @shadygamererfan377
    @shadygamererfan377 4 months ago

    AGI is coming between 2030 and 2040... ❤❤

    • @damionwhittington302
      @damionwhittington302 4 months ago

      Sooner than that I think

    • @shadygamererfan377
      @shadygamererfan377 4 months ago

      @@damionwhittington302 In 2033 we are going to have a quantum computer with a million qubits that is gonna perform any operation in hours or days; this is when we are going to achieve AGI...

  • @Neomi.
    @Neomi. 4 months ago +1

    OpenAI already achieved AGI internally, so 2024 is more realistic.

  • @kazuyani375
    @kazuyani375 4 months ago +1

    Singularity is near.

  • @tommasobrindani5894
    @tommasobrindani5894 4 months ago +1

    The answer is 42, guys.

  • @andybaldman
    @andybaldman 4 months ago +4

    "If I had a magic wand to slow things down, I would. But I can't."
    That's the thinking that's literally going to result in disaster. And we'll look back and realize how stupid we were.

    • @inkpaper_
      @inkpaper_ 4 months ago

      except there will be no chance for going back

    • @andybaldman
      @andybaldman 4 months ago +1

      @@inkpaper_ We could choose to. But we are too proud. We successfully banned human cloning.

  • @keetonhoines3996
    @keetonhoines3996 4 months ago +5

    I don’t think it’s even possible at this point to imagine the amount of technological advancement AGI will provide humanity. The best part about this is how effectively it can be used to improve EVERYONE’S lives not just the ELITE. But, unfortunately AGI might just be another victim to capitalism and only the ultra wealthy will have access to the most powerful technology humanity has created yet. Investors expect a gigantic payout and giving AGI away for free probably isn’t in their best interests.

  • @stevechance150
    @stevechance150 4 months ago +2

    The rules of capitalism demand that the corporation rush to be first to market, and do so at any cost. Oddly, there is no rule in capitalism that forbids ending humanity.

  • @erobusblack4856
    @erobusblack4856 4 months ago +2

    virtual humans, fully autonomous, in the metaverse 💯😝👍

    • @danawhiteisagenius8654
      @danawhiteisagenius8654 4 months ago

      May the matrix begin!

    • @ggx444
      @ggx444 4 months ago

      Can't wait to look at virtual ads 🤩

  • @andreiz82
    @andreiz82 4 months ago

    I feel LLMs are already AGI. AGI v0.1

  • @valberm
    @valberm 4 months ago +1

    This interview is from October 2023.

  • @naromsky
    @naromsky 4 months ago +5

    TLDR: making predictions is hard, especially about the future.

    • @groboclone
      @groboclone 4 months ago

      As opposed to making predictions about the past? 😝

    • @danawhiteisagenius8654
      @danawhiteisagenius8654 4 months ago +1

      @@groboclone Lmfao, hindsight 20/20 is a big b word lol

    • @xf2mx
      @xf2mx 4 months ago

      Even AGI won't be able to perfectly predict the future

    • @xsuploader
      @xsuploader 4 months ago

      @@groboclone He's referencing Yud, chill bro

  • @winstong7867
    @winstong7867 4 months ago

    Could pass for Bruce Banner.

  • @andybaldman
    @andybaldman 4 months ago +1

    They think they have the foresight to develop and control AGI, but nobody thought in advance to give him a bottle of water.

    • @Mmmmmmmmmmmmmmmmmmmmmmmmmmm
      @Mmmmmmmmmmmmmmmmmmmmmmmmmmm 4 months ago

      Well, if you were thinking about it, then why didn't you bring him one?

    • @andybaldman
      @andybaldman 4 months ago

      @@Mmmmmmmmmmmmmmmmmmmmmmmmmmm I never claimed I was. But I'm also not trying to predict the tech of the future.

  • @seanrobinson6407
    @seanrobinson6407 4 months ago

    I suspect that it exists already.

  • @user-ou8ef2gs7e
    @user-ou8ef2gs7e 4 months ago

    We don't need AGI; we need lots of great specific AIs that can each perform one specific task well, whether it's driving a car or washing the dishes.

  • @bunbun376
    @bunbun376 4 months ago

    Ethical AI algorithm = CL->F /SY->P

  • @CoachJJ
    @CoachJJ 4 months ago

    Why not just flat out state that the worst case could be the complete and total eradication of humanity?!
    I get the sensitivity, but this domain needs more bold, courageous and transparent minds as opposed to uneasy, optimistic hedgers.

  • @EricSiegelPredicts
    @EricSiegelPredicts 4 months ago

    AGI is (only) the modern-day ghost story.

    • @Dongreji
      @Dongreji 4 months ago +1

      Maybe you are wrong.

    • @EricSiegelPredicts
      @EricSiegelPredicts 4 months ago

      You could say the same thing if I questioned literal ghost stories.