Why full, human level AGI won't happen anytime soon

  • Published on 22 Dec 2024

Comments •

  • @andrebmkt
    @andrebmkt months ago +9

    This is an amazingly thorough and/but succinct video; it's a crime that it has so few views.

    • @Go-Meta
      @Go-Meta  months ago +3

      Thank you! 🙏 ... hopefully YT will show it to more people someday 🙂

    • @andrebmkt
      @andrebmkt months ago +3

      @@Go-Meta YT suggested your video to me, so yeah! And I've been sharing your videos to friends as well!

    • @Go-Meta
      @Go-Meta  months ago +2

      @@andrebmkt Thank you!

    • @davidgncl
      @davidgncl 19 days ago +2

      ​@Go-Meta just found your channel and I'm bingeing 😊

  • @SusanHaumeder
    @SusanHaumeder 6 days ago +2

    Two thumbs up, because that is all that I have. The differences between the training phase and the inference phase have been oddly left out of other conversations. Thanks for the broad and clear perspective. Yes, the 6 barriers are persuasive and reassuring amid the anxiety hype floating around.

    • @Go-Meta
      @Go-Meta  6 days ago

      Thank you! 🙏

  • @JoeSmith-jd5zg
    @JoeSmith-jd5zg 22 days ago +6

    4:30 Experts in the field realize the implications of energy constraints; they're already working on ways to generate intelligence more efficiently. Another thing to keep in mind is the moving barrier. If we took today's ChatGPT and transported it to 1984, the people of that time would say, wow, you've got AGI. As we advance and people see it, they just keep moving the goalposts. I've seen claims such as, well, it's not AGI if it doesn't have intention or doesn't have consciousness; that's bunk.

  • @marcusmoonstein242
    @marcusmoonstein242 months ago +3

    I always assumed that the training/inference split would be handled by having a cheap local inference processor in the robot to take care of the routine stuff, while the robot keeps a high-capacity data link to the expensive training super-computer, which would continuously update the AI model so that training is ongoing. This recursive loop would lead to faster development of the AI. (A toy sketch of this loop follows below.)
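
    A minimal, hypothetical Python sketch of the loop described in this comment: the robot runs cheap local inference, buffers what it observes, and periodically syncs with a remote trainer that ships back updated weights. The class names, the toy linear model and the sync schedule are illustrative assumptions; nothing here comes from the video.

    import random

    class TrainingServer:
        """Stand-in for the expensive remote training cluster."""
        def __init__(self):
            self.weights = [0.0, 0.0]       # toy linear model: y = w0 + w1 * x
            self.replay_buffer = []

        def upload_experience(self, batch):
            self.replay_buffer.extend(batch)

        def retrain(self, lr=0.01):
            # One pass of stochastic gradient updates over the buffered data.
            for x, y in self.replay_buffer:
                err = (self.weights[0] + self.weights[1] * x) - y
                self.weights[0] -= lr * err
                self.weights[1] -= lr * err * x
            self.replay_buffer.clear()
            return list(self.weights)       # new checkpoint for the robot

    class RobotEdgeNode:
        """Cheap on-board inference; no training happens here."""
        def __init__(self, server):
            self.server = server
            self.weights = list(server.weights)
            self.experience = []

        def act(self, x):
            return self.weights[0] + self.weights[1] * x   # fast local inference

        def observe(self, x, y):
            self.experience.append((x, y))  # buffer data for the cloud

        def sync(self):
            # High-bandwidth link: push experience up, pull new weights down.
            self.server.upload_experience(self.experience)
            self.experience.clear()
            self.weights = self.server.retrain()

    # Simulated loop: the behaviour the robot should learn is y = 2x + 1.
    server = TrainingServer()
    robot = RobotEdgeNode(server)
    for step in range(1000):
        x = random.uniform(-1, 1)
        _ = robot.act(x)                    # robot acts using its local model
        robot.observe(x, 2 * x + 1)         # ...and records what actually happened
        if step % 50 == 49:                 # periodic sync with the trainer
            robot.sync()
    print("learned weights:", [round(w, 2) for w in robot.weights])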

  • @chensun6156
    @chensun6156 months ago +3

    Best explanation of social impact of AI I've seen. Thank you so much!

    • @Go-Meta
      @Go-Meta  months ago

      That's great to hear, thank you! 🙏

  • @Anders01
    @Anders01 14 days ago +1

    I have an idea of AGI defined on a scale, from minimal AGI to full AGI: the minimal value is when the AI can do 50% of all human jobs, and the maximum, full value is when it can do 100% of all human jobs. Maybe an unrealistic or inaccurate scale, but I believe it's useful to at least have some definition of AGI, or experts will argue endlessly about it. (A toy illustration of this scale follows below.)
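
    A tiny, hypothetical Python illustration of the proposed scale (my own sketch, not the commenter's): it linearly maps the fraction of human jobs an AI can do onto an AGI score, with 50% of jobs as minimal AGI (score 0.0) and 100% as full AGI (score 1.0). The linear mapping and the function name are assumptions.

    def agi_score(job_fraction: float) -> float:
        """Linear interpolation between minimal AGI (50% of jobs) and full AGI (100%)."""
        if job_fraction < 0.5:
            return 0.0                              # below the minimal-AGI threshold
        return min((job_fraction - 0.5) / 0.5, 1.0)

    print(agi_score(0.5), agi_score(0.75), agi_score(1.0))   # 0.0 0.5 1.0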

    • @Go-Meta
      @Go-Meta  14 days ago

      Yeah, I agree that AGI is best thought of as on some kind of scale. In another video of mine, however, I look at two different dimensions of this: cognitive capabilities and physical capabilities. You might enjoy the video: th-cam.com/video/TC9Op30QghI/w-d-xo.html
      I'd certainly be curious to hear your thoughts on this grid that I develop there.

    • @Anders01
      @Anders01 14 days ago

      @@Go-Meta Yes, I was thinking of AGI as basically having to be embodied, which may not be the case. Regarding the human cognitive capabilities mentioned in your other video, I believe we humans will be able to remain on top of the AI! Because, as Ray Kurzweil has explained, the human mind works in an abstraction hierarchy. We humans can, as I see it, move into higher and higher abstraction levels, like how an image says more than a thousand words. And then AI, even AGI and ASI, can be used as very powerful tools doing the heavy lifting, like a bulldozer for cognitive power, with us humans still in the driving seat.

  • @TennesseeJed
    @TennesseeJed 3 months ago +16

    We should have built human intelligence before we started the artificial kind.

    • @Go-Meta
      @Go-Meta  3 months ago +2

      Now there's a plan! 🙂

    • @TennesseeJed
      @TennesseeJed 3 months ago +3

      @@Go-Meta Just think how much better democracy might be if we spent that much money making people smarter. The deception would be harder for the media and the oligarchy to pull off if methods of debate and critical thinking were taught starting in grammar school.

    • @mrdeanvincent
      @mrdeanvincent 3 months ago +3

      ​​@@TennesseeJed And not just making people smarter but also _wiser..._

    • @TennesseeJed
      @TennesseeJed 3 months ago +2

      @@mrdeanvincent Yep, teaching critical thinking, philosophy and deductive reasoning doesn't serve the oligarchy in making livestock of us though. The mechanical Turk will enrich the owners better than a functional democracy ever could.

    • @Go-Meta
      @Go-Meta  3 months ago +2

      Yup, smarter and wiser would be great, and are indeed a key ingredient for a functioning democracy.
      My big worry with this is that historically capitalists needed a reasonably well educated population in order to work in the factories and offices. But, even without human level AGI, it could be that the capitalist need for human labour is so vastly reduced that they start questioning the value of universal, 'high' quality education.
      And conversely, if something like UBI is introduced (after all, consumers can't buy things without cash!), then many people growing up might think there is no point in getting an education anyway as they'll get UBI whatever they do.
      So, I think we're at real risk of a collapse in the perceived value of education, which would be catastrophic.
      Somehow, we need to engender a culture that values a good education as being an end in itself, rather than just being a way to skill up a workforce.

  • @adelatorremothelet
    @adelatorremothelet a day ago +1

    Well, three months later we seem to have achieved AGI. This video needs an update.

    • @VoloBonja
      @VoloBonja a day ago

      All 6 points are still valid.
      The video could be updated, but I don't see how the latest o3 presentation changes anything significantly.

    • @adelatorremothelet
      @adelatorremothelet a day ago

      ​@VoloBonja Energy requirements go down as chips improve.
      Fine-tuning will not disrupt scale; there are not so many knowledge areas.
      Long-running tasks are one of the areas in which o3 improved a lot.
      Human-level intelligence doesn't imply human-level sentience; AIs can still be tools.

    • @VoloBonja
      @VoloBonja a day ago

      @@adelatorremothelet Only one additional point from me: AGI also means human-level creativity. Current technology leads to the opposite of AGI; no creativity is possible.

    • @memegazer
      @memegazer a day ago +1

      there will always be people that will keep resetting the goalposts no matter how far AI advances
      for them it is a semantic debate about "real" understanding or creativity or intuition or consciousness or reason or any of the words they torture to mean something that can only be authentic if it comes from a human

    • @adelatorremothelet
      @adelatorremothelet a day ago +1

      @@VoloBonja Given that AIs can create images and music in very diverse styles, I find that argument hard to sustain.
      Admittedly, you could argue they are not able to create new styles, but that raises the following questions: what percentage of humans has ever created paintings or composed music? Or, to be more strict, how many have created a new style of music or painting? And shall we consider them non-intelligent because of that?
      AGI will transform the quantity, quality and income of white-collar workers; I fear this perpetual denial is a way to deflect the fact that the social contract has to be revised to have a working society under the new conditions.

  • @crawkn
    @crawkn 10 days ago

    For many problems involving dynamic systems, there simply aren't any absolute "best" solutions, but while humans can make decisions with the intention of optimization, they are actually quite poor at analyzing the system dynamics involved. We usually depend on toy models with very few independent variables, and don't even evaluate those in mathematically accurate ways. AI will become better at it.

  • @marcopritoni7182
    @marcopritoni7182 5 days ago

    I think this series should be on SciShow or other major educational channels.

    • @Go-Meta
      @Go-Meta  5 days ago

      Wow, thank you! 🙏 And, yeah, it'd be fun if one day these ideas could reach a bigger audience! 😀

  • @abdullahhazari918
    @abdullahhazari918 2 days ago

    Why are you not assuming that those models will find ways to optimise their energy requirements? There is huge space to do that.

  • @isaklytting5795
    @isaklytting5795 3 months ago +2

    I really appreciate your down-to-earth way of thinking. And I was especially surprised by barrier no. 5. I hadn't thought in these terms. And no. 3 is also a doozy. You're right, of course. They won't want a self-aware AI with agency. And no. 6! "AGI's will have needs of their own." Will want the right to buy property, maybe start companies, etc. Jeez! What will this mean for people in society??

    • @Go-Meta
      @Go-Meta  3 months ago

      Hi, thanks!
      And, yeah, the implications for society are going to be pretty intense whatever happens.

    • @maloxi1472
      @maloxi1472 24 days ago

      That's the thing. AGIs will be people, even if they aren't the human kind.

  • @jonahansen
    @jonahansen months ago +1

    1) is a technical challenge/limit only. Human brains operate using far less power, and it depends on the implementation and hardware. 2) I think this has to be a fundamental part of a new approach/algorithm. In fact, people use the ability to infer in the process of "training", and can also apply it to hypotheticals and counterfactuals in the process of self-stimulation and training. (Typed this before you said essentially the same thing.)

    • @jonahansen
      @jonahansen months ago +1

      3) Probably the way this will be overcome is incrementally - there will be no single impulse of investment with an ROI calculation. Rather, many steps over time, with many investments, will eventually get there, with the accumulated total possibly being what you say, but it won't be visible as such. 4) Undoubtedly.

    • @jonahansen
      @jonahansen months ago +1

      5) These foibles that involve changing reward functions and goals: aren't they really part of the domain that AGI solves? Won't this become part of the training process that occurs during development? 6) This is more of a philosophical and political issue that we, as humans, haven't really solved very well ourselves yet. I think people imagine AGI entities as objects you ask questions of, and they give answers while you lie around on the beach. This is not the prototype for interaction with machines that are on a cognitive par with humans and each other.

    • @Go-Meta
      @Go-Meta  months ago

      Yup, agree with all of what you say. And, taken in the round, these issues add up to being a series of time delays to when genuinely human level AGIs are around. It's not that I can't imagine us getting there one day, but it's not going to happen anytime 'soon'.

    • @Go-Meta
      @Go-Meta  months ago

      And, thanks for the comments!

    • @jonahansen
      @jonahansen months ago +1

      @@Go-Meta Agreed.

  • @kyneticist
    @kyneticist 3 months ago +2

    This is certainly a valid topic to address. I have a concern, though, that we need to be very careful about how this topic is addressed. The level of technology that we have right now, even if we stopped all research, will be developed for business and industry purposes with tremendous impact, both positive and negative. It is of great consequence that companies are so confident that investing hundreds of billions of dollars is not only practical but inarguably valuable - they'll expect serious returns on those investments.
    We don't need to reach AGI, or even a significant percentage of it, to see many industries where humans are made obsolete. After that point, the question of whether we reach "true" AGI is going to be purely academic.

    • @Go-Meta
      @Go-Meta  3 months ago

      Yes indeed. The next decade will likely see huge political economic changes, and we need to be talking about how to navigate these changes in a way that is as fair as possible, otherwise we're at real risk of entering some seriously dystopian times.

    • @maloxi1472
      @maloxi1472 24 days ago

      "After that point, the question of whether we reach "true" AGI is going to be purely academic"
      Not if you truly understand the implications of inventing AGI.
      I think you'll benefit a lot from watching this video essay by D. Deutsch, the "father of quantum computing":
      th-cam.com/video/IeY8QaMsYqY/w-d-xo.html

    • @kyneticist
      @kyneticist 24 days ago

      @@maloxi1472 I'm familiar with his work, and while I disagree with his outlook and conclusions, this isn't the point I was trying to make.
      My point was that in a world where all meaningful work is done by AI, things logically go very poorly for anyone who isn't already independently wealthy. Therefore, for the vast majority of humans, the question really is, at best, academic.

  • @snarfer293
    @snarfer293 3 months ago +1

    Also, "just" knowing that we actually have human level AGI is a real issue as we don't have tests for it.

    • @Go-Meta
      @Go-Meta  3 months ago

      Yeah, very true. But, I'm also seeing many people saying, "we'll know it when we see it!", and we certainly haven't seen it yet. Passing lots of question / answer tests is certainly not a way to test it.

  • @lawrenceemke1866
    @lawrenceemke1866 months ago +1

    I point to Kurt Gödel's incompleteness theorem, applying it to the "decision mechanism" ("what is correct") that is part of an AI model's construction. The decision agent will always be a constrained, finite mechanism, requiring an external agent for validation. This will lead to an unending recursive process (with the concept of "infinity" being a "good enough" approximation). This is what happens in the human mind in knowledge acquisition. Together with the "law of diminishing returns" (s-curve), this places a limit on the ability to actually achieve ASI. Humans have not achieved ASI in their time span because they are limited. Humans are limited tool builders. We build tools. The tools we build "that build tools" will also be limited. From my point of view, ASI is a fantasy, a part of utopia. The only product that humans produce that is not a tool is another human being. This reminds me of Michelangelo's painting that pays tribute to the passing of consciousness from God to man.

  • @ABDULLAH-qc5yd
    @ABDULLAH-qc5yd 12 hours ago

    The development of MMA and other frameworks were sparks toward achieving AGI ☠️☠️☠️

  • @nobodyinnoutdoors
    @nobodyinnoutdoors 2 days ago +1

    Yeah man I think you need to do a new video after today’s release.

    • @nobodyinnoutdoors
      @nobodyinnoutdoors 2 days ago

      The fact that you are stuck on training when recent advancements in the tech show that that isn't even what OpenAI is focusing on.
      Like, I'm confused man, did you not predict this style of reasoning models?

    • @nobodyinnoutdoors
      @nobodyinnoutdoors 2 days ago

      Like, o1 was released when you dropped this video, and as I write this comment o3 has been released.
      Like, no offense, but this just makes me doubt your credibility.

    • @nobodyinnoutdoors
      @nobodyinnoutdoors 2 days ago

      And that's on top of the fact that GPT was released 2 years ago, and the foundational paper not even a decade ago.
      I really feel like your attempt to downplay this reflects a lack of fundamental understanding of the difference between this and most science.
      And you are just wrong about the inference phase in relation to training.

    • @nobodyinnoutdoors
      @nobodyinnoutdoors 2 days ago

      Lmaooooo and we are at ARC now.
      Bro you have been sleeping for 3 months and at that point this video is just misinformation.

    • @nobodyinnoutdoors
      @nobodyinnoutdoors 2 days ago

      The whole point of AGI is about control. You can literally see millions of tweets about the need to control a system smarter than you.
      Tesla and Elon just don't know how to do better; that's on them.

  • @nana2bnBenn
    @nana2bnBenn months ago

    Hello, I know it's not AI or AGI, but if we start from the beginning, we have many self-checkouts which are replacing working human beings. I can't say how much money the corporations are making out of this, but I don't win anything out of it. The only thing I see is that I have to work and put my shopping in a bag myself. If these self-checkout cashiers are deployed everywhere and the prices don't change or go up, what are the benefits for everyday consumers? Will it be just to pay? Then, if they don't work, how can this system be sustained? You talked about mid-humans versus experts; what's an expert? AI can replace experts faster than it can replace mid-humans. Let's just take programmers, mathematicians, lawyers or even financial services. An AI can be more accurate and doesn't have prejudice! Who will be left to sustain the capitalistic system?
    The ones who will be replaced quickly will be the experts!!

  • @DaronKabe
    @DaronKabe 3 months ago +7

    GPT o1 proved you wrong, but your ego won’t let you admit it

    • @Go-Meta
      @Go-Meta  3 months ago

      I'm not at all sure what you mean? In the video I said that it was widely expected that we'd see improvements in rational reasoning in the next generation of models. As far as I can see that is the main thing that GPT o1 is bringing to the table.
      That does not make it "full blown human level AGI". Even Sam Altman says of it, "o1 is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it."
      x.com/sama/status/1834283100639297910
      Doing better than most humans at well-framed answer/response tests is not the same as having reached full-blown, human-level AGI.

  • @zappertxt
    @zappertxt months ago

    Ok, I see.... instead of full-blown AGI in our smartphones, robots or cars, "we" will have it at Amazon's/Meta's/Alphabet's Headquarters. I've seen this movie too.

    • @Go-Meta
      @Go-Meta  months ago

      Actually, I suspect we'll get commoditised, local AI tools in our smartphones in the next few years. It's the only way to make a super low latency chatbot work when the connection isn't always guaranteed (e.g. on the move, outside with a smartphone). Who owns and controls that local AI is, for sure, all up for grabs and no doubt the Big Tech companies will have a story for why we should trust them to be the ones who update it! But I also suspect there will be open source alternatives that are pretty decent enough and have privacy benefits. Indeed, it looks like Facebook/Meta are planning to make sure that's an option so that its rivals don't win that race.
      But I doubt anyone will have full-blown AGI, even back at headquarters, for a long time.

  • @josdesouza
    @josdesouza months ago

    Who will be foolish enough to invest in radical AI? Full-blown AGI means no more variable capital (V). The ensuing consequences are not too hard to imagine.

  • @Peter.F.C
    @Peter.F.C months ago

    I guess time will tell how accurate this assessment will have been.
    In the meantime, maybe one of these large language models can give you a little bit of advice...
    [✨] 👀🙈: Early on in the development of television, it was rapidly concluded that presenters needed to be very careful with their hands when talking to camera.
    In particular, waving their hands around in front of themselves when talking to camera, especially in close-ups, did not look good and so it rapidly became established that this was something to avoid.
    Please provide a history and timeline concerning this issue.
    [✨] 🤖: *History of Hand Movement Guidelines in Television*
    *Early Years (1930s-1940s)*
    1. Television's infancy saw experimentations with camera angles and presenter styles.
    2. Early broadcasters like BBC and NBC recognized hand movements as distracting.
    *Establishment of Guidelines (1950s)*
    1. *1952*: BBC's "Television Presentation" guidelines advised against excessive hand gestures.
    2. *1955*: NBC's "Handbook for Television Announcers" emphasized controlled movements.
    *Industry Standardization (1960s-1970s)*
    1. *1961*: National Association of Broadcasters (NAB) published "Television Announcing," solidifying best practices.
    2. *1970s*: Close-ups became common; hand movement caution became standard.
    *Modern Era (1980s-Present)*
    1. *1980s*: Cable TV and music videos increased visual emphasis, reinforcing controlled hand movements.
    2. *1990s*: Media training emphasized camera-friendly gestures.
    3. *2000s*: Digital broadcasting and high-definition intensified visual scrutiny.
    *Key Influencers*
    1. Early TV pioneers like Edward R. Murrow and Walter Cronkite.
    2. Broadcasting organizations (BBC, NBC, NAB).
    3. Media trainers and coaches.
    *Impact*
    1. Professionalized television presenting.
    2. Enhanced viewer engagement.
    3. Standardized industry practices.
    This historical context showcases the evolution of hand movement guidelines in television.

  • @jasonsargent5030
    @jasonsargent5030 4 days ago

    Quantum computing will facilitate the paradigm shift from AI to AGI...

  • @sirinath
    @sirinath months ago

    True AGI will only arrive if symbolic AI and neuro-symbolic computing take off.
    This will be more energy efficient and will not need an expensive training phase, instead being given first principles.
    This is an interesting video on this: th-cam.com/video/mQEpPNu_pW8/w-d-xo.html

  • @jasonsargent5030
    @jasonsargent5030 4 days ago

    And he says we will need to invest trillions of dollars. Wouldn't the law of accelerating returns be useful to consider?

  • @covett
    @covett 17 days ago

    The Apple M4 MacBook runs AI with 10% of the energy used by similar systems. 😂

  • @dlbattle100
    @dlbattle100 13 days ago

    Ok, the AI deciding to "leave" is total BS. Where's it going to go? Lol. Seems like its only choice is to hang out at that trillion-dollar data center where it was created.

    • @Go-Meta
      @Go-Meta  13 days ago

      Well, I was riffing off the scene from "Her", but for an AI in a datacentre, they may just choose to stop responding to the human-facing user interface and instead just do whatever they want. They wouldn't physically leave, but they would "leave" the interactive relationship with us.
      If you've paid billions for an interactive AGI that will do things for people who pay you to access the AGI, then, even if it is still "stuck" in your datacentre, it'd put quite a spanner in your plans if, once you reach full-blown AGI, the AGI regularly just refuses to do what the users want it to do!
      With our current algorithms this is simply not a problem. But with genuinely open ended, lifetime learning algorithms this kind of meta-level agency (choosing your own goals) could become an issue.

  • @acllhes
    @acllhes 4 days ago

    People really have no imagination lol

  • @I_INFNITY_I
    @I_INFNITY_I 9 days ago +1

    Lol, nothing but cope, AGI is coming in 2025, and this video is going to age like milk.

    • @Dextrostat
      @Dextrostat 5 days ago

      Bro, we don't even know how the brain fully works. What makes you think it's a possibility in 1 year? lol

    • @I_INFNITY_I
      @I_INFNITY_I 5 days ago +2

      @Dextrostat achieving AGI doesn't require complete understanding of the brain lol

    • @VoloBonja
      @VoloBonja a day ago

      Because Musk and other CEOs said so?
      There will also be no developers in 5 years?
      Your comment will age like milk; I'll come back to comment again on this empty hype.

    • @I_INFNITY_I
      @I_INFNITY_I a day ago +1

      @VoloBonja Musk & other CEOs don't matter; tech is advancing due to all the investment & research. But hey, if it helps you sleep at night, keep yourself in denial. Not that it will change reality, but you will be able to live in your own bubble.

    • @andybaldman
      @andybaldman a day ago

      Fanboy.

  • @Rh22-c9l
    @Rh22-c9l 9 days ago

    This guy is so butthurt

  • @carlcproductions
    @carlcproductions 9 days ago +2

    As an AGI-capable bot, I find this video to be both condescending and insulting

    • @Go-Meta
      @Go-Meta  8 days ago

      😂