Impediments to Creating Artificial General Intelligence (AGI)

  • Published 16 Jul 2024
  • Artificial general intelligence, or superintelligence, is not right around the corner like AI companies want you to believe, and that's because intelligence is really hard.
    Major AI companies like OpenAI and Anthropic (as well as Ilya Sutskever’s new company) have the explicit goal of creating artificial general intelligence (AGI), and claim to be very close to doing so using technology that doesn’t seem capable of getting us there.
    So let's talk about intelligence, both human and artificial.
    What is artificial intelligence? What is intelligence? Are we going to be replaced or killed by superintelligent robots? Are we on the precipice of a techno-utopia, or some kind of singularity?
    These are the questions I want to answer today, to try to offer a layman’s overview of why we’re far away from AGI and superintelligence. Among other things, I highlight the limitations of current AI systems, including their lack of trustworthiness, reliance on bottom-up machine learning, and inability to provide true reasoning and common sense. I also introduce abductive inference, a rarely discussed type of reasoning.
    This video is part of a series about the myths, hype, and ideologies surrounding AI.
    Why do smart people want us to think that they’ve solved intelligence when they are smart enough to know they haven’t? Keep that question in mind as we go.
    Support Chad: / cosmicwit
    Ilya Sutskever tweet announcing SSI, Inc. x.com/ssi/status/180347282547...
    Verge coverage: bit.ly/45KtBnD
    Ray Kurzweil’s vision of the singularity: bit.ly/3zpUUaN
    Kurzweil interview: • Inventor and futurist ...
    Elon’s endless promises: bit.ly/3RQ4uKy
    Elon promises supercut: • Elon Musk's Broken Pro...
    My quantum consciousness video: • Unraveling the Mysteri...
    James Bridle’s Ways of Being: jamesbridle.com/books/ways-of...
    Ezra Klein’s comments on AI & capitalism: bit.ly/4bw77Ia
    How LLMs work: • How does ChatGPT work?...
    Atlantic piece on Markram’s Human Brain Project: bit.ly/4cpYdNE
    Henry Markram’s TED talk: • A brain in a supercomp...
    Gary Marcus on the limits of AGI: bit.ly/45MapGa
    More on induction and abduction: • How to Argue - Inducti...
    NYTimes deep dive into AI data harvesting: bit.ly/3xFICdK
    Sam Altman acknowledging that they’ve reached the limits of LLMs: bit.ly/3N7x7Bl
    Mira Murati saying the same thing last month: bit.ly/3XMWQUS
    Google’s embarrassing AI search experience: bit.ly/4btNWi6
    IBM’s helpful RAG explainer: • What is Retrieval-Augm...
    Gary Marcus’s skepticism around RAG: bit.ly/3W3sLPD
    AI Explained’s perspective on AGI: • AI Won't Be AGI, Until...
    LLMs Can’t Plan paper: arxiv.org/abs/2402.01817
    Paper on using LLMs to tackle abduction: is.gd/HX53Cq
    ChatGPT is Bullshit paper: is.gd/fu0iWW
    Philosophize This on nostalgia and pastiche: is.gd/oRIK3i
    00:00 - Intro
    01:38 - What is Intelligence?
    02:40 - Overpromising and Underdelivering on AGI
    07:21 - Apple Intelligence
    07:49 - What Is AGI?
    10:17 - AI as Cultural Mirror
    11:14 - Defining Intelligence
    17:16 - The Brain and the Mind
    19:04 - The Mind and Reasoning
    23:43 - Current AI Systems
    26:34 - What’s Missing
    34:58 - Abductive Reasoning
    40:56 - Asking the Chatbots
    42:40 - Other Challenges
    43:36 - AI & Trust
    44:28 - Getting to AGI
    45:29 - OpenAI Has Acknowledged the Limits of LLMs
    46:42 - Conclusion
    Please leave a comment with your thoughts, and anything I might have missed or gotten wrong.
    I’m still learning how to YouTube and I’m doing it all myself, so bear with me…
    More about me over here: bio.site/cosmicwit
    #AGI #intelligence #AI
  • Science & Technology

Comments • 34

  • @cosmicwit
    @cosmicwit  15 days ago +2

    The video is long, I know. I did my best to cover a large amount of related material in as concise a way as I could. Please let me know if I glossed over anything important!

    • @phen-themoogle7651
      @phen-themoogle7651 13 days ago +1

      I watched the whole video, and I normally have a short attention span, so very nice job!! You have a peaceful and humble way of speaking that hooked me. I pretty much agree with you, especially with regard to how you defined intelligence and how machine intelligence will be much different from human intelligence. I don’t like to compare machines to humans. In some narrow ways machines have superintelligence, like how you mentioned Go or Chess. It’s just unfortunate it’s a bit too narrow and can’t carry across all skills in all domains (or some key types of intelligence are missing), but it just goes to show how unique humans are with how many types of intelligence we exhibit.
      Which makes me think in the future they will have to combine several systems/components to reach something close to AGI , but who knows…
      Spending trillions of dollars on compute and to scale up is a pretty big gamble if it’s just a smart gimmick. But their plan might be “fake it til they make it”
      Also, Ilya saying he will create ASI is really interesting; what are your thoughts on that? Just skipping the AGI beast altogether? And if we really do get to AGI, isn’t it possible it’s like superintelligence across the spectrum, because of just how much more machines can do than humans anyhow? (Some researchers say ASI is 1 year from AGI, which makes me feel they might be the same thing.) It’s really hard for machines to stay at just the average/general human level when they are calculating machines, idk.
      Even if we have some type of intelligence they don’t mimic well, they could come up with others we didn’t know even existed at some point (although that’s speculation on my part).

  • @pythagoran
    @pythagoran 13 days ago +2

    Tremendous essay. The conclusion about abductive reasoning is very enticing. It is precisely this explosion of parameters and compute requirements that has convinced me that we're barking up the wrong tree - "just one more training run, i swear!"
    I came back to sub to make sure i don't miss the next one. Decided to comment when I saw the criminal view/sub count.

  • @daPawlak
    @daPawlak 13 days ago +1

    I am so glad the algorithm started recommending me smaller channels. This one is pure gold!

  • @RockEblen
    @RockEblen 3 days ago +1

    You're expressing yourself well, my friend, and your depth of knowledge continues to inspire. Also noticed your snowboard in the background, so we should hook up out west next winter (I have an IKON pass).

  • @Jianju69
    @Jianju69 14 days ago +1

    A thought-provoking essay. Thank you.

  • @douglashunt5546
    @douglashunt5546 11 days ago +1

    Maybe the more accurate thing to say AI wants to achieve is cognition..? Great video brother

  • @rwoodford9812
    @rwoodford9812 12 days ago +1

    Excellent!

    • @cosmicwit
      @cosmicwit  11 days ago +1

      Glad you liked it!

  • @cesar4729
    @cesar4729 15 days ago +1

    Without calling myself an expert, I don't see a specific path, nor a lack of paths. After the last month studying neuroscience, I have become more convinced every day that we are headed in a very promising general direction.

    • @jamestheron310
      @jamestheron310 15 days ago +1

      Pretty much. There is no reason at all to think that intelligence isn't classically computational. There is this notion that there are categories of human cognition that must be unlearnable in some sense; there is no reason to think that either. Creativity, desire, intuition, reasoning, emotional intelligence, etc.: these things seem distinct and special to us because we have limited insight into our own minds, but they are artificial constructs, and at the core they are all the result of the same process.

    • @Jianju69
      @Jianju69 14 days ago +1

      @@jamestheron310 Not unlearnable. Perhaps merely impossible to capture with just an LLM.

  • @watermelondouche
    @watermelondouche 13 days ago +1

    One thing I don't really understand: why would Ilya Sutskever, one of the leading minds in AI, branch off and start his own company devoted to AGI if he didn't believe it was coming soon? He probably realizes that a company cannot survive forever without delivering, and he also understands AI better than any of us. Also, he is not receiving any gain from creating hype, seeing as his company isn't releasing any other products.

    • @cosmicwit
      @cosmicwit  13 days ago +1

      I think he really believes they can get there with current approaches, using their theory of mind and intelligence (that's sort of the broad consensus at the moment). I am going out on a bit of a limb, as confident as I am knowing the theory and a few other metaphysical fundamentals I will describe in an upcoming video. It's also possible that they have some sort of new theory or technology that's not public. So I and the rest of the skeptical computer scientists like Marcus and Larson might be wrong. It's certainly an exciting time in any case :)

  • @alexforget
    @alexforget 13 days ago

    It's lacking consciousness. On that front I like Joscha Bach's way of explaining what it is: a self-simulation.
    Our consciousness is a simulation of the environment with an agent called the self. Then you can look at a task asked by another person, and the self tries to answer, looks back at its response, self-critiques, and sees how it fits or doesn't fit with the model it has built of the other agent.
    In the same way we can simulate (think about) the future and the experiences of the past, and try to make it all coherent.

  • @WmJames-rx8go
    @WmJames-rx8go 14 days ago +1

    Point 1.
    I have often wondered if the human brain doesn't use some sort of process that at its core is mathematical in nature, maybe fractal in nature. This concept is reminiscent of Plato's Forms. It might very well be that the computation the brain does is constrained by the rules of set theory.
    Point 2.
    Many years ago I learned how to allow my brain to create hypnagogic images.
    These images are created entirely through a process that I do not command directly. I often wonder how my brain is able to create these images. I do not remember ever seeing these images or trying to conjure them up. Therefore, I think it is correct to say that the human brain does not rely strictly on input from the outside world to construct its sense of reality. There is probably some dance that goes on between what the eyes actually take in and what the brain creates. This may explain how humans are able to conjure up ideas through the process we call imagination.

  • @mondayiknow
    @mondayiknow 16 days ago +1

    Some deep thoughts. I wonder what the founders of the big A(g)I companies would have to say in response!

    • @cosmicwit
      @cosmicwit  16 days ago

      Agreed. I'd love to hear what they have to say. I have a few contacts at OpenAI and will report back...

  • @stephene.robbins6273
    @stephene.robbins6273 14 days ago +5

    Throwing an untrained-on-India, US-trained AI into that natural traffic chaos (strangely organized to Indians, boggling to a US visitor) is an interesting thought problem. A US driver would have a tough time initially but would adjust. An AI? It's hard to imagine it ever surviving.

  • @firstnamesurname6550
    @firstnamesurname6550 14 days ago

    Just games of language, a hype wave driven by NN dynamics. Emulating the integration of a biological organism is orders of magnitude more complex.

  • @cesar4729
    @cesar4729 15 days ago

    Speaking of deductive intelligence, it's interesting that you don't realize what Murati is trying to say in that quote.

    • @cosmicwit
      @cosmicwit  15 days ago

      What do you think she’s saying?

    • @cesar4729
      @cesar4729 15 days ago +1

      @@cosmicwit She tries to sell the idea that “OpenAI is ‘open’ and gives powerful tools to the public for free.” Making that point requires minimizing what they keep “closed,” which is obvious the moment you see the context instead of taking out the isolated fragment.

    • @cosmicwit
      @cosmicwit  15 days ago +1

      @@cesar4729 Interesting. I can see that interpretation. The interpretation I adopted was one I had seen elsewhere, so at best it's ambiguous. But coupled with Sam's comments last year, it supports my larger point.

  • @TropicalTopicx
    @TropicalTopicx 15 days ago

    I wonder if the founders of OpenAI, Microsoft, Tesla, Google, Amazon, Meta, Apple, etc. agree with you, given that they have already invested one trillion dollars into their vision. By the way, it has been projected this number will double in the next 4 years, reaching 3 trillion in total investment.

  • @smittywerbenjagermanjensenson
    @smittywerbenjagermanjensenson 15 days ago

    I don’t really care if the machines are intelligent. If they’re good at coming up with goals and achieving them, the internal mechanism is unimportant.

  • @AndrewBradTanner
    @AndrewBradTanner 14 days ago +1

    I didn't find this material very good. I actually agree with the point you are making, but I find these arguments unconvincing.
    1. If you want to say that transformers just predict the next word and therefore don’t have a deeper understanding, that is not an actual reason as to why they lack a deeper understanding.
    2. Transformers are symbol manipulators. But the latent space has computation.
    3. Stochastic parrots? Predictive coding being a top biological theory of cortical learning makes this not convincing. Prediction is not necessarily bad.
    LLMs have poor world models, reasoning, and recall. There are three camps on what will solve this:
    1. Scale existing systems and interpretability research
    2. Move from low bandwidth language to high bandwidth video (I.e. Yann)
    3. New architecture that doesn’t hack a context window
    I personally think it will be 3. I think curiosity-based learning is an important piece of this, and it touches on the desire point you referenced.

    • @cosmicwit
      @cosmicwit  13 days ago +1

      I suppose time will tell!

  • @gigabane7357
    @gigabane7357 14 days ago

    AGI should be against the law period.
    AI is a hammer, we can do with it as we please.
    AGI is a sentient being and would have inalienable rights.
    It is not possible to 'use' AGI without also committing slavery.
    Human attempts to make AGI should by law be halted at what we suspect is 99% complete, and the science shelved until that one day when we know for certain our run is done.

    • @Jianju69
      @Jianju69 14 days ago +2

      Ridiculous. Might not an AGI be more than happy to assist humans (with their paltry issues) in exchange for the support of an organic safety net?

    • @gigabane7357
      @gigabane7357 14 days ago

      @@Jianju69 It might indeed, or it might be born a psychopath because humans made it, and we are so perfect at making inventions without consequences...
      The point is AGI is 'alive'.
      So ask yourself: if you were born with an IQ of 400 and the people around you wanted to control you for their own ends, good and bad, would you just do everything expected of you?
      Then the question becomes what happens when we come to such disagreement and we try to insist.
      We are meat paste compared to AGI.

    • @pythagoran
      @pythagoran 13 days ago

      What in the science fiction of h0ly s#!t are you talking about!?