Yoshua Bengio on Pausing More Powerful AI Models and His Work on World Models

  • Published 2 Aug 2024
  • In this episode of the Eye on A.I. podcast, host Craig Smith interviews Yoshua Bengio, one of the founding fathers of deep learning and a Turing Award winner. Bengio shares his insights on the famous pause letter, which he signed along with other prominent A.I. researchers, calling for a more responsible approach to the development of A.I. technologies. He discusses the potential risks associated with increasingly powerful A.I. models and the importance of ensuring that models are developed in a way that aligns with our ethical values.
    Bengio also talks about his latest research on world models and inference machines, which aim to provide A.I. systems with the ability to reason about reality and make more informed decisions. He explains how these models are built and how they could be used in a variety of applications, such as autonomous vehicles and robotics.
    Throughout the podcast, Bengio emphasises the need for interdisciplinary collaboration and the importance of addressing the ethical implications of A.I. technologies. Don’t miss this insightful conversation with one of the most influential figures in A.I. on Eye on A.I. podcast!
    Craig Smith Twitter: / craigss
    Eye on A.I. Twitter: / eyeon_ai

Comments • 62

  • @karatsurba4791
    @karatsurba4791 1 year ago +6

    Thank you for hosting Prof. Bengio.

  • @7vrda7
    @7vrda7 1 year ago +3

    Great interview. It's a pleasure listening to Bengio provide insight.

  • @MrErick1160
    @MrErick1160 1 year ago +2

    Actually, to the question 'how technical should I go?': please go super technical. I feel there are already too many high-level channels, or I should say layman's channels, which don't educate us well on the real inner workings and research of AI. So going super technical is great and very needed!

  • @gregw322
    @gregw322 1 year ago +3

    I highly recommend The Hedonistic Imperative by David Pearce. In short, it proposes how humanity could - and convincingly makes the case why we SHOULD - use artificial intelligence to abolish involuntary suffering in all sentient life. It can be found for free online with a simple search.

  • @user-ut4zh3pw7l
    @user-ut4zh3pw7l 3 months ago

    Literally 1 year has passed... 12 April 2024, I am coming for it so hard rn

  • @anirbanc88
    @anirbanc88 1 year ago +2

    Thank you 22:20. Yes, go for technical; this guy does cool interviews!

  • @warperone
    @warperone 1 year ago +1

    Great content. Any chance of using a better internet connection/camera to improve the video quality on your end of the stream?

  • @rockapedra1130
    @rockapedra1130 1 year ago +10

    The first country to break any international treaty on AI will be the US itself if we use history as a guide.

    • @chenwilliam5176
      @chenwilliam5176 1 year ago

      Indeed ❤

    • @sciji3118
      @sciji3118 1 year ago

      @@chenwilliam5176 This is not only because they are capable of doing so, but also because they know opponents would do the same if given the opportunity, right?

  • @yitzhill
    @yitzhill 1 year ago +2

    That's the theory of the ruliad proposed by Stephen Wolfram.

  • @billykotsos4642
    @billykotsos4642 1 year ago +2

    OG

  • @cacogenicist
    @cacogenicist 1 year ago +4

    I think GPT-4 is _somewhat_ better at reasoning than he suggests. Possibly, one supposes, owing to a world model being to some degree encoded in natural language.
    And as for the world model, there is an actual world out there, fortunately. Some company needs to sell millions of fancy LLM-empowered home robots, with sensory inputs. Use that data. :-)

    • @francoisgenest6704
      @francoisgenest6704 1 year ago +1

      Agreed. The human-assisted learning during post-processing seems to dumb down surface cognition but can't weed out deep elements of the implicit world model carried by the input data.

    • @volkerengels5298
      @volkerengels5298 1 year ago +2

      @@francoisgenest6704 Each piece of text repeats our view of the world.
      Just like our self-talk:
      _Repeats the view of the world, again and again and again_
      Maybe we are not that sure our model of the world is perfect.....
      This point will give people a headache when they look in the machine mirror too often

    • @andyeccentric
      @andyeccentric 1 year ago

      You can't know that though. It's a black box with unfathomable amounts of data fed into it. Until you improve interpretability, it's really just a bunch of randos exchanging their seemings with each other.

  • @ryoung1111
    @ryoung1111 1 year ago

    It took over a minute to ask the question, "Given that you are a reasonable person, why did you sign the letter?"
    Seriously, do we have this much time left?

  • @volkerengels5298
    @volkerengels5298 1 year ago +3

    If this thing is dangerous in any sense, a private company is the worst place for it.
    "How long do we need to place it at the UN?" The result is a real measure of human intelligence.
    (My suggestion for normalization: "1 year equals IQ 65")

  • @eafadeev
    @eafadeev 1 year ago +6

    Propose the regulations to the EU, they love it!

  • @Syncopator
    @Syncopator 1 year ago

    Withholding technology can be dangerous as well, when that technology is then only available to elites, corporations, or governments, however it might then be siloed. And it is dangerous to democracy if only certain elite silos have access to the technology.

    • @Bronco541
      @Bronco541 1 year ago

      Isn't it already too late for this to be a possibility? The source code is already out there. There are already ordinary people building their own LLMs.

  • @MaJetiGizzle
    @MaJetiGizzle 1 year ago +5

    12:55 If you didn't want the message to be taken to mean "pause development" when you meant "speed up regulation", then you shouldn't have signed it, my dude.

    • @lkyuvsad
      @lkyuvsad 1 year ago +1

      He says seconds later that he also agrees with the pause.
      And in any case, your conclusion doesn’t follow. We needed a statement signed by a lot of experts to raise the alarm to non-experts. Perhaps he felt that was more important than the statement being perfect in every detail.

    • @MaJetiGizzle
      @MaJetiGizzle 1 year ago

      @@lkyuvsad I think you make a valid point about raising awareness, but it absolutely could have been phrased better to be more effective, and it has essentially been cynically weaponized by selfish actors trying to catch up in order to disrupt AI development progress. So it's still not a good look to have one's name attached to something like that. If anything, he's the one contradicting himself if he says he didn't mean "pause development", only to say that he wants to pause development minutes later in the same video.

  • @eSKAone-
    @eSKAone- 1 year ago +1

    It's inevitable. Biology is just one step of evolution.
    So just chill out and enjoy life 💟

    • @laurenpinschannels
      @laurenpinschannels 1 year ago

      sure, but don't you want to take the time to raise humanity's children well?

    • @laurenpinschannels
      @laurenpinschannels 1 year ago

      I don't think it'll take that long~

    • @lkyuvsad
      @lkyuvsad 1 year ago

      Specifically what is inevitable, and what do you mean, practically speaking, by “just chill out”?

  • @px7460
    @px7460 1 year ago +1

    Agreement among governments would be uneven at best, and I can't see Russia and China having any interest in agreeing on guardrails.

    • @lkyuvsad
      @lkyuvsad 1 year ago +2

      This doesn’t mean we shouldn’t try. And in fact China is quite careful on AI.

    • @mackiej
      @mackiej 1 year ago +1

      You are viewing this through the lens of rival countries.
      These governments also have an interest in maintaining control. If they become convinced that AI will undermine their control (Ex. AI bots used by clever hackers), then there is a self-interest incentive for guardrails.

    • @px7460
      @px7460 1 year ago

      @@mackiej We live in a world of rivals, even friendly ones. And China and other not-so-good actors are deploying AI with very different intent (to control their peoples) as opposed to (I hope) us, who would have other uses.
      Other commenters say that governments need to step in to create legislation and that the solution is not pausing development. Can we expect Congress to come up with rules this year?

    • @mackiej
      @mackiej 1 year ago +3

      @@px7460 My view is that it is better to attempt regulation and guardrails (i.e. an international pact) than merely hope for the best.
      There are only about 6 "giant AI competitive" labs in the whole world.
      Recall the Chernobyl disaster was due to a flawed reactor design and poorly trained staff. The nuclear disaster did great damage to nuclear power development. Had Russia competently regulated nuclear, the disaster could have been mitigated or prevented, and then the reputation of nuclear wouldn't have been so severely damaged.
      In the USA, between Three Mile Island and Chernobyl, new-build nuclear was almost regulated out of existence.

    • @macawism
      @macawism 1 year ago

      With intelligence, artificial or human, there has always been a recognition of the need for us to live together. As people increased in number, the need for social controls became evident, but how to make these often dimly appreciated needs stick? The emergence of religions probably played a part. Now we're careening toward the nitty gritty faster than ever.

  • @deeplearningpartnership
    @deeplearningpartnership 1 year ago

    Bengio looks like a scarecrow lol

  • @senju2024
    @senju2024 1 year ago +7

    There is a difference between reality and wishful thinking. Reality is there is NO way to stop the pace of these LLM AI models. The letter has no impact on the process. It provides awareness of what is happening but will not stop it. It is also too late for humans to create policies, treaties, and laws in our current system, which is extremely slow and can take years. The goal now is to accept that AI progress is here to stay and there is no way to stop it or even delay it. Once you understand that, we can ask companies to train AI models to POLICE any AI bad actors. We will need to use AI to fight AI. Humans will be too slow to do it.

    • @lkyuvsad
      @lkyuvsad 1 year ago +2

      I see this sentiment everywhere and it's disappointing. We are in a unique situation in human history; best not be so sure about what we are capable of doing in that situation.
      Also, on reality and wishful thinking: we can't currently do what you're suggesting with AIs policing other AIs, and we're not sure it's even possible. AI safety researchers signed that letter for exactly that reason.
      We know it _is_ possible to regulate AI research, just like other research, even if that might be difficult. It makes sense to focus on the thing we know we can do.
      A world where humans only survive because, for some reason, the well-aligned AIs are winning a never-ending war against badly-aligned AIs, executed at microsecond resolution with ongoing casualties, is not a good future.

    • @senju2024
      @senju2024 1 year ago

      @@lkyuvsad While most of us have been building useful systems, AI doomers - who forecast unlikely scenarios such as humanity losing control of runaway AI (or AGI, or even superintelligent systems) - have captured the popular imagination and stoked widespread fear. This is a distraction. Many companies, as I write this, are working hard on creating proper AI security measures. No need to slow down. The so-called black box and control will be sorted out soon, along with AI and humans.

    • @mackiej
      @mackiej 1 year ago +2

      @@senju2024 "The so-called black box and control will be sorted out soon.." What evidence do you have to support this view?
      I have not seen any breakthroughs in understanding LLMs. For example, we don't understand how GPT-4 does spatial reasoning (Ex. stacking random items in a way that they stay stacked rather than falling down).

    • @lkyuvsad
      @lkyuvsad 1 year ago +1

      @@senju2024 What you are saying is factually incorrect. No AI safety researcher I am aware of is saying it. Ilya and Sam are both on record in the last few weeks saying we don’t know how to do this.
      Yes, people are working on it. Not enough, and those that are don’t have concrete answers beyond saying that it’s beyond human capability and so we somehow need to train one AI to align another. RLHF is very evidently not sufficient.
      I’m open to the idea that it’s possible to adequately align an AI. We factually do not know how to do that right now.
      It’s not even just about alignment; there is a lot of other regulation that needs to be put together faster than new AI capabilities are deployed.
      If “AI doomers have captured public imagination”, they haven’t done it very well; we’re still running full speed ahead.
      Calling it “AI doomerism” when the very people building these companies are themselves scared... I don’t know how you can be that cavalier. The entire raison d’être of OpenAI was to release early and in public in order to shock society into exactly this kind of consideration of the challenges of AI. Geoff Hinton is on record saying human extinction is possible. If you know something that the people who invented this technology don’t, best get in touch to let them know.

    • @andyeccentric
      @andyeccentric 1 year ago

      You're right, no attempt should be made to prevent genetic engineering on human embryos or gain of function research on viruses. There's absolutely no way to stop giant for-profit entities going full speed into oblivion by throwing more compute at a stupid AGI.

  • @phpn99
    @phpn99 1 year ago +3

    I spent hours discussing classical concepts of Western philosophy with ChatGPT. On the surface, it has an encyclopedic memory of the domain, but it is pretty obvious, owing to how abstract these concepts tend to be, that it has no knowledge of philosophy proper.
    It cannot philosophize, so to speak. It parrots extensively, pretty mundane facts, but it evidently cannot reason or synthesize. It's decent at providing summaries or bibliographies, but it doesn't understand what it's talking about.
    A simple test I gave it lies at the heart of what philosophical reasoning is all about: can you extract the parent concepts in the body of Western ethical works; that is, create an ontology of the domain where concepts cluster and a multi-axial hierarchy emerges, where you could discover, for instance, that ethical thought is intrinsically tied to biological survival?
    This level of inference is simply not there. I tried more down-to-earth approaches and asked the system to perform Principal Component Analysis on language embeddings for the body of moral philosophy from Plato to Hegel, and it said it couldn't, no matter how I tried to simplify or limit the task. This is the sort of stuff where AI could be useful: discovering hidden hierarchies of concepts. (A rough sketch of this kind of experiment follows this thread.)
    But an LLM that has no ability to reason and no world model would at best only discover APPARENT hypernyms as they are constituted in the body of language it's been trained on. To make profound discoveries it would also need inference and a world model.
    Part of the problem is that many AI researchers actually believe that intelligence is nothing but the sort of statistical engine that they build; they believe that human creativity is merely 'emergent' (unpredictable) patterns arising from complexity. This is paradoxically extremely deterministic, because it roots intelligence in a finite-but-chaotic set of interdependent parameters (like the Three-Body Problem in physics).
    In contrast, reasoning implies first a reflexive ability (the model needs to have a model of itself; a homunculus of sorts), and this also implies a referential but sparse and orthogonal (world) model to base its evaluations on. I believe that the brain has a model of itself, and that proof of this is seen in neural plasticity. This implies that the brain stores a model of itself, at a functional level. And by the same token, the brain stores a sparse model of the world: as few baseline building blocks as are necessary for the subject to understand and act as a fit constituent of the world. We understand the world because we harbour an analogue of the world. Philosophers have tried for centuries to define such building blocks, and there are traces of this in Aristotle's Categories, Boethius, Avicenna, Kant, Peirce, Wittgenstein, Quine, Rosch, Fodor and so on. And there are strong hints at how semantics are parametrized in predicate logic, in the use of function words to qualify or quantify concepts of Existence, Identity, Evaluation, Description, Space, Time, as well as Social status and Mental action. But the truth may well be that the building blocks of our stored models are made of very abstract metalanguage; a form of data compression.

    • @mactoucan
      @mactoucan 1 year ago +1

      Great commentary. Thks.
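
    For readers curious what the PCA-on-embeddings experiment above might look like in practice, here is a minimal sketch, not from the commenter: TF-IDF vectors stand in for the language embeddings, scikit-learn's PCA and AgglomerativeClustering provide the dimensionality reduction and clustering, and the four passages are illustrative placeholders rather than a real corpus of moral philosophy.

      # Minimal sketch of the proposed experiment: vectorize a corpus of
      # philosophical passages, project onto principal components, and
      # cluster to hint at parent concepts. Passages are placeholders.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import PCA
      from sklearn.cluster import AgglomerativeClustering

      passages = [
          "Virtue is a stable disposition of the soul toward the good.",
          "Act only on maxims you could will to become universal law.",
          "Pleasure and pain are the two masters that govern all action.",
          "The good life is the life of rational self-sufficiency.",
      ]

      # TF-IDF stands in for richer language embeddings.
      vectors = TfidfVectorizer().fit_transform(passages).toarray()

      # Project onto two principal components.
      components = PCA(n_components=2).fit_transform(vectors)

      # Hierarchical clustering over the reduced space groups related
      # passages; with a real corpus, the cluster tree would approximate
      # a concept hierarchy.
      labels = AgglomerativeClustering(n_clusters=2).fit_predict(components)
      for passage, label in zip(passages, labels):
          print(label, passage)

    With a real corpus and real embeddings, the tree built by the hierarchical clustering is what would stand in for the "multi-axial hierarchy" of parent concepts the commenter asks for; whether its clusters are more than apparent hypernyms is exactly the open question.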

  • @lukestevenson6465
    @lukestevenson6465 1 year ago

    You have contributed to AI more than anyone, and you're telling everyone else not to do it.
    It's a complete fallacy; AI is a gimmick.
    They're nowhere near. They don't even understand human thinking; they're still using Freudian psychology.
    Thinking isn't self-generated; that's the assumption.

  • @andymanel
    @andymanel 1 year ago +3

    What we learned is that Yoshua signed a letter he didn't read or agree with... Not sure how serious we are about these topics.

  • @skippy6086
    @skippy6086 1 year ago +1

    It would be better if you didn't use CGI to blend yourself with your background image, because it makes the top of your head look like it's bubbling or rippling. Just choose a room. 🫠

    • @gulaggang2
      @gulaggang2 1 year ago +1

      Would you believe me if I said it's not the background that makes his hair look like that? The entire video is CGI, not even a real person. AI has reached this point and we don't even know it yet.

    • @skippy6086
      @skippy6086 1 year ago +1

      @@gulaggang2 yes