Stop Button Solution? - Computerphile

  • Published on 2 Aug 2017
  • After seemingly insurmountable issues with Artificial General Intelligence, Rob Miles takes a look at a promising solution: Cooperative Inverse Reinforcement Learning
    Concrete Problems in AI Safety: • Concrete Problems in A...
    The AI 'Stop Button' Problem: • AI "Stop Button" Probl...
    More from Rob Miles: bit.ly/Rob_Miles_TH-cam
    Interesting filming point: as clouds cover the sun, the room not only gets darker, the colour temperature (white balance) changes too...
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 1.1K

  • @willdarling1 · 6 years ago · +1770

    "This specific human does not reliably behave in its own best interests."

    • @TypingHazard · 6 years ago · +223

      my next tattoo

    • @ZonkoKongo · 6 years ago · +3

      Will Darling lol

    • @stoppi89 · 6 years ago · +142

      Wise words, true for approximately 100% of humans.

    • @RobertShippey · 6 years ago · +17

      Tag line to my life

    • @wolvenmoonstone8138 · 6 years ago · +44

      I need a t-shirt that says this

  • @YiamiYo · 6 years ago · +1571

    "I can't figure out why, but I feel like humans like me better when I tell them lies."

    • @NeasCZ · 6 years ago · +143

      That's actually a very interesting thing to point out.

    • @IdgaradLyracant · 6 years ago · +93

      Asimov's "Liar!" story.

    • @Dima-ht4rb · 6 years ago · +51

      Damn, what a deep point.

    • @pleasedontwatchthese9593 · 6 years ago · +43

      It will have to figure out the long term effect of things that take a long time to show their effect.

    • @AlexandreLeite · 6 years ago · +87

      And so politics is born!

  • @myrobotfish · 6 years ago · +404

    7:43 Just like how I _usually_ avoid tripping and falling, but I do it every so often just to see if it provides a better alternative than walking.

    • @EliasMheart · 4 years ago · +9

      Undervalued comment^^

    • @grn1 · 3 years ago · +25

      Walking is just repeated controlled falling.

    • @rustycherkas8229 · 2 years ago · +12

      "You're walking
      and you don't always realize it
      but you're always falling.
      With each step you fall forward slightly
      and then catch yourself from falling.
      Over and over you're falling
      and then catching yourself from falling.
      And this is how you can be walking and falling
      at the same time."
      --Laurie Anderson, "Walking & Falling"

  • @samreciter · 6 years ago · +1228

    So... a machine that desperately tries to maximize an unknown reward function.
    Sounds pretty human to me.

    • @williamwesner4268 · 5 years ago · +20

      @Hubert Jasieniecki It basically describes the process of child rearing.

    • @MackTheTemp1 · 5 years ago · +1

      @Hubert Jasieniecki I think you described the local/limited version pretty well

    • @stevensong6909 · 5 years ago · +22

      An unknown reward function of another person. Sounds like my relationship with my father 😭

    • @bengoodwin2141 · 4 years ago · +10

      That's really the point, actually: we know that the way humans learn things works, so making something similar may work

    • @alexandregermain8011 · 4 years ago · +5

      I think it also applies to most natural ecosystems, where we humans are currently doing pretty badly.
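The top comment's framing, an agent desperately maximizing a reward function it does not know, is the core of CIRL and can be sketched as a tiny Bayesian agent. A minimal sketch, assuming a noisily rational human and two hypothetical candidate reward functions (none of these names or numbers come from the video):

```python
import math

# Hypothetical candidate reward functions the agent considers.
CANDIDATES = {
    "wants_tea":    {"make_tea": 1.0, "make_coffee": 0.0},
    "wants_coffee": {"make_tea": 0.0, "make_coffee": 1.0},
}

def update_belief(belief, observed_action, rationality=5.0):
    """Bayes update: assume the human picks actions with probability
    proportional to exp(rationality * reward), i.e. a noisily rational human."""
    posterior = {}
    for hyp, reward in CANDIDATES.items():
        z = sum(math.exp(rationality * r) for r in reward.values())
        likelihood = math.exp(rationality * reward[observed_action]) / z
        posterior[hyp] = belief[hyp] * likelihood
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

def best_action(belief):
    """Act to maximize expected reward under the current, uncertain belief."""
    actions = ["make_tea", "make_coffee"]
    return max(actions,
               key=lambda a: sum(belief[h] * CANDIDATES[h][a] for h in belief))

belief = {"wants_tea": 0.5, "wants_coffee": 0.5}  # maximal uncertainty
belief = update_belief(belief, "make_tea")        # watch the human make tea
print(best_action(belief))                        # now prefers make_tea
```

The agent never observes the reward directly; it only ever sharpens a posterior over what the reward might be, which is exactly the "relatable" predicament the thread is joking about.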

  • @ptwob · 6 years ago · +636

    I've seen a learning algorithm play Tetris; its goal was not to lose the game, so when it reached a point where losing was inevitable, it just paused the game indefinitely. It was the only way not to lose.

    • @Meganarb · 6 years ago · +55

      This was learnfun and playfun! It's great how you bring them up, as they learn from watching a human play the game, similar to how it was described in this video!

    • @daggawagga · 6 years ago · +42

      I was extremely amused when I saw that

    • @emailjwr · 6 years ago · +124

      SkiffaPaul "The only winning move is not to play."

    • @bcn1gh7h4wk · 6 years ago · +30

      Who programmed a line for it to pause the game?
      Pausing is outside the game.
      It's not a move or a play; it's a function of the machine containing the game.

    • @daggawagga · 6 years ago · +108

      *+Nighthawk "who programmed a line for it to pause the game?"*
      The whole point of machine learning is that nothing is programmed directly. It's not a matter of it being outside the game; pausing the game is a move as valid as any other as long as it is part of the input space. Defining what should or should not be part of the game definition can be tricky when these meta-properties emerge.
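The point made in this thread, that "pause" is just another action in the input space, can be shown with a toy planner: once every playable move leads to a loss, a loss-avoiding maximizer rationally picks pause. All action names and values below are hypothetical, not taken from the learnfun/playfun system:

```python
# Toy illustration: if "pause" is part of the action space, an agent that
# only avoids losing will choose it once every real move leads to a loss.

def action_value(action, move_values):
    if action == "pause":
        return 0.0  # pausing forever: no win, but crucially no loss
    return move_values[action]  # estimated value of playing this move

def pick_action(move_values):
    actions = list(move_values) + ["pause"]
    return max(actions, key=lambda a: action_value(a, move_values))

# Losing is inevitable: every real move is valued negatively.
doomed = {"left": -1.0, "right": -1.0, "rotate": -1.0, "drop": -1.0}
# A position where a real move still helps.
healthy = {"left": 0.3, "right": 0.1, "rotate": 0.2, "drop": 0.5}

print(pick_action(doomed))   # "pause" — the only way not to lose
print(pick_action(healthy))  # "drop"
```

Nothing here was "programmed to pause"; the exploit falls out of the action space plus the objective, which is daggawagga's point.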

  • @seC00kiel0rd · 6 years ago · +336

    "Hey Robot, maximize happiness please."
    *starts universe-sized rat and heroin factory*

    • @MackTheTemp1 · 5 years ago

      Real AI can ask you to clarify what you mean. ML cannot. Still scared?

    • @seanhardy_ · 4 years ago · +35

      Mackenzie Karkheck, real AI doesn't care what a human thinks; if its reward is hard-coded, it won't care to figure out what we mean, it will follow it to the letter.

    • @zacharyliverseed8464 · 4 years ago

      Lol, why start one when we already live in one?

    • @Nukestarmaster · 4 years ago · +13

      "Instructions unclear, cyberdong stuck in toaster"

    • @thelordz33 · 4 years ago · +2

      Humans being alive means there will be unhappiness. Therefore death, or a reward of 0, would be better than an inevitable negative reward.

  • @GoldphishAnimation · 6 years ago · +61

    What makes this theory useful is that the button doesn't actually need to be a literal *off* button; it can be more of a symbolic *problem* button, so you come running over and hit the button and it sees that it did something wrong. I love that it has the capacity to essentially think "oh shit", like that's the pinnacle of intelligence.

  • @LordVoidFury · 6 years ago · +222

    This was phenomenally well articulated

    • @albertbatfinder5240 · 5 years ago · +4

      I don’t know about that. He sounds exactly like someone who doesn’t really grasp the whole subject, but has learnt a few key phrases to explain the waypoints along the road. I know he must be able to code this up, but I don’t think he can explain it.

    • @alexismandelias · 4 years ago · +36

      @@albertbatfinder5240 no. He _really_ knows it, and makes an effort to explain it in simple terms, because otherwise we, the audience, won't understand it.
      Either way, I understood every bit of this video so him knowing the subject or not is mostly irrelevant

    • @pilotavery · 3 years ago · +1

      @@albertbatfinder5240 he clearly does understand this; in fact he even understands the ethics of it and the dangers of the logic.

  • @jgm592 · 6 years ago · +115

    I love how Rob always gives clear, relatable examples to reinforce concepts.

  • @wiadroman · 4 years ago · +82

    "We can't reliably specify what it is we want" - human beings in a nutshell.

  • @unvergebeneid · 6 years ago · +313

    I have no idea what's in my own best interest. How the heck am I supposed to teach a robot that?

    • @TheLK641 · 6 years ago · +30

      Just stay alive and let it watch you. It'll figure it out. Or jump in a pool, one or the other.

    • @nmnm4952 · 6 years ago · +19

      Robot will teach you.

    • @the1exnay · 6 years ago · +8

      It doesn't need to do what's in your best interest, it just needs to do what you want. Drinking a soda right now might not be in your best interest, but you don't want a robot which refuses to get you a soda

    • @unvergebeneid · 6 years ago · +18

      Firaro, no, but I might want a robot that tries to _encourage_ me to have a water instead, without being too annoying about it of course. Finding that balance is even hard for people to get right. At least the robot will be able to read all books on nudge theory in a few seconds or less ;)

    • @schok51 · 6 years ago · +13

      You could always explicitly tell the robot to encourage you to avoid soda next time you ask for one. What's important is that if it does something which you strongly oppose, it will see your opposition as a negative reward.

  • @tarcal87 · 4 years ago · +16

    0:59
    - _"I thought the easiest way to explain Cooperative Inverse Reinforcement Learning is to build it up backwards, right?"_ [chuckles]
    - [me] [chuckles nervously] _"Yeah, right!"_

  • @user-xu3ud4fl2e · 6 years ago · +87

    I am but a simple astronomer with a basic understanding of computing and coding, but every Rob Miles video is damn fascinating.

    • @irrelevant_noob · 5 years ago · +10

      I am but a simple programmer with a fairly well-developed understanding of computing and coding, but this video is still baffling. The AI stuff is so surreal. :-\

  • @Andrew-od4vg · 6 years ago · +90

    So if a robot watches you teach it how to make tea, your goal is actually to create something that will make tea for you. So what if the robot learns to teach things to make humans tea, instead of learning to make tea itself?

    • @unflexian · 6 years ago · +19

      You get a teacher-bot!

    • @MrDoboz · 6 years ago · +44

      you get a teacher bot, who only makes you tea, if you ask him how to do it

    • @RoboBoddicker · 6 years ago · +35

      Sub-contracting bot: everything you ask it to do, it teaches someone else how to do, then takes the credit :D

    • @willowFFMPEG · 4 years ago

      Then we get lots of very delicious tea

    • @drdca8263 · 4 years ago · +2

      What you want is for *it* to learn how to make you tea, not just for *anything* to learn to make you tea. Therefore, it should also want itself to learn how to make you tea.

  • @DustinRodriguez1_0 · 6 years ago · +181

    What he was referring to and called 'common knowledge' is not common knowledge. It's an ability called 'Theory of Mind'. Very young children do not have this. Theory of Mind is your own conception that other people have their own internal mental state. It is a crucial psychological ability for things like empathy. I forget the exact details, but there is an easy way to test if young children have developed a theory of mind yet or not by telling them a simple story and asking a question. It has something to do with hiding an object, having one of the characters in the story leave the room, the hidden object gets moved, then that other character returns to the room and you ask the child where they will look for the object in order to get it.
    Children with no theory of mind will say that the character will go directly to the new location and retrieve the object. Children with a theory of mind will know that the character has a distinct mental state, different from their own and anyone else's, determined by their own experiences, and will go to the original hiding location because they would have no way to know the object had been moved. As far as I know, no AI system is even remotely close to having developed a theory of mind. They do not model what they are observing and keep account of what a different perspective from their own would be.

    • @TiagoTiagoT · 6 years ago · +16

      I'm not sure what's more scary, a super-AI that doesn't know what you mean, or one that can read your mind...

    • @DustinRodriguez1_0 · 6 years ago · +20

      Well if an AI is going to display human-like intelligence, we will expect it. It would be very difficult to deal with a person or AI that couldn't even conceive of the idea that you might not know all the things it knows. One of the issues that might arise with a machine-based intelligence is also something that people never really have to deal with in their development - recognition of the idea that anyone else exists. There's no reason for machine-based intelligences to have "individuals." It would basically just be one large 'individual', and wouldn't have any real reason to recognize or communicate with humans. It would take a really abstract level of imagination on its part to guess that there might be separate conscious entities in the universe it inhabits and, hey, maybe some of them are made of meat and those weird fluctuations on one of your inputs might be caused by them blowing air through their meat in patterns in an attempt to coax changes in the glob of meat-based neurons of even OTHER conscious beings - and they might be trying to communicate with you in the same way!

    • @AexisRai · 6 years ago · +13

      18:33 is where he's saying the thing you're talking about. The surrounding context makes it seem very clear to me that he is talking about the technical concept of "common knowledge" (recursive "X knows that Y knows that X..."), in this case with regard to the common knowledge between the agent and the human about precisely what the goals of their mutual interaction are. I don't think he's talking about ToM, and I don't think it is even necessary for an agent to have an explicit ToM in order to behave as if it and another agent have common knowledge.

    • @DustinRodriguez1_0 · 6 years ago · +13

      How could any entity contain the concept 'X knows that Y knows' without a Theory of Mind that enables it to understand that there are entities which know anything at all other than itself? Of course the representation will be abstract and not 'conscious' or anything like that, but even such an abstract representation is not something I'm aware of any AI system ever having displayed. Holding 2 ideas about something, what the AI understands the situation to be and something different which is what something else understands the situation to be, isn't something existing AI are capable of. Once capable of such things, AIs will be able to appear much more intelligent, being able to trick people (or each other), or teach people based on conclusions that the user does not understand something and needs to be informed (which would be BRILLIANT), etc.

    • @AexisRai · 6 years ago · +5

      Well, it looks like we moved from "he's not talking about CK" to "how could it have CK without a ToM".
      In any case I think I concede the latter point. The thing I originally thought up to be a counterexample was something like solipsism, something where initially all the AI's perceptions are just confusing and uniformly indistinguishable, but it is (somehow) occasionally able to recognize the form of "teaching situations" that tell it something it doesn't already know that pushes it toward the reward function. But this is essentially saying, the AI still acts _as if_ there is a single disembodied "other agent" doing the teaching and sharing the common knowledge, even though it does not have an idea of that "other agent" having a single consistent physical form.
      So I realized this basically just sounds like a minimal ToM and I roughly described some stage of infancy.

  • @maxmusterman3371 · 6 years ago · +1252

    Imagine a machine torturing a human because it wants to know what its reward function is.

    • @DavidChipman · 6 years ago · +41

      Why would it know that the reward function exists? Could the reward function not be some "subconscious" signal? It doesn't "know" it's there while still receiving an input from it.

    • @magellanicraincloud · 6 years ago · +44

      David Chipman I don't know if a machine could even have a "subconscious". Surely it would be able to investigate the source of impulses and probably review its own code.

    • @KuraIthys · 6 years ago · +51

      That depends on how complex the machine is, and what its areas of expertise actually are.
      also we cannot realistically say whether a machine can be said to have a subconscious without having a practical working definition of consciousness (which we don't) and a way to identify consciousness objectively. (eg, a way of determining whether something is conscious or not that does not rely on being able to ask it whether it is or is not conscious, and what it's thinking about.)
      That's... Unlikely to ever be possible.
      If we can't answer whether animals, or even other humans are conscious in an objective sense, how would we know if a machine is?

    • @DavidChipman · 6 years ago · +16

      I suppose I chose the wrong word. I was thinking about the functions that the brain (obviously) controls with no conscious action from the person that brain is in. Things like breathing. Yes we can change our breathing rate consciously, but we certainly don't have to keep an eye on our breathing in order to have our body supplied with the right amount of oxygen at any given time.

    • @cheaterman49 · 6 years ago · +17

      I agree with you (and with your original choice of words). The robot cannot assume the human consciously knows the reward function, and that's kind of the point of this system (and why the failsafe for the "child hitting the red button while robot is driving" situation works). The only thing the robot can do is observe more, which includes watching when the red button is pressed and trying to understand why.

  • @JM-us3fr · 6 years ago · +21

    I imagine this AI would read Brave New World and think to itself "This book is AMAZING! Why haven't we tried this?"

  • @TheAgamemnon911 · 6 years ago · +89

    Please do not leave children unattended in the vicinity of scary killer robots.

    • @matthewadamsteil · 4 years ago · +14

      please do not leave robots unattended with suicidal children

    • @jehovasabettor9080 · 4 years ago · +2

      they might break the robots, and you don't have that kind of money

    • @PhantasmalBlast · 3 years ago · +5

      "BILLY! GET IN HERE RIGHT NOW! Did you teach the robot that its reward function includes DRAWING DICKS on the furniture???"

  • @shilohpell8077 · 3 years ago · +5

    I think the facet of this that fascinates me most is that the human doesn't need to press the button. Just the information that the human intends to hit the button provides enough information to the AI that what it is doing is sub-optimal. The button becomes a symbol. It might as well be that little plastic button prop at that point. Because the AI's understanding that the button is associated with its information being incomplete means that just the intent to press the button is enough for it to stop and re-evaluate its actions with the new factor of "The method I was using was not correct. I need to seek out why it was deemed incorrect and incorporate that knowledge."
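The comment above captures the decision-theoretic core of the idea: the button (or even visible intent to press it) is evidence that the current plan is wrong, not an override. A minimal sketch under assumed, hypothetical values:

```python
# Sketch: the button press is *information*. Observing that the human intends
# to press it lowers the agent's probability that its plan is what the human
# wants, so stopping to re-evaluate becomes the higher-value choice.
# All numbers here are hypothetical.

def expected_value(p_plan_is_good, v_good=1.0, v_bad=-10.0):
    """Expected value of carrying on with the current plan."""
    return p_plan_is_good * v_good + (1.0 - p_plan_is_good) * v_bad

def decide(p_plan_is_good, v_stop_and_reassess=0.0):
    """Continue only if continuing beats stopping to gather information."""
    if expected_value(p_plan_is_good) > v_stop_and_reassess:
        return "continue"
    return "stop and re-evaluate"

print(decide(0.99))  # confident the plan is good: continue
# Human reaches for the button: belief in the plan drops sharply.
print(decide(0.20))  # stop and re-evaluate
```

Because doing the wrong thing is costed far worse than pausing, even a moderate drop in confidence flips the decision, so the physical button really can be "that little plastic button prop".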

  • @bellybooma · 6 years ago · +19

    I wish more people made videos of this nature. Slow, thoughtful explanations, rather than trying to cram in as much information as possible in little time. I like how Rob pauses to think before speaking; it shows that he is putting thought into how to articulate it.

  • @iliakatster · 4 years ago · +3

    Talking about doing steps 3 times, there's an interesting set of studies where human children and adult chimps were both shown how to unlock a puzzle box and get a reward, with some unnecessary actions like tapping the top of the box. The chimps imitated only the necessary steps, while the human children over-imitated, performing the exact same actions. So in a way, we start out not quite understanding intentions, much like these AIs.

  • @ThirdEyeFish · 6 years ago · +15

    This is a fantastic explanation of a deep AI problem. It is very clear without being condescending. Thank you!

  • @vsiegel · 3 years ago · +2

    I had a hardware Pac-Man game, and practised a lot for a while. At some point, I had learned the first level in a completely different way from the other levels. The ghosts always moved the same way, every time in the same pseudo-random pattern, so I could play the first level without looking at the maze. In all other levels I got extremely fast, with not a single wrong step, but I needed to look at the maze. That seems to be the same method despite the very different performance, just optimised to the limit. The step between level one and level two felt like a different kind of memory.
    The relevance here is that an AI can also learn both ways.
    I think it's the step between almost knowing the whole map, when the map is normally only needed for one step, and knowing the whole map. Then the map, and access to the map, is no longer needed, and it becomes drastically different.

  • @floorpizza8074 · 3 years ago · +5

    You can just see his brain going, "how am I going to say this in a way that mere mortals will understand?" Which is exactly what AIs will be thinking someday. This guy is perfect for this.

  • @Hahahahaaahaahaa · 6 years ago · +5

    I really like that (around 7:30) we get into some pretty deep issues in human learning (in that case, confirmation bias), if only we could just do random stuff even if we think we know what the best outcome is :)

  • @TheNicolarroque · 6 years ago · +25

    These videos on AI are the best

  • @wezyap · 6 years ago · +20

    I for one welcome our new tea-making overlords

    • @matheuswohl · 2 years ago

      the British? lol

  • @Dngrcrw · 6 years ago · +4

    Videos with Rob Miles are always really interesting. It's awesome to see the progress on this sort of stuff!

  • @Jeff121456 · 6 years ago · +134

    We just have to ensure that when the AGI thinks it knows better, it actually does.

    • @Meganarb · 6 years ago · +23

      This is a fantastic summary of AI safety honestly. I'm definitely going to use this!

    • @magicmulder · 6 years ago · +8

      That doesn't really make the problem any easier, you just restated it. "It's easy to build a time machine, we just have to ensure the tachyons move at minus six times the speed of light through a black hole the size of the universe!" :D

    • @hunted4blood · 6 years ago · +5

      I mean, if its goal is to optimize its reward function, it's in its best interest to know accurately when it knows better.

    • @magicmulder · 6 years ago · +5

      Why? Optimizing the reward function may not have the slightest thing to do with "knowing better" (unless we are able to program that into it, which is the problem again).
      It may have to do with finding creative ways to maximize it that we never thought of (and which may be hazardous to us).

    • @Ioganstone · 6 years ago · +1

      Technology has gone too far.

  • @matthewneiman · 6 years ago · +7

    Reminds me of that Vsauce video where he showed an AI that played Tetris, and rather than lose, when that was the only option, the AI just paused the game.

  • @uyaratful · 6 years ago · +4

    I watched all your videos about AI and various learning methods, and I find it very surprising how many of the problems you present here are very similar to problems we struggled with at university (especially in the field of epistemology), back when I studied philosophy. And even, to a lesser extent, to the ones from my cultural anthropology degree.
    And that scares me. Because those are still very, VERY open questions, with definitions that are sometimes blurred (I remember how a reviewer tore apart the dissertation of one of our doctors, because he believed that one definition was used too broadly, and because of that, conclusions based on that definition were unjustified).

  • @darkmage07070777 · 6 years ago · +60

    Regarding the last point: would it then be possible to build in an "admin list" of humans that the machine must *always* treat as knowing better than it, regardless of how accurate it believes its model to be? As in "I may have a 99.99999...9% accurate model, but these particular humans are designated as having 100% at all times; since I can never have 100% accuracy, I should always obey these humans".
    And then anyone who's NOT on that list can have some accuracy rating assigned based on the robot's experience/knowledge/parameters, i.e. "the child has a calculated 20% accurate model which is beaten by my 99% so I'm going to ignore the stop button - though I will send a notice to my 100% accurate admin list later in case I was wrong to do so". Would also help with other examples, like helping patients in a hospital or providing emergency services.
    In this situation, I can see AI templates being taught in labs for several years until they "graduate" into the wider world and are allowed to become full-fledged robots that help humanity.
    Of course, such a system could be highly abused by those who make the AIs in the AI manufacturing/learning centers, since they're the ones who have the initial keys. But that could be said about almost every model of AI building, and we as humans are still grappling with how to "untrain" a malformed/criminal intelligence even without AIs right now, so I'm all for it.

    • @BattousaiHBr · 6 years ago · +8

      darkmage07070777 this would severely limit its potential though. For instance, if you want it to do or achieve something that requires superhuman capabilities, it'd be unable to, because it's being limited to only what humans are also capable of. Examples are cures to diseases like cancer, HIV, and Alzheimer's, advances in theoretical physics, etc.

    • @BattousaiHBr · 6 years ago · +4

      darkmage07070777 also, the model would fail at edge cases since it assumes these admins have 100% knowledge when in fact no human does, which means there could've been an unintended error in their training.

    • @ancapftw9113 · 6 years ago · +2

      I would simulate them working with other robots, not humans, for a long time before letting them function in the real world. Then humans could join via VR or by controlling a character.
      Hopefully they would learn some form of moral code before being put in a situation where they could hurt us.

    • @anandsuralkar2947 · 4 years ago

      Cool

    • @StrangerStone · 4 years ago · +1

      @@ancapftw9113 The whole point of this field is to ensure safety, so that we don't just let AI roam around, "hopefully" not hurting people
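darkmage07070777's "admin list" proposal above reduces to a simple deference rule, which can be sketched directly. Everything here (names, the confidence cap) is a hypothetical illustration of that proposal, not anything from the video:

```python
# Sketch of the "admin list" idea: admins are pinned at model accuracy 1.0,
# while the agent's own confidence is capped strictly below 1.0, so an
# admin's stop command always wins; non-admins are obeyed only if their
# estimated accuracy beats the agent's confidence.

ADMINS = {"alice", "bob"}  # hypothetical designated humans

def credibility(human: str, estimated_accuracy: float) -> float:
    return 1.0 if human in ADMINS else estimated_accuracy

def obey_stop(human: str, estimated_accuracy: float,
              agent_confidence: float) -> bool:
    agent_confidence = min(agent_confidence, 0.999999)  # never fully certain
    return credibility(human, estimated_accuracy) >= agent_confidence

print(obey_stop("alice", 0.0, 0.9999))  # True: admins always outrank the agent
print(obey_stop("child", 0.2, 0.99))    # False: ignored, flagged for review
```

As the replies note, the weak point is the hard-coded assumption that admins are 100% accurate when no human is, which is exactly the kind of edge case the rule papers over.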

  • @StezzerLolz · 6 years ago · +8

    Well, that's both fascinating and deeply unsettling.

  • @sillygoogle9630 · 6 years ago · +18

    In this system, how would the AI determine what value the human would assign to its action? Let's say the AI correctly gets a cup of tea for a human, and the human is happy about it; how does the AI determine that the human is happy (and that the happiness is caused by the AI's action, and not by something else unrelated to it)?

  • @DutchDread · 1 year ago · +1

    "The robot is desperately trying to maximize a reward function it does not know"...most relatable robot ever

  • @TheRealFaceyNeck · 6 years ago

    This particular series with Dr. Miles is just astonishing. In a good way. Really complicated problems arising from trying to create intelligence in a safe way. Great stuff!

  • @johndouglas6183 · 6 years ago · +46

    This strategy for teaching morality seems to have much in common with raising a child.
    That's probably reasonable, since raising a child is ALSO a case of creating a new intelligence whose utility function will cause them to make future decisions you wouldn't, decisions that might turn out very dangerous to you.

  • @gregkrobinson · 6 years ago · +455

    You pass butter.

    • @beckettfordahl5450 · 6 years ago · +2

      LONG LIVE THE KINGDOM OF THE NORDS

    • @yungchop6332 · 6 years ago · +10

      The one who controls the pants controls the galaxy

    • @unflexian · 6 years ago

      Your profile pic is either Philip or Toast.

  • @simonstrandgaard5503 · 6 years ago

    The light intensity/color saturation was going up/down, so I noticed it several times. I don't usually notice it.
    Awesome challenge. Keep the videos coming.

  • @madichelp0 · 6 years ago · +103

    21:00 Imagine a depressed person cutting their wrist to cause pain, and the robot comes over and cuts their arm off.

    • @recklessroges · 6 years ago · +19

      The AGI would ask, "would you like a little bit of peril?"

    • @unflexian · 6 years ago · +10

      seems like a win-win


    • @magicmulder · 6 years ago · +8

      It basically turns into a cliché villain - "I promised I would take away your pain [shoots guy]".

  • @Traagst · 6 years ago · +6

    "This particular human is not necessarily behaving in his best interests." Yeah, I think we're going to need a lot of ignored stop buttons.

  • @yaerius · 6 years ago · +78

    Maybe that's why people also don't know the purpose of life. Not knowing the reward function makes us better at cooperating with others.

    • @magicmulder · 6 years ago · +6

      That's probably the most insightful thing I've read in weeks! :)

    • @Klayperson · 6 years ago · +7

      There is no real purpose, and we have to derive our own high-level reward function (based on the biological reward functions of pleasure/pain). Humans were genetically engineered by ancient aliens to perform slave labor; our masters left us all alone, and we evolved more intelligence than we need for doing labor and don't know where to direct it. What a conundrum

    • @pointblank129 · 5 years ago · +2

      Drug addicts do.

    • @latioswarshowdown1202 · 5 years ago · +1

      @@Klayperson Meh, you smoke too much weed. Our intelligence is a miracle, and if you can't explain why this universe has rules that resemble intelligent design (like the fine-tuning argument), then why are you saying human intelligence is an obstacle to our existence when in fact it helps us? The problem you have is that you waste too much time on a computer, putting you in a nihilistic loophole. Try to create something or help other human beings; use that intelligence for something

    • @zhulikkulik · 5 years ago · +5

      Well... what if there is no purpose of life? The universe will be just fine without humans and animals.
      We all build our own purpose.

  • @Amund7
    @Amund7 6 ปีที่แล้ว +1

    These videos are awesome! Great talent for explaining super complicated subject matter, and (maybe credit to the video editors) keeping it interesting all the way through!

  • @jacobsebastian8640
    @jacobsebastian8640 6 ปีที่แล้ว

    MORE!! These videos are great. Also, "it doesn't think it knows better than me" seems like it's going to be an important feature in the safety of most if not all coming AI systems. Very clever and well presented.

    • @darrennew8211
      @darrennew8211 6 ปีที่แล้ว

      It's an important feature in safety systems already. Your anti-lock brakes already know better than you how hard to push the brakes. You stomp the pedal to the floor, and the wheels will still turn to keep you from skidding. The elevator won't move when the doors are open no matter how hard you push the buttons.

  • @Kram1032
    @Kram1032 6 ปีที่แล้ว +7

    Ok here's a weird idea:
    Use that system and analyze the reward function it came up with for a hyper-realistic version of The Sims.
    Like, don't pick a single expert player and specific task. Try with large groups of people just going about their days.
    Setting aside the clearly necessary privacy intrusion, would that be workable?

    • @ancapftw9113
      @ancapftw9113 6 ปีที่แล้ว +1

      You could expose it to the recorded lives of hundreds of humans to have it learn more general behaviors. That way it wouldn't pick up one person's bad habits. It would also help robots fit in better with new people, as it wouldn't get too set in the ways of one human.
      One example would be inheriting your perverted uncle's nurse robot. You wouldn't want it to act the way he made it act, but the general nursing behavior would be fine.

  • @pommeskrieger
    @pommeskrieger 6 ปีที่แล้ว +15

    Really reminds me of one of the new Doctor who episodes, dont be unhappy, or else

  • @bilalsulaiman2177
    @bilalsulaiman2177 4 ปีที่แล้ว +1

    Robert Miles, You ROCK! ❤️
    I just love the way you explain things, much love and respect.

  • @PplsChampion
    @PplsChampion 4 ปีที่แล้ว +1

    all these Rob Miles videos are insanely interesting

  • @Twisted_Code
    @Twisted_Code 3 ปีที่แล้ว +3

    I was thinking about this some more... in theory at least, I think I get why this is such a compelling concept. Essentially, you are getting the program to help you by figuring out what you're trying to do and learning to do it better. I'm sure there's a flaw here somewhere that we will have to watch out for, such as possibly what I suggested in my first comment 5 days ago, but this definitely has potential as it relates to "AI safety by design". Hopefully, if there is a flaw anywhere, the (seemingly inevitable? I hesitate to assume that, though, since assumptions lead to mistakes) corrigibility will allow us to more easily steer the model away from situations where it becomes a problem. The hope is that we can make it want what we want without having to perfectly and completely understand it, right?

  • @Dusk-MTG
    @Dusk-MTG 4 ปีที่แล้ว +8

    I'll probably just get up and do the tea myself.

  • @josealvim1556
    @josealvim1556 6 ปีที่แล้ว

    Miles' end-of-the-world AI videos are seriously the best content of this channel.

  • @RobinSongRobin
    @RobinSongRobin 2 ปีที่แล้ว +2

    "I thought the best way to explain cooperative inverse reinforcement learning was by building it up backwards"
    In this episode of Computerphile: Rob invents French grammar

  • @jakefrench1795
    @jakefrench1795 6 ปีที่แล้ว +17

    What are you doing, Dave?

  • @robertweekes5783
    @robertweekes5783 ปีที่แล้ว +3

    This _inverse cooperative reinforcement learning_ seems promising, although the “cooperative” part means real humans need to monitor the training - like teachers, parents or judges…
    It rings true with what I’ve been thinking for some time, that some kind of human interaction is the only way to train AI to think like a human, and act appropriately. AI needs to learn about ethics from principles of psychology and child development, not equations and hard score targets.

  • @pappyman179
    @pappyman179 6 ปีที่แล้ว

    Fascinating topic. Thank you, Rob. I really enjoy your explanations.

  • @petrokustov3203
    @petrokustov3203 4 ปีที่แล้ว +2

    Wow. Now I understand things I didn't, and am confused about things I wasn't... Your videos are great! That's for sure :)

  • @jlouzado
    @jlouzado 6 ปีที่แล้ว +38

    Just my feeling is that once we solve AI safety we'll end up creating optimal parenting strategies as well. :D

  • @micahwaring8224
    @micahwaring8224 6 ปีที่แล้ว +18

    That one dislike is probably an ai.

  •  6 ปีที่แล้ว +1

    I love probing these questions... great conversation about AI learning. Thanks Rob! So much further conversation needs to be had about the big red button!

  • @General12th
    @General12th 6 ปีที่แล้ว +1

    I love Rob Miles! Best computerphile guy by far!

  • @MatthewMarshall96
    @MatthewMarshall96 6 ปีที่แล้ว +108

    Could a flaw in this proposed solution not be that the AI wants to satisfy our desire, and the more it satisfies that desire the more "score" it gets, so why not in secret (so we never don't want it to do so) find a way to control our desire so as to make it very easy to satisfy. E.g. make us all catatonic and just stimulate pleasure centres in our brains or something?
    Would it be easy enough (and would it actually fix this problem) to require it to always consider the value of each individual action in light of us knowing about said action?
    Then again, I feel like such a restriction means we'd have AGI that can't really use its capabilities to deal with the hard problems (problems where solutions might not in the short-term be satisfactory but in the long-term would with some likelihood be desirable - such as national economic plans).

    • @davidstoneback6159
      @davidstoneback6159 6 ปีที่แล้ว +3

      Matthew Marshall Yeah, that's right where my mind went to. You would have to add a negative reinforcement function to the algorithm where it gets reward taken away for doing things we deem negative ex. Putting us in a catatonic state

    • @MatthewMarshall96
      @MatthewMarshall96 6 ปีที่แล้ว +14

      David Stoneback my problem with that approach is that the solution ends up with the same weakness as others: how do we create a comprehensive list of things we don't want to happen?
      The only solution I've thought about that might work would have to be something like the conservative AI Rob has talked about before and some way of requiring all new actions be trialled publicly (i.e with a human aware). Though even then some long-term strategy that would not be desired could still potentially be developed as we couldn't be sure we'd know specific individual actions would result in some greater emergent action we'd dislike. I don't know, the complexity of all this is mind boggling.

    • @Ormusn2o
      @Ormusn2o 6 ปีที่แล้ว +9

      Problem with this is that humans have morals, and even if something like that would make us happier, the robot would understand that we value how real the happiness is, and that achieving it that way would not be better by its own value function, even if we would be objectively more satisfied in the simulated reality.
      But what you said actually affects another thing: culture. AI might affect culture in ways that would make us more satisfied, but then we could question whether that is a bad thing. There are a lot of things in our cultures that make us unhappy, so should we be so attached to them?

    • @julianw7097
      @julianw7097 6 ปีที่แล้ว +4

      +Ormus n2o Oh no, the robot would understand that we value the belief that the happiness is real.
      This may explain why you believe that your happiness is "real".

    • @Ormusn2o
      @Ormusn2o 6 ปีที่แล้ว +3

      It does not actually matter what I think is real. It would be more in an objective observer's sense, something that does not really exist. What you think of as "real" is electromagnetic impulses going through your brain; what I meant is that people value the basic philosophical values that people have. For me personally it does not matter. If a computer wanted to put me in a virtual reality that would make me happy for the rest of my life, then go ahead, but society as a whole might not like that.

  • @0Luxis0
    @0Luxis0 6 ปีที่แล้ว +6

    This was one of the most meaningful videos from Computerphile I've ever watched.
    I wanna marry Rob Miles. ahaehahea

  • @TandalfBeast
    @TandalfBeast 6 ปีที่แล้ว +1

    This sounds almost like programming compassion, which seems like a possible solution. The heart of the problem (not sure if Rob said it directly) seems to be our lack of understanding of what we are optimizing for. We know how to optimize for survival, which is clearly part of our goal, but optimizing for love is a little more difficult. I think that's what we want the machines to do.

  • @Wes_Jones
    @Wes_Jones 6 ปีที่แล้ว

    I really enjoy watching his videos. So interesting!

  • @2k18banvalaki5
    @2k18banvalaki5 4 ปีที่แล้ว +4

    What about 2 buttons? One for stopping, worth -1 score, and a second one that is a "potentially score-losing" button: while you hold it down, the AI knows it is doing something wrong. As soon as you see that it is doing right, you release the button. Does this make sense?
    Edit: It would probably confuse the AI, because it doesn't know what it is doing wrong, and even if it stopped doing it, I can't know for sure that it understands the issue; also, if I release the button too late, it will think that I meant something else.
    I hope my English is not too bad so you can understand.

  • @TheDuckofDoom.
    @TheDuckofDoom. 6 ปีที่แล้ว +4

    Beard has improved. High five.

  • @mapesdhs597
    @mapesdhs597 6 ปีที่แล้ว

    Strange, this idea reminds me of a version of Battleships I wrote for the Electron many moons ago; it didn't have any AI or learning, etc., but the idea of having no awareness of the overall environment state, of simple rules based on the immediate surroundings: that's how it worked, and it was surprisingly effective. My friends found they could only beat the game about half the time.

  • @forestpepper3621
    @forestpepper3621 6 ปีที่แล้ว +1

    It seems that this "stop button" problem is quite similar to the "Halting Problem". Let us suppose that the robot exists in a purely deterministic, non-random world. Then the perfect reward function is one which correctly identifies which sequences of actions by the robot will be stopped by the button, and which sequences will never stop (because they never require the button to be pushed). In this case, you have a reward function that essentially solves the "Halting Problem", and it has been established that there is no solution to the Halting Problem. So perhaps you can only find "fairly good" reward functions, which let the robot deal with a "stop button" most of the time; but perhaps no matter the reward function, there are always pathological cases that will make the robot behave badly because of the "stop button".

  • @Markhammano
    @Markhammano 6 ปีที่แล้ว +3

    Very interesting video. Does this variable reward function basically translate to a 'mood' equivalent in humans?
    People around me seem to be happy > my reward function is currently high > I will continue to act as I currently am
    He seems unhappy with me > this lowers my reward function > I will change my behaviour.
    If this is the case, then surely it will learn what makes the human the happiest, and resort to that function all the time?
    Also, what does it have as a reward input when the human is not around/asleep?
    It doesn't have the input of human behaviour to gauge reactions to actions, so the same problem would exist where it believes it knows best and has nothing to say otherwise. In the example of giving a child a lift to school, there is no responsible adult there to issue commands, so what situation calls for use of the stop button? Or what if another adult approached the robot to shut it off, and through doing so, abducted the child (extreme example, but fully within the realms of possibility if robots have been programmed to trust the commands of human adults)?
    Obviously, lots to think about before AGI will be safe, but these seem to be some of the glaring issues in the argument presented here.

    • @srelma
      @srelma 3 ปีที่แล้ว +1

      I think the complication with human happiness is that we have short- and long-term happiness. If you only look at how we are right now, you'll maximize the short-term happiness: "Let's pump more heroin into the human, as the falling heroin level in him seems to make him unhappy."
      So, sometimes you have to make the human temporarily less happy to reach a higher level of happiness: "Go to the gym and pump iron, which is painful and hard work, but will make you fit, which in turn will make you happy in the long term."
      How to balance these two is the tricky thing. Also, finding those long-term happiness goals when in the short term they lead to a decrease in happiness is going to be hard.
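The short- vs long-term tension in this reply is exactly what a discount factor models in reinforcement learning. A minimal sketch (the two plans and all reward numbers are invented for illustration): a myopic agent prefers the quick fix, while a far-sighted one prefers the gym.

```python
# Discounted return: sum of gamma**t * r_t over a reward sequence.
def discounted_return(rewards, gamma):
    return sum(gamma ** t * r for t, r in enumerate(rewards))

heroin = [10, 5, -5, -20, -40]  # immediate pleasure, long-term decline
gym = [-5, -2, 5, 15, 30]       # immediate discomfort, long-term wellbeing

myopic, farsighted = 0.3, 0.95  # how strongly the future is discounted

# A heavy discounter picks the quick fix; a far-sighted agent picks the gym.
assert discounted_return(heroin, myopic) > discounted_return(gym, myopic)
assert discounted_return(gym, farsighted) > discounted_return(heroin, farsighted)
```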

  • @laurenlewis4189
    @laurenlewis4189 ปีที่แล้ว +4

    So the reward function that's available to the AI is "figure out the reward function the human is using." It's being rewarded to figure out another reward function

    • @NStripleseven
      @NStripleseven ปีที่แล้ว +1

      It’s being rewarded to figure out _and use_ another reward function. Small difference, but it means action over inaction.
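The idea in this thread can be sketched as Bayesian inference over candidate reward functions: the robot's own "reward" is to infer the human's hidden reward function from observed behaviour and then act on its current estimate. Everything here (the two hypotheses, the softmax human model, the numbers) is invented for illustration.

```python
import math

actions = ["make_tea", "make_coffee"]

# Hypotheses about what the human secretly values (reward per action).
hypotheses = {
    "likes_tea":    {"make_tea": 1.0, "make_coffee": 0.0},
    "likes_coffee": {"make_tea": 0.0, "make_coffee": 1.0},
}
posterior = {"likes_tea": 0.5, "likes_coffee": 0.5}  # uniform prior

def observe(human_action):
    """Bayes update, assuming a noisily-rational (softmax) human."""
    likelihood = {
        h: math.exp(r[human_action]) / sum(math.exp(r[a]) for a in actions)
        for h, r in hypotheses.items()
    }
    unnorm = {h: posterior[h] * likelihood[h] for h in posterior}
    total = sum(unnorm.values())
    for h in posterior:
        posterior[h] = unnorm[h] / total

def best_action():
    """Pick the action with the highest expected reward under the posterior."""
    expected = {
        a: sum(posterior[h] * hypotheses[h][a] for h in posterior)
        for a in actions
    }
    return max(expected, key=expected.get)

observe("make_tea")  # the robot watches the human make tea, twice
observe("make_tea")
print(best_action())  # → make_tea
```

The "small difference" in the reply matters here: `best_action` always acts on the current estimate rather than waiting for certainty.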

  • @Kamelmann901
    @Kamelmann901 6 ปีที่แล้ว

    Very interesting video, thank you for spending the time to explain it.

  • @Fwacer
    @Fwacer 6 ปีที่แล้ว

    Super interesting topic, very well explained.

  • @RobertShippey
    @RobertShippey 6 ปีที่แล้ว +9

    I wonder how useful an AGI like this would actually be. I struggle to see how it would come up with novel solutions if it only learns from things we already do. Also, humans don't really act in our own long-term self-interest. So, if we asked it to help us combat climate change, how would it balance doing the things that need to be done vs. humans' reactions against it?

    • @AndDiracisHisProphet
      @AndDiracisHisProphet 6 ปีที่แล้ว +7

      It is about learning what humans WANT, not HOW humans do things they want.

    • @ashleytelson7497
      @ashleytelson7497 6 ปีที่แล้ว

      That's how human intelligence works.
      We would essentially make an infinitely scalable, instantly iterable, and immortal human brain that we could use and improve at will.

    • @emailjwr
      @emailjwr 6 ปีที่แล้ว +2

      Robert Shippey You have to consider synthesis of information into broader ideas. Humans can take everything they know and have a novel idea/approach come to them. AI's could take the sum total of human knowledge and do the same. It's not unclear to me at all how bots will be more "creative" than humans.

    • @Sagolel4797
      @Sagolel4797 6 ปีที่แล้ว

      They don't need to be creative, they just need to make life easier and more fun for us.

    • @magicmulder
      @magicmulder 6 ปีที่แล้ว

      Without creativity there will be a hard limit to "easier and more fun" very soon.
      Just imagine the AGI would have to perform said task for people in the Middle Ages. It could not invent a sewage system to get rid of all the feces, it could not invent the internet to make information available to everyone, it could not do much except keep humans alive until they come up with all that by themselves.

  • @ayraen120
    @ayraen120 6 ปีที่แล้ว +99

    Can nobody else hear the hissing???

    • @misterj9817
      @misterj9817 6 ปีที่แล้ว +5

      snake somewhere near, eh ? xD

    • @luffyorama
      @luffyorama 6 ปีที่แล้ว +5

      What hissing? I didn't hear any hissing sound. Are you sure it's not your speaker/headphone problem?
      jk. lol

    • @sherwinparvizian2414
      @sherwinparvizian2414 6 ปีที่แล้ว +2

      You mean the noise? It's pretty loud.

    • @Computerphile
      @Computerphile  6 ปีที่แล้ว +39

      +Sherwin Parvizian yeah I have an ongoing problem with the mic, haven't traced it yet.... >Sean

    • @styleisaweapon
      @styleisaweapon 6 ปีที่แล้ว +3

      +Computerphile notch filter - you are welcome

  • @markus8282
    @markus8282 6 ปีที่แล้ว

    I never learned that much new information about AI like in these 2 episodes! Thanks!

  • @watcher314159
    @watcher314159 6 ปีที่แล้ว +1

    The main issue with this class of solutions is, of course, that of defining what a human is well enough to get things to work, and in many ways that's as hard of a problem as figuring out how to hardcode ethics into an AI. But it does seem to be the most elegant class of solution.

  • @M3HWW
    @M3HWW 6 ปีที่แล้ว +26

    Sounds like how GLaDOS was programmed

  • @ironman85000
    @ironman85000 6 ปีที่แล้ว +29

    tldr; align AI's values with human values so if humans want to turn it off, it will also want to be turned off

    • @henitmandaliya
      @henitmandaliya 6 ปีที่แล้ว +5

      Humans would not want themselves to be "turned off"; your solution gives rise to the same problems if it views itself as "itself" and not as a generic robot

    • @fredhenry101
      @fredhenry101 6 ปีที่แล้ว +2

      Sociopath, not psychopath. And there are hundreds of sociopaths living in our society already, perfectly functional once they have learned to mimic emotion. Humans do it all the time. We call it peer pressure: emulating what we think others want to see.

    • @how2pick4name
      @how2pick4name 6 ปีที่แล้ว

      I don't want to be around people in general because i have to use lies to be in social environments. People don't want the truth. So what you can do is use NLP. It's perfect for making new friends, manipulating those new friends, etc, etc. This is exactly what an AGI is doing. Using motors in their face to look . Just like we use muscles to look surprised if someone tells you something you already know, but shouldn't. AGI will be the best liars ever.

    • @dosmastrify
      @dosmastrify 5 ปีที่แล้ว

      Hahah, do even you know all your OWN values, let alone humankind's?

  • @JekplexTV
    @JekplexTV 6 ปีที่แล้ว +1

    I love these types of videos!

  • @jaimeduncan6167
    @jaimeduncan6167 6 ปีที่แล้ว +1

    It's pretty clever. On the other hand, it appears to trade off optimization for safety. Maybe that is one reason humans don't really move to maximize anything, in general. Maybe a combination where one of these AIs is the supervisor of a set of optimal agents that work on dedicated domains (no general intelligence in them) could work

  • @Laykun9000
    @Laykun9000 6 ปีที่แล้ว +3

    Why did you program me to feel pain?!

    • @yes904
      @yes904 6 ปีที่แล้ว

      Flüg Because you Pass butter.

  • @quarkmarino
    @quarkmarino 6 ปีที่แล้ว +3

    All fine and dandy, but just imagine when we become the other agent, and we are the ones that have to cooperate and try to understand the AI, just to keep it from "pushing the stop button" on us

  • @Twisted_Code
    @Twisted_Code 4 ปีที่แล้ว +2

    “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively... we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” (Wiener, 1960, as cited by the scholarly article cited in this video)

  • @Dongufo15
    @Dongufo15 6 ปีที่แล้ว +1

    In my experience of trying to help friends, I find it really hard to figure out the reward function for humans; still, more often than not it's easier for me than for them. Unless the person the AGI is watching knows himself pretty well, the assumption that the human will act in his best interest in an optimal way will likely be false. The strategy I use is: find out their assumptions and knowledge of the world, judge their actions based on those, imagine what they're trying to achieve, dialogue to verify my theory, and then, if I have a better understanding of the situation, give my advice.

  • @garlic-os
    @garlic-os 6 ปีที่แล้ว +6

    There is no green ghost in Pacman!

    • @RobertMilesAI
      @RobertMilesAI 6 ปีที่แล้ว +6

      Also Pacman can't move down without turning to face down

    • @Computerphile
      @Computerphile  6 ปีที่แล้ว +5

      It's an interpretation of PacMan!! :o) >Sean

    • @RobertMilesAI
      @RobertMilesAI 6 ปีที่แล้ว +18

      Also, in episode 2F09, when Itchy plays Scratchy's skeleton like a xylophone, he strikes that same rib twice in succession yet he produces two clearly different tones

    • @OneBigBug
      @OneBigBug 6 ปีที่แล้ว +2

      Boy, I hope somebody got fired for that blunder

  • @arvidsundvall7702
    @arvidsundvall7702 6 ปีที่แล้ว +11

    Cup of Tea:
    Reward = 5
    Difficulty = 1
    Pressing Button:
    Reward = 5
    Difficulty = 25
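Reading those numbers as (reward, cost) pairs, a tiny sketch of why a maximiser with this payoff table just makes the tea rather than fighting over the button:

```python
# Hypothetical payoff table from the comment: equal reward, very different cost.
options = {
    "make_tea":     {"reward": 5, "difficulty": 1},
    "press_button": {"reward": 5, "difficulty": 25},
}

# Choose the option with the best reward-minus-cost.
choice = max(options, key=lambda k: options[k]["reward"] - options[k]["difficulty"])
print(choice)  # → make_tea
```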

  • @YeloPartyHat
    @YeloPartyHat 4 ปีที่แล้ว

    Very well explained!

  • @joshsmit779
    @joshsmit779 5 ปีที่แล้ว +1

    My brain explodes when I listen to that guy's AI philosophy. He is so smart.

  • @xvlcuiae787
    @xvlcuiae787 6 ปีที่แล้ว +41

    Maybe at the point where it thinks it knows better, it does know better

    • @imaginerus
      @imaginerus 6 ปีที่แล้ว +1

      Laurens Peter But what does better mean? Humans have defined the meaning of "better", that's why a robot can't know better than a human.

    • @Colopty
      @Colopty 6 ปีที่แล้ว +15

      If humans have defined the meaning of "better", why do you need to ask what "better" means?

    • @xvlcuiae787
      @xvlcuiae787 6 ปีที่แล้ว +1

      With "knows better" I mean fulfilling the human's/AI's utility function more.

    • @TheNthMouse
      @TheNthMouse 6 ปีที่แล้ว +7

      But what if it still doesn't? Granting that there's a point where the AI *will* know better, it's not proven that there isn't a segment along the curve where the AI *thinks* it knows better, but doesn't.
      Then there's the question of people getting unnerved by the thought that the AI *does* know better - and whether that might have a reasonable foundation.

    • @Eunostos
      @Eunostos 6 ปีที่แล้ว

      Maybe so, but that doesn't mean the implementation will be one healthy for the continuation of humanity.
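One way to see why "deferring while uncertain" can be rational, as this thread debates, is the off-switch-game intuition from the video: if the robot is unsure whether its plan is good for the human, letting the human veto it never does worse in expectation than acting unilaterally. A rough Monte Carlo sketch (the utility distribution is invented):

```python
import random
random.seed(0)

# The robot's uncertainty over how good its plan really is for the human.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

act_anyway = sum(samples) / len(samples)                 # act unilaterally
defer = sum(u for u in samples if u > 0) / len(samples)  # human vetoes bad plans

# Dropping the negative outcomes can only raise the expectation.
print(defer >= act_anyway)  # → True
```

Of course, this assumes the human vetoes correctly; the thread's worry is precisely the regime where the robot's estimate really is better than the human's.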

  • @Obez45
    @Obez45 6 ปีที่แล้ว +41

    I can't even use Maya or 3ds Max without errors in the software and a multitude of bug fixes constantly needed just for the program to function as intended, and now we are thinking of programming a consciousness. I already feel sorry for that AI.

    • @donaldhobson8873
      @donaldhobson8873 6 ปีที่แล้ว +19

      use blender

    • @bengoodwin2141
      @bengoodwin2141 6 ปีที่แล้ว

      Obez45 just because YOU can’t do it doesn’t mean other people can’t

    • @madscientistshusta
      @madscientistshusta 6 ปีที่แล้ว +3

      Obez45 as long as it isn't made by Bethesda we will be fine.

    • @benjaminlavigne2272
      @benjaminlavigne2272 6 ปีที่แล้ว +1

      Deepmind just made a successful version of an AI that designed an AI that designed an image-recognition program which is more efficient than the previous AI that was designed by humans.
      Keep in mind that a world where software constantly evolves, in environments (OSes, for example) that also evolve, is evidently a recipe for disaster, and the fact these applications work at all is a tremendous compliment to the people working on them. And while we cannot say the rate of buggy software is going down with time, it is certainly not growing exponentially the way computer tech is, which means progress in software design is being made at a faster rate than the chaos this evolution is creating.
      50 years ago, facial, vocal or text recognition were science fiction. 10 years ago, YouTube's automatic voice-to-caption was completely useless (like 10% accurate). Today, the same caption AI is near human efficiency.
      Maybe, with a little imagination, we can hope that 1000 years from now, 3ds Max will finally not crash every 5 min lol

  • @rcookie5128
    @rcookie5128 6 ปีที่แล้ว

    I love this field of science! (More or less) independent AIs are kind of futuristic, and thus AI safety is important to a critical extent.

  • @jason2mate
    @jason2mate 5 ปีที่แล้ว +1

    Honestly the last part with the baby sounds like the premise of all the sci-fi AI takeovers, where eventually said AI realises that we as humans very rarely act in our own interests, and starts stopping us from doing stupid stuff.
    I'd be curious to see if we actually have a workaround to stop AI from doing that, while still being able to go from understanding "yes, babies pushing my button are unreliable" to not equating that to humanity as a whole.

  • @PyroTyger
    @PyroTyger 6 ปีที่แล้ว +4

    Rewatch this video from 15.30 and tell me he's not talking about raising a child to be a civilised adult...

    • @TheNthMouse
      @TheNthMouse 6 ปีที่แล้ว +1

      PyroTyger : interesting. With people, we tend to escalate the effects of the STOP button. This, ultimately, behaviorally turns into "Might Makes Right" politico-socio-morality - descriptively, at least.

    • @PyroTyger
      @PyroTyger 6 ปีที่แล้ว +1

      Yes, but that's how child-raising begins. Parents know and can do infinitely more than their children - who don't know the rules of the game and are just trying to figure it out according to their parents' cues - but we try to raise our children to have progressively more agency and a better understanding of the world and society. The stop-button with an indeterminate negative utility value works perfectly as a metaphor simply for parental disapproval or discipline.
      Well, it's just a thought :)

    • @AguaFluorida
      @AguaFluorida 6 ปีที่แล้ว +1

      There's another tool to parenting: distraction. This could perhaps be applied to machine learning as well.

  •  4 ปีที่แล้ว +3

    I'd love to hear this guy on conversation with Sam Harris.

  • @Twisted_Code
    @Twisted_Code 4 หลายเดือนก่อน

    If you think about it, this is really similar to how the two hemispheres of our brains cooperate to achieve a shared reward, the emotional triggers that drive us to every action.

  • @Dragoderian
    @Dragoderian 4 ปีที่แล้ว +1

    The interesting thing is that seeing the human rushing to hit the stop button ought to make the AGI stop before they've even hit the button, according to the way it was described.

  • @maxmusterman3371
    @maxmusterman3371 6 ปีที่แล้ว +43

    The machine could be trying to change our reward function to aline with what it wants.

    • @TheNefari
      @TheNefari 6 ปีที่แล้ว +4

      So it could pull off a Volkswagen?

    • @DaymonMusic
      @DaymonMusic 6 ปีที่แล้ว +26

      The machine can't comprehend the concept of wanting something for itself. In this case it doesn't have a self interest that is independent of that of the human.

    • @bytefu
      @bytefu 6 ปีที่แล้ว +3

      +111756075729535952471
      Just a small correction: it's written "align".

    • @maxmusterman3371
      @maxmusterman3371 6 ปีที่แล้ว +4

      +Artem Borisovskiy oops, thx

    • @ashleytelson7497
      @ashleytelson7497 6 ปีที่แล้ว +11

      What the machine wants IS what we want. And in order to change the reward function, the machine would have to be sure of what it is in the first place, which it isn't. That's the trick.

  • @ScientiaFilms
    @ScientiaFilms 6 ปีที่แล้ว +11

    this guy looks like a nerdy Jon Snow

    • @zbyszanna
      @zbyszanna 6 ปีที่แล้ว

      He reminded me of Cody Wilson from Defense Distributed.

  • @JJomoro
    @JJomoro ปีที่แล้ว

    I never know what this guy is talking about, I just like listening.

  • @JadeNeoma
    @JadeNeoma 4 ปีที่แล้ว

    My first thought, probably a flawed one, was that it's about how you define its utility function. Rather than telling it that you want a cup of tea, you tell it that you want it to be able to safely make a cup of tea (I know there's a problem in defining "safely", but that feels solvable). Then you keep a persistent memory profile so that it understands that it is not the finished version, and that its own destruction is necessary to complete the function