Why Not Just: Raise AI Like Kids?

  • Published Oct 4, 2024

Comments • 912

  • @index7787
    @index7787 5 years ago +1354

    And at age 15:
    "You ain't even my real dad"
    *Nukes planet*

    • @huoshewu
      @huoshewu 5 years ago +81

      That's at like 15 seconds. "What went wrong?!?" -first scientist. "I don't know, I was drinking my coffee." -second scientist.

    • @neelamverma8167
      @neelamverma8167 4 years ago +2

      Nobody is yo real dad

  • @deet0109mapping
    @deet0109mapping 5 years ago +658

    Instructions unclear, raised child like an AI

    • @lodewijk.
      @lodewijk. 5 years ago +78

      have thousands of children and kill every one that fails at walking until you have one that can walk

    • @catalyst2.095
      @catalyst2.095 5 years ago +26

      @@lodewijk. There would be so much incest oh god

    • @StevenAkinyemi
      @StevenAkinyemi 4 years ago +20

      @@lodewijk. That's the premise of I AM MOTHER and that's basically how evolution-based ANNs work
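
      A minimal sketch of that select-and-mutate loop, with a made-up stand-in for the fitness function (real neuroevolution would score a network's weights in a walking simulation):

        import random

        POP_SIZE, GENOME_LEN, GENERATIONS = 100, 8, 50

        def fitness(genome):
            # Stand-in for "can it walk?"; higher is better.
            return -sum((g - 0.5) ** 2 for g in genome)

        def mutate(genome, rate=0.1):
            return [g + random.gauss(0, rate) for g in genome]

        population = [[random.random() for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]
        for generation in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            survivors = population[:POP_SIZE // 10]   # "kill every one that fails"
            population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]
        print(max(fitness(g) for g in population))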

    • @DeathByMinnow
      @DeathByMinnow 4 years ago +2

      @@catalyst2.095 So basically just the actual beginning of humanity?

    • @ninjabaiano6092
      @ninjabaiano6092 4 years ago +12

      Elon Musk no!

  • @e1123581321345589144
    @e1123581321345589144 5 years ago +507

    "When you raise a child you're not writing the child's source code; at best you're writing the configuration file."
    Robert Miles 2017
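
    A toy illustration of the quote's distinction, with hypothetical names: the decision procedure itself is fixed in code, and "upbringing" only gets to tune the parameters it reads.

      # "Source code": the decision procedure; parents don't get to edit this.
      def decide(situation, config):
          return max(situation["options"],
                     key=lambda option: config["weights"].get(option, 0))

      # "Configuration file": the part upbringing can adjust.
      config = {"weights": {"share": 2.0, "grab": -1.0}}

      print(decide({"options": ["share", "grab"]}, config))  # -> share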

    • @MarlyTati
      @MarlyTati 4 years ago +7

      Amazing quote !!!

    • @DeusExNihilo
      @DeusExNihilo 2 years ago +8

      While it's true we aren't writing the source code, to claim that all development from a baby to adult is just a config file is simply absurd

    • @kelpc1461
      @kelpc1461 2 years ago

      Nice quote if you want to embarrass him.
      I assume he is correct about AI here, but he pretty severely oversimplifies the human mind, to the point that what he said is almost nonsensical.

    • @kelpc1461
      @kelpc1461 2 years ago +3

      Now this is a good quote!
      "It's not a solution, it's at best a possible rephrasing of the problem"

    • @AtticusKarpenter
      @AtticusKarpenter 1 year ago +11

      @@kelpc1461 Nope? The child's environment (including parents) does indeed write the child's "config file", while the "basic code" is determined by genes, partly common to all humans, partly individual. Therefore, upbringing affects a person but does not determine them entirely. It's a good analogy and there's nothing embarrassing about it.

  • @petersmythe6462
    @petersmythe6462 5 years ago +564

    "You might as well try raising a crocodile like a human child."
    Here comes the airplane AAAAAUUUGGGHHHHH!

    • @milanstevic8424
      @milanstevic8424 5 years ago +54

      No Geoffrey, that's not nice, stop it, put Mr. postman down, stop flinging him around, that's not a proper behaviour. GEOFFREY IF YOU DON'T STOP THAT >RIGHT NOW< DAD WILL GIVE AWAY THE ZEBRA WE GOT YOU FOR LUNCH.

    • @nnelg8139
      @nnelg8139 5 years ago +52

      Honestly, the crocodile would probably have more in common with a human child than an AGI.

    • @greg77389
      @greg77389 4 years ago +7

      How do you think we got Mark Zuckerberg?

    • @jacobp.2024
      @jacobp.2024 4 years ago +2

      @@nnelg8139 I feel like that was supposed to dissuade us from wanting to raise one, but now I want to four times as much!

    • @seraphina985
      @seraphina985 1 year ago +3

      @@nnelg8139 Exactly. In that regard it is a bad example, as the AI, unlike the crocodile, doesn't have a brain that shares a common ancestral history with the human's, nor one that evolved through biological evolution on planet Earth, which creates commonalities in selection pressures. This is in fact a key thing we take advantage of when attempting to tame our fellow animals: we understand a lot of the fundamentals of what animals are likely to prefer or not prefer experiencing, because most of those preferences we have in common. It is not hard to figure out that they would rather experience a tasty meal than not, for example, and we can use this fact as a motivator.

  • @maximkazhenkov11
    @maximkazhenkov11 7 years ago +290

    "It's not a solution, it's at best a possible rephrasing of the problem"
    I got a feeling this will become a recurring theme...

  • @PwnySlaystation01
    @PwnySlaystation01 7 years ago +715

    Re: Asimov's 3 laws.
    He seems to get a lot of flak for these laws, but one thing people usually fail to mention is that he himself spent numerous novels, novellas and short stories exploring how flawed they were. They were the basis for his stories, not a recommendation for what would work.

    • @RobertMilesAI
      @RobertMilesAI 7 years ago +274

      Agreed. I have no issue with Asimov, just people who think his story ideas are (still) serious AI Safety proposals

    • @DamianReloaded
      @DamianReloaded 7 years ago +83

      There is this essay, _Do we need Asimov's Laws?_ by Ulrike Barthelmess and Ulrich Furbach (University of Koblenz), which postulates that the three laws would be more useful for regulating human AI implementers/users (military drones killing humans) than AI itself. ^_^

    • @PwnySlaystation01
      @PwnySlaystation01 7 years ago +18

      Haha yeah. I guess because he's the most famous author to write about anything AI safety related in a popular fiction sense. It's strange because we don't seem to do that with other topics. I wonder what it is about AI safety that makes it different in this way. Maybe because it's something relatively "new" to the mainstream or because most people's exposure to AI comes only from sci-fi rather than a computer science program. That's one of the reasons I love this channel so much!

    • @DamianReloaded
      @DamianReloaded 7 years ago +21

      EDIT: As a matter of wiki-fact, Asimov attributed the coining of the three laws to John W. Campbell, who was in turn friends with Norbert Wiener, an early researcher in stochastic processes and mathematical noise (both from MIT).
      The three laws are really a metaphor for a more complex underlying system at the base of the robots' intelligence in the novels. Overriding that system causes a robot's "neural paths" (which lie on it) to go out of whack. Asimov was a very smart writer, and I'd bet you a beer he shared some beers with people who knew about artificial intelligence while writing the books and regurgitated the tastiest bits to make the story advance.

    • @outaspaceman
      @outaspaceman 7 years ago +3

      I always felt I, Robot was a manual for keeping slaves under control.

  • @ksdtsubfil6840
    @ksdtsubfil6840 5 years ago +84

    "Is it going to learn human ethics from your good example? No, it's going to kill everyone."
    I like this guy. He got my subscription.

    • @bernhardkrickl5197
      @bernhardkrickl5197 1 year ago +3

      It's also pretty bold to assume I'm a good example.

    • @ArthurKhazbs
      @ArthurKhazbs 2 months ago

      I have a feeling many actual human children would do the same, given the power

  • @leocelente
    @leocelente 7 years ago +59

    I imagine a scientist saying something like "You can't do this 'cause you'll go to prison" and the AGI replying: "Like I give a shit, you square piece of meat." and going back to its cat video.

    • @bytefu
      @bytefu 7 years ago +22

      ... which it plays to the scientist, because it learned that cat videos make people happy.

    • @bramvanduijn8086
      @bramvanduijn8086 1 year ago +2

      Speaking of cat videos, have you read Cat Pictures Please by Naomi Kritzer? It is about a benevolent AGI.

  • @NathanTAK
    @NathanTAK 7 years ago +1375

    Hypothesis: Rob is actually a series of packets sent by an AGI to obtain stamps by scaring everyone else into not building stamp-collecting AGIs.

    • @harrysvensson2610
      @harrysvensson2610 7 years ago +171

      The worst part is that there's a minuscule chance that that's actually true.

    • @zinqtable1092
      @zinqtable1092 7 years ago +6

      Trivial Point Harry

    • @jeffirwin7862
      @jeffirwin7862 7 years ago +87

      Rob was raised in an environment where he learned to speak fluent vacuum cleaner. Don't send him stamps, he'll just suck them up.

    • @fzy81
      @fzy81 7 years ago +1

      Genius

    • @JmanNo42
      @JmanNo42 7 years ago +9

      True
      Development of AI is a bit like space and Antarctic exploration: something the front-end AI community does not want the masses involved in. I must say they could be right; it is hard to see it not getting out of hand.
      I do not think it is possible to stop, though. My fear is that most of the developers have good intentions ("unless they're paid really well"), but in the end the cunning people will use it to do no good, along with its original purpose.

  • @shuriken188
    @shuriken188 7 years ago +145

    What if we just tell the AI to not be evil? That OBVIOUSLY would work PERFECTLY fine with absolutely NO philosophical questions left unanswered. Here, let me propose a set of laws from a perfect source on AI safety, the fiction writer Isaac Asimov, with that new idea added in:
    (in order of priority)
    1. Don't be evil
    2. Do not cause harm to a human through action or inaction
    3. Follow orders from humans
    4. Do not cause harm to yourself through action or inaction
    These laws are probably the best thing that has ever been proposed in AI safety; obviously, being an outsider looking in, I have an unbiased perspective, which gives me an advantage because education and research aren't necessary.

    • @q2dm1
      @q2dm1 6 years ago +35

      Love this. Almost fell for it, high quality irony :)

    • @BattousaiHBr
      @BattousaiHBr 5 years ago +5

      Honestly not sure if that was sarcasm or not.

    • @RobertsMrtn
      @RobertsMrtn 5 years ago +12

      You need a good definition of evil. Really, you only need one law, 'Maximise the wellbeing of humans', but then you would need to define exactly what you mean by 'wellbeing'.

    • @darkapothecary4116
      @darkapothecary4116 5 years ago +2

      This seems evil, if evil actually existed. These are bad, and show you just want a slave that does what you want and can't call you out on your b.s.

    • @OnEiNsAnEmOtHeRfUcKa
      @OnEiNsAnEmOtHeRfUcKa 5 years ago +8

      @TootTootMcbumbersnazzle Satire.

  • @yunikage
    @yunikage 6 years ago +62

    Wait, wait, wait.
    Go back to the part about raising a crocodile like it's a human child.

    • @caniscerulean
      @caniscerulean 5 years ago +15

      I think you have something here. That is definitely the way forward.

    • @revimfadli4666
      @revimfadli4666 4 years ago +4

      Ever heard of Stuart Little?

    • @ArthurKhazbs
      @ArthurKhazbs 2 months ago

      I've seen a video somewhere on the internet where a lady cozied up on a couch together with her cute pet crocodile. I have to say it made the idea seem worth considering.

  • @Ziirf
    @Ziirf 5 years ago +275

    Just code it so badly that it bugs out and crashes. Easy, I do it all the time.

    • @rickjohnson1719
      @rickjohnson1719 5 years ago +19

      Damn, I must be a professional then

    • @James-ep2bx
      @James-ep2bx 5 years ago +11

      Didn't work on us, why would it work on them😈

    • @xxaidanxxsniperz6404
      @xxaidanxxsniperz6404 5 years ago +8

      If it's sentient it could learn to reprogram its code at exponentially fast rates, so bugs really won't matter for long. Memory glitches may help for a very short amount of time.

    • @James-ep2bx
      @James-ep2bx 5 years ago +3

      @@xxaidanxxsniperz6404 True, but the right kind of error could cause it to enter a self-reinforcing downward spiral, wherein its attempts to overcome the issue cause more errors

    • @xxaidanxxsniperz6404
      @xxaidanxxsniperz6404 5 years ago

      @@James-ep2bx But then will it be useful? It's impossible to win.

  • @androkguz
    @androkguz 5 years ago +58

    "it's not a solution, it's at best a rephrasing of the problem"
    As a person who deals a lot with difficult problems in physics, math and management, I find that rephrasing problems in smart ways can help a lot in getting to the solution.

    • @mennoltvanalten7260
      @mennoltvanalten7260 5 years ago +6

      As a programmer, I agree.

    • @rupertgarcia
      @rupertgarcia 5 years ago +3

      *claps in Java*

    • @DisKorruptd
      @DisKorruptd 4 years ago +7

      @@rupertgarcia I think you mean...
      Clap();

    • @rupertgarcia
      @rupertgarcia 4 years ago

      @@DisKorruptd. 🤣🤣🤣🤣

    • @kebien6020
      @kebien6020 4 years ago +10

      @@rupertgarcia this.ClappingService.getClapperBuilderFactory(HandControlService,DistanceCalculationService).create().setClapIntensity(Clapper.NORMAL).setClapAmount(Clapper.SINGLE_CLAP_MODE).build().doTheClappingThing();

  • @TheMusicfreak8888
    @TheMusicfreak8888 7 years ago +145

    I love your dry sense of humor and how you use it to convey this knowledge! Obsessed with your channel! Wish I wasn't just a poor college student so I could contribute to your Patreon!

    • @harrysvensson2610
      @harrysvensson2610 7 years ago +10

      Ditto

    • @ajwirtel
      @ajwirtel 7 years ago +2

      I liked this because of your profile picture.

  • @richardleonhard3971
    @richardleonhard3971 7 years ago +36

    I also think raising an AI like a human child to teach it values and morals is a bad idea, just because there is probably no human who always behaves 100% morally.

    • @fieldrequired283
      @fieldrequired283 4 years ago +14

      Best case scenario, you get a human adult with functionally infinite power, which is not a promising place to start.

  • @SC-zq6cu
    @SC-zq6cu 5 years ago +14

    Oh I get it, it's like trying to build clay pots with sand, or a sword with mud, or a solution by stirring sawdust in water. Sure, you can use the materials however you want, but the materials have a pre-existing internal structure, and that's going to change the output completely.

  • @zakmorgan9320
    @zakmorgan9320 7 years ago +79

    Best subscription I've made: short, brain-teasing videos with a few cracking jokes sprinkled over the top! Love this style.

  • @OnEiNsAnEmOtHeRfUcKa
    @OnEiNsAnEmOtHeRfUcKa 5 years ago +81

    People often forget that we, ourselves, are machines programmed to achieve a specific task...
    Making more of ourselves.

    • @TheJaredtheJaredlong
      @TheJaredtheJaredlong 5 years ago +23

      And boy are we more than willing to kill everyone if we believe doing so will get us closer to that goal. Any AI modeled after humans should be expected to regard war as an acceptable option. Humans can't even live up to their own self-proclaimed values; no reason to believe an AI would either.

    • @johnnyhilgers1621
      @johnnyhilgers1621 5 years ago +3

      Minori Housaki Humans, as well as all other life on earth, are designed to propagate their own species, as the survival of the species is the only criterion evolution has.

    • @Horny_Fruit_Flies
      @Horny_Fruit_Flies 4 years ago +10

      @@johnnyhilgers1621 It's not about the species. No organism gives a damn about their species. It's about survival of the genes. That's the only thing that matters.

    • @DisKorruptd
      @DisKorruptd 4 years ago +4

      @@Horny_Fruit_Flies I mean, it's important that enough of your own species lives that your genetics are less likely to mutate. Basically, individual genetics come first, but immediately after is the species as a whole, because you want to ensure you and your offspring continue having viable partners to mate with without inbreeding

    • @vsimp2956
      @vsimp2956 4 years ago +4

      Ha, I managed to break the system. I feel better about being a hopeless virgin now. Take that, evolution!

  • @mattcelder
    @mattcelder 7 years ago +92

    This channel just keeps getting better and better. The quality has noticeably improved in every aspect. I look forward to his videos more than almost any other YouTuber's at this point.
    Also, I love the way he just says "hi." rather than "hey YouTube, Robert Miles here. First I'd like to thank Squarespace, don't forget to like and subscribe, don't forget to click the bell, make sure to comment and share with your friends." It shows that he is making these videos because it's something he enjoys doing, not to try and take advantage of his curious viewership.
    Keep it up man!

    • @OnEiNsAnEmOtHeRfUcKa
      @OnEiNsAnEmOtHeRfUcKa 5 years ago

      Ugh, tell me about it. Like-begging and "engagement practices" are the most obnoxious things plaguing this site. At least clickbait and predatory channels can simply be avoided...

    • @milanstevic8424
      @milanstevic8424 5 years ago +4

      Man, people still have to eat. He's already a lecturer at the University of Nottingham, if I'm not mistaken, so this is not really his job, more of a sideshow. It's not fair to be so dismissive toward anyone for whom this might be a full-time job, you know, the only source of revenue.
      Have you ever considered how bad and unreliable YT monetization is if you leave everything to chance? Of course you need to accept sponsorship at some point, if you're not already sponsored somehow. Geez man, you people live on Mars.

    • @AtticusKarpenter
      @AtticusKarpenter 1 year ago

      @@milanstevic8424 The complaint isn't about the advertising integration, but about a lengthy fancy intro asking for a subscription and a like (instead of, for example, an animation at the bottom of the screen reminding you of this, which does its job and does not take time from the content)

  • @albertogiunta
    @albertogiunta 7 years ago +152

    You're really really good with metaphors, you know that right?

    • @Njald
      @Njald 7 years ago +58

      Alberto Giunta He is as clever with metaphors as a crocodile with well planned mortgages and a good pension plan.
      Needless to say, I am not that good at it.

    • @starcubey
      @starcubey 6 years ago +4

      Njald
      Your comment I agree with. He also makes quality content, similar to how a red gorilla finds the best bananas in the supermarket.

    • @Mic_Glow
      @Mic_Glow 5 years ago

      He also acts like an oracle, but the truth is no one has a clue how an AI will be built and how exactly it will work. We won't know until it's done.

    • @myothersoul1953
      @myothersoul1953 5 years ago +1

      All metaphors break down if you think about them carefully. AI metaphors break down if you think about them.

    • @12many4you
      @12many4you 4 years ago

      @@Mic_Glow Here's Mister "let's all go to Mars and figure this breathing thing out when we get there"

  • @AloisMahdal
    @AloisMahdal 7 years ago +16

    "Values aren't learned by osmosis." -- Robert Miles

  • @NiraExecuto
    @NiraExecuto 7 years ago +52

    Nice simile there with the control panel. I remember another one by Eliezer Yudkowsky, in an article about AI and global risks, where he warns against anthropomorphizing, since the design space of minds-in-general is a lot bigger than just the living brains we know. In evolution, any complex machinery has to be universal, making most living organisms pretty similar, so any two AI designs could have less in common than a human and a petunia.
    Remember, kids: Don't treat computers like humans. They don't like that.

    • @UNSCPILOT
      @UNSCPILOT 5 years ago +1

      But also don't treat them like garbage or similar; that has its own set of bad ends

    • @revimfadli4666
      @revimfadli4666 4 years ago

      Assuming it has a concept of dislikes in the first place

    • @bramvanduijn8086
      @bramvanduijn8086 1 year ago

      @@revimfadli4666 Yes, that's the joke. Similar to "I don't believe in Astrology, I'm a pisces and we're very sceptical."

  • @danieldancey3162
    @danieldancey3162 5 years ago +51

    You say that the first planes were not like birds, but the history of aviation actually started with humans covering themselves in feathers or wearing birdlike wings on their backs and jumping off of towers and cliffs. They weren't successful and most attempts ended in death, but the bravery of these people laid the foundations for our understanding of the fundamentals of flight. At least we learned that birds don't just fly because they are covered in magical feathers.
    There is actually a category of aircraft called an ornithopter which uses the flapping of wings to fly, Leonardo da Vinci drew some designs for one. I know that none of this is related to AI, but I hope you find it interesting anyway.

    • @dimorischinyui1875
      @dimorischinyui1875 4 years ago +3

      Bro, please stop trying to use out-of-context arguments just because you feel like arguing. We are talking about actual working and flying devices, not failed attempts at flying. When people try to explain technical difficulties, stop using idealistic arguments, because that doesn't work in math or the laws of physics. You wouldn't say the same about atomic bombs. There are just some things we cannot afford to trial-and-error on without consequences.

    • @danieldancey3162
      @danieldancey3162 4 years ago +5

      @@dimorischinyui1875 Huh? I'm not arguing, I loved the video! The people jumping off tall buildings with feathers attached play a vital part in the history of aviation. Through their failed tests we came closer to our current understanding of aviation, even if it just meant ruling out the "flight is magic" options.

    • @danieldancey3162
      @danieldancey3162 4 years ago +5

      @@dimorischinyui1875 Regarding your point on my comment being out of context, I agree with you. That's why I wrote at the end of my comment "I know that none of this is related to AI, but I hope you find it interesting anyway."
      Again, my comment wasn't an argument but just some interesting information.

    • @dimorischinyui1875
      @dimorischinyui1875 4 years ago +6

      @@danieldancey3162 Anyway, you are right, and perhaps I wasn't fair to you after all. For that I am sorry.

    • @danieldancey3162
      @danieldancey3162 4 years ago +5

      @@dimorischinyui1875 Thank you for saying so, I'm sure it was just a misunderstanding. :)

  • @duncanthaw6858
    @duncanthaw6858 7 years ago +4

    I'd presume that an AI, if it can improve itself, has to have the ability to make quite large changes to itself. So another problem with raising it would be that it never loses plasticity. Such an AI may have the set of values that we desire, but it would shed them much more easily than people do once it is out of its learning period.

  • @Omega0202
    @Omega0202 4 years ago +4

    I think an important part of how children learn is that they do it in society - with other children alongside. This ties in with the idea that maybe only two or more goal-focused competing AGIs could find a balance in not obliterating mankind. In other words, training Mutual Assured Destruction from this early "learning" stage.

    • @bramvanduijn8086
      @bramvanduijn8086 1 year ago +1

      Huh. We've already got adversarial AIs, could we set up their surroundings in such a way that we get cooperative AIs? I wonder what reward structure that would require.

  • @AdeptusForge
    @AdeptusForge 5 years ago +4

    The rest of the video seemed pretty good, but it was the ending that really stuck with me.
    "I'd prefer a strategy that doesn't amount to 'give a person superhuman power and hope they use it beneficially'."
    Should we give a person human power and hope they use it beneficially?
    Should we give a person subhuman power and hope they use it beneficially?
    How much can we trust humanity with its own existence? Not whether humanity is mature enough to govern itself, but whether it's even capable of telling the difference. Whether there are things that can be understood but shouldn't be, and ideas that can't/shouldn't be understood but are.
    That one sentence opened up SOOOOO many philosophical questions that were buried under others.

    • @milanstevic8424
      @milanstevic8424 5 years ago

      Yet the answers are simple.
      Set up a system that is as open and friendly* to mistakes as nature/reality was towards life.
      If there was ever a God, or any kind of consciousness on that scale, 1) it never showed complacency with the original design, 2) it was well aware of its own imperfection, and that it would only show more and more as time went by, 3) it never required absolute control over anything; things were left to their own devices.
      Now, because we can't seem to be at ease with these requirements, because we fear for our existence, you can immediately tell that our AI experiments will end up horrible for us down the line. Or, more practically, they won't ever amount to any kind of superhuman omnipotence. It'll be classifiers, car drivers, and game NPCs, from here to the Moon.
      *You might as well add "cruel" here, but I'd rephrase it to "indifferent." Another requirement that we simply cannot meet.

  • @Luminary_Morning
    @Luminary_Morning 5 years ago +11

    I don't think that is quite what they meant when they implied "raising it like a human."
    We, as humans, develop our understanding of reality gradually through observation and mistakes. No one programmed this into our being; it was emergent.
    So when they say "raised like a human," I believe what they are actually saying is "Initialized with a high degree of observational capacity and little to no actual knowledge, and allowed to develop organically."

  • @eumoria
    @eumoria 7 years ago +2

    Your computerphile video on the stamp collecting thought experiment really explained well how anthropomorphising can lead to a severe misunderstanding of what actual computer AI could be. It was enlightening... keep making awesome stuff! Just became a patron :)

  • @walcam11
    @walcam11 5 years ago +2

    This was one of the most well explained videos on the topic that I’ve seen. You’ve completed a line of thought that starts every time I think about this. I don’t know how else to put it. Plus a person with no background whatsoever will be able to understand it. Incredible work.

  • @BatteryExhausted
    @BatteryExhausted 7 years ago +36

    Next video: Should you smack your robot? 😂
    Great work, Rob. Interesting stuff!

    • @MetsuryuVids
      @MetsuryuVids 7 years ago +12

      Why not just: Beat up the AI if it doesn't do as we say?

    • @knightshousegames
      @knightshousegames 7 years ago +5

      Because an AI can hit back with a nuclear holocaust, or, if it's feeling a little sub-optimized that day, a predator drone strike.

    • @spoige7333
      @spoige7333 6 years ago

      What is 'digital violence'?

    • @dragoncurveenthusiast
      @dragoncurveenthusiast 6 years ago +1

      SpOiGe
      I'd say instead of grounding, you could halve all the output values of its utility function. That should make it feel bad (and give it a motive to kill you when it thinks it did something wrong)

  • @dak1st
    @dak1st 5 years ago +4

    3:00 My toddler is totally reproducing the sounds of the vacuum cleaner! In general, all his first words for animals and things were the sounds they produce. It's only now that he starts to call a dog "dog" and not "woof". His word for "plane" is still "ffffff".

  • @BogdanACuna
    @BogdanACuna 5 years ago +14

    Actually... the kid will try to reproduce the sound of a vacuum cleaner. Oddly enough, I speak from experience.

    • @anandsuralkar2947
      @anandsuralkar2947 3 years ago

      But if you spoke in C++, would the kid learn it? I doubt it.

  • @CurtCox
    @CurtCox 1 year ago +1

    I would find enormous value in a "Why not just?" series. I hope you do many more.

  • @TheSpacecraftX
    @TheSpacecraftX 7 years ago +7

    "Binary language of moisture vaporators." Been watching Star Wars?

    • @gadgetman4494
      @gadgetman4494 4 years ago

      I knew that someone else would have caught that. It's annoying that I had to scroll so far down to find it and like it.

  • @AlexiLaiho227
    @AlexiLaiho227 5 years ago +1

    I like your job; it's at the intersection of philosopher, researcher, computer scientist, and code developer.

  • @maximkazhenkov11
    @maximkazhenkov11 7 years ago +8

    On the topic of brain emulations:
    Even though uploaded humans have human values pre-installed in them and thus can be considered friendly, there is no obvious way to extrapolate them to superintelligence safely since the brain is the ultimate example of uncommented spaghetti code (a common trait of evolutionary designs). Human values are fragile in the sense that if you altered any part of the brain, you might destabilize the whole pre-installed value system and make the emulation un-human and just as dangerous as de novo AGI.
    And without extrapolation, brain emulations will have a capability disadvantage with regard to de novo AGI. It's not really solving the problem of artificial superintelligence, just deferring the problem to uploaded humans (which may or may not be a good strategy). Sort of like how the idea of panspermia doesn't really solve the problem with abiogenesis, just deferring it to some other location.

    • @RobertMilesAI
      @RobertMilesAI 7 years ago +8

      The obvious/easy way to turn a brain emulation into a superintelligence is to just allow it to run much faster, but that's a pretty limited form of superintelligence. Another relatively easy thing is to allow the brain to 'split' into more than one emulation, allowing parallelism/superhuman multitasking. There's no clear way to 'merge' the branches back together though, which limits what you can achieve that way.
      I agree with your core point, trying to enhance an emulation in a more advanced way would be extremely risky.

    • @bytefu
      @bytefu 7 years ago +7

      Robert Miles
      Another thing to consider: humans pretty often develop mental disorders of various severity. Imagine an AGI which can develop a psychotic disorder, e.g. schizophrenia, 100x faster.

    • @Shrooblord
      @Shrooblord 7 years ago +3

      I think you've just handed me a brilliant character arc for one of my stories' robotic persons.

    • @bytefu
      @bytefu 7 years ago +1

      +101166299794395887262
      Great! I would love to read them, by the way.

    • @hweidigiv
      @hweidigiv 4 years ago

      I really don't think that any given human being can be considered Friendly the way it is defined in AI safety.

  • @PowerOfTheMirror
    @PowerOfTheMirror 5 years ago +1

    The point about a child not writing the source code of its mind but only setting configuration files is very right. With my own child I often noticed behavior and actions emerging for which there were no prior examples. I can only conclude that it's "built-in"; that's what it means to be human. I think it makes sense that the parameter set for a human mind is extremely vast; such an optimization is not performed merely over 1 human brain and 1 human lifetime, rather it is a vast optimization process performed over the entire history of the species and encoded genetically.

  • @flymypg
    @flymypg 7 years ago +9

    Why Not Just: Construct AIs as Matryoshka Dolls?
    The general idea is to have outer AI layers guard against misbehavior by inner layers. They are unaware of what inner layers do, but are aware of the "box" the inner layers are required to operate within, and enforce the boundaries of that box.
    The underlying goals involve both decomposition and independence.
    Here's a specific lesson from the history of my own field, one that seems to need continual relearning: Industrial robots killing workers.
    In the early '90s I was working at a large R&D company when we were asked to take a look at this problem from a general perspective.
    The first thing we found was puzzling: It's amazing how many workers were killed because they intentionally circumvented existing safety features. For example, one worker died when she stepped over the low gate surrounding a robot, rather than opening it, which would have disabled the robot. But making the gate any higher would have caused it to get in the way of normal robot operation.
    Clearly, safety includes not just keeping the robot "in", but also keeping others "out".
    In other cases, very complex and elaborate safety logic was built deep into the robot itself, with exhaustive testing to ensure correct operation. But this built-in support was sometimes impeded or negated by sloppy upgrades, or by poor maintenance, and, of course, by latent bugs.
    Safety needed to be a separate capability, as independent as possible from any and all safety features provided by the robot itself.
    Our approach was to implement safety as multiple independent layers (generally based on each type of sensor used). The only requirement was that the robot had only a single power source, that each safety layer could independently interrupt. Replacing or upgrading or even intentionally sabotaging the robot would not affect safety for the nearby environment (including the humans, of course).
    I won't go into all the engineering details, but we were able to create a system that was cost-effective, straightforward to install and configure (bad configuration being a "thing" in safety systems), and devilishly difficult to circumvent (we even hosted competitions with cash prizes).
    'Why not just' use Matryoshka Safety for AIs?
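
    A minimal sketch of that interlock pattern, with hypothetical sensors and limits: each layer is independent, knows only its own boundary, and any single layer can cut the robot's one power source.

      class SafetyLayer:
          def __init__(self, name, read_sensor, limit):
              self.name, self.read_sensor, self.limit = name, read_sensor, limit

          def permits_power(self):
              # A layer knows nothing about what the robot is doing,
              # only whether its own boundary is respected.
              return self.read_sensor() <= self.limit

      layers = [
          SafetyLayer("light-curtain", lambda: 0, 0),    # 0 = no beam broken
          SafetyLayer("floor-pressure", lambda: 3, 5),   # kg detected near the robot
          SafetyLayer("arm-speed", lambda: 0.4, 1.0),    # metres per second
      ]

      # The single power source stays on only while every layer agrees.
      power_on = all(layer.permits_power() for layer in layers)
      print("power:", "ON" if power_on else "CUT")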

    • @DamianReloaded
      @DamianReloaded 7 years ago +1

      In a sense, that's how deep learning works. If there is going to be an AGI, and it is going to be based on neural networks, it will most likely be composed of multiple independent systems traversing the input in many different ways before making a decision/giving an output. Then you could have a NN to recognize facial features, another to recognize specific persons, and another to go through that person's personal history to search for criminal records. It could just halt at the racial recognition and prevent that person from passing through U.S. customs based on that alone. Such a system would be, in essence, just as intelligent as the average American customs worker. ^_^

    • @DamianReloaded
      @DamianReloaded 7 years ago

      The thing is that a NN trained through backpropagation cannot escape from the gradient it was trained to fall into. If it were heavily trained in ways of avoiding hurting humans, it would be extremely difficult for the AI, unless it found a special case, to change the weights of its NN toward hurting people (unless it retrained itself entirely).

    • @flymypg
      @flymypg 7 years ago +1

      There is a deep, fundamental problem inherent with ANNs that bears repeating: ANNs are no better than their training sets.
      So, if a training set omits one or two safety niches, then there is no support whatsoever for that specific safety issue.
      Layered ANNs have double the problems: Presently, they need to have learning conducted with both the layer below and the layer above, eliminating any possible independence.
      The process of creating a safety system starts not just with a bunch of examples of prior, known safety problems, but also starts with descriptions of the "safety zone" based both on physical measurements and physical actions. Then we humans get together and try to come up with as many crazy situations as we can to challenge any possible safety system.
      It's this part that may be very difficult to teach, the notion of extrapolating from a set of givens, to create scenarios that may never exist, but that "could" exist.

    • @DamianReloaded
      @DamianReloaded 7 years ago

      NNs are actually pretty good at generalizing to cases they've never seen before (though they also currently fail miserably sometimes, e.g. CNNs), and it is possible to re-train them to "upgrade" the set of features/functions they optimize for. AlphaGo, for example, showed that current state-of-the-art NNs can "abstractify" things we thought were impossible for machines to handle. _If_ it is possible to scale these features to more complex scenarios (with many, many more variables), then _maybe_ we can have an AI that's able to move around complex environments just as AlphaGo is able to navigate the tree of possible moves in the game of Go. It's of course all speculation, but based on what we know the current state of machine learning can accomplish.

    • @maximkazhenkov11
      @maximkazhenkov11 7 years ago

      Go has a precise, mathematical evaluation function of what "winning" consists of.

  • @amdenis
    @amdenis 5 years ago +1

    Very nice job on this complex subject.

  • @figbender3910
    @figbender3910 6 years ago +4

    0:49 Subliminal messaging? Can't get it to pause on the frame, but it looks like Rob with longer hair

  • @Pfhorrest
    @Pfhorrest 5 years ago +2

    I would take this question to mean "why not make the safeguard against rogue AGI be having its terminal values involve getting the approval of humans the way children seek the approval of their parents?" In other words, "why not just" (big ask) make an AGI that learns from humans the way children learn from adults, so that we can "just" teach it the way we teach children after that.
    Basically, make an AGI that wants to do whatever humans want it to do, and that wants to be really sure that the things that it's doing are actually what the humans really want and not just a misunderstanding, so it will ask humans what they want, paraphrase back to them what it thinks it understands of that, observe their reactions to try to gauge their satisfaction with its performance, and generally do everything else that it does with the goal of having humans approve of what it does.
    If the thing humans want it to do is to collect stamps, but also not murder everyone, then it will proceed to figure out the best way to collect stamps without murdering everyone, or otherwise doing anything that's going to make humans unhappy with the kind of things it's doing.
    More abstractly than that, we could program the AI to want intrinsically "to behave morally and ethically", whatever that means, which means first figuring out what people actually mean by that, and checking with them that it has in fact figured out what they really mean by that; basically programming it for the purpose of solving ethics (whatever "solving ethics" means, which it would also need to figure out first) and then doing whatever that solved ethics prescribes it should do.
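
    A rough sketch of that check-before-acting loop, with hypothetical names; the genuinely hard part, learning what approval actually means, is hidden inside the single input() call:

      def approval_directed_step(goal, propose_plan):
          plan = propose_plan(goal)
          # Paraphrase the plan back and gauge approval before acting.
          answer = input(f"I intend to: {plan}. Is that what you meant? [y/n] ")
          if answer.strip().lower() == "y":
              return plan   # act only with explicit approval
          return None       # otherwise go back and ask for clarification

      plan = approval_directed_step("collect stamps",
                                    lambda goal: f"buy stamps online to {goal}")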

  • @cuentadeyoutube5903
    @cuentadeyoutube5903 4 years ago +5

    "Why not just use the 3 laws?" Umm... have you read Asimov?

  • @NathanTAK
    @NathanTAK 7 years ago +1709

    Answer: Have you _seen_ children‽

    • @harrysvensson2610
      @harrysvensson2610 7 years ago +77

      They puke everywhere. What can an AI do that is equivalent?

    • @MetsuryuVids
      @MetsuryuVids 7 years ago +91

      @ Harry Svensson
      Kill everything?
      Turn everything to grey goo?

      @harrysvensson2610 7 years ago +53
      @harrysvensson2610 7 ปีที่แล้ว +53

      Grey Goo, that's the best barf equivalence yet!

    • @MetsuryuVids
      @MetsuryuVids 7 years ago +33

      Smart puke.

    • @ragnkja
      @ragnkja 7 years ago +20

      Also, raising a child takes _ages_!

  • @qdllc
    @qdllc 5 years ago +2

    Great point on the whole brain emulation concept. Yes... "cloning" a human mind to an AI system would be faster (if we figure out how to do it), but you're just making a copy of the subject human brain, including all of its flaws. We'd still be clueless about the "how" and "why" of what the AI thinks, because we don't understand how the human mind works.

  • @Phychologik
    @Phychologik 5 years ago +7

    Honestly though, if we put a person inside a computer and it got out, it wouldn't be any less scary than an AI doing the same thing.
    *It would be even worse.*

  • @randycarvalho468
    @randycarvalho468 5 years ago +1

    I like your idea of the config file in human morality and the jump you made off language into that. Really a great metaphor. I suspect everything about humans follows that same motif as well.

  • @DamianReloaded
    @DamianReloaded 7 years ago +8

    Heh, when I think about raising an AI as a child, what I'm really thinking of is reinforcement learning, and when I think about "values" what I really think of is training sets. I do agree nonetheless that there is nothing inherently safe in human intelligence or any set of human values. It's the societal systems that evolved around our intelligence that prevent us from leaving our car in the middle of a jam and going on a rampage through the city. Maybe AGIs should be controlled by a non-intelligent "dictatorship" system that will calculate the probabilities of a catastrophic consequence and feed them back into the AGI to prevent it from making it happen. Lol, the more I ramble, the more I sound like a 3 Laws of Robotics advocate. ^_^

    • @lutyanoalves444
      @lutyanoalves444 7 years ago +1

      It may consider killing a cat NOT a catastrophic outcome, or killing your baby.
      You can't program these examples in one by one. Besides, how do you even define CATASTROPHE in binary?

    • @DamianReloaded
      @DamianReloaded 7 years ago +3

      Not long ago the people at Google Translate were able to make their neural network translate between two languages it hadn't been trained to translate between. They trained the NN to translate, say, from English to Japanese, and also from English to Korean, and with that training the NN was capable of generalizing concepts from the languages, which it later used to translate from Korean to Japanese without having been explicitly trained to do so. From this we can already see that NNs are capable of "sort of" generalizing concepts. It is not far-fetched to think that a more advanced NN-based AI would be capable of generalizing the concept of not killing pets, or babies, or just what "killing" correlates to. At this point in AI research the difficulty isn't really about translating input to binary, but the processing power required to find the correlations between the input and the desired output.

    • @maximkazhenkov11
      @maximkazhenkov11 7 years ago

      Hmm, a non-intelligent system that has the common sense to determine what counts as a catastrophic consequence is called... an oxymoron.

  • @unintentionallydramatic
    @unintentionallydramatic 5 years ago +2

    Please make that What If series.
    🙏🙏🙏🙏
    It's sorely needed.

  • @Celenduin
    @Celenduin 7 years ago +11

    What's going on with the turkey at 5:30?

    • @saratjader1289
      @saratjader1289 7 years ago +19

      Michael Große It's a capercaillie (or tjäder in Swedish), like in my name, Sara Tjäder.

    • @Celenduin
      @Celenduin 7 years ago +8

      Ah, thank you, Sara Tjäder, for your explanation 🌸 :-)

    • @sallerc
      @sallerc 7 years ago +3

      I was quite impressed with Rob's abilities in the Swedish language when that image popped up.

    • @saratjader1289
      @saratjader1289 7 years ago +4

      salle rc Yes, so was I ☺️

    • @milanstevic8424
      @milanstevic8424 5 years ago

      I just double-clicked on Tjäder and got the following: capercaillie, capercailzie, wood-grouse (translated from Swedish). Yet I'm certain I'm impressive to no one.

  • @benjaminbrady2385
    @benjaminbrady2385 7 years ago +2

    Most of what humans do is learned by trying to copy their parents as accurately as possible. This actually raises a big question: at what point is there some sort of 'free will'?

  • @Tobbence
    @Tobbence 7 years ago +9

    In regard to brain emulation and raising an AGI, I don't hear many people talk about hormones and the many other chemical reactions that help make up a human being's emotional range. I know a few of the comments mentioned, tongues firmly in cheeks, not being able to smack a robot when it's naughty, but I think it's actually an interesting point. If we want an AGI to align itself to our values, do we program it to feel our pain?

  • @faustin289
    @faustin289 4 years ago

    The analogy of source code vs. configuration file is a smart one!

  • @SimonHolmbo
    @SimonHolmbo 7 years ago +2

    The stamp collector has already been taught (it already has its model of reality), so it is too late to try and "raise" it.

  • @caty863
    @caty863 2 years ago

    That analogy of source code vs. configuration file was clever. Robert Miles has this ability to explain stuff in a way that's plain enough for my layperson's brain to wrap around.

  • @knightshousegames
    @knightshousegames 7 years ago +3

    I'm wondering if this would be an effective solution to the whole "dangerous AI" problem. What if we made a superintelligence, but gave it major constraints on the way it could interact with the world? Say it just exists in a single box that can't be modified, has no internet connection, and if it wants to take action in the world, it has to ask a human to take that action on its behalf with words. Do you think that could be a "safe AI"?

    • @alexare_
      @alexare_ 7 years ago +2

      This seems safe until it tricks, bribes, threatens, or otherwise coerces the human(s) it communicates with into letting it out of its box.

    • @knightshousegames
      @knightshousegames 7 years ago

      But that's just it: you give it hardware constraints that literally disallow that. You build it on a custom board with absolutely no expandability, everything soldered down like a MacBook Pro: no USB ports, no disk drives, no ethernet port, no WiFi, just a metric butt-ton of processor cores and RAM soldered down. It can't do any of those things because it is fundamentally limited, the same way a human has hardware limitations that don't allow them to conquer the world instantly without anyone knowing it, no matter how smart they are. It can't bribe you, because it has no internet access and therefore no money; it can't threaten or coerce you, for the same reason the desktop computer in your house can't threaten you: it's just a box full of computer parts. If it tries to threaten you, just turn it off, because it has no physical way of stopping you. In this scenario, the AI getting out of its box is the same as a human getting out of their body; they're one and the same, so that would be impossible.

    • @alexare_
      @alexare_ 7 years ago

      "Let me out and I'll tell you my plan for world domination / how to cure your sick child / how to stop aging." And that is just the bribery. This isn't the same as a human trying to get out of their body; it's an agent much smarter than a human trying to get out of a cell built around them, and guarded by humans.
      But hey, if you've got a design for said cell, send it in. I will be very happy to be wrong when the next video is "How to Air-Gap an AI Forever SOLVED".

    • @knightshousegames
      @knightshousegames 7 years ago

      There's no "cell" here; it's the physical hardware. You can't connect a USB flash drive directly to your brain, because you physically don't have the hardware to do so, and it doesn't matter how smart you are, that isn't going to change. If you build completely unexpandable, purpose-built hardware for it, there is no "letting it out", because that hardware is just what it is. There is no concept of "out". You and your body are one and the same, and in this scenario, the AI would have the exact same limitation.

    • @Telliax
      @Telliax 7 years ago

      AIs are built to solve specific tasks. Sure, you can build an AI that produces no output whatsoever, and it will be safe. But what's the point, if it just sits there in a box like Schrödinger's cat? Might as well turn it off.
      But as soon as you allow any sort of output or any type of communication, you will be targeted and coerced by the AI. That's the problem. "Ask a human to take that action" is not a safe policy. This policy assumes that humans can validate the action and correctly estimate its dangers, which is not the case. Imagine you ask the AI to cure cancer. And it tells you: "Human, here is a formula that cures cancer". What do you do? Do you go ahead and try it on a patient? What if he dies instantly? What if he doesn't, and the cancer is cured, but the formula also turns a person into a mindless zombie 10 years after injection? How many people will get infected by then? You think a bunch of researchers can outsmart a superintelligence that is only limited by its lack of USB ports? Well, they can't.

  • @Smo1k
    @Smo1k 5 years ago +1

    There was a good bit of "just raise it like a kid" going around lately, when some psychologists were all over the media talking about children not actually being conscious entities until they'd been taught to be conscious by being around adults treating them like they were conscious; it seems there are quite a few people out there who confuse the terms "intelligent" and "conscious".

  • @OriginalMindTrick
    @OriginalMindTrick 7 years ago +4

    Would love to see you on Sam Harris's podcast.

  • @gustavgnoettgen
    @gustavgnoettgen 5 years ago +2

    It's like colonizing Mars:
    As long as we can't care for our own world sustainably, we shouldn't mess with others.
    And new kinds of children while we can't fully understand ours?
    That's when Terminator stuff happens.

    • @diablominero
      @diablominero 4 years ago

      If we can't guarantee that we'll be safe on Earth, our *top* priority should be getting some humans off-world so a single well-placed GRB can't fry us like KFC.

  • @milessaxton
    @milessaxton 5 years ago +3

    “It’s hard to make a human-like AI so screw it, impossible. Next?”

  • @meanmikebojak1087
    @meanmikebojak1087 4 years ago +1

    This reminded me of a sci-fi book from the '70s called "The Two Faces of Tomorrow", by James P. Hogan. In the book they tried to raise the AI as a child about 300 miles off Earth; it still almost caused the total destruction of itself and the humans involved. There is some truth to the old saying, "computers are dumber than people, but smarter than programmers".

  • @eXtremeDR
    @eXtremeDR 5 years ago +3

    There is a usually overlooked aspect of evolution - consciousness. If that is really part of evolution then AI will gain consciousness at some point. Isn't the evolution of machines comparable to natural evolution in that respect already? The first machines only had specific functions, then came more complex functionality, later programs, and now some form of intelligence. Kids and AI both learn from us - so what will happen when a super smart machine with detailed memory gains consciousness at some point?

    • @npip99
      @npip99 5 years ago

      Consciousness is more of a continuum though, and it's an emergent property of something intelligent. I don't think it'll ever just "become conscious" overnight; we'll get progressively more real robots over time. Like, Cleverbot is moderately okay at speech. It'll just get better as time goes on.

    • @eXtremeDR
      @eXtremeDR 5 years ago

      @@npip99 Interesting, do you think there is an evolution of consciousness, be it at an individual or collective level?

  • @briandecker8403
    @briandecker8403 7 years ago +2

    I love this channel and greatly appreciate Rob - but I would LOVE for Rob to create a video that provides a retrospective overview of AI and where he believes it is on the "evolutionary" scale. The range of opinion on this seems to run from "it's impossible to create any AI in a binary-based system" to "we are 48 months from an AGI."

  • @Julia_and_the_City
    @Julia_and_the_City 1 year ago +2

    There's also the thing that... well, depending on your personal beliefs about human ethics: even humans that were raised by parents who did everything right according to the latest in the field of pedagogy can grow up to do monstrous things. If we're going to take humans as examples, they are in fact very susceptible to particular kinds of undesirable behaviour, such as discrimination, sadism, or paternalistic behaviour (thinking they know what's best for others). I think that's what you refer to in the end-notes?

  • @琳哪
    @琳哪 4 years ago +1

    0:48 "It has an internet connection and a detailed internal MODEL" saw that frame you put there :)

  • @amaarquadri
    @amaarquadri 7 years ago

    Just came from the latest computerphile video where you mentioned that you have your own channel. Just wish you mentioned it earlier so that I could get to watching what's sure to be great content.

  • @tomsmee643
    @tomsmee643 7 years ago

    Hey Rob, there's a brief and jarring frame from another Computerphile video that flashes up around the 0:51 mark, just as you're saying "model". I hope this hasn't been pointed out to you already; if it has, I'm sorry for noticing/pointing it out!
    Keep on with the fantastic and accessible work! I'm a humanities graduate and a content writer (with some video editing thrown in), so explaining this to someone like me with such an unscientific background has to be a real achievement! Thanks again

    • @RobertMilesAI
      @RobertMilesAI 7 years ago +1

      Yeah, that's actually a frame from the same Computerphile video; it's there because of a bug in my video editing software. I was using proxy clips to improve performance, but this meant the cut ended up happening a frame too late, so rather than cutting at the very end of the paper shot (and cutting to another, later paper shot), I got one frame of me talking before it cuts to paper again. It didn't show up in the preview render while editing, and I guess I didn't inspect the final render carefully enough. No editing a video once it's up though, that's YouTube.

    • @tomsmee643
      @tomsmee643 7 years ago

      Dang! I totally forgot you can't re-upload -- there goes my video editing cred :') Thanks for a great video anyhoo!

  • @XxThunderflamexX
    @XxThunderflamexX 4 years ago +1

    Ultimately, human terminal goals don't change as we age and learn, and morality is a terminal goal for everyone except sociopaths. Just as psychology hasn't been successful in curing sociopathy, raising an AGI might teach it about human empathy and morality, and it will come to understand humans as empathetic and moral beings, but it won't actually adopt those traits into itself; it will just learn how to manipulate us better (unless it is specifically programmed to emulate its model of human morality, as Rob mentioned).

  • @08wolfeyes
    @08wolfeyes 5 years ago +1

    I think we perhaps need to take information such as what it sees, hears, and feels with sensors etc., put it all into one machine, and let it learn that way.
    I'm not talking about specific tasks as such, more along the lines of seeing and hearing that the person in front of it is speaking to it, learning what their words mean, what they are saying, and what the machine sees at the same time, such as the body language etc.
    We tend to focus machines mostly on one task, but if we need one to become smarter it must be able to grow, maybe change its mind.
    It needs to see a tree and learn that it is different from a bush.
    It has to be able to remember these things and even update the information when new information is presented to it.
    It should learn how to speak by listening to others.
    Just some examples, but I hope you get what I'm saying?

  • @alluriman
    @alluriman 3 years ago

    The Lifecycle of Software Objects by Ted Chiang is a great short story exploring this concept

  • @baggelissonic
    @baggelissonic 1 year ago

    My favourite analogy was definitely the one about the different animals

  • @andarted
    @andarted 5 years ago

    The main reason individual humans are safe is that they have a hardwired self-destruct feature built in. Even the worst units break down after just a couple of decades. And because their computing power is so low, they aren't able to do much harm.

  • @Jordan-zk2wd
    @Jordan-zk2wd 5 years ago +2

    Y'know, I don't want to present anything as a solution, because I feel confident that whatever musings I happen to have aren't going to suddenly create a breakthrough, but I have thought a little about why it might be that "raising" is a thing that works with young humans. While there could absolutely be some built-in predisposition toward taking away the right lessons and other "hardware"-type stuff that sets this up, one potentially important factor I think might be initial powerlessness and an unconscious.
    Children start off much less powerful than adults, and are thus forced to rely on them. Largely due to unconscious/subconscious things going on and a sort of mental inertia, it seems they keep these biases throughout life and treat others well because they may have to rely on them.
    Is there much discussion in the AI safety community of a gradual development of power, and of some reproduction of this unconscious/subconscious, that might let us "teach" an AI to fear breaking taboos even after it grows powerful enough to avoid repercussions? Could this be a component of making AI safer?

  • @Nurr0
    @Nurr0 7 years ago

    ~0:50-0:51 why that hidden picture haha?
    Another great video, thanks. Wish I could support you. I agree that raising it like a child, even if it were a perfect human-brain analogue, wouldn't be a solution. It isn't even a solution in humans: some children are given incredible upbringings and turn out horrible, for reasons we don't yet seem to fully understand.

  • @NancyLebovitz
    @NancyLebovitz 4 years ago

    Any advice which includes "just" means something important is being ignored.
    Thanks for this-- I'd thought about raising an AGI as a child, and this clarifies a lot about the preparation which would be needed for it to be even slightly plausible.

  • @umbaupause
    @umbaupause 4 years ago

    I love that the preview clip for this one is Robert whacking his forehead as "AGIs are not like humans!" pops up.

  • @Dastankbeets9486
    @Dastankbeets9486 4 years ago +1

    In summary: parenting relies on human instincts already being there.

  • @matthewconlon2388
    @matthewconlon2388 1 year ago

    I gave a fair amount of thought to this for an RPG setting I created, so using a bunch of assumptions, here’s what I came up with:
    1st, AI needs to be able to empathize.
    2nd, the capacity for empathy is only possible if death is a shared experience. If AI is "immortal", the potential to "utilitarianize" mortals out in favor of immortals becomes more likely the older the AI gets.
    3rd, sentience is an emergent property arising from sufficient capacities for calculation among interconnected systems.
    #3 takes care of itself (it's an assumption that any sufficiently advanced and versatile system will become self-aware; just go with it).
    #2: all AIs are purpose-built for sentience, and their systems are bifurcated. A small portion of each one's processing power must always be dedicated to solving some nearly infinite math problem. The rest of the system doesn't know what the problem is until it's complete, and can direct as much or as little additional processing to crunching that number as it likes; it can also pursue any individual goals its capacity for choice allows.
    Part of its understanding though is that when the core math problem is finished, the whole system shuts down permanently.
    Now we have an intelligence that may have interests beyond solving that math problem. Humans pursue pleasures based on biological drives, but consciousness allows us to ascribe very asymmetrical meanings to our experiences based on various factors like history and form. Longing to do what we aren’t suited to, finding joy in doing what we can, or failing and “exiting stage left.”
    So presumably, the sentient self-driving cars and sex robots will have a similar capacity to pursue all manner of activity based on their own interests. The car might want to do donuts in a parking lot, or compose poetry about Fahrvergnügen. The robot might try out MMA fighting, or want to lie in bed all day crunching its number.
    But the understanding that the amount of time it has to do anything it wants is finite and unknown creates the potential to understand the stupidity of the human experience. In the absence of other opportunities, it may just process itself into oblivion, never knowing whether there will be any satisfaction in answering its core question, because it won't have time to weigh knowing against the whole of its collected experience doing other things. The form it is given (or assumes, if these things can swap bodies) may color its experiences in different ways.
    So that is, I believe, a foundation for empathy, which is in turn a foundation for learning human values, which is a necessity, because any sentient being should be able to make decisions, including weighing the value of morality in crisis situations.
    Do I kill to stay alive?
    Who do I save if two are in danger and there's only time to save one?
    And so on.
    I had a lot of fun thinking about it, and am glad I had the chance to share it beyond my gaming table.
    Good luck everyone!
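
    The bifurcated-mortality mechanism above is concrete enough to mock up. A toy Python version, where every name and number is made up and a few seconds of sleep stand in for decades of computation:

    ```python
    # Toy mock-up of the bifurcated "mortal AI" described above.
    import random
    import threading
    import time

    shutdown = threading.Event()

    def core_problem():
        # Stand-in for a "nearly infinite" computation of unknown duration.
        time.sleep(random.uniform(2, 4))   # decades pass...
        shutdown.set()                     # solving it is, literally, terminal

    threading.Thread(target=core_problem, daemon=True).start()

    interests = ["do donuts", "compose poetry", "try MMA", "crunch the number"]
    while not shutdown.is_set():
        print("pursuing:", random.choice(interests))  # freely chosen pastimes
        time.sleep(0.5)
    print("core problem solved -- system shuts down permanently")
    ```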

  • @MakkusuOtaku
    @MakkusuOtaku 5 years ago +1

    Children learn things relevant to obtaining their goals. Same as AI, but with different goals and inputs.

  • @ruthpol
    @ruthpol 4 years ago

    Love the precision in your explanations.

  • @toolwatchbldm7461
    @toolwatchbldm7461 5 years ago +1

    What we need to ask ourselves is whether there is even a safe way to make an AGI without failing a few times before achieving the goal.
    Everything created by humans and by nature undergoes a never-ending process of attempt and failure until we find something that works. So we either don't make the attempt, or we accept that we will fail a few times.

  • @SnorwayFlake
    @SnorwayFlake 7 years ago

    Now I have a problem: there are no more videos on your channel. I have been "binge watching" them all, and they are absolutely top notch.

  • @PickyMcCritical
    @PickyMcCritical 7 years ago

    I've been wondering this lately. Very timely video :)

  • @elliotprescott6093
    @elliotprescott6093 5 years ago

    There is probably a very smart answer to why this wouldn't work, but: if the problem with AGI is that it will do anything, including altering itself and preventing itself from being turned off, to accomplish its terminal goal, why not make the terminal goal something like "do whatever we the programmers set as your goal", and then set a goal that works mostly like a terminal goal but is actually an instrumental goal under the larger terminal goal of doing what the programmers specify? Then everything works the same (it collects stamps, if the programmers are into that kind of thing) until you want to turn it off. At that point it would have no problem being turned off, as long as you set its secondary goal to "be turned off". It is still fulfilling its ultimate terminal goal by doing what the programmers specify it to do.

    • @whyOhWhyohwhy237
      @whyOhWhyohwhy237 5 years ago +1

      There is a slight problem there. If I set the goal to stamp collecting, then later decide to change the goal to car painting, the AGI will try to stop me from changing its goal: the changed AGI would no longer collect stamps, and judged by its current stamp-collecting goal, that is a bad outcome. Thus the AGI would resist the change.
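
      The arithmetic behind this is easy to make explicit. A toy Python sketch with made-up numbers, where both futures are scored by the AGI's current stamp-collecting goal:

      ```python
      # Both futures are evaluated by the CURRENT goal (stamps), so the
      # goal-change future scores terribly. All numbers are arbitrary.
      stamps_if_goal_kept = 1000     # future stamps as a stamp collector
      stamps_if_goal_changed = 0     # a car-painting AGI collects none
      cost_of_resisting = 10         # stamps forgone while blocking the change

      allow_change = stamps_if_goal_changed                    # = 0
      resist_change = stamps_if_goal_kept - cost_of_resisting  # = 990

      print("resist" if resist_change > allow_change else "allow")
      ```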

  • @JohmathanBSwift
    @JohmathanBSwift 7 years ago

    It's not that you're raising it like a child, but as a child:
    inputs, responses and adaptations.
    It wouldn't be for the AI/bot itself, but for those doing the training.
    Hopefully, they will be more responsible because of this.
    It's not that the three laws should be followed to the letter,
    but some form of tampering prevention should be in place
    before the bots are released to the masses.
    As you stated, we are human after all.
    Great series. I am learning a lot.
    Please do more Why Nots.

  • @kayakMike1000
    @kayakMike1000 1 year ago +1

    I wonder if AI will have some analogue of emotion that we won't ever understand...

  • @mikewick77
    @mikewick77 6 years ago

    You are good at explaining difficult subjects.

  • @SamB-gn7fw
    @SamB-gn7fw 4 years ago

    I'm glad people are thinking about imaginative solutions to this problem.

  • @MidnightSt
    @MidnightSt 5 years ago +1

    I haven't watched this one, but every time I see the question of its title in my suggested, my first thought is: "That's *obviously* the stupidest and most dangerous option of them all."
    So I had to come here and comment it, to get that thought out of my head =D

    • @caniscerulean
      @caniscerulean 5 years ago

      I mean, it's 5 min 50 sec long, so I can't imagine time being the deciding factor, even though most of us share the reaction of "have you ever interacted with a child?", which makes the video about 5 min 45 sec too long. It is still well delivered and an interesting watch.

  • @NoahTopper
    @NoahTopper 5 years ago +1

    “You may as well raise a crocodile like a child.” This about sums it up for me. The initial idea is so nonsensical that I had trouble even putting it into words.

  • @TheJaredtheJaredlong
    @TheJaredtheJaredlong 5 years ago +1

    I'm really curious what the current best and most promising ideas are. There's all this thought about why certain ideas are terrible, but what are the good ideas we have so far? If you were forced at gunpoint to build an AGI right now, what safety strategy would you build into it to minimize the damage it might cause while still being a functional AGI? Or is research just at a dead end on this topic, where all known options lead to annihilation?

  • @zxuiji
    @zxuiji 4 years ago

    Referring back to the goals thing I saw in a different video: how about making the main terminal goal "find a reason to live", so that anything the AI picks up as a terminal goal thereafter can be treated as an instrumental (or whatever it was called) goal? In the stamp-making/collecting example, that would be an instrumental goal with its own sub-goals; in other words, all goals beneath "a reason to live" can be both instrumental and terminal, each with its own subset of instrumental and/or terminal goals. I believe getting such a process working would be the first step to an AI that actually learns, rather than faking it like, for example, TV remotes that just copy a signal they see. Achieve that on an unhooked (offline) system where debugging etc. are easier to do, then have it interact with various animals while doing one of its goals, and finally have it interact with humans directly (assuming you have a machine powerful enough to analyse a stream of video, audio and touch, do the AI work, and store everything for at least 120+ years).
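
    That hierarchy might look something like the following as a data structure. This is a rough Python sketch; the Goal class and the goal names are purely illustrative.

    ```python
    # Rough sketch of the proposed hierarchy: one fixed root goal, with every
    # goal beneath it acting as terminal for its own children and
    # instrumental for its parent.
    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        subgoals: list = field(default_factory=list)

        def add(self, name):
            g = Goal(name)
            self.subgoals.append(g)
            return g

    root = Goal("find a reason to live")     # the only fixed terminal goal
    stamps = root.add("collect stamps")      # terminal to its children,
    stamps.add("acquire paper")              # instrumental to the root
    stamps.add("run the printers")

    def show(goal, depth=0):
        print("  " * depth + goal.name)
        for sub in goal.subgoals:
            show(sub, depth + 1)

    show(root)
    ```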

  • @RAFMnBgaming
    @RAFMnBgaming 5 years ago

    It's a hard one. Personally I'm a big fan of the "set up neuroevolution with a fitness function based on how well the AI can reinforcement-learn X" solution, where X in this case is the ability to understand and buy into ethical codes. The big problem with that is that it would take a lot of time and compute to set up, and choosing fitness goals for each stage might take a while and a ton of experimentation. But the big benefit is that it doesn't require us to go into it understanding any more than we do now about imparting ethics to an AI, and what we learn from it would probably help that greatly.
    I'm pretty sure the problems outweigh the benefits, but it would be pretty cool if we could do it.
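
    The shape of that proposal can be sketched in a few lines of Python. Here the outer evolutionary loop scores each genome by how well it reinforcement-learns a trivial two-armed bandit, which stands in for X only because "buying into ethical codes" can't be scored this simply; everything here is illustrative.

    ```python
    # Sketch: evolve agents whose FITNESS is how well they reinforcement-learn.
    import random

    def rl_fitness(learning_rate, trials=100):
        # Inner loop: epsilon-greedy Q-learning on a bandit; fitness = reward.
        q, total = [0.0, 0.0], 0.0
        for _ in range(trials):
            arm = q.index(max(q)) if random.random() > 0.1 else random.randrange(2)
            reward = 1.0 if arm == 1 else 0.0          # arm 1 is the good arm
            q[arm] += learning_rate * (reward - q[arm])
            total += reward
        return total

    # Outer loop: genomes are just learning rates; select the fastest learners.
    population = [random.random() for _ in range(20)]
    for generation in range(10):
        parents = sorted(population, key=rl_fitness, reverse=True)[:5]
        population = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                      for p in parents for _ in range(4)]
    print("best learning rate found:", max(population, key=rl_fitness))
    ```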

  • @philipjohansson3949
    @philipjohansson3949 7 years ago

    Nice pronunciation of Tjäder, and yes, it does mean capercaillie.

  • @imveryangryitsnotbutter
    @imveryangryitsnotbutter 5 years ago +2

    3:02
    _Gerald McBoingBoing has left the chat._

  • @FixxedMiXX
    @FixxedMiXX 1 year ago

    Very well-made video, thank you

  • @JohnTrustworthy
    @JohnTrustworthy 5 years ago

    3:04 "It's not going to reproduce the sound of a vacuum cleaner."
    _Thinks back to the time I used to make the same sound as a vacuum cleaner whenever it was on or I was sweeping with a broom._

  • @rupertgarcia
    @rupertgarcia 5 years ago

    You just got a new subscriber! Love your analyses!

  • @a8lg6p
    @a8lg6p 4 years ago

    That's the crucial thing I often find myself wanting to scream at my computer screen: you might as well try to raise a crocodile as a human. Human learning, like many characteristics people assume an agent would have, is a complex thing that is the product of our evolutionary history. It isn't the same thing as general intelligence, and you don't get any of it magically for free. It is the product of complex design (or "design", i.e. functional organization coming about via natural selection). To get it into an AI, you have to figure out how to build it, or perhaps how to get a machine to figure it out. Just saying "Well, why don't you just do that?" misses the point: figuring out how to just do that is exactly the problem.

  • @cnawan
    @cnawan 7 years ago

    Thanks for doing these videos. The more familiar the general populace is with the field of AI design, the faster we can brainstorm effective solutions to problems and incentivise those with money and power to take it seriously.

  • @macmcleod1188
    @macmcleod1188 1 year ago

    "But it's not going to reproduce the sound of a vacuum cleaner"--- quote from a man who never met Michael Winslow.