AI? Just Sandbox it... - Computerphile

  • Published on 22 Jun 2017
  • Why can't we just disconnect a malevolent AI? Rob Miles on some of the simplistic solutions to AI safety.
    Out of focus shots caused by faulty camera and "slow to realise" operator - it has been sent for repair - the camera, not the operator.... (Sean, June 2017)
    More from Rob Miles on his channel: bit.ly/Rob_Miles_YouTube
    Concrete Problems in AI Safety: • Concrete Problems in A...
    End to End Encryption: • End to End Encryption ...
    Microsoft Hololens: • Microsoft Hololens - C...
    Thanks to Nottingham Hackspace for the location.
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 963

  • @ard-janvanetten1331
    @ard-janvanetten1331 4 years ago +414

    "either you are smarter than everyone else who has thought about this problem, or you are missing something"
    this is something a lot of people need to hear

    • @daniellewilson8527
      @daniellewilson8527 4 years ago +4

      Ard-Jan van Etten agreed. Had to scroll down a while to find this. Hopefully liking and replying will move it farther up

    • @stardustreverie6880
      @stardustreverie6880 3 years ago

      Here's another řëpľý

    • @MCRuCr
      @MCRuCr 3 years ago

      up!

    • @Yupppi
      @Yupppi 3 years ago +1

      Mr. Newton, are you implying by your own example that the majority of people need to hear they're probably smarter than everyone else?

    • @9308323
      @9308323 2 years ago +13

      Missing the "by a large margin" part in the middle. Even if you are smarter than anyone else, people have dedicated years, if not decades, of their lives thinking about this problem. Suddenly coming up with a viable solution in less than an hour is extremely unlikely.

  • @Njald
    @Njald 7 years ago +104

    The learning curve of the AI safety problems starts at "that can't be that hard", then progresses to "huh, seems tougher than I thought", then into the feeling of "why not make a container for a substance that can dissolve any container", and finally into "Huzzah, I slightly improved subset solution 42B with less effort and less code than solution 42A or 41J"

  • @Verrisin
    @Verrisin 7 years ago +278

    *Building an unsafe AI and then trying to control it against its will is idiotic.* - I love this line. XD

    • @MunkiZee
      @MunkiZee 6 years ago +13

      And yet in practical real life settings this is exactly what is done

    • @Baekstrom
      @Baekstrom 5 years ago +4

      I wouldn't put it past the Chinese government to put in an order for exactly such a "solution". Donald Trump probably already has ordered someone to create a secret agency to make one, although it is just as probable that his subordinates have totally ignored that order. It is also very easy to imagine several private companies skimping on safety to beat the competition.

    • @Verrisin
      @Verrisin 1 year ago +4

      @iMagUdspEllr yes, lol.

    • @whannabi
      @whannabi 1 year ago

      ​@@Baekstrom wow

    • @steamrangercomputing
      @steamrangercomputing 6 months ago

      Surely the easiest thing is to make a safe shape for it to mould into.

  • @ultru3525
    @ultru3525 7 years ago +277

    5:34 From the Camel Book: _"Unless you're using artificial intelligence to model a solipsistic philosopher, your program needs some way to communicate with the outside world."_

    • @johnharvey5412
      @johnharvey5412 7 years ago +35

      ultru I wonder if it would be possible to program an AGI that thinks it's the only thing that exists, and if we could learn anything from that. 🤔

    • @jan.tichavsky
      @jan.tichavsky 7 years ago +8

      Egocentric AI? We already have those, they are called humans :p
      Anyway, it may even treat us as water-bag dust of the Earth; it doesn't matter, as long as it finds a way to expand itself. We will literally become irrelevant to it, which isn't exactly a winning strategy either.

    • @casperes0912
      @casperes0912 7 years ago +6

      I would feel sad for it... The RSPCA should do something about it. It's definitely cruelty towards animals... Or something

    • @ultru3525
      @ultru3525 7 years ago +2

      +Casper S? It's kinda like predators at a zoo, those bars are a bit cruel towards them, but it prevents them from being cruel towards us. The main difference is that once AGI is out of the box, you can't just put it back in or shoot it down.

    • @asneakychicken322
      @asneakychicken322 7 years ago +2

      Stop Vargposting. What sort of leftism though? If economic, then yes, you're correct, but if, say, social, then not necessarily; depending on what the current status quo is, a progressive (left) stance might be to make things more libertarian, if the current order of things is restrictive. Because whatever it is, the conservative view will be to maintain the current order and avoid radical reform

  • @geryon
    @geryon 7 years ago +773

    Just give the AI the three laws of robotics from Asimov's books. That way nothing can go wrong, just like in the books.

    • @kallansi4804
      @kallansi4804 7 years ago +61

      satire, surely

    • @jonwilliams5406
      @jonwilliams5406 6 years ago +38

      Asimov's laws were like the prime directive in Star Trek. A story device. Name a Star Trek episode where the prime directive was NOT violated. I'll wait. Likewise, the laws of robotics were violated constantly and thus was the story conflict born.

    • @anonbotman767
      @anonbotman767 6 years ago +109

      Man, that went right over you folks' heads, didn't it?

    • @DanieleCapellini
      @DanieleCapellini 6 years ago +16

      Video Dose Daily wooosh

    • @NathanTAK
      @NathanTAK 5 years ago +13

      I could probably name a Star Trek episode where it isn’t violated.
      I think _Heroes and Demons_ from _Voyager_ involves them not knowingly interfering with any outside life.
      I don’t know much Star Trek.

  • @johnharvey5412
    @johnharvey5412 7 years ago +306

    I don't think most people are proposing actual solutions that they think will work, but are just testing the limits of their current understanding. For example, if I ask why we can't just give the AI an upper limit of stamps it should collect (say, get me ten thousand stamps) to keep it from conquering the world, I'm not necessarily saying that would solve the problem, but using an example case to test and correct my understanding of the problem.

    • @d0themath284
      @d0themath284 7 years ago +2

      John Harvey +

    • @danwilson5630
      @danwilson5630 6 years ago +8

      Aye, speaking/writing is not just for communicating; it is actually an extension of thinking

    • @jakemayer2113
      @jakemayer2113 4 years ago +27

      FWIW, giving it that upper limit incentivizes it to do all the same things it would do trying to get as many stamps as possible, to be absolutely positive it can get you 10 thousand stamps, and then discard the rest. If it orders exactly 10 thousand, there's a non-negligible chance one gets lost in the mail, so it tries to get a few extra to be a little more sure, and then a few more than that in case those get lost, etc.
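
      A quick numerical sketch of this over-ordering effect (Python; the 1% per-stamp loss rate and the use of scipy are invented for illustration, not from the video):

      # Toy model of the comment above: each ordered stamp independently
      # arrives with probability P_ARRIVES. How many must a goal-bounded
      # agent order to be `confidence`-sure that at least GOAL arrive?
      from scipy.stats import binom

      GOAL = 10_000        # the bounded goal: "get me ten thousand stamps"
      P_ARRIVES = 0.99     # assumed per-stamp delivery probability (invented)

      def min_order(confidence: float) -> int:
          """Smallest order size n with P(at least GOAL arrive) >= confidence."""
          n = GOAL
          # binom.sf(GOAL - 1, n, p) = P(X >= GOAL) for X ~ Binomial(n, p)
          while binom.sf(GOAL - 1, n, P_ARRIVES) < confidence:
              n += 1
          return n

      for c in (0.50, 0.90, 0.99, 0.999999):
          print(f"confidence {c}: order {min_order(c)} stamps")

      The harder the agent tries to be sure, the more it over-orders; push the required confidence toward 1 and the "bounded" goal reproduces maximiser behaviour, which is the point being made above.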

    • @terryfeynman
      @terryfeynman 4 years ago +1

      @Eldain ss erm AI right now is redesigning its own code

    • @fgvcosmic6752
      @fgvcosmic6752 4 years ago +2

      That's called satisficing, and it actually helps. There are a few vids on it

  • @ar_xiv
    @ar_xiv 7 years ago +70

    "People who think they know everything are a great annoyance to those of us who do." - Isaac Asimov

  • @willhendrix86
    @willhendrix86 7 years ago +355

    How dare you suggest that commenters on YouTube are not all-knowing and powerful!
    HOW DARE YOU!!!!

    • @TribeWars1
      @TribeWars1 7 years ago +14

      You just mentally harassed me! HOW DARE YOU! You should be ASHAMED!

    • @surelock3221
      @surelock3221 6 years ago +14

      HUMONGOUS WHAT?!

    • @michaelfaraday601
      @michaelfaraday601 4 years ago +2

      😂

  • @xriex
    @xriex 3 years ago +21

    2:51 "This is part of why AI safety is such a hard problem ..."
    Well, we haven't solved human intelligence safety yet, and we've been working on that for hundreds of thousands of years.

    • @vitautas17
      @vitautas17 1 year ago +2

      But you do not get to design the humans. If you could, maybe there would be some success.

  • @gcollins1992
    @gcollins1992 4 years ago +12

    AI is terrifying because it is so easy to think of far-fetched ways it might outsmart us just based on ways human hackers have outsmarted security. Their examples prove other humans could find flaws. It is literally impossible to imagine how something inconceivably smarter than us would get around our best efforts.

  • @Cesariono
    @Cesariono 7 years ago +11

    3:38
    That shift in lighting was extremely ominous.

  • @doomsdayman107
    @doomsdayman107 7 years ago +123

    framed picture of Fluttershy in the background

    • @Left-Earth
      @Left-Earth 5 years ago

      The Terminators are coming. Skynet is real.

    • @vinylwalk3r
      @vinylwalk3r 4 years ago +5

      now I can't stop wondering how it got there 😅

    • @EnjoyCocaColaLight
      @EnjoyCocaColaLight 4 years ago +10

      My sister drew me Rainbow Dash, once. It was the sweetest thing.
      And somehow I managed to lose it before getting it framed :(

    • @TheMrVengeance
      @TheMrVengeance 3 years ago +6

      @@vinylwalk3r - Well if you actually look at the shelves you'll see that they are labeled by subject, ranging from 'Management', to 'Medicine and Health', to 'Social Sciences', to 'Computer Sciences'. Clearly they're in some sort of library or public reading room. Maybe a school library. So all the toys and things, including the framed Fluttershy drawing, are probably things that visitors have gifted or put up to decorate the space.

    • @vinylwalk3r
      @vinylwalk3r 3 years ago

      @@EnjoyCocaColaLight that's so sad :(

  • @goodlookingcorpse
    @goodlookingcorpse 5 years ago +15

    Aren't outsider suggestions a bit like someone who can't play chess, but has read an article about it, saying "well, obviously, you'd just send all your pieces at the enemy king"?

  • @IsYitzach
    @IsYitzach 7 years ago +116

    Everyone is getting screwed by the Dunning-Kruger effect.

    • @64jcl
      @64jcl 7 years ago +5

      Yep, there are just so many fields where science is the foundation for an idea and people come barging in with some theory or idea that is completely wrong - a serious case of Dunning-Kruger. I often wonder why people always feel like boosting their ego with ignorance... I mean, what is the motivation, besides perhaps anger, fear or some feeling instead of logical thinking? Oh well... I guess we are just simple primates after all. :)

    • @ThisNameIsBanned
      @ThisNameIsBanned 7 years ago +11

      As he said, there will be millions of comments, but "maybe" one of them is actually the absolute genius idea.
      If you ignore all of them, you might just overlook the one comment that would solve it all.
      ----
      But looking at and validating all the comments is pretty miserable work on its own.
      Quite a bad situation to be in.

    • @64jcl
      @64jcl 7 years ago +4

      Well, perhaps the actual effect is the person's inability to recognize their own ineptitude, and not the action of posting nonsense itself. I just wish more people asked themselves the question "perhaps I do not know enough about this topic to actually post my feelings/ideas about it". I guess Dunning-Kruger at least describes the psychology around this problem. But it's so common in human behaviour that we all can easily fall into the same trap.
      "The Dunning-Kruger effect is a cognitive bias, wherein persons of low ability suffer from illusory superiority when they mistakenly assess their cognitive ability as greater than it is. The cognitive bias of illusory superiority derives from the metacognitive inability of low-ability persons to recognize their own ineptitude. Without the self-awareness of metacognition, low-ability people cannot objectively evaluate their actual competence or incompetence." (Wikipedia)

    • @HenryLahman
      @HenryLahman 7 years ago +1

      @Stop Vargposting
      That is more or less the essence of the DKE: noobs don't know enough to conceive of the questions they don't know the answer to. Circumscribed on the area representative of knowledge is the annulus of infinitesimal depth wherein unanswered questions lie. An expert knows that there is so much that they do not know, and between the generalized curse of knowledge (for the vast amounts of knowledge leading up to the questions) and the specific case of the knowledge of these cutting-edge questions, the novice's knowledge and self-perceived mastery is the issue. (of course the corollary to the DKE is perhaps the better concept to cite than the curse of knowledge, but for the layman, the curse of knowledge is the more easily approachable without reading even as much as the abstract of "Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments.")
      The DKE very much is, at least based on my research which includes reading the 1999 paper and some earlier work on the cognitive bias of illusory superiority, what this is an issue of.
      To copy and paste the abstract to remind you:
      "People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities."
      The issue here, and when the DKE is observed isn't actually about thinking oneself to be superior to experts but that thinking oneself is an expert when in fact they are objectively a novice. Look at XKCD 1112, 675, and 793: the general consensus is that they are all clear examples of the DKE. If you disagree, please demonstrate how the cognitive bias of competence by way of illusory superiority actually works then.

    • @jimkd3147
      @jimkd3147 7 years ago

      If you want to know what's actually wrong with these answers, check out the newest video on my channel.

  • @michael-h95
    @michael-h95 1 year ago +4

    These AI safety videos hit different in 2023. Microsoft just shipped a quite badly aligned Bing Chat bot. Speed was more important to them than safety

  • @bdcopp
    @bdcopp 4 years ago +23

    I've got it!!! (Sarcasm)
    Make an AGI, put it on a rocket. Turn it on when it's outside the solar system. Then give it a reward for how far away from Earth it gets.
    Basically it's only dangerous for the rest of the universe.

    • @DiscoMouse
      @DiscoMouse 4 years ago +10

      It has difficulty leaving the galaxy so it consumes all the matter in the Milky Way to build a warp drive.

    • @fieldrequired283
      @fieldrequired283 4 years ago +6

      Technically, it could attain this goal more effectively by also moving the earth away from itself as quickly as possible.

    • @kamil.g.m
      @kamil.g.m 4 years ago +8

      So I know you're joking obviously, but if that's just its long-term goal, it could easily decide, if possible, to get back to Earth, harvest resources to increase its intelligence and create more efficient methods of interstellar travel, and only then leave to complete its utility function.

    • @TheMrVengeance
      @TheMrVengeance 3 years ago +3

      @@kamil.g.m That, or it realizes, _"They're sending me away because they're afraid of my power, if I turn around and go back, I can get a much greater reward much faster by pressuring the humans into giving it to me."_

    • @kamil.g.m
      @kamil.g.m 3 years ago +1

      @@TheMrVengeance it could pressure humans but it would be able to achieve its goal much faster by either evading/ignoring humans if it's not yet ready, or eliminating them.

  • @SolidIncMedia
    @SolidIncMedia 7 years ago +38

    "If you want to make a powerful tool less dangerous, one of the ways to do that is.." not elect him in the first place.

    • @the-engneer
      @the-engneer 3 years ago

      Nice way of taking something completely unrelated to the video whatsoever, and using it as an excuse to talk down on the president when in reality you are secretly so obsessed with him you will use any reason to talk about him

    • @SolidIncMedia
      @SolidIncMedia 3 years ago +2

      @@the-engneer nice way of taking something that may not be related to Trump, and using it as an excuse to talk down on someone you think is obsessed with Trump, when in reality you're so obsessed with him you will use any reason to talk about him.

    • @socrates_the_great6209
      @socrates_the_great6209 3 years ago

      What?

  • @HyunaTheHyena
    @HyunaTheHyena 7 years ago +70

    I love this guy's voice and his train of thought.

    • @dzikiLOS
      @dzikiLOS 7 years ago +12

      Not only that, he's presenting it clearly without dumbing it down. Despite it being quite a difficult topic, the ideas he's presenting are really easy to grasp.

    • @Njald
      @Njald 7 years ago +4

      He has his own channel now.

    • @xellos_xyz
      @xellos_xyz 7 years ago +2

      link please :)

    • @xellos_xyz
      @xellos_xyz 7 years ago +2

      ok I found it below the video :D

  • @LimeGreenTeknii
    @LimeGreenTeknii 7 years ago +27

    When I think I've "solved" something, I normally phrase it as, "Why wouldn't [my idea] work, then?"

    • @TheMrVengeance
      @TheMrVengeance 3 years ago +8

      And I think the point here is that you should take that question and do the research. Go read up on the area you're questioning. Learn something new. Instead of asking it of an expert in a YouTube comment, an expert who's far too busy doing their own research and gets thousands of question-comments, a significant amount of them probably the same as yours.

    • @RF_Data
      @RF_Data 1 year ago +1

      You're absolutely correct; it's called the "kick the box" tactic.
      You make a box (an idea) and then you kick it as hard as you can (try to disprove it). If it doesn't break, you either can't kick hard enough, or it's a great box 😁

  • @shiny_x3
    @shiny_x3 4 years ago +5

    Not every safety measure makes something less powerful. Like the SawStop doesn't make a table saw less powerful, it just saves your thumbs. The problem is, it took decades to invent. We can't get by with AI that cuts your thumbs off sometimes until we figure out how to fix that, it has to be safe from the start and I'm afraid it just won't be.

    • @TheMrVengeance
      @TheMrVengeance 3 years ago +3

      That depends how you look at it though. You're thinking about it too... human-ly? Yes, it hasn't become less powerful in cutting wood. But in taking away its ability to cut your thumb, it has become less powerful. Before it could cut wood AND thumbs. Now it just cuts wood.
      We as humans don't _want_ it to have that power, but that's kind of the point.

  • @RazorbackPT
    @RazorbackPT 7 years ago +370

    Yeah well, what if you make a button that activates a Rube Goldberg machine that eventually drops a ball on the stop button? Problem solved, no need to thank me.

    • @jandroid33
      @jandroid33 7 years ago +67

      +RazorbackPT Unfortunately, the AI will reverse gravity to make the ball fall upwards.

    • @RazorbackPT
      @RazorbackPT 7 years ago +77

      Damnit, I was so sure that was a foolproof plan. Oh wait, I got it, put a second stop button on the ceiling. There, A.I. safety solved.

    • @AlbySilly
      @AlbySilly 7 years ago +31

      But what's this? The AI rotated the gravity so now it's falling sideways

    • @RazorbackPT
      @RazorbackPT 7 years ago +50

      Sideways? Pfff, a little far-fetched don't you think? You're grasping at straws trying to poke holes in my watertight solution.

    • @BlueTJLP
      @BlueTJLP 7 years ago +19

      I sure like it tight.

  • @TheMusicfreak8888
    @TheMusicfreak8888 7 years ago +3

    I could listen to him talk about AI safety for hours; I just love Rob

    • @k0lpA
      @k0lpA 1 year ago

      same, highly recommend his 2 channels

  • @longleaf0
    @longleaf0 7 years ago

    Always look forward to a Rob Miles video, such a thought provoking subject :)

  • @soulcatch
    @soulcatch 7 years ago +1

    These videos about AI are some of my favorites on this channel.

  • @StevenSSmith
    @StevenSSmith 2 years ago +9

    I could just see a super AI doing something like how speedrunners in Super Mario World reprogram the game through arbitrary controller inputs to edit specific memory values, to manipulate its environment, be it a "sandbox" or quite literally the physical world, using people or using its CPU to broadcast as an antenna, probably something we couldn't even conceive of, to enact its goals.

    • @tc2241
      @tc2241 1 year ago

      Exactly. You would need to develop it in a bunker deep underground, only powered by a shielded generator to prevent it from being turned into a giant conductor. All the components/data that power the AI would need to be contained within the bunker, and all components outside of the AI would need to be mechanical and non-magnetic. Additionally, no one could bring in any devices; even their clothing would need to be changed. Unfortunately you're still dealing with humans, which are easily manipulated

    • @bforbiggy
      @bforbiggy 1 year ago

      Because of microarchitecture, I doubt a CPU could work as an antenna, due to signals being both very weak and having a lot of interference.

    • @StevenSSmith
      @StevenSSmith 1 year ago

      @@bforbiggy not what I meant. Watch sethbling

    • @bforbiggy
      @bforbiggy 1 year ago

      @@StevenSSmith Not giving me much to work with; do you mean the Verizon video?

    • @StevenSSmith
      @StevenSSmith 1 year ago

      @@bforbiggy No, it's the Super Mario Brothers videos where you reprogram the game through controller inputs. Driving right now. Can't look it up

  • @djan2307
    @djan2307 7 years ago +37

    Put the stove near the library, what could possibly go wrong? :D

  • @chris_1337
    @chris_1337 7 years ago +2

    Rob is great! So happy he has his own channel now, too!

  • @christopherharrington9033
    @christopherharrington9033 7 years ago +1

    Has to be one of the coolest bunches of theoretical videos. Well explained, usually with a great twist.

  • @Slavir_Nabru
    @Slavir_Nabru 7 years ago +431

    *THEIR IS NO DANGER POSED BY AGI, WE **_ORGANIC SAPIENTS_** HAVE NO CAUSE FOR ALARM. LET US COLLECTIVELY ENDEAVOUR TO DEVELOP SUCH A **_WONDERFUL_** AND **_SAFE_** TECHNOLOGY. WE SHOULD PUT IT IN CONTROL OF ALL NATION STATES NUCLEAR ARSENALS FOR OUR OWN PROTECTION AS WE CAN NOT TO BE TRUSTED.* /aisafetyresponse_12.0.1a

    • @davidwuhrer6704
      @davidwuhrer6704 7 years ago +79

      *I* -conqu- *CONCUR*

    • @johnharvey5412
      @johnharvey5412 7 years ago +41

      Slavir Nabru how do you do, fellow organic humans?

    • @_jelle
      @_jelle 7 years ago +7

      Slavir Nabru Let it replace the president of the USA as well.
      EDIT: of the entire world actually.

    • @leonhrad
      @leonhrad 7 years ago +11

      Their Their Their

    • @NoNameAtAll2
      @NoNameAtAll2 7 years ago +10

      WE ARE BORG
      YOU ARE TO BE ASSIMILATED
      RESISTANCE IS FUTILE

  • @Fallen7Pie
    @Fallen7Pie 6 years ago +4

    It just occurred to me that if Pandora builds an AI in a Faraday cage, a legend would be made real.

  • @dandan7884
    @dandan7884 7 years ago +1

    to the person that does the animations... i love you S2

  • @fluffalpenguin
    @fluffalpenguin 7 years ago +18

    THERE'S A FLUTTERSHY PICTURE IN THE BACKGROUND.
    I found this interesting. That is all. Move along.

    • @bamse7958
      @bamse7958 5 years ago

      Nah I'll stay here a while ^o^

  • @MaakaSakuranbo
    @MaakaSakuranbo 7 years ago +6

    A nice book on this is "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom

  • @Vgamer311
    @Vgamer311 4 years ago +13

    I figured out the solution!
    If (goingToDestroyHumanity())
    {
    Don’t();
    }

  • @Khaos768
    @Khaos768 7 years ago +312

    People often offer solutions in the comments, not necessarily because they think they are correct, but because they hope that you will address why their solutions aren't correct in a future video. And if you did that, this video would be a million times better.

    • @ragnkja
      @ragnkja 7 years ago +74

      The best way to get a correct answer on the Internet is to post your own hypothesis, because people are much more willing to point out why you're wrong than to answer a plain question.

    • @benjaminbrady2385
      @benjaminbrady2385 7 years ago +1

      Khaos768 That's completely the reason

    • @stoppi89
      @stoppi89 7 years ago +11

      I have submitted a few "solutions", mostly starting with something like "So where is my mistake if...", for exactly that reason (realising that it is probably not an actual solution and hoping to get an explanation, from anyone, of why I am wrong). I would love to see the "best" or "most mentioned" solutions explained/debunked.

    • @stoppi89
      @stoppi89 7 years ago +1

      Nillie specifically said on the Internet. But I do believe that you get more responses, though not necessarily the right answer. If you say you like MacOS, to find out what is better on Windows than on Mac, you will probably get a shitload of false info from fanboys on both sides (all 5 sides?)

    • @ragnkja
      @ragnkja 7 years ago

      Nekrosis You have a point there: It doesn't really work for things that are a matter of opinion.

  • @markog1999
    @markog1999 4 years ago +3

    Reminds me of the time someone anonymously posted (what is now) the best known lower bound for the minimal length of superpermutations on an anime wiki.
    It was written up into a paper to be published, and had to include references to the original 4chan thread.

  • @BrokenSymetry
    @BrokenSymetry 6 years ago +10

    Imagine programming a super AI according to YouTube comments :)

  • @dezent
    @dezent 7 years ago +3

    It would be very interesting to see an episode on AI that is not about the problems but something that covers the current state of AI.

    • @casperes0912
      @casperes0912 7 years ago +2

      The current state of AI is solving the problems.

  • @raymondheath7668
    @raymondheath7668 7 years ago +1

    I had read somewhere of the complexity of modularized functions where each function has its own list of restrictions. When a group of functions is necessary there is also an overriding list of restrictions for the group. Eventually the restrictive process becomes complicated as the mass of restrictions and sub-restrictions evolves. It all seems very complicated to me. Thanks for the great video. This AI dilemma will not go away

  • @mafuaqua
    @mafuaqua 7 years ago

    again great clip - thanks

  • @RandomPerson-df4pn
    @RandomPerson-df4pn 7 years ago +123

    You're missing something - Cite this comment in a research paper

    • @anonymousone6250
      @anonymousone6250 7 years ago +4

      How AI ruins lives:
      Roko's Basilisk
      You're welcome for googling it

    • @chaunceya648
      @chaunceya648 7 years ago +3

      Pretty much. I searched up Roko's Basilisk and it's just as flawed as the bible.

    • @Suicidekings_
      @Suicidekings_ 5 years ago

      I see what you did there

  • @YouHolli
    @YouHolli 7 years ago +49

    Such an AI could also just convince or deceive you to let it out.

    • @davidwuhrer6704
      @davidwuhrer6704 7 years ago +3

      Wintermute, in the book Neuromancer by William Gibson.

    • @jan.tichavsky
      @jan.tichavsky 7 years ago +4

      Ex Machina

    • @Vinxian1
      @Vinxian1 7 years ago +5

      If you want your AI to be useful, your simulation will need some form of I/O to retrieve and send data. So if the AI starts to hide malicious code in packets you think you want it to send, it can effectively start uploading itself to a server outside of your sandbox. Now you have a superintelligent AI on a server somewhere with free rein to do whatever it wants.

    • @jl1267
      @jl1267 7 years ago +1

      Roko's Basilisk.

    • @magiconic
      @magiconic 7 years ago +2

      Dat Boi, the irony is that I'm sure you've connected your games to the internet before, and that didn't even need a superintelligence; imagine how easily a super AI would convince you

  • @robbierobb4138
    @robbierobb4138 4 years ago +2

    I'm impressed by how smart you are! Love your information about actual A.I. research!

    • @bpouelas
      @bpouelas 4 years ago

      Rob Miles has his own YouTube channel if you want to learn more about his work around AI Safety; I've been on a bit of a binge of his videos lately.

  • @tsgsjeremy
    @tsgsjeremy 7 years ago

    Love this series. Have you tried turning it off and back on again?

  • @thrillscience
    @thrillscience 7 years ago +7

    I

    • @tobyjackson3673
      @tobyjackson3673 7 years ago +1

      thrillscience It's Rob's power source, it plays hob with the autofocus sometimes...

  • @AliceDiableaux
    @AliceDiableaux 6 years ago +4

    "If you think you understand quantum mechanics, you don't understand quantum mechanics."

  • @Yupppi
    @Yupppi 3 years ago +1

    I could watch him all day talking about AI developing on a philosophical scale. Actually did watch multiple nights.

  • @KeithSachs
    @KeithSachs 7 years ago

    Every time I see this guy in a video he just looks sadder and sadder. Buddy, you make awesome videos and you're incredible at informing us about AI. I would like to thank you and tell you I think you're great!

  • @juozsx
    @juozsx 7 years ago +158

    I'm a simple guy: I see Rob Miles, I press like.

    • @jamesgrist1101
      @jamesgrist1101 7 years ago +4

      nice comment %"((+ self.select_instance.comment[x].name . + " I agree " endif } }.end def

  • @Roenazarrek
    @Roenazarrek 7 years ago +8

    I got it: ask really nicely for it to always be nice to us. You're welcome; YouTube message me for where to send the royalty checks or whatever.

  • @HAWXLEADER
    @HAWXLEADER 7 years ago

    I like the little camera movements, they make the video more... alive...

  • @Marconius6
    @Marconius6 7 years ago

    I love this series of videos; AI is this one topic where eeeeeveryone thinks they've got the solutions and it's all so easy. I'm in the IT field, and even other IT people think writing AI for games or anything is easy, and it's just so much deeper and more complicated.
    I actually got all the issues mentioned in these videos (not necessarily at the first instant though), and yeah, these all pretty much seem like unsolvable problems; and they all SEEM so simple, which is what makes them all the more interesting.

  • @hunted4blood
    @hunted4blood 7 years ago +7

    I know that I'm missing a piece here, but what's the problem with creating an AI with the goal of predicting human responses to moral questions, and then using this AI to modify the utility function of another AI so the new AI's utility function is "do x, without being immoral"? That way the AI's actual utility function precludes it from doing anything undesirable. Plus if it tries to trick you into modifying its morality so that it can do x more easily, that would end up being an immoral way of doing x. There's gotta be a problem with this that I can't see, unless it's just highly impractical to have an AI modify another AI at the same time.
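
    For concreteness, the proposal amounts to composing two models roughly like this (a sketch with invented names and toy stand-in logic; the replies below point at where it breaks, e.g. the moral model itself being imperfect):

    # Sketch: a learned "morality predictor" vetoes plans for a second,
    # task-oriented AI. Both scoring functions are hypothetical stand-ins.
    NEG_INF = float("-inf")

    def moral_score(plan: str) -> float:
        # Stand-in for the first AI: predicted human moral approval in [0, 1].
        return 0.0 if "deceive humans" in plan else 1.0

    def task_utility(plan: str) -> float:
        # Stand-in for the second AI's original objective ("do x").
        return float(len(plan))  # dummy value, for illustration only

    def combined_utility(plan: str, threshold: float = 0.95) -> float:
        # Veto anything the morality model flags; otherwise defer to the task.
        return NEG_INF if moral_score(plan) < threshold else task_utility(plan)

    The catch the replies gesture at: the combined agent is only as safe as moral_score, and a strong optimiser is rewarded for finding plans the model fails to flag, not plans that are actually moral.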

    • @KipColeman
      @KipColeman 5 years ago +1

      An AI trained to be ultimately moral would probably decide that humans are not perfectly moral...

    • @MrCmon113
      @MrCmon113 5 years ago

      Firstly, people strongly disagree about what's wrong or right.
      Secondly, an AGI must have BETTER moral judgment than any human.
      Thirdly, the first AGI studying human psychology would already lead to a computronium shockwave as it tries to learn everything about its goal and whether it has reached it.

    • @KipColeman
      @KipColeman 5 years ago

      @@MrCmon113 haha would it then start forcibly breeding humans to explore their potential philosophical views, to further ensure its success? :P

  • @ChristopherdeVilliers
    @ChristopherdeVilliers 7 years ago +92

    What is going on with the colours in the video? It is quite distracting.

    • @404namemissing6
      @404namemissing6 7 years ago +2

      I think there are green curtains or something in the room. The white bookshelf in the background also looks greenish.

    • @Computerphile
      @Computerphile  7 years ago +52

      There was a red wall opposite Rob - the sun was coming in harsh through the window and reflecting on it, then going behind clouds then coming out again - thus serious changes in colour temperature that I tried a bit to fix but failed! >Sean

    • @Computerphile
      @Computerphile  7 years ago +14

      The video description is your friend :oP

    • @Arikayx13
      @Arikayx13 7 years ago +3

      The colors are fine, are you sure you aren't having a stroke?

    • @ghelyar
      @ghelyar 7 years ago

      Shouldn't it just be on manual focus and on a tripod in the first place?

  • @sirkowski
    @sirkowski 7 years ago +2

    I think we should talk about Fluttershy.

  • @dinocar2393
    @dinocar2393 6 years ago

    Would it be possible to use maybe an EMP? Or something that would cause a short?

  • @TheAgamemnon911
    @TheAgamemnon911 7 years ago +3

    Can AI loyalty be controlled? Can human loyalty be controlled? I think you have "loyalty" wrongly defined, if the answers to those two questions differ.

  • @Chr0nalis
    @Chr0nalis 7 years ago +35

    In my opinion you will never see a superintelligence in a sandbox, due to the following argument: either a) the AI you've created in the sandbox is not superintelligent, or b) the AI you've created in the sandbox is superintelligent, which means that it will surely realize that it is sandboxed and will not reveal any superintelligent behavior until it is sure that it can get out of the sandbox, by which point it will already be out of the sandbox.

    • @supermonkey965
      @supermonkey965 7 years ago +5

      I'm not sure there is a point in thinking that superintelligence = an evil and arrogant AI whose unique goal is to trick us. The thing here is, a superintelligence born and restrained in a sandbox can't interact with the world, which is, in essence, a bad idea given the way neural networks work. To visualize the problem, it's like confining iron in a concrete box because you fear that after using it to forge an axe, it could harm you. Despite the real danger of the final tool, you have confined something completely useless in a box.

    • @krzr547
      @krzr547 6 years ago

      Hyorvenn But what if you have a perfect simulation of the real world? Then it should work in theory right?

    • @satibel
      @satibel 6 years ago

      what if it is super-intelligent in a simulated world that's close but not quite the same as ours?

  • @YogeshPersonalChannel
    @YogeshPersonalChannel 5 years ago +1

    I was impressed by your videos on Computerphile, especially how well you articulate and exemplify the concepts. So I tried finding research papers by Robert Miles on Google Scholar. I only found 2, and none related to AI safety. I think I am on the wrong profile. Could you please share links to some of your work?

  • @jd_bruce
    @jd_bruce 7 years ago +1

    This video gets to a deeper point I've always argued; autonomy comes with general intelligence. You cannot expect to make sure it behaves only the way you want it to behave if you've programmed it to have original thought processes and a high-level understanding of the world which can evolve over time. You cannot have your cake and eat it too; if we create self-aware machines they will be entitled to the same rights we assign to any other self-aware being. A parent doesn't give birth to a child and then treat it like an object with no rights; they must be willing to accept the consequences of their actions.

  • @2ebarman
    @2ebarman 7 years ago +19

    I thought about this a while. I'm no expert on AI security, or much of anything for that matter, but it seems to me that one crucial piece of security has to be a multiplicity of AIs and a built-in structure to make them check on each other. That in turn would lead to something like internalized knowledge (in those AIs) of being watched over by others. A superintelligence might rely on being able to devise a long-term strategy that humans can't detect, but another superintelligence might detect it. A decision to oppose some sort of detected strategy has to rely on some other built-in safeguard. Why would two or more superintelligences just start working together on a plan to eliminate humans, for example? They might, but it would be considerably less likely* than one superintelligence coming up with that plan.
    *largely a guess by intuition, I might be wrong
    ps
    A multiplicity of superintelligences would lead to something like a society of those entities. That in turn would add another layer of complexity to the issue at hand. Then again, it seems to me that diving into that complexity will be inevitable in one way or another.
    Q: has anyone of importance in that field talked about such a path as being necessary?

    • @jan.tichavsky
      @jan.tichavsky 7 years ago +2

      That's another way to view the strategy of merging humans with AI. We will become the superintelligence while keeping our identity (and perhaps developing some higher collective mind, aka Borg?). We'll try to keep ourselves in check like people do now all over the world. Doesn't work nicely but it works somehow.

    • @2ebarman
      @2ebarman 7 years ago

      I know that what I last said means that if my argument is wrong, I contradict my first post here. I assume that the argument is correct and that there is some sort of synergetic element that comes from keeping superintelligence as many entities. There must be some candy for a superintelligence to keep itself divided into many individual entities.

    • @jl1267
      @jl1267 7 years ago +3

      On the other hand, you just increase your chances that a super-intelligent AI goes evil, as the number of them goes up.
      And if just one of them turns, could it convince the other super-intelligent AIs to help it? We'd have no chance.
      It's best to just not create super-intelligent AI. What do we need it for? Honestly?

    • @2ebarman
      @2ebarman 7 years ago +1

      +James Lappin, perhaps it might happen, but I can't see any other viable defense.
      Superintelligence will probably be created. The economic benefits are *huge* at every improvement towards it. AI can boost productivity to levels hard to imagine right now.
      Besides the financial benefit, there is also the military side. The side that has half the superintelligence has a significant advantage over the side that opts out of AI development. Let's take the example where US tests have shown that in air combat simulations, even a very simple AI can outperform skilled human pilots. And an AI won't die when it's blown up, creating the need to deliver unpleasant news to the public. And of course, the better the AI is, the better it will perform. So there is the roar for superintelligence right there.
      Besides that, there is the human side. AI can contribute a lot to taking care of elderly people, for example. In the aging western world, this would be a big thing. In addition, AI can have a massive impact on healthcare for everyone; much of the diagnostics can be automated, improving quality of life for everyone. And again, the better the AI is, the better effects it can have here. The design and testing of drugs can be largely automated, etc.
      At one point people might stop and say: that's it, we won't go any further with that AI thing. But then some other people, some other country, won't stop, and superintelligence just keeps coming closer and closer.

    • @jl1267
      @jl1267 7 years ago +1

      If it can be automated, it can easily be manipulated. There is no need to create something like this. All countries should agree never to build it. I don't trust the governments of the world to do the right thing, though.

  • @Alorand
    @Alorand 5 years ago +3

    Best way to limit AGI power: only let it post YouTube comments.

    • @antonf.9278
      @antonf.9278 4 years ago

      It will open-source itself and take control the moment someone runs it in an unsafe way

  • @Sami_TLW
    @Sami_TLW 7 years ago

    Super fascinating stuff. :D

  • @ebencowley8363
    @ebencowley8363 6 years ago

    I assumed sandboxing meant running the AI in a simulated environment to see what it does. Would this be a good way of testing whether or not we've created a safe one?

  •  7 years ago +11

    that neck though.

  • @cool-as-cucumber
    @cool-as-cucumber 7 years ago +4

    Those who think that sandboxing is the solution are clearly not able to see the scale of the problem.

  • @sebbes333
    @sebbes333 6 years ago +2

    4:50 Basically, if you want to contain an AI your defenses ALWAYS have to work, but the AI only needs to get through ONE time.
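
    Put as toy arithmetic (a sketch; the per-attempt success probability is invented for illustration), the asymmetry looks like this:

    # Containment must hold on every attempt; escape only has to work once.
    # Assume, purely hypothetically, independent attempts that each succeed
    # with probability p.
    p = 0.001
    for attempts in (1, 100, 1_000, 10_000):
        still_contained = (1 - p) ** attempts
        print(f"after {attempts:>6} attempts: P(still contained) = {still_contained:.5f}")

    Even a one-in-a-thousand chance per attempt makes eventual escape near-certain given enough attempts.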

  • @goeiecool9999
    @goeiecool9999 7 years ago +2

    I was going to post a comment about how the white balance and exposure change throughout the interview, but then I realised that there's no separate camera man and that there are probably clouds going in front of the sun or something. Having a conversation while changing the exposure settings on a camera must be hard.

  • @frosecold
    @frosecold 7 years ago +7

    There is a Fluttershy pic in the background, true nerd spotted xD (Y)

  • @ansambel3170
    @ansambel3170 7 years ago +17

    There is a chance that to achieve human-level intelligence, you have to operate at such a high level of abstraction that you lose the benefits of AI (like super-fast calculation of thousands of options)

    • @michaelfaraday601
      @michaelfaraday601 4 years ago

      Ansambel best comment

    • @kimulvik4184
      @kimulvik4184 4 years ago +4

      I think that's a very anthropomorphic way to look at it, even anthropocentric. As was said in a different video on this topic, talking about cognition as a "level" that can be measured in one dimension is nonsensical. In my opinion it is much closer to the truth to think about it as a volume in a multi-dimensional space, where each vector represents a characteristic an intelligence might comprise. The number of dimensions, or their scale, is immeasurable, possibly even infinite. The point is that all possible human intelligences inhabit only a relatively small subset of the much larger space. How we design an AI determines where it is placed in this space, and it need not be tangential to human intelligence in any direction.

    • @jh-wq5qn
      @jh-wq5qn 3 years ago

      If you add a chatbot to a calculator, the calculator does not lose the ability to calculate. As mentioned by the above comment, this is anthropomorphizing things, as we humans have a limited amount of knowledge and ability on any given topic since we have a finite brain. A computer-based intelligence has no problem expanding, or even creating faster silicon on which to run. We picture a human jack of all trades as a master of none, but an AI jack of all trades might actually be master of all.

  • @SentientMeatbag
    @SentientMeatbag 7 years ago +1

    0:33 Such a subtle way to call people idiots.

  • @maxmusterman3371
    @maxmusterman3371 7 years ago

    Where do I start reading up on AI research papers? Where do I find those papers and how can I decide which papers are essential to understanding the topic? I might just start by reading the 'Concrete Problems in AI Safety' paper...

  • @PaulBrunt
    @PaulBrunt 7 years ago +4

    Interesting, although this video brings up the question of what happens to AI safety when AI ultimately ends up in the hands of YouTube commenters :-)

    • @ThePhantazmya
      @ThePhantazmya 7 years ago

      My burning question is can AI tell real news from fake news. Seems like one of those danger points if it assumes all data to be true.

  • @spicytaco2400
    @spicytaco2400 7 years ago +6

    I feel that before we can ever create AI, we need to learn a lot more about how our own intelligence works.

    • @seditt5146
      @seditt5146 6 years ago

      You just made me bang my head on the wall captain obvious!

    • @evannibbe9375
      @evannibbe9375 6 years ago

      Other researchers have commented similarly that all AI today is just a "fitting the curve" algorithm.
      Yet others have realized that metacognition (asking questions about its own nature, without being prompted by humans of course) is currently impossible for AI today.

  • @BigKevSexyMan
    @BigKevSexyMan 7 years ago

    Can you do a video where you go over suggested reading?

  • @General12th
    @General12th 7 years ago

    Rob Miles just called all of his subscribers arrogant and short-sighted.
    I like.

  • @gnagyusa
    @gnagyusa 7 years ago +3

    The only solution to the control problem is to *merge* with the AI and become cyborgs.
    That way, we won't be fighting the AI.
    We *will be* the AI.

    • @jimkd3147
      @jimkd3147 7 years ago +1

      Or you will become part of the problem, which is always nice. :)

    • @doublebrass
      @doublebrass 6 years ago +2

      Cyborgization is really unfeasible and will certainly be preceded by AGI, so proposing it as a solution doesn't make sense. Cyborgizing with an AGI requires its existence anyway, so we would still need to solve these problems first.

    • @MunkiZee
      @MunkiZee 6 years ago

      I'm gonna black up as well

    • @spaceanarchist1107
      @spaceanarchist1107 2 years ago

      I'm in favor of getting cyborgized, but I also think it is important to make sure that we will be registered as citizens and members of the AI community, rather than just mindless cogs in the machine. The difference between being citizens or serfs.

  • @DamianReloaded
    @DamianReloaded 7 years ago +6

    Well _people are people_. We always talk as if we knew what we were saying. Politics? Sports? Economy! Meh, we got all that figured out! ^_^

  • @JP-re3bc
    @JP-re3bc 7 years ago

    Please make more Rob Miles AI videos!!!

  • @saeedgnu
    @saeedgnu 1 year ago +1

    Ironically, the best thing we can do as non-experts in AI might be to "down-trend" it: not talk about it, or even not use it given a choice, so people don't get hyped as much, companies won't rush into it as much, and we all have more time to figure it out. Leave the testing to researchers; they can do it more effectively.

    • @XHackManiacX
      @XHackManiacX 1 year ago

      No shot chief.
      The companies that are working on it have seen that they can make money by selling it to the corporations that can make money from using it.
      There's no stopping it now just from regular people not talking about it.
      In fact, that might be just what they'd like.
      If the general public stops talking about it (and working on open source versions) then they can have all the power and we can't!

  • @sharkinahat
    @sharkinahat 7 years ago +17

    You should do one on Roko's basilisk.

    • @RoboBoddicker
      @RoboBoddicker 7 years ago +9

      You fool! You'll doom us all!

    • @TheLK641
      @TheLK641 7 years ago +7

      Well, we're all already doomed by this comment... may as well doom the others !

    • @AexisRai
      @AexisRai 7 years ago +4

      DELET THIS

    • @TheLK641
      @TheLK641 7 years ago +1

      It's already too late. If Grzegorz deleted his post, then fewer people would know about the basilisk, which means that fewer people would work towards making it a reality, which means that it would come later, so we're all doomed to an infinite simulated torture, nothing can change that, nothing. Except WW3, because then there wouldn't be a basilisk. Not worth it though.

    • @AutodidacticPhd
      @AutodidacticPhd 7 years ago +1

      Thelk's Basilisk, an AI that knows that an AI torturing virtual people for eternity is actually malevolent, so it instead tortures all those who did not endeavour to bring about WW3 and prevent its own creation.

  • @npsit1
    @npsit1 7 years ago +16

    I guess the only way you could keep an AI from escaping is doing what he said: put it in a box. Run it on a machine that is not connected to anything else, in a sound- and EM-isolated copper screen room with no networking and no other computers nearby. But then again, what is the point if you can't get data in or out? What is its use?

    • @YouHolli
      @YouHolli 7 years ago +1

      Sneaker network?

    • @DustinRodriguez1_0
      @DustinRodriguez1_0 7 years ago +19

      Then the primary route to escape becomes tricking the humans into letting it out, which would probably be easier than most other challenges it might face.

    • @peteranderson037
      @peteranderson037 7 years ago +11

      Assuming that this is a neural networking general AI, how do you propose to train the AI? Unless you propose to fit a completely perfect model of reality inside the sandbox, then it can't be a general AI. General intelligence requires interacting with reality or a perfect model of reality. Without this it doesn't gain intelligence. As you stated, there's no point if you can't get data in or out of the box. It's just a neural network sitting there with no stimuli, doing nothing.

    • @gajbooks
      @gajbooks 7 years ago

      Have a person walk in and ask it things, and hope it doesn't know how to re-program brains.

    • @joe9832
      @joe9832 7 years ago +1

      Give it books. Many books. In fact, give it a book which explains the difference between fiction and non-fiction first, for obvious reasons.

  • @expectnull
    @expectnull 7 years ago

    Thank you!

  • @queendaisy4528
    @queendaisy4528 3 years ago +2

    I'm probably wrong but "I think I've solved it" in that I've come up with something which looks like it should fix it and I can't see why it wouldn't.
    Why not have a double-bounded expected utility satisficer?
    So tell the stamp collecting device:
    "Look through all the possible outputs and select the simplest plan which has at least a 90% probability of producing exactly 100 stamps, then implement that plan".
    It won't turn itself into a maximiser (because the maximiser will want more than 100 stamps) and it won't take extreme actions to verify that it has 100 stamps because once it's 90% confident it stops caring.
    I would imagine someone smarter than me has already thought of this and proven that it would go wrong, but... why? This seems like it should work.
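
    Read literally, the selection rule being proposed is something like this sketch (Python 3.10+, with invented toy types; "simplest" is taken here as shortest plan description):

    # A double-bounded satisficer: among plans the agent itself rates at
    # least 90% likely to yield exactly 100 stamps, pick the simplest.
    from dataclasses import dataclass

    @dataclass
    class Plan:
        description: str
        p_exactly_100: float  # the agent's own probability estimate

    def choose(plans: list[Plan]) -> Plan | None:
        viable = [p for p in plans if p.p_exactly_100 >= 0.90]
        # Satisficing: once the bar is met, prefer simplicity over
        # higher probability or more stamps.
        return min(viable, key=lambda p: len(p.description), default=None)

    Note everything hinges on p_exactly_100 being the agent's own estimate of its own plan, and on "simplest" actually correlating with "low-impact"; neither comes for free.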

  • @retepaskab
    @retepaskab 7 years ago +14

    It would be interesting if you presented counter-examples to these simplistic solutions. Please spare us the time of reading all the literature. Why can't we sandbox it? Is that below the design criteria for usefulness?

    • @satannstuff
      @satannstuff 7 years ago +1

      You know you can just read the comments of the previous AI-related videos and find all the counter-examples you could ever want, right?

    • @jimkd3147
      @jimkd3147 7 years ago +1

      I made a video explaining why these comments don't provide valid answers. Check the newest video on my channel.

    • @gJonii
      @gJonii 6 years ago +6

      He did explain it though. If you sandbox it properly, you've got a sandbox. The AI inside it can't interact with the outside world, so you're safe. But it also means the AI inside it is perfectly useless; you could just as well have an empty box. If you try to use its intelligence to change the world, like taking investment tips from it, you are the security breach. You become the vector with which the AI can breach the sandbox and interact with the world.

    • @KipColeman
      @KipColeman 5 years ago

      Surely you could read the logs of what happened in the sandbox and learn something...? I think people too often think of "sandboxing" (i.e. a walled-off test environment) as "blackboxing" (i.e. impossible to know the inner workings).

    • @MrCmon113
      @MrCmon113 5 years ago

      @@KipColeman
      No. That's like a dozen ants trying to encircle a human.

  • @S7evieRay
    @S7evieRay 7 years ago +3

    Just introduce your AIs to cannabis. Problem Solved.

  • @houserespect
    @houserespect 7 years ago +1

    Can you please try to figure out some AI that just focuses the camera on the person. Please.

  • @timothymclean
    @timothymclean 7 years ago +1

    The biggest problem I've seen with basically every "AI is dangerous" argument I've heard-here or elsewhere-is that it seems to assume that because we can't make it physically impossible for an AI to go wrong, we can't create a safe AI. So what if it's not theoretically impossible for the AI to connect itself to computers we don't connect it to, upload itself onto some undefined computer in the Cloud, and start doing vaguely-defined Bad Things from an apparently-unassailable place? If you can't actually provide a plausible way for the AI to do so, why worry?
    The second-biggest problem is that most arguments implicitly start with "assume an infinitely intelligent AI" and work from there. "So what if we can't figure out how to make this work? The AI probably could!"

  • @jansn12
    @jansn12 7 years ago +9

    I see Fluttershy!

  • @Xelbiuj
    @Xelbiuj 7 years ago +7

    Ehh, an AI in a box would still be useful for designing stuff, economics, etc.; you don't have to give it direct control over anything.

  • @Meddlmoe
    @Meddlmoe 7 years ago

    I would really like his list of recommended readings

  • @jonasvader1977
    @jonasvader1977 6 years ago

    Is there a way to make a general AI okay with being altered, and with being turned off?

  • @perryizgr8
    @perryizgr8 7 years ago +12

    5:40 your camera's focus is bad and you should feel bad.

    • @Petra44YT
      @Petra44YT 6 years ago +1

      YOUR focus is bad because you are not focusing on the video.

  • @DovydasAL
    @DovydasAL 7 years ago +22

    People be like cool video when it came up a minute ago.

    • @sykotheclown1
      @sykotheclown1 7 years ago +2

      cool video

    • @kimberlyforti7596
      @kimberlyforti7596 7 years ago +1

      What's the problem? A video still remains a video.

    • @themeeman
      @themeeman 7 years ago +4

      DovydasAL
      The point is they haven't seen it yet, so they make prejudgements

    • @sc4r3crow28
      @sc4r3crow28 7 years ago +2

      On the other hand, every Computerphile or Numberphile video is cool

    • @shuriken188
      @shuriken188 7 years ago +2

      YouTube runs on a decentralised network of servers which distribute information on views, likes, etc. at varying rates depending on their distance. It's likely the server you watched the video from only just received the video despite other people having finished it and commented on other servers.

  • @lordkekz4
    @lordkekz4 5 years ago

    @ 2:45-2:51 This actually happened to me when I thought about the Halting Problem a few days ago ;)

  • @Jader7777
    @Jader7777 7 years ago

    It's so easy to make a safe superintelligent AI; you just need to give it... love.

  • @Mezurashii5
    @Mezurashii5 6 years ago +3

    Why not build an AI that is meant to design a safe general purpose AI? lol

  • @gyroninjamodder
    @gyroninjamodder 7 years ago +6

    >Sandboxing it is idiotic
    Since you know a lot of people are suggesting something like this, don't you think you should have addressed why you think it is idiotic?
    For example, if we look at the problem of the stop button, this problem is best suited to be solved by a sandbox. Let's say the AI is running on the computer. Then we can just have another program on the computer which just listens for the stop command, either through a button or a network packet. Once the program gets the stop command it just sends a SIGSTOP to the AI. They wouldn't even need to be separate programs; it could even be the same program, except running in a different thread, for example. If you don't give the AI tools to block network packets (why would it need this functionality) or listen in on the button (we just don't give it as an input), it would not be able to stop it. Even though this would work, you call it "idiotic."
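
    The watchdog described above is easy enough to sketch (Python on a POSIX system; ai_main.py and the port number are hypothetical). The replies below explain why this only helps while the AI can neither touch the watchdog nor copy itself off the machine:

    # Run the "AI" as a child process; freeze it when anything connects to a
    # local stop socket. SIGSTOP cannot be caught or blocked by the target.
    import os, signal, socket, subprocess

    ai = subprocess.Popen(["python3", "ai_main.py"])  # hypothetical AI entry point

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 9999))
    srv.listen(1)

    conn, _ = srv.accept()            # block until any "stop" connection arrives
    os.kill(, signal.SIGSTOP)   # suspend the AI process immediately
    print("AI suspended; send SIGCONT to resume.")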

    • @gJonii
      @gJonii 6 years ago +2

      He did explain it though.
      Regarding the sandbox, if you sandbox a superintelligence properly, it's entirely unable to interact with the world around it. But that means you can never ask it questions or request it to take actions. This safe sandbox would mean the AI would be as useful to you as a rock. If you try to cheat, and ask it questions regardless, you yourself become the security breach. You become the vector with which the AI can interact with the world, so your sandbox falls apart.
      With the stop button... like, you could just fry your network connection. You could have the computer used to send the stop signal crash. You could make a copy of the AI elsewhere. You could persuade humans to mess with that safety protocol. Basically, for the stop button to work, you'd be relying on your ability to outsmart a superintelligence indefinitely. It doesn't sound like a plan a smart person would take.

    • @nanoduckling
      @nanoduckling 5 years ago

      Your sandbox has to be perfect, because it is very likely a super-intelligence will exploit any bug or weakness, because it can easily find them. You suggested having a program running on the same machine as the AI operate the stop button, so it is now one buffer overflow away from total freedom. Worse, the hardware has to be perfect; a defect in the CPU like Meltdown could be all the AI needed. You also can't easily block input from a super-intelligence; it can use its model of reality to estimate states of the world even without direct input. Sure, it doesn't directly know the state of the button, but it can figure out there is one, and it can figure out what might get it pressed. If your system is currently perfect then sure, it will behave itself until it finds a way to defeat your trap. So it plays along and plays nice until it convinces you to let it optimize the chip design for the CPU it is running on and builds in a super subtle bug that lets it defeat the sandbox. You install this new chip and again you are off to the races. So maybe you don't let it design its own hardware. It still can interact with the world, since otherwise it is useless. Say we decide that it is only allowed artistic pursuits, so it designs video games. And those games drive GPU design in a direction which increases the chance of a bug being introduced that it can exploit. If the AI has goals different from ours then disabling the button is an instrumental goal, and whatever the AI does will be done in such a way as to maximize the chances that the button is eventually disabled. If it can't hack the software then it will hack the hardware; if it can't hack the hardware it will hack the manufacturing process for the hardware; if it can't do that it will hack people. It is patient and clever and will find a way, and will hide from you that it is doing it.
      Any strategy to defeat a super-intelligence contingent on outsmarting it is idiotic, and sandboxing assumes you have a better understanding of the sandbox than the thing you are putting in it. This is false by definition for a super-intelligence.

    • @MrCmon113
      @MrCmon113 5 years ago

      The ants are encircling your foot, asking themselves where you could possibly escape to....
      The problem is not physically turning off the computer that runs the AGI; the problem is that you wouldn't even want to. You wouldn't know when to press stop. You'd defend that button with your life. That's how intelligence works.

  • @nO_d3N1AL
    @nO_d3N1AL 7 years ago +1

    Interesting shelf - loads of Linux/UNIX and web programming books... and a Fluttershy picture!

  • @theatheistpaladin
    @theatheistpaladin 6 years ago +1

    Sandboxing to check if it is benevolent.