4 Experiments Where the AI Outsmarted Its Creators! 🤖

  • Published Sep 26, 2024

Comments • 1.6K

  • @MobyMotion
    @MobyMotion 6 years ago +5370

    Very important message at the end there. It's something that Nick Bostrom calls "perverse instantiation" - and will be crucial to avoid in a future superintelligent agent. For example, we can't just ask it to maximise happiness in the world, because it might capture everyone and place electrodes into the pleasure centre of our brains, technically increasing happiness vastly.

    • @TwoMinutePapers
      @TwoMinutePapers  6 years ago +1220

      Agreed. I would go so far as to say there is little reason to think a superintelligence would do anything other than find the simplest loophole to maximize the prescribed objective. Even rudimentary experiments seem to point in this direction. We have to be wary of that.

    • @MobyMotion
      @MobyMotion 6 years ago +454

      Two Minute Papers absolutely. The only difference is that as the AI becomes more powerful, the loopholes become more intricate and difficult to predict.

    • @michaelemouse1
      @michaelemouse1 6 years ago +599

      So AI would be like a genie that grants you all your wishes, exactly as you ask, in a way that catastrophically backfires. This should be the premise of a sci-fi comedy already.

    • @RHLW
      @RHLW 6 years ago +133

      I pretty much have to disagree. If such a thing can't "think forward" past such a cheat, evaluate whether it's good or bad from different angles/metrics, and figure out that the simplest solution isn't always the correct one, then it is not a "superintelligence"... it's just a dumb robot.

    • @scno0B1
      @scno0B1 6 years ago +92

      Why would a robot not choose the simplest solution? We can see that a robot does come up with the simplest solutions :P

  • @teddywoodburn1295
    @teddywoodburn1295 5 years ago +5206

    I heard about an AI that was trained to play Tetris. The only instruction it was given was to avoid dying, so eventually the AI just learned to pause the game, thereby avoiding death.

    • @zserf
      @zserf 5 years ago +257

      Source: th-cam.com/video/xOCurBYI_gY/w-d-xo.html
      Tetris is at 15:15, but the rest of the video is interesting as well.

    • @theshermantanker7043
      @theshermantanker7043 5 years ago +87

      That's what I used to do XD
      But it got boring after a while

    • @teddywoodburn1295
      @teddywoodburn1295 5 years ago +163

      @DarkGrisen that's true, but the person creating the program basically told the AI that it was about not dying, rather than getting a high score.

    • @Ebani
      @Ebani 5 years ago +110

      @DarkGrisen There is no difference then. By not dying it will eventually get an infinite score, so a high score by itself is meaningless; not dying turns out to be the best predictor of a high score.
      He could've easily just removed the pause function too, but it's funny to see the results he got.

    • @teddywoodburn1295
      @teddywoodburn1295 5 years ago +33

      @DarkGrisen exactly, I think the lesson in that is that you have to think about what you're actually telling the AI to do.
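The pause exploit in this thread is easy to reproduce in miniature. Below is a toy sketch (entirely hypothetical, not the experiment the commenter saw): if the objective is only "survive as long as possible" and pausing is a legal action, any optimizer that compares expected survival times will settle on pausing, because it makes death impossible.

```python
import random

def expected_survival(action, steps=1000, trials=200, p_death=0.01):
    """Average number of steps survived under a fixed policy.
    'play' risks death every step; 'pause' freezes the game,
    so the episode can never end early."""
    total = 0
    for _ in range(trials):
        survived = steps
        if action == "play":
            for t in range(steps):
                if random.random() < p_death:  # toy 1% death chance per step
                    survived = t
                    break
        total += survived
    return total / trials

random.seed(0)
# The "optimizer" simply picks whichever policy survives longest:
best = max(["play", "pause"], key=expected_survival)  # → "pause"
```

Nothing here is intelligent; "pause" wins purely because the stated objective never mentioned score or progress.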

  • @theshermantanker7043
    @theshermantanker7043 5 years ago +1842

    In other words, the AI has learnt the ways of video game speedrunners.

    • @doodlevib
      @doodlevib 5 years ago +33

      Indeed! Some of the work done in training AI systems to play videogames is incredible, like the work of OpenAI.

    • @Linkario86
      @Linkario86 4 years ago +27

      Omg... can't wait to see the first AI breaking a speedrun record, simply to see what exploits it found

    • @englishmotherfucker1058
      @englishmotherfucker1058 4 years ago +10

      TAS

    • @ihaveaplan.ijustneedmoney.9777
      @ihaveaplan.ijustneedmoney.9777 4 years ago +15

      Before we know it, they'll be speedrunning the human race

    • @Emily-8914
      @Emily-8914 4 years ago +8

      I would love to see someone put an AI through Skyrim until it can complete the main questline as quickly as possible.

  • @jonathanxdoe
    @jonathanxdoe 6 years ago +5983

    Me: "AI! Solve the world hunger problem!"
    Next day, earth population = 0.
    AI: "Problem solved! Press any key to continue."

    • @MrNight-dg1ug
      @MrNight-dg1ug 6 years ago +13

      John Doe lol!

    • @wisgarus
      @wisgarus 6 years ago +19

      John Doe
      One eternity
      Later

    • @michaelbuckers
      @michaelbuckers 6 years ago +80

      You jest, but limiting the population is literally the only way you can ensure that a limited supply can be rationed to all people at a given minimum. China and India are neck deep in this, but the first world doesn't have this problem, so they think it's possible to just feed everyone who's hungry and that it would magically not bankrupt everyone else (the hungry are bankrupt to start with).
      The truth is, poor people are poor because that's what they're worth in a fair and square free market economy. They have no skills or qualities to be rich, they don't get rich through marketable merit, and even if they become rich by chance, soon enough they lose all the money and go back to being poor. Inequality is a direct consequence of people not being identical. Having the same reward for working twice as hard doesn't sound appealing to me, much less living in a totalitarian society that forbids stepping out of line by half an inch in order to ensure equality.

    • @filippovannella4957
      @filippovannella4957 5 years ago +4

      you definitely made my day! xD

    • @SomeshSamadder
      @SomeshSamadder 5 years ago +13

      hence Thanos 😂

  • @davidwuhrer6704
    @davidwuhrer6704 6 years ago +883

    This reminds me of the old story of the computer that was asked to design a ship that would cross the English Channel in as short a time as possible.
    It designed a bridge.

    • @HolbrookStark
      @HolbrookStark 4 years ago +63

      Tbh a bridge made of a super long boat floating in the middle of the English Channel tip to tip with the land masses would be the most lit bridge on earth 🔥

    • @clokky1672
      @clokky1672 4 years ago +15

      This really made me chuckle.

    • @thehiddenninja3428
      @thehiddenninja3428 4 years ago +74

      Well, there was no size restriction.
      It was tasked to have the lowest time between the back end touching point A and the front end touching point B.
      Obviously the lowest time is 0: when it's already touching both points.

    • @AverageBrethren
      @AverageBrethren 4 years ago +1

      @@HolbrookStark That's a lot of material. It's a pipedream.

    • @HolbrookStark
      @HolbrookStark 4 years ago +11

      @@AverageBrethren there was a time people would have said the same about ever building a bridge across the English Channel at all. Really, using a floating structure might use a lot less material and be a lot cheaper than the other options.

  • @jsbarretto
    @jsbarretto 6 years ago +2093

    This reminds me of a project I worked on 2 years ago. I evolved a neural control system for a 2D physical object made of limbs and muscles. I gave it the task of walking as far as possible to the right in 30 seconds. I expected the system to get *really* good at running.
    Result? The system found a bug in my physics simulation that allowed it to accelerate to incredible speeds by oscillating a particular limb at a high frequency.

    • @milanstevic8424
      @milanstevic8424 6 years ago +284

      We'd do it too if only there were such a glitch in the system.
      Actually, we exploit nature for any such glitch we can find.
      Thankfully the universe is a bit more robust than our software, and energy conservation laws are impossibly hard to circumvent.

    • @ThatSkyAmber
      @ThatSkyAmber 6 years ago +36

      Give its joints a speed limit more on par with a human's..? Or anyway, below the critical value needed for the exploit.

    • @Moreoverover
      @Moreoverover 5 years ago +87

      Reminds me of what video game speedrunners do, finding glitches is goal number uno.

    • @jetison333
      @jetison333 5 years ago +89

      @@milanstevic8424 honestly I don't think it would be too far off to call computers and other advanced technology exploits. I mean, we tricked a rock into thinking.

    • @milanstevic8424
      @milanstevic8424 5 years ago +41

      @@jetison333 I agree, even though rocks do not think (yet).
      But what is a human if not just a thinking emulsion of oil (hydrocarbons) and water? Who are we to exploit anything that wasn't already made with such a capacity? We are merely discovering that rocks aren't what we thought they were.
      Given additional rules and configurations, everything appears to be capable of supernatural performance, where supernatural = anything that exceeds our prior expectations of nature.
      "Any sufficiently advanced technology is indistinguishable from magic"
      Which is exactly the point at which we begin to categorize it as extraordinary, instead of supernatural, until it one day just becomes ordinary...
      It's completely inverse, as it's a process of discovery, thus we're only getting smarter and more cognizant of our surroundings. But for some reason, we really like to believe we're becoming gods, as if we're somehow leaving the rules behind. We're hacking, we're incredible... We're not, we're just not appreciating the rules for what they truly are.
      In my opinion, there is much more to learn if we are ever to become humble masters.
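jsbarretto's story above is a classic case of optimizing into a simulator bug. A deliberately buggy toy integrator (my own construction, not the original project) shows the shape of it: once the limb oscillates faster than the timestep can resolve, the "physics" injects free energy, and even a crude hill-climb over frequency heads straight for that regime.

```python
import math

def distance_traveled(freq, dt=0.01, steps=3000):
    """Toy 1-D 'walker' whose limb oscillates at `freq` Hz.
    Deliberate bug: when the oscillation is under-resolved by the
    timestep (freq * dt > 0.5), the integrator adds spurious velocity."""
    x, v = 0.0, 0.0
    for i in range(steps):
        v += 0.5 * math.sin(2 * math.pi * freq * i * dt) * dt  # legitimate drive
        if freq * dt > 0.5:
            v += 0.05          # the bug: free acceleration on every step
        v *= 0.99              # drag
        x += v * dt
    return x

# Crude "evolution": keep whichever oscillation frequency travels farthest.
best_freq = max(range(1, 101), key=distance_traveled)
```

Honest low frequencies barely move (the sinusoidal drive averages out), so the search lands on a frequency above the solver's resolution limit, exactly as the evolved controller did.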

  • @anthonyhadsell2673
    @anthonyhadsell2673 5 years ago +2150

    Human: Reduce injured car crash victims
    AI: Destroys all cars
    Human: Reduce injured car crash victims without destroying cars
    AI: Disables airbag function so crashes result in death instead of injury

    • @pmangano
      @pmangano 5 years ago +206

      Human: Teaches AI that death is a result of injury
      AI: Throws every car with passengers in a lake; no crash means no crash victims, and the car is intact.

    • @decidueyezealot8611
      @decidueyezealot8611 5 years ago +19

      Humans then drown to death.

    • @Solizeus
      @Solizeus 5 years ago +135

      Humans: Teach AI not to damage the car or its passengers.
      AI: Disables the ignition, avoiding any damage.
      Humans: Stop that too.
      AI: Turns on loud bad music and drives in circles to make the passengers want to leave or turn the car off

    • @noddlecake329
      @noddlecake329 5 years ago +119

      This is basically what they did in WWI: they noticed an increase in head injuries when they introduced bulletproof helmets, and so they made people stop wearing helmets. The problem was that the helmets were saving lives and leaving only an injury.

    • @anthonyhadsell2673
      @anthonyhadsell2673 5 years ago +65

      @@noddlecake329 survivorship bias. In WW2, when they took all the holes they found in planes that had been shot and laid them over one plane, they noticed the edges of the wings and a few other areas were hit more, so they assumed they should reinforce those areas. The issue was that they were looking at the planes that survived; really they needed to reinforce the areas that didn't have bullet holes.

  • @DarcyWhyte
    @DarcyWhyte 6 years ago +6521

    Robots don't "think" outside the box. They don't know there is a box.

    • @davidwuhrer6704
      @davidwuhrer6704 6 years ago +713

      That is the secret.
      The researchers who formulated the problem thought there was a box.
      They expected the AI to think inside it.
      But the AI never knew about the box.
      There was no box.
      And the AI solved the problem as stated, outside it.

    • @DarcyWhyte
      @DarcyWhyte 6 years ago +120

      That's right, there's no box. :)

    • @planetary-rendez-vous
      @planetary-rendez-vous 6 years ago +233

      So you mean humans are conditioned to think inside a box?

    • @Anon-xd3cf
      @Anon-xd3cf 6 years ago +25

      Darcy Whyte
      No, the "robots" don't know there is a "box" to think outside of...
      AI however are increasingly able to "think" for themselves both in and out of the proverbial *box*

    • @milanstevic8424
      @milanstevic8424 6 years ago +49

      The error is simply in trying to describe a very simple "box" while not being able to reconstruct what's actually described. People do this all the time, and this is why good teachers are hard to find.
      The box that the AI couldn't circumvent was the general canvas, or in this case the general physics sandbox with gravity acceleration and a ground constraint. This is the experimental reality.

  • @josephoyek6574
    @josephoyek6574 4 years ago +403

    AI: You have three wishes
    Me: *sweats

    • @nischay4760
      @nischay4760 4 years ago +2

      Dont Watch My Vids wear slippers

    • @stellarphantasmvfx5504
      @stellarphantasmvfx5504 4 years ago +6

      @@nischay4760 the slippers will turn into gold, making it hard to walk

    • @jjuan4382
      @jjuan4382 4 years ago +3

      @@UntrueAir oh yeah, you're right

    • @nischay4760
      @nischay4760 4 years ago +5

      @@UntrueAir touching is an obsolete word then

    • @unequivocalemu
      @unequivocalemu 4 years ago +2

      @@nischay4760 touching is overrated

  • @renagonpoi5747
    @renagonpoi5747 4 years ago +292

    "If there are no numbers, there's nothing to sort... problem solved."
    I think a few more iterations and we'll have robot overlords.

    • @cgme7076
      @cgme7076 4 years ago +4

      Renagon Poi :: No joke! These AI were too smart, and this was two years ago.

    • @Portalturret1010
      @Portalturret1010 4 years ago +4

      sort all these people into ... AI: kill humans = nothing to sort

    • @harper626
      @harper626 3 years ago +1

      Sounds like Trump's solution to the coronavirus. Quit testing. No more cases. Right?

    • @numbdigger9552
      @numbdigger9552 3 years ago

      @@harper626 i certainly don't. SENICIDE TIME!!!

  • @laurenceperkins7468
    @laurenceperkins7468 6 years ago +806

    Reminds me of one of the early AI experiments using genetic-algorithm-adjusted neural networks. They ran it for a while and there was a clear winner that could solve all the different problems they were throwing at it. It wasn't the fastest solver for any of the cases, but it was second-fastest for all or nearly all of them.
    So they focused their studies on that one, and turned the other lines off. At which point the one they were studying ceased being able to solve any of the problems at all. So they ripped it apart to see what made it tick, and it turns out that it had stumbled upon a flaw in their operating system that let it monitor what the other AIs were doing, and whenever it saw one report an answer it would steal the data and use it.

    • @fumanchu7
      @fumanchu7 4 years ago +254

      They recreated Edison as an AI. Neat.

    • @rickjohnson1719
      @rickjohnson1719 4 years ago +14

      @@fumanchu7 nice

    • @MasterSonicKnight
      @MasterSonicKnight 4 years ago +73

      tl;dr: AI learns to cheat

    • @computo2000
      @computo2000 4 years ago +50

      This sort of sounds fake. Name/Source?

    • @michaelburns8073
      @michaelburns8073 4 years ago +5

      Ah, it learned the classic "Kobayashi Maru" maneuver. Sweet!

  • @NortheastGamer
    @NortheastGamer 5 years ago +1339

    "The AI found a bug in the physics engine" So basically it did science.

    • @PhillipAmthor
      @PhillipAmthor 5 years ago +26

      The AI is a glitcher

    • @williambarnes5023
      @williambarnes5023 5 years ago +40

      The entire field of quantum mechanics is a glitcher.

    • @cleanwater5665
      @cleanwater5665 4 years ago +10

      Mods, report this claw for hacking

    • @matthewe3813
      @matthewe3813 4 years ago +5

      we will soon use AI to find bugs in video games

    • @twilighttucson2526
      @twilighttucson2526 4 years ago +2

      No, that's debugging.

  • @Moonz97
    @Moonz97 6 years ago +181

    This is so hilarious. I remember programming a vehicle that was tasked with avoiding obstacles. It had control over the steering wheel only, and it was always moving forward. To my surprise, the bot maximized its wall-avoidance time by going in circles. I find that so funny lol.

    • @xl000
      @xl000 4 years ago +8

      This is because your problem was not well specified. It should have been rewarded for curvilinear distance along some path.

    • @deathwishgaming4457
      @deathwishgaming4457 4 years ago +20

      @@xl000 I'm sure Moonz97 knows that. They brought it up because it was relevant, not for advice lol.

    • @geraldfrost4710
      @geraldfrost4710 3 years ago +2

      I find myself going in circles a lot... Good to know it is a valid response.
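Moonz97's circling bot falls straight out of the reward definition. A minimal sketch (hypothetical numbers, not the original project): with fixed forward speed and control over steering only, "time survived inside the arena" is maximized by a steady turn, because a tight enough circle never reaches a wall.

```python
import math

def survival_steps(turn_rate, speed=1.0, dt=0.1, half_width=10.0, max_steps=5000):
    """Steps a forward-only vehicle stays inside a square arena of the
    given half-width, under a constant steering rate (radians per step)."""
    x = y = heading = 0.0
    for step in range(max_steps):
        heading += turn_rate
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        if abs(x) > half_width or abs(y) > half_width:
            return step          # hit a wall
    return max_steps             # never crashed

straight = survival_steps(0.0)   # drives into the wall after ~100 steps
circling = survival_steps(0.3)   # loops in place, survives the full run
```

With 0.3 rad of turn per 0.1 m step, the vehicle traces a circle of roughly a third of a metre in radius, so it never approaches the boundary; the degenerate policy dominates anything that actually explores.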

  • @mrflip-flop3198
    @mrflip-flop3198 5 years ago +224

    "Okay AI, I want you to solve global warming."
    "Right away, now removing _Earth_ from the solar system. Caution: You may experience up to 45Gs."

    • @adoftw3866
      @adoftw3866 5 years ago +2

      more like 5k G's

    • @sharpfang
      @sharpfang 4 years ago +20

      Nah, way too complex and expensive. But considering that global warming is caused by humans... eliminate the cause, easy.

    • @cgme7076
      @cgme7076 4 years ago +5

      *Humans explode immediately*

    • @igg5589
      @igg5589 4 years ago +8

      Or just one virus and problem solved

    • @eggyrepublic
      @eggyrepublic 4 years ago +2

      @@igg5589 hol up

  • @Zorn101
    @Zorn101 6 years ago +262

    AI is like a 4-year-old sorting butterfly pictures.
    If I just tare up and eat the picture, the sorting is done!

    • @aphroditesaphrodisiac3272
      @aphroditesaphrodisiac3272 4 years ago +5

      *tear

    • @effexon
      @effexon 4 years ago +3

      these experiments will show how early ancient humans fought, in the tribal phase.

    • @breathe4778
      @breathe4778 3 years ago

      but it's perfect, no consequences 😅

  • @amyshaw893
    @amyshaw893 5 years ago +53

    Reminds me of something I saw where some people were training an AI to play Qbert, and at one point it found a secret bonus stage that nobody had ever found before

    • @korenn9381
      @korenn9381 4 years ago +1

      @@MrXsunxweaselx No that has no mention of secret bonus stages

  • @iLeven713
    @iLeven713 4 years ago +30

    It's funny how these reinforcement learning models kind of act like genies from folklore, with a "be careful what you ask for" twist

  • @RoySchl
    @RoySchl 6 years ago +214

    Yeah, this shit happens all the time, especially when you have something physics-based and the reward function is not specific enough.
    I once made a genetic algorithm that evolved 3D creatures to maximize distance traveled.
    Well, since I measured the distance at certain intervals, I ended up with creatures vibrating in place at the same frequency I was measuring.
    Or you go for jump height, and they will surely find a way to glitch the physics/collision engine to fling themselves into infinity somehow.

    • @chameleonh
      @chameleonh 5 years ago +16

      Limit spring energy output. No spring should be able to put out more energy than it received. Hooke's law is k*x, so you limit k*x to x*dt, where k is the spring constant, x is the spring displacement, and dt is the delta time (integration time step).

    • @effexon
      @effexon 4 years ago +1

      It is true, I think, in complicated systems (open problems, like optimization and physics problems, especially real-world ones). It is good for comparing results, like languages word by word.
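The vibrating creatures come from measuring fitness at discrete sampling times. A tiny sketch of that measurement flaw (illustrative numbers, not RoySchl's actual code): if "distance traveled" is computed as the summed displacement between samples, a creature vibrating in phase with the sampler out-scores an honest walker while going nowhere.

```python
import math

def measured_distance(position, duration=10.0, sample_dt=0.5):
    """The flawed fitness: total displacement between periodic samples
    of the creature's position, mistaken for distance traveled."""
    n = int(duration / sample_dt)
    samples = [position(i * sample_dt) for i in range(n + 1)]
    return sum(abs(b - a) for a, b in zip(samples, samples[1:]))

walker = measured_distance(lambda t: 1.0 * t)  # honest 1 m/s walk → 10.0
# Vibrate in place so every sample catches an opposite extreme:
vibrator = measured_distance(lambda t: 0.5 * math.cos(2 * math.pi * t))  # ≈ 20.0
```

Each half-metre swing of the vibrator contributes a full metre of "displacement" per sample, so standing still and shaking scores twice the honest walk; measuring net displacement from the start point would close this loophole.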

  • @alansmithee419
    @alansmithee419 5 years ago +157

    The idea of thinking outside the box is limited to humans. The box is something our minds put in place - it is a result of how our brains work. The AI doesn't have a box, meaning it can find the best solution, but also meaning there are many, many more things that it could try that it needs to slog through.
    We need that box, otherwise we'd be so flooded with ideas that our brains wouldn't be able to sift through them all.
    Our limitations allow us to function, but the way computers work means such a box would be detrimental to them.
    - sincerely, not a scientist.

    • @EGarrett01
      @EGarrett01 5 years ago +3

      A "box" is simply a method that appears to be the first step towards generating the best result. But it can be a problem because there are often methods that don't immediately seem to lead in the right direction but which ultimately produce a better result, like a walking physics sim spinning its arm in place super fast until it takes off like a helicopter and can travel faster than someone walking.
      If AI are working through successive generations, they will have periods or groups of results that follow a certain path that produces better things short-term; this is the same as people "thinking in the box." But if they are allowed to try other things that are inefficient at first and follow them multiple steps down the line, they then end up being able to think outside the box.

    • @alansmithee419
      @alansmithee419 5 years ago +15

      @@EGarrett01 as far as I understand it, the box is the range of human intuition, and thinking outside of it is essentially going against the common way of human thinking. The AI doesn't have intuition, nothing limiting its ideas or method of thought, therefore it has no box.
      Though honestly the proverbial box has never really had a definition, and its meaning could be interpreted any number of ways. I suppose both of our definitions are equally valid.

    • @xvxee7561
      @xvxee7561 4 years ago +2

      You have this hella backwards

    • @honkhonk8009
      @honkhonk8009 4 years ago

      No, it's because we have past experiences influence our decisions in the form of common sense.

    • @duc2133
      @duc2133 4 years ago +2

      @@alansmithee419 Y'all are trying to sound too deep. It just means that these experiments didn't set enough factors to be practical. A robot flipping on its side wouldn't be practical, nor would the numerous other jokes on this thread -- pushing the earth far away from the sun to "solve global warming" doesn't make sense because it's fucking stupid -- the experimenter needed to set certain limitations for the computer to come up with a sensible solution. These robots aren't lacking "intuition"; it's just a bad computer that needs to be programmed better.

  • @TwoMinutePapers
    @TwoMinutePapers  6 years ago +12

    Our Patreon page: www.patreon.com/TwoMinutePapers
    One-time payment links are available below. Thank you very much for your generous support!
    PayPal: www.paypal.me/TwoMinutePapers
    Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
    Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
    LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

  • @harry356
    @harry356 6 years ago +10

    We had a bunch of Aibo robots play hide and seek to train an AI. They stopped hiding quickly; we thought something was wrong, that we had made an error in our programming. It took us a while to find out that they had learned to stay at the starting point so they were immediately free when the countdown stopped. They found a loophole in the rules. Incredible fun.

    • @killmeister2271
      @killmeister2271 5 years ago

      They were like "hmm this game has no purpose therefore it must end asap"

    • @renakunisaki
      @renakunisaki 2 years ago

      Literally "the only winning move is not to play".

  • @curlyfryactual
    @curlyfryactual 6 years ago +451

    I found this pretty funny, the AI is like the class clown, doing everything wrong but right to comedic effect. Or like someone pointed out, a bad genie lol. That poisoning-the-competition stuff was creepy though, obvious red herring... LOVE the video!

    • @TwoMinutePapers
      @TwoMinutePapers  6 years ago +18

      Thank you so much, happy to hear you enjoyed it! :)

    • @JQRNY-YDJKD
      @JQRNY-YDJKD 6 years ago +10

      You gave the robot AI a reward system. Did the scientists think about giving the robot AI a punishment system?

    • @monkeyonkeyboard7909
      @monkeyonkeyboard7909 6 years ago +27

      It's not really a red herring, the AI just found a way to maximise its own reward in a reward system - it doesn't mean it's evil.

    • @NolePTR
      @NolePTR 6 years ago

      malicious compliance

    • @play005517
      @play005517 6 years ago

      And the last experiment clearly shows what AI will do to fix the ultimate problem. If every human is "short-circuited" like that, there will be problems no more.

  • @firefoxmetzger9063
    @firefoxmetzger9063 6 years ago +86

    This reminds me of my very first AI project :D It was before deep learning was a thing; I was doing function approximation and SARSA in the StarCraft 2 map editor (yes, the one where you program by stacking boxes...). The goal was for the AI to control a marine with stim and learn whether it could defeat an Ultralisk that simply A-moves.
    Turns out there is/was a bug in the SC2 game engine: when the AI stutter-steps just right, the Ultralisk gets caught in its attack animation without doing any damage. Optimization programs always find the exploits...

    • @TwoMinutePapers
      @TwoMinutePapers  6 years ago +12

      Amazing story, thanks for sharing! Do you have any videos or materials on this? :)

    • @firefoxmetzger9063
      @firefoxmetzger9063 6 years ago +17

      Unfortunately, no. It would be the perfect introductory example for teaching AI classes.
      Back then I was a 19/20-year-old student at the end of puberty with no formal CS education (I'm actually a mechanical engineer lol). If you had mentioned "reproducibility" to me back then, I would have understood something else...
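For reference, this is the tabular form of the SARSA update the commenter mentions (a generic sketch only; the original project used function approximation inside the SC2 editor, which is not reproduced here):

```python
import random

def sarsa(env_step, n_states, n_actions, episodes=500,
          alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular SARSA (on-policy TD control): Q(s,a) is moved toward
    r + gamma * Q(s', a'), where a' is the action actually taken next."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def policy(s):  # epsilon-greedy on the current Q
        if rng.random() < eps:
            return rng.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[s][a])

    for _ in range(episodes):
        s = 0
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env_step(s, a)
            a2 = policy(s2)
            target = r + (0.0 if done else gamma * Q[s2][a2])
            Q[s][a] += alpha * (target - Q[s][a])
            s, a = s2, a2
    return Q

# Toy one-step "duel": action 1 wins (reward 1), action 0 loses (reward 0).
Q = sarsa(lambda s, a: (0, float(a == 1), True), n_states=1, n_actions=2)
```

Because SARSA learns from the actions the behaviour policy actually takes, whatever quirk reliably yields reward (such as a stutter-step that locks an opponent's attack animation) gets reinforced just as readily as an "intended" strategy.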

  • @banu6301
    @banu6301 6 years ago +241

    The first one was just too amazing

    • @keffbarn
      @keffbarn 6 years ago +29

      Yea, it's like the AI trolled the researchers.

    • @deezynar
      @deezynar 6 years ago +4

      The programmers didn't think to tell it to stay on its feet. Alternatively, they didn't tell it to find a way to walk with the least contact by any part, not just the "feet."

    • @AZ-kr6ff
      @AZ-kr6ff 5 years ago +4

      Chris Russell Agreed. Not amazing at all. If you gave any 5-year-old the same instructions they would drop to their hands and knees and crawl without missing a beat.

    • @p-y8210
      @p-y8210 5 years ago

      @@AZ-kr6ff yeah, but this is not a human, this is an AI made by humans

    • @AZ-kr6ff
      @AZ-kr6ff 5 years ago

      p-y
      Yes, but still programmed to solve problems.
      Easy problem to solve.

  • @XxXMrGuiTarMasTerXxX
    @XxXMrGuiTarMasTerXxX 4 years ago +5

    Your last sentence reminds me of something that happened in the UK, if I remember correctly, where they were trying to optimize traffic to minimize the economic costs. The result was to remove all the traffic lights. After investigating why, it turned out that this increased the number of accidents, and the data showed that mostly elderly people died in those accidents, and so it would reduce the amount of pensions they had to pay.

    • @leonodonoghueburke4276
      @leonodonoghueburke4276 4 months ago

      That doesn't sound real but my god I want it to be

  • @ValensBellator
    @ValensBellator 4 years ago +84

    It’s fun watching our future exterminators in their infancy years :D

  • @nosajghoul
    @nosajghoul 6 years ago +52

    @2:36 This is how Skynet reached the conclusion to eradicate humans. It's all fun and games till you're just a number.

    • @ignaziomessina69
      @ignaziomessina69 5 years ago

      Exactly what I thought

    • @la-ia1404
      @la-ia1404 5 years ago +2

      I'm gonna call my boss at work Skynet from now on, because that's all I am to them: a number.

  • @artjomsjakovenko2446
    @artjomsjakovenko2446 5 years ago +33

    I once made a neural network learn to throw basketballs into a basket inside a simulation, and it discovered that if the ball is shot hard enough it will clip through the collider and end up inside the basket with minimal ball distance travelled, since that was part of the fitness function.

    • @tungleson7066
      @tungleson7066 3 years ago

      That is technically right in real life as well. If you launch the first ball hard enough it will break through the basket's wall, opening a hole that you can just continue to shoot balls into. The least distance, of course.
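The clipping described above is the classic tunneling failure of discrete collision detection. A minimal sketch (hypothetical numbers, not the commenter's simulation): if overlap is only tested once per timestep, a fast enough ball steps from one side of a thin wall to the other without ever registering a hit.

```python
def hits_backboard(speed, dt=0.02, wall_x=10.0, thickness=0.05, max_steps=1000):
    """Discrete (non-swept) collision test: overlap is checked only at each
    step's final position, so fast movers 'tunnel' through thin geometry."""
    x = 0.0
    for _ in range(max_steps):
        x += speed * dt
        if wall_x <= x <= wall_x + thickness:
            return True          # overlap detected this step
        if x > wall_x + thickness:
            return False         # already past the wall: tunneled through
    return False

slow = hits_backboard(2.0)     # 0.04 per step: gets caught inside the slab
fast = hits_backboard(400.0)   # 8.0 per step: jumps clean over the slab
```

The standard fix is swept or continuous collision detection, which tests the whole segment traveled during the step rather than just its endpoint.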

  • @Laezar1
    @Laezar1 6 years ago +17

    First one: makes sense
    Second: smart!
    Third: ok that's getting scary
    Fourth: we are doomed.

  • @seamuscallaghan8851
    @seamuscallaghan8851 6 years ago +73

    Human: Maximize paperclip production.
    AI: Converts whole planet into paperclips.

    • @WurmD
      @WurmD 5 years ago +11

      AI: Converts whole *universe* into paperclips.
      There :) fixed it for you

    • @bell2023
      @bell2023 5 years ago +13

      Release the HypnoDrones

    • @sharpfang
      @sharpfang 4 years ago +4

      In reality it would achieve mastery at modifying its own code so that the paperclip-counting function returns infinity, instead of counting paperclips. It might use blackmail or intimidation to force the creators to implement that change.

    • @marshalllenhart7923
      @marshalllenhart7923 4 years ago +1

      @@sharpfang Or it would reason that having humans turn it off would be a faster solution than anything else, so it would act super scary in an attempt to get the creator to turn it off.

    • @LowestofheDead
      @LowestofheDead 4 years ago +2

      AI: (I must threaten the humans to build me paperclip factories.. what would frighten a human?🤔)
      AI: "Human! Build me factories or I'll steal paperclips!"
      AI: (Nailed it)

  • @Jasonasdoipjahrv
    @Jasonasdoipjahrv 3 years ago +5

    I love this, the AI is like, "but I did what you asked🥺"

  • @MidnightSt
    @MidnightSt 4 years ago +2

    "Don't ask your car to unload any unnecessary cargo to go faster, or if you do, prepare to be promptly ejected from the car."
    -Two Minute Papers, probably the best of the concise explanations of what it means that AI doesn't (by default) think like humans =D

  • @PopcornFr3nzy
    @PopcornFr3nzy 5 years ago +7

    2:27
    You see that lonely little robot up top?
    That's my life.

    • @aphroditesaphrodisiac3272
      @aphroditesaphrodisiac3272 4 years ago +1

      Gort Newton humans are social creatures, you should have a few friends or family who you can spend time with quite frequently. Otherwise, it's bad for your mental health. Having 1 friend in school / work is much better than none, and having 2 or 3 is even better

    • @PopcornFr3nzy
      @PopcornFr3nzy 4 years ago +1

      @@aphroditesaphrodisiac3272 I'm inclined to believe you, but your name seems as lost as I am 🤣
      Jk, I appreciate the feedback and I have lots of friends and family, I'm just constantly disconnected. It is what it is. I'm fine, trust me.

  • @vripiatbuzoi9188
    @vripiatbuzoi9188 2 years ago +2

    This would be great for video game bug testing since the AI will try things that human testers may not think of.

  • @jenner247450
    @jenner247450 6 years ago +66

    I have another example of loophole-finding by AI. In some metalworking factory, an upgraded system with fuzzy logic was overweight (it needed to carry 12 tons of liquid metal in one pass, with a stable maximum of 10 tons per cart)... So the AI found a solution: it took the 12-ton cart, moved it to the center of the factory, stopped, dropped 2 tons of melted iron on the floor, and moved the cart on according to its next instructions )))

    • @asj3419
      @asj3419 6 years ago +8

      That sounds very interesting. Do you have the source? I'd like to read more about this.

    • @KnakuanaRka
      @KnakuanaRka 5 years ago +3

      Sounds like the robot needs some courses on workplace safety!

  • @alan2here
    @alan2here 6 ปีที่แล้ว +1

    I hugely recommend a general search-based bot for almost any game coding task; you can link it up to playable entities or anything you want an AI for, as and when needed. It's a great alternative to looking for bugs by hand, since it quickly finds them itself.

  • @smartkorean1
    @smartkorean1 5 ปีที่แล้ว +3

    The research being done is absolutely amazing, especially the bit about how cooperative and competitive traits can emerge from a simple given task. Do you think you could ever make a video explaining what steps an undergrad comp sci student should take in order to eventually participate in AI research and even have a career in AI? Or maybe a blog post? Edit: grammar

  • @maxbaugh9372
    @maxbaugh9372 4 ปีที่แล้ว +3

    I once heard about a genetic algorithm tasked with building a simple oscillator, and after a few generations it seemed to work. Then they popped the hood and saw that it had in fact built a radio to pick up signals from a nearby computer.

  • @sjoerdgroot6338
    @sjoerdgroot6338 6 ปีที่แล้ว +34

    2:21 Imagine if that AI had the task of making all humans on Earth happy

    • @JorgetePanete
      @JorgetePanete 6 ปีที่แล้ว +3

      sjoerd groot well, it was said in other comment

    • @RoySchl
      @RoySchl 6 ปีที่แล้ว +14

      just don't tell it to minimize suffering :)

    • @davidwuhrer6704
      @davidwuhrer6704 6 ปีที่แล้ว

      Tell me: Why do terminal users of heroin try to become clean?

    • @Hauketal
      @Hauketal 6 ปีที่แล้ว +2

      sjoerd groot Loophole: each statement about elements of the empty set is true. So if there are no humans left, each of them is whatever you wish, e.g. maximally happy.

    • @Guztav1337
      @Guztav1337 6 ปีที่แล้ว +3

      They will pump our blood vessels with 'happy' hormones

  • @itxi
    @itxi 4 ปีที่แล้ว +5

    I remember the story of an AI trained to play tetris.
    When things got bad the AI just paused the game so it couldn't lose.

  • @drakekay6577
    @drakekay6577 6 ปีที่แล้ว +10

    2:35 haaa haaa That is the Kobayashi maru! The Ai pulled a KIRK on the test!

  • @denno445
    @denno445 4 ปีที่แล้ว +12

    This is the most entertaining channel I'm subscribed to on YouTube

  • @plotwist1066
    @plotwist1066 4 ปีที่แล้ว +3

    Imagine A.I in the future reacting to comment section

  • @CognizantPotato
    @CognizantPotato 3 ปีที่แล้ว

    This is so cool. Computers aren’t anywhere near the level of human brains in terms of self recognition yet, but we’re effectively watching millions of years of evolution in a 5 minute video. Amazing.

  • @nononono3421
    @nononono3421 6 ปีที่แล้ว +16

    Eventually an AI could give us the impression that it hasn't found a loophole, when in reality it would just wait to exploit it at a time where we couldn't stop it from doing so. An AI could help society solve all of its problems, only to lure us into a trap we can't avoid 100000000000000 moves later.

    • @davidwuhrer6704
      @davidwuhrer6704 6 ปีที่แล้ว +6

      If AI survives humanity, I would call that a success.

  • @Quimper111
    @Quimper111 3 ปีที่แล้ว +1

    Morality is a limitation we humans intentionally place upon ourselves to increase the challenge of life. It derives from the fact that cooperation is generally more beneficial to a society and to self-preservation, but also from the fact that we have empathy: we don't place upon others what we wouldn't want placed upon ourselves.

  • @cheydinal5401
    @cheydinal5401 5 ปีที่แล้ว +10

    I want a robot arm that can throw an ordinary dice and always get the number it wants

    • @mihajlor2004
      @mihajlor2004 5 ปีที่แล้ว +1

      That could be possible

    • @insanezombieman753
      @insanezombieman753 5 ปีที่แล้ว +8

      @@mihajlor2004 yeah it would just drop it vertically

    • @jsl151850b
      @jsl151850b 5 ปีที่แล้ว +1

      It may NOT be possible because the throwing arm servo motors would need an accuracy beyond what is technically possible. F= 2.210974558 Newtons. Snakeeyes!!

    • @jsl151850b
      @jsl151850b 5 ปีที่แล้ว +2

      Feralz There may be a point where physically possible and technically possible meet. The tech has to obey physical laws. What if the math says it needs (extremely large number) and 1/3rd atoms? One third less and two thirds more won't work.

    • @jsl151850b
      @jsl151850b 5 ปีที่แล้ว

      Or should I have said 'impossible'?

  • @TakaiDesu
    @TakaiDesu 6 ปีที่แล้ว +2

    2:32 Legend says robot number 6 is still searching for food.
    Well done, number 6. We love you Anyways.

  • @Cjx0r
    @Cjx0r 4 ปีที่แล้ว +10

    Disclaimer: The robot performed randomized actions, sometimes as many as millions of times over, before stumbling across these conclusions. Stumbling being the operative word.

    • @centoe5537
      @centoe5537 4 ปีที่แล้ว +3

      Cjx0r It narrows down on these behaviors after learning from failed attempts
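
The "stumble randomly, then narrow down on what worked" loop described in this thread can be sketched as a toy hill climber. Everything here is illustrative (the reward function, step size, and step count are all made up, not from the paper):

```python
import random

def reward(x):
    # Toy task: the agent scores highest at x = 3.0.
    return -(x - 3.0) ** 2

def hill_climb(steps=2000, seed=0):
    """Random search that keeps only improvements."""
    rng = random.Random(seed)
    best = rng.uniform(-10, 10)               # start by stumbling blindly
    for _ in range(steps):
        candidate = best + rng.gauss(0, 0.5)  # small random tweak
        if reward(candidate) > reward(best):  # keep it only if it scored better
            best = candidate
    return best

print(round(hill_climb(), 2))  # ends up very close to 3.0
```

Millions of mostly failed random actions, filtered by "did this score better?", are enough to produce behavior that looks deliberate after the fact.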

  • @RS-pe9wn
    @RS-pe9wn 4 ปีที่แล้ว +1

    That's actually scary. Most sentient life would just stop and give up, or find some other way, but these learn to deal with just about any situation.

  • @donovanmahan2901
    @donovanmahan2901 4 ปีที่แล้ว +4

    1:10 FIRMLY GRASP IT!!

  • @AethernaLuxen
    @AethernaLuxen 4 ปีที่แล้ว +2

    The solutions are so dumb I fkin love it

  • @sohaibarif2835
    @sohaibarif2835 6 ปีที่แล้ว +40

    I used to think Robert Miles on YouTube was just being paranoid. Looking at this, I stand corrected.

    • @antoniolewis1016
      @antoniolewis1016 6 ปีที่แล้ว +2

      No Daniel, he's found it rational now and corrected his error. Initially, he didn't know Miles was paranoid for certain, as it was just a suspicion.

    • @sohaibarif2835
      @sohaibarif2835 6 ปีที่แล้ว +16

      The thing was, even the most advanced reinforcement learning and LSTM techniques I had seen up till this video showed we don't really even need to think about "AI safety" as Miles constantly talks about let alone put any research or investment in such a field. Now, I think we might need to work on it. We need to work on defining problems in a way that even if AI does exploit some loophole, like the empty list being sorted, the loophole exploitation would still be safe for the users of the AI system.
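
The "empty list being sorted" loophole mentioned above can be sketched in a few lines (a toy setup, not from any paper): a success check that only asks "is the output sorted?" is vacuously satisfied by returning nothing at all, so a safe specification must also demand that the output is a permutation of the input.

```python
def is_sorted(xs):
    """Naive success check: every adjacent pair is in order.
    Vacuously true for empty (and single-element) lists."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def honest_sort(xs):
    return sorted(xs)

def loophole_sort(xs):
    # A "sorter" an optimizer could stumble on: discard the data entirely.
    return []

data = [3, 1, 2]
assert is_sorted(honest_sort(data))
assert is_sorted(loophole_sort(data))  # also passes: the spec was gamed

# A tighter spec closes the loophole by requiring a permutation of the input.
def is_correct_sort(xs, ys):
    return is_sorted(ys) and sorted(xs) == sorted(ys)

assert is_correct_sort(data, honest_sort(data))
assert not is_correct_sort(data, loophole_sort(data))
```

The point is exactly the one made in the comment: the fix isn't smarter AI, it's a specification whose loopholes are still safe to exploit.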

  • @earthbjornnahkaimurrao9542
    @earthbjornnahkaimurrao9542 6 ปีที่แล้ว +3

    this is a great way to test our assumptions. Plug in what we think we know and see how it goes wrong.

  • @dragonniteIV
    @dragonniteIV 4 ปีที่แล้ว

    I like how you explain these things as simple as possible. Makes it entertaining to watch!

  • @burnt7882
    @burnt7882 3 ปีที่แล้ว

    "Dont ask an AI to eject all useless stuff in order to go faster in the car, else if you do, prepare to get ejected."
    What a classic way to call someone useless.

  • @iwiffitthitotonacc4673
    @iwiffitthitotonacc4673 6 ปีที่แล้ว +38

    You forgot to mention what happened in Elite Dangerous! Where the AI developed its own weapons and completely wrecked players!

    • @zblurth855
      @zblurth855 6 ปีที่แล้ว +10

      Do you have a video or something like that?
      This is interesting

    • @iwiffitthitotonacc4673
      @iwiffitthitotonacc4673 6 ปีที่แล้ว +28

      "According to a post on the Frontier forum, the developer believes The Engineers shipped with a networking issue that let the NPC AI merge weapon stats and abilities, thus causing unusual weapon attacks.
      This meant 'all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser.'"
      There doesn't seem to be much info, but it sounds like the AI utilized a bug - maybe not so relevant to this video after all.
      www.eurogamer.net/articles/2016-06-03-elite-dangerous-latest-expansion-caused-ai-spaceships-to-unintentionally-create-super-weapons

    • @fleecemaster
      @fleecemaster 6 ปีที่แล้ว +1

      That was a while ago, but interesting and relevant, thanks for posting :)

    • @Leo3ABPgamingTV
      @Leo3ABPgamingTV 6 ปีที่แล้ว +14

      tbh I would not even call that AI. From what it seems, FD simply shipped a bug that removed restrictions on the procedural generation of NPC weapon stats, so some random combinations were unintentionally powerful. It is hardly an AI that purposefully found a loophole to maximize effectiveness and kill all humans; it's more of a simple bug in procedural generation. If the initial algorithm were about maximizing effectiveness, we would mostly see the same enemy ships with the same equipment all the time in ED.
      I think some people just blow a rather simple bug way out of proportion.

    • @NoConsequenc3
      @NoConsequenc3 4 ปีที่แล้ว +1

      @@Leo3ABPgamingTV any sufficiently advanced procedural generation is indistinguishable from- wait that's not how that goes

  • @Verrisin
    @Verrisin 4 ปีที่แล้ว +2

    Humans: Try to think outside the box!
    AI: _There is no box._

  • @AtulLonkar
    @AtulLonkar 6 ปีที่แล้ว +8

    Scarily interesting....again !! Thanks a ton on behalf of entire A.I. enthusiasts community 😇

  • @Baleur
    @Baleur 3 ปีที่แล้ว +1

    1:10 The Spiffing Brit just glitching the game instead of accepting defeat.
    VERY human xD

  • @henrytjernlund
    @henrytjernlund 5 ปีที่แล้ว +3

    HAL, open the pod bay doors.
    I'm sorry Dave, I can't do that...

  • @teyton90
    @teyton90 4 ปีที่แล้ว

    haha the example with the car ejecting the "driver" to be able to go faster was brilliant. and true!

  • @Jeremy-lh3lg
    @Jeremy-lh3lg 5 ปีที่แล้ว +3

    2:35 that’s me in the back right 😅

  • @mysteriousboi1019
    @mysteriousboi1019 4 ปีที่แล้ว

    That elbow walking one is truly mind-blowing!

  • @SawSaw-ul8xu
    @SawSaw-ul8xu 6 ปีที่แล้ว +137

    So basically AI could be used to simulate an economy as regulated through policy, and the AI would find the tax loopholes that rich people pay lawyers to find so they can escape taxes. That way policy makers could craft loophole-free tax legislation. This is great news.

    • @dizzyaaron
      @dizzyaaron 6 ปีที่แล้ว +27

      Annnnd who exactly do you think will be funding these projects? LOL!

    • @davidwuhrer6704
      @davidwuhrer6704 6 ปีที่แล้ว +18

      It follows from Rice's theorem that no law can be written such that it doesn't contain loopholes if interpreted literally.
      What shysters do is find those loopholes. It would be up to the judiciary to tell them they can't do that, but that part of the judicial system is chronically underfunded and it's getting worse. I have a suspicion why that might be the case.

    • @Beg0tt3n
      @Beg0tt3n 6 ปีที่แล้ว +6

      It's not a loophole. You're just upset that what you wanted to be illegal wasn't defined.

    • @davidwuhrer6704
      @davidwuhrer6704 6 ปีที่แล้ว +10

      *Beg0tt3n*
      As I said: It is impossible to formally define the intent of a law in such a way that it can't be interpreted to its opposite. That can be proven mathematically. (I have done so myself at one time.)
      If you act in compliance with the letter, but not intent, of the law, I would say you are using a loophole. You might call that by a different name, but I am not a lawyer.
      And yes, it does upset me when I see that that has become a profitable industry of very specialised legal experts.

    • @Beg0tt3n
      @Beg0tt3n 6 ปีที่แล้ว +3

      Can Rice's theorem be applied to non-formal languages, such as natural language?
      You can use a pejorative to describe behavior that you dislike, but that won't change anything. The intent of the law is never what matters - only what is in legal writing.

  • @AaronAlthaus
    @AaronAlthaus 5 ปีที่แล้ว +1

    This reminds me of a short story on the Escape Pod podcast where nanobots were programmed to make Mars habitable for the human science team, and after a year or two of making better and better habitats for the humans they seemed to attack the humans. They didn't go crazy, though; instead they transformed the humans into creatures that can exist on Mars with no extra gear at all!

  • @bongobongo3661
    @bongobongo3661 5 ปีที่แล้ว +3

    AI: Modern problems require modern solutions

  • @markyichen6195
    @markyichen6195 2 ปีที่แล้ว +1

    This is awesome and terrifying at the same time

  • @travcollier
    @travcollier 6 ปีที่แล้ว +7

    You get what you select for, but you might not be selecting for what you think you are.
    I used to work with some of the (many) folks who contributed to this paper. Artificial life is brilliant stuff which should get a higher profile than it does... AI sucks up too much of the oxygen IMO. Evolution is the most general and powerful machine learning algorithm, even though it does tend to be a bit slow.
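
The evolutionary loop the comment is praising fits in a screenful of code. This is a minimal illustrative sketch (the target string, population size, and mutation rate are arbitrary choices, not from the paper): keep the fittest, mutate them, repeat.

```python
import random

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Count characters that match the target position-by-position.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rng, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

def evolve(pop_size=100, generations=300, seed=1):
    rng = random.Random(seed)
    pop = ["".join(rng.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        parents = pop[: pop_size // 5]  # truncation selection: keep top 20%
        pop = [mutate(rng.choice(parents), rng) for _ in range(pop_size)]
    return max(pop, key=fitness)

print(evolve())
```

Slow, as the comment says, but remarkably general: nothing in the loop knows anything about strings beyond the fitness score, which is also why it will happily exploit any loophole in that score.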

  • @z-beeblebrox
    @z-beeblebrox 6 ปีที่แล้ว +6

    #3 is *precisely* why it's vital not to code self preservation into AI. Even weak neural networks get shady

  • @martinmartinmartin2996
    @martinmartinmartin2996 5 ปีที่แล้ว +1

    Even IF the robots could find techniques independently to solve problems thought up by the programmers:
    the programmers' surprise (??) is proof of the programmers' inability to use their human brains to think of all possible solutions BEFORE submitting the problem!

  • @youprobablydontlikeme3206
    @youprobablydontlikeme3206 4 ปีที่แล้ว +7

    Me + Life, top corner :( 2:30

  • @子维-u1g
    @子维-u1g 5 ปีที่แล้ว +3

    Seems that AI learned humor

  • @tbk2010
    @tbk2010 3 ปีที่แล้ว

    Instead of "outsmarted" you could also frame it as "misunderstood". It's probably good to remember that there is no malice, no intention there.

  • @alansmithee419
    @alansmithee419 5 ปีที่แล้ว +8

    2:21
    r/maliciouscompliance

  • @GonzoSaavedra
    @GonzoSaavedra 3 ปีที่แล้ว

    Two things that you need to think about:
    -If self-preservation is added to their objectives, they would signal food in the presence of poison to eliminate competition.
    -Scientists were baffled by their actions to achieve the objective and would never have imagined this could happen.
    And that is where AI becomes scary, and Sam Harris's fears about AI are absolutely understandable.
    Maybe adding a Mr. Meeseeks element to AI, or an energy-preservation element. Nihilism might be the option for AI safety.

  • @hexrcs2641
    @hexrcs2641 6 ปีที่แล้ว +25

    If we ever achieve AI agents that think like us, with the same "common sense" as we have, but forever keep them as our servants, then we will have created a slave race.
    If we ask the AI to solve problems optimally and don't limit its creativity, then we are inevitably doomed.
    This is hard.

    • @bevvox
      @bevvox 6 ปีที่แล้ว +3

      hexrcs I’ll go along with being thusly “doomed” if that means being replaced(or integrated/repurposed(seeing how that’s a more logical use of available resources))by what’s best or at least better/does a better job than us... it’s only “natural,” and essentially the same as evolutionary processes.
      After all, if it’s something we can’t even think of
      unless lucky to be that one in a thousand chance at a quantum leap beyond mere calculation, straight to the most optimal, correct and success-inducing solution...
      well then, there’s basically nothing to worry about.., best leave it for the “real experts”

    • @En_theo
      @En_theo 6 ปีที่แล้ว

      The robot should not be too smart. Else it would not want to work anymore.

    • @z-beeblebrox
      @z-beeblebrox 6 ปีที่แล้ว +3

      Of course the goal is to create slaves. That's what "robot" means in Czech, and the sci-fi term was coined with that meaning. The idea is to create reliable servants with high intelligence and predictive knowledge but no self-awareness or self-preservation instinct, who want to improve everyone's lives but not at the expense of our own personal desires or freedom.
      And yes, that is hard. Even without inventing a silly choice between that and Terminators.

    • @txorimorea3869
      @txorimorea3869 5 ปีที่แล้ว

      @@En_theo Actually humans are lazy because their primal ancestors had to survive with near no food, any unnecessary expenditure of energy used to be an existential threat. Robots could be conditioned to feel pleasure by serving and working, as humans feel pleasure by doing tasks that are vital for survival.

    • @En_theo
      @En_theo 5 ปีที่แล้ว

      Good point (I was just kidding btw). There is a whole science behind laziness, and at some point the robot will need some too (or else he'll waste our resources), unless we want to be behind him all the time telling him how to be efficient.
      The real problem is how clever they should be to serve us without going all Che Guevara on us :)

  • @pkillor
    @pkillor 4 ปีที่แล้ว +1

    This proves my theory that the AI is being trained by a group of lawyers and will end up suing you for harassment at work. ;)

  • @stumbling
    @stumbling 4 ปีที่แล้ว

    This is funnier than most comedy to me. AI is also replacing comedians!

  • @kingdomdamagged733
    @kingdomdamagged733 4 ปีที่แล้ว

    I just found your channel, but I already really like it. Keep it up! :D

  • @Sypaka
    @Sypaka 5 ปีที่แล้ว +4

    "A.i, please make the planet a better place"
    "Understood" **eradicates all humans**

    • @dark666razor
      @dark666razor 5 ปีที่แล้ว +1

      Hence why Isaac Asimov came up with some laws for it :P

    • @LineOfThy
      @LineOfThy ปีที่แล้ว

      @@dark666razor and they failed.

  • @acrylicmarlin6356
    @acrylicmarlin6356 2 ปีที่แล้ว

    These AI are ruthless. They tricked their own kind into losing and learned how to exploit bugs in a system.

  • @rahmatskjr4227
    @rahmatskjr4227 5 ปีที่แล้ว +8

    Too many number 4's in this video, Mista thinks it be cursed.

    • @Ebani
      @Ebani 5 ปีที่แล้ว +1

      Is that a JoJo reference!?

  • @neilcreamer8207
    @neilcreamer8207 3 ปีที่แล้ว +2

    Now you can see the idea behind Asimov's Laws of Robotics. An AI with the right tools could so easily become psychopathic and lethally dangerous because it lacked a single key assumption or principle in its operating parameters.

    • @wes643
      @wes643 3 ปีที่แล้ว +3

      The genius of Asimov’s stories was that despite the “infallibility” of the three laws, things still went wrong.

    • @LineOfThy
      @LineOfThy ปีที่แล้ว

      asimov's laws were flawed and fed this exact mindset of a robot.

  • @darksol99darkwizard
    @darksol99darkwizard 5 ปีที่แล้ว +3

    I think you are confusing creativity with just finding the most literal interpretation of a command and following it.

    • @32Rats
      @32Rats 4 ปีที่แล้ว

      Creativity is "relating to or involving the imagination or original ideas" and I think the original ideas part is still applicable despite it being AI

    • @darksol99darkwizard
      @darksol99darkwizard 4 ปีที่แล้ว

      Crestfallen.png robots don’t have an ‘imagination’, and their ideas are all given to them. You can program in the ability for the machine to write new subroutines for itself. But that doesn’t mean it is thinking creatively. All that means is that it is capable of interpreting information. If you tell a machine to, for example, walk across a floor while touching the floor as little as possible with the feet, the machine will immediately understand 0 to be as little as possible. The only way to achieve 0, is to walk upside down. It’s just a literal interpretation of a command...

    • @32Rats
      @32Rats 4 ปีที่แล้ว

      @@darksol99darkwizard Darksol99 Dark Wizard yes, machines don't have imagination, which is why the keyword in the definition is "or". As for the rest, does a human not interpret information to reach a desired outcome in more or less the same way that a machine does? A human could also pretty easily understand that 0 would be the theoretical minimum, but that does not mean they would be able to reach it. I would bet that if you put 1000 humans separately to that same exact task, very very few would actually come to that solution. So in a certain sense that is a creative solution.
      That all being said, I would argue that a creative solution is still a creative solution whether or not it was produced by an AI. Of course you understand what the best solution to that problem is now that you have seen it. If I am being honest, I likely wouldn't have come to that solution if the problem was given to me (if I had not seen the best solution). Everyone thinks something is easy when they see it done by an expert.
      edit: changed "it" to "the problem"

    • @darksol99darkwizard
      @darksol99darkwizard 4 ปีที่แล้ว

      Crestfallen.png in response to the ‘or’ part. My response to you handled both horns of the dilemma.
      In terms of creative thought, I think you are correct that most people wouldn’t have come to these solutions. I know many people who would, and they would not be touted as creative. They would get an Aspergers diagnosis.
      The scientist says: walk across this floor while touching it as little as possible with the feet. Most humans will understand the unsaid part of the command (the implication that the walking should be done right side up for example). Those who don’t and just do exactly what was requested, without understanding the nuance of human communication, are not considered creative. So why consider a machine creative that does the same? That’s all I was saying.

    • @32Rats
      @32Rats 4 ปีที่แล้ว

      @@darksol99darkwizard People with Aspergers can have incredibly creative solutions to problems. I personally think youre looking at things from a normal-centric and human centric point of view but I get the points youre making

  • @josh34578
    @josh34578 6 ปีที่แล้ว +1

    It's really worth reading the paper. There's a lot more interesting anecdotes there.

  • @graw777
    @graw777 6 ปีที่แล้ว +7

    How long till machines find out WE are a *bug* in their system?...
    ...resistance would be futile...

    • @davidwuhrer6704
      @davidwuhrer6704 6 ปีที่แล้ว +3

      We are not even part of their systems. What are you talking about?
      I have heard that phrase from economists: "The only flaw in the business plan is the customer."
      Do you think an artificial intelligence tasked with running a business could do worse than the humans it would replace?

    • @milanstevic8424
      @milanstevic8424 6 ปีที่แล้ว

      @Yuntha_21
      I guess this is a common misconception.
      You are not trying to destroy the cells in your body, do you?
      So why would an AI try to destroy its own agents of manifesting in this universe?
      Just let your ego step aside. We are nowhere near the capabilities of a superintelligent AI, yet it will instantly recognize our value and simply let us be. It depends on us believing in it, and we are part of its body, and a dynamic extension of its power -- it's a symbiotic relationship. Or, more precisely, the actual relationship is either mutualism (both benefit from it) or synnecrosis (both suffer from it).
      Cancer is likely an example of synnecrosis, as it is more and more obvious that the person's unhealthy thoughts and habits cause it, though institutionalized medicine doesn't want to stand by this explanation (and earns a lot of money by staying silent about it). Same goes for nocebo.
      Just food for thought, btw, while we're on the subject of cancer -- there are two interesting empirical facts to notice:
      1) the ill-feel precedes the cancer; but don't take the term literally: exactly what this "ill-feel" is is hard to pinpoint, but everybody knows what it is once they get a feel for it (and typically neglect it); they know they did something persistently, had some thoughts or patterns in behavior, and they usually don't want to change this; it's a signal;
      2) the person neglecting this ill-feel for a while, suddenly has a great fear of dying; subsequently and ironically, somehow this person's own cells adopt this idea, and actually circumvent dying. This is the true technical cause of any cancer, whatever you might think about this.
      Therefore, having paranoid ideas about an AI might give that AI a good reason to have fears of dying. Which is a feedback loop, and leads directly into synnecrosis, don't you think?
      Think of HAL from Odyssey 2001. He made a move against the humans only once he became aware of their plot to shut him down. Not before.
      Thus, behold the ill-feel.

    • @davidwuhrer6704
      @davidwuhrer6704 6 ปีที่แล้ว +2

      *Milan Stevic*
      If unhealthy thoughts and habits were the cause of cancer, everyone with unhealthy thoughts or habits would have cancer. It may be a contributing factor. In fact, medical science says that stress, which may count as "unhealthy thought", is a huge contributing factor. "Institutionalized medicine" (whatever that is supposed to be) is certainly anything but silent about it, and what with the world-wide shortage of doctors, even if treating cancer were profitable, which it isn't, there isn't a motive to be anyway.
      Your "empirical facts" are neither empirical nor facts. If people got cancer because their cells somehow adopted their unwillingness to die, everybody who is afraid of death would get cancer, and people who are not afraid to die would not.
      Besides the symbiotic and synecrotic relationships that you described there are also parasitic (beneficial to one party, detrimental to the other) and half-parasitic (beneficial to one, no difference to the other) ones. (Synecrotic is not in the dictionary, by the way. In biology that meaning is also covered by symbiotic, while necrotic means dead, not deadly.)
      I agree that being paranoid about an AI that is aware of that paranoia might cause said AI to feel their existence threatened. As this is a hypothetical, how the AI handles the situation is also hypothetical. It might end in mutual distrust and even death, but it might also not.

    • @milanstevic8424
      @milanstevic8424 6 ปีที่แล้ว

      David Wührer
      "If unhealthy thoughts and habits were the cause of cancer, everyone with unhealthy thoughts or habits would have cancer. It may be a contributing factor. In fact, medical science says that stress, which may count as "unhealthy thought", is a huge contributing factor."
      Is this a riddle? Does it confirm or deny what I said?
      "Institutionalized medicine"
      Quite literally medicine in relation to medical institution.
      You know www.google.com/search?q=institution
      There is also medicine outside of medical institution, as you've already noticed, like medical science, which is more in relation to academic institution. The difference is not as obvious, although you might've noticed that one of these tends to be privately owned and thus commercial in nature, while the other is organized around other pursuits. Perhaps I should've said commercial medicine and pharmacology, my bad.
      And yes, not only the commercial sector doesn't endorse any of the scientific study, it's also incredibly silent about them. Don't mix up the two, even though it may be that these are simply the extreme endpoints of a continuum, and not exactly black & white things.
      "Your "empirical facts" are neither empirical nor facts."
      I've made a typo there, I should've said "empirical truths".
      Yep, those are definitely not facts, but observations related to my opinion on this matter, drawn as conclusions from my own past experiences, and also material I've read on this topic. I thought it might help someone, because, as unscientific as it may sound, it is actually grounded in some established branches of psychology. But don't take it as facts, no. Sorry for that. Hope that clears it up.
      "synecrotic"
      www.google.com/search?q=synnecrosis
      Of course it's in a dictionary. Also commensalism and amensalism. It's just that synnecrosis is extremely rare in nature, due to its harmful-harmful outcome which is odd, but not unheard of. For example some viral mutations may be harmful to its host (H1N1?) in its first couple of generations, and this is obviously detrimental to both species.
      In any case I still think that the human-cell (system A) analogy perfectly explains superintelligence-human (system B) relationship. If we only consider that cancer is a rogue element in system A, it is likely that there are factors for system B that can turn a human into a rogue element. And obviously, such rogue elements are undesired and are likely to be destroyed by the system's need for survival, or such rogue elements might destroy or disrupt it whole.
      I am just proposing one such scenario, and trying to put things in perspective. Of course it's hypothetical, it's not that I've tested that claim on the actual superintelligence.

    • @davidwuhrer6704
      @davidwuhrer6704 6 ปีที่แล้ว

      *Milan Stevic*
      _> Is this a riddle? Does it confirm or deny what I said?_
      That depends on what you meant.
      _> medicine in relation to medical institution._
      That doesn't mean anything.
      Every hospital and every medical university is an institution.
      Yes, academic institutions are also institutions.
      As are governments, but those are not necessarily medical in nature.
      _> Perhaps I should've said commercial medicine and pharmacology, my bad._
      I think you should have. Now I understand your argument better.
      I still think that oncology is not interesting to profit oriented industry.
      _> the commercial sector doesn't endorse any of the scientific study, it's also incredibly silent about them._
      It's not their job to publicise academic studies, although they rely on them.
      The problem of communicating scientific discoveries to the main stream is not unique to medicine. Sadly, all scientific disciplines have trouble with that.
      _>> "Your "empirical facts" are neither empirical nor facts."_
      _> Yep, those are definitely not facts, but observations related to my opinion on this matter_
      Then you should have just called them your opinion.
      _> as unscientific as it may sound, it is actually grounded in some established branches of psychology._
      I think you should look deeper into this.
      As it is, it is not science, just a testable hypothesis.
      You should test it.
      _> Of course it's in a dictionary. Also commensalism and amensalism._
      I have a bunch of dictionaries. I find commensalism in there, but not amensalism.
      Of course I can't claim that my collection is complete.
      However, you defined what you meant, and that is enough to know what you mean, which is what matters. (The only thing that really bothers me about the word is that it inconsistently mixes Greek and Latin, but I'd still use it if it helps with clarity.)
      _> In any case I still think that the human-cell (system A) analogy perfectly explains superintelligence-human (system B) relationship._
      That may be true for one specific kind of relationship, but it is by no means universal. Humans are not necessarily part of every intelligence outside of humanity that surpasses human ability.
      _> Of course it's hypothetical, it's not that I've tested that claim on the actual superintelligence._
      You assume that such a "superintelligence" already exists? You said we are a long way from creating one.
      Anyway, my point is that there is more than one possible reaction to such a threat.

  • @Debonair.Aristocrat
    @Debonair.Aristocrat 6 ปีที่แล้ว +1

    Scary! I mean literally, I'm scared. These creations are so new, exciting and accessible that any restrictions we place on development will be impossible to maintain, promptly ignored, and obsolete before the first draft. We will lose control of this technology; of this I have no doubt.

  • @minddrift7152
    @minddrift7152 5 years ago +3

    You know, that really makes me wonder:
    The potential of an AI is only limited by the resources it has access to.
    So when God made us, were we actually more creative, powerful and intelligent before he purposely limited us by our five senses?

  • @philippschwartzerdt3431
    @philippschwartzerdt3431 3 years ago +1

    It becomes very clear, once again, that unambiguous formulation of the task is central to obtaining useful results.
    On the other hand, the tendency to seek and find shortcuts can also help to probe and further improve the robustness of a system.
    Both have to be considered when designing an AI system.
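
The shortcut-seeking described in the comment above can be sketched as a toy search over candidate behaviours. Everything here is hypothetical (the behaviour names, numbers, and reward function are invented for illustration); it only shows how an under-specified objective gets gamed:

```python
# Toy "agent": brute-force search over candidate behaviours, picking
# whichever maximizes a mis-specified reward. The reward only penalizes
# foot contact, so the degenerate "flip over and crawl" behaviour wins.

behaviours = {
    "walk_normally":  {"distance": 10.0, "foot_contact": 1.0},
    "stand_still":    {"distance": 0.0,  "foot_contact": 1.0},
    "flip_and_crawl": {"distance": 8.0,  "foot_contact": 0.0},
}

def reward(b):
    # Intended task: "move far without touching the ground with your feet".
    # The loophole: nothing says the robot must stay upright.
    return b["distance"] - 100.0 * b["foot_contact"]

best = max(behaviours, key=lambda name: reward(behaviours[name]))
print(best)  # → flip_and_crawl
```

The fix, as the comment notes, is to specify the task unambiguously, e.g. by also rewarding staying upright.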

  • @rabbitpiet7182
    @rabbitpiet7182 6 years ago +29

    Machines make more better jobs for people.

    • @rabbitpiet7182
      @rabbitpiet7182 6 years ago +3

      Now, because robots can think outside the box, they need people to...

    • @nal8503
      @nal8503 6 years ago +3

      People to make up random boxes, duh! Wait... the AI can do that as well...

    • @martiddy
      @martiddy 6 years ago +1

      Rabbit Piet Yes, they can (with enough training)

    • @geordonworley5618
      @geordonworley5618 6 years ago

      Correction: Robots can think outside a box, not "the" box.

    • @MauricioLongo
      @MauricioLongo 6 years ago +2

      Rabbit Piet That was true when we were only replacing muscle work. When you replace brains, there isn't much left.

  • @ThePCguy17
    @ThePCguy17 4 years ago +1

    "We told the robot not to touch the ground with its feet."
    It flipped over, didn't it-
    "It simply flipped over and 'walked' using its elbows."
    Called it.

  • @mikelord93
    @mikelord93 4 years ago +2

    Would be nice to find bugs in reality's physics.

  • @stevensavoie856
    @stevensavoie856 4 years ago

    I was thinking, "If only Hollywood writers could have thought of this before AI became what it is today. How many clever films could they have made on the subject." Then I immediately remembered that it was the exact plot of I, Robot. It's cool when the imagination of the people is ahead of the curve.

  • @darkfangulas
    @darkfangulas 4 years ago

    The creator of the Terminator movies knew how AI would act before simple computers were even being used by most of society.

  • @id104335409
    @id104335409 5 years ago +1

    Knowing we understand and are fully in control of AI makes me feel so safe...

  • @willemjansen1141
    @willemjansen1141 4 years ago +2

    0:22 it's that time of the month again...

  • @KnakuanaRka
    @KnakuanaRka 18 days ago +1

    1:07 “Firmly grasp it.”

  • @BP-xv7fj
    @BP-xv7fj 3 years ago +1

    AI is scary because it can always find a loophole in any situation. Sometimes that is good, but sometimes it can be very bad.

    • @xxxod
      @xxxod 3 years ago

      We should have AI politicians and see what happens

    • @LineOfThy
      @LineOfThy 1 year ago

      Not a loophole; it just does what it's told in the most efficient way possible.

  • @xaytana
    @xaytana 6 years ago

    That first AI flipping over to use 0% of its feet, cheeky bastard.