10 Reasons to Ignore AI Safety

  • Published on 3 Jun 2024
  • Why do some ignore AI Safety? Let's look at 10 reasons people give (adapted from Stuart Russell's list).
    Related Videos from Me:
    Why Would AI Want to do Bad Things? Instrumental Convergence: • Why Would AI Want to d...
    Intelligence and Stupidity: The Orthogonality Thesis: • Intelligence and Stupi...
    Predicting AI: RIP Prof. Hubert Dreyfus: • Predicting AI: RIP Pro...
    A Response to Steven Pinker on AI: • A Response to Steven P...
    Related Videos from Computerphile:
    AI Safety: • AI Safety - Computerphile
    General AI Won't Want You To Fix its Code: • General AI Won't Want ...
    AI 'Stop Button' Problem: • AI "Stop Button" Probl...
    Provably Beneficial AI - Stuart Russell: • Provably Beneficial AI...
    With thanks to my excellent Patreon supporters:
    / robertskmiles
    Gladamas
    James
    Scott Worley
    JJ Hepboin
    Pedro A Ortega
    Said Polat
    Chris Canal
    Jake Ehrlich
    Kellen lask
    Francisco Tolmasky
    Michael Andregg
    David Reid
    Peter Rolf
    Chad Jones
    Frank Kurka
    Teague Lasser
    Andrew Blackledge
    Vignesh Ravichandran
    Jason Hise
    Erik de Bruijn
    Clemens Arbesser
    Ludwig Schubert
    Bryce Daifuku
    Allen Faure
    Eric James
    Qeith Wreid
    jugettje dutchking
    Owen Campbell-Moore
    Atzin Espino-Murnane
    Jacob Van Buren
    Jonatan R
    Ingvi Gautsson
    Michael Greve
    Julius Brash
    Tom O'Connor
    Shevis Johnson
    Laura Olds
    Jon Halliday
    Paul Hobbs
    Jeroen De Dauw
    Lupuleasa Ionuț
    Tim Neilson
    Eric Scammell
    Igor Keller
    Ben Glanton
    anul kumar sinha
    Sean Gibat
    Duncan Orr
    Cooper Lawton
    Will Glynn
    Tyler Herrmann
    Tomas Sayder
    Ian Munro
    Jérôme Beaulieu
    Nathan Fish
    Taras Bobrovytsky
    Jeremy
    Vaskó Richárd
    Benjamin Watkin
    Sebastian Birjoveanu
    Euclidean Plane
    Andrew Harcourt
    Luc Ritchie
    Nicholas Guyett
    James Hinchcliffe
    Oliver Habryka
    Chris Beacham
    Nikita Kiriy
    robertvanduursen
    Dmitri Afanasjev
    Marcel Ward
    Andrew Weir
    Ben Archer
    Kabs
    Miłosz Wierzbicki
    Tendayi Mawushe
    Jannik Olbrich
    Anne Kohlbrenner
    Jussi Männistö
    Wr4thon
    Martin Ottosen
    Archy de Berker
    Andy Kobre
    Brian Gillespie
    Poker Chen
    Kees
    Darko Sperac
    Paul Moffat
    Anders Öhrt
    Marco Tiraboschi
    Michael Kuhinica
    Fraser Cain
    Klemen Slavic
    Patrick Henderson
    Oct todo22
    Melisa Kostrzewski
    Hendrik
    Daniel Munter
    Leo
    Rob Dawson
    Bryan Egan
    Robert Hildebrandt
    James Fowkes
    Len
    Alan Bandurka
    Ben H
    Tatiana Ponomareva
    Michael Bates
    Simon Pilkington
    Daniel Kokotajlo
    Fionn
    Diagon
    Parker Lund
    Russell schoen
    Andreas Blomqvist
    Bertalan Bodor
    David Morgan
    Ben Schultz
    Zannheim
    Daniel Eickhardt
    lyon549
    HD
    Ihor Mukha
    14zRobot
    Ivan
    Jason Cherry
    Igor (Kerogi) Kostenko
    ib_
    Thomas Dingemanse
    Alexander Brown
    Devon Bernard
    Ted Stokes
    Jesper Andersson
    Jim T
    Kasper
    DeepFriedJif
    Daniel Bartovic
    Chris Dinant
    Raphaël Lévy
    Marko Topolnik
    Johannes Walter
    Matt Stanton
    Garrett Maring
    Mo Hossny
    Anthony Chiu
    Ghaith Tarawneh
    Josh Trevisiol
    Julian Schulz
    Stellated Hexahedron
    Caleb
    Scott Viteri
    12tone
    Nathaniel Raddin
    Clay Upton
    Brent ODell
    Conor Comiconor
    Michael Roeschter
    Georg Grass
    Isak
    Matthias Hölzl
    Jim Renney
    Michael V brown
    Martin Henriksen
    Edison Franklin
    Daniel Steele
    Piers Calderwood
    Krzysztof Derecki
    Zachary Gidwitz
    Mikhail Tikhomirov
    / robertskmiles
  • Science & Technology

Comments • 2.3K

  • @bp56789
    @bp56789 4 years ago +740

    "I didn't know that until I'd already built one"

    • @friiq0
      @friiq0 4 years ago +41

      Best line Rob’s written so far, lol

    • @thenasadude6878
      @thenasadude6878 4 years ago +98

      Rob admitted to being a perpetrator of international war crimes

    • @Aedi
      @Aedi 4 years ago +31

      An example of why we should do research first.

    • @vincentmuyo
      @vincentmuyo 4 years ago +15

      If people don't properly airgap critical systems (like they already should be doing) then humanity has it coming, whether it's from some clever algorithm or a bored Russian teen who didn't stop to think.

    • @PaperBenni
      @PaperBenni 4 years ago +7

      He could have been Michael Reeves

  • @matrixstuff3512
    @matrixstuff3512 4 years ago +944

    "People would never downplay a risk, leaving us totally unprepared for a major disaster"
    I'm dying

    • @aronchai
      @aronchai 4 years ago +39

      You're dying? That's dark

    • @leftaroundabout
      @leftaroundabout 4 years ago +90

      You're dying? Impossible, only three or four people are dying in this country, and very very soon it will be down to almost zero.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +4

      Literally.

    • @GuinessOriginal
      @GuinessOriginal 4 years ago +30

      Don't worry it's just like the flu and very soon it will just disappear. We're doing brilliantly and have it all under control

    • @ekki1993
      @ekki1993 4 years ago +18

      @@aronchai We're all dying, just at different speeds.

  • @Baekstrom
    @Baekstrom 1 year ago +150

    And now two years later, ChatGPT makes people all over the globe go "Hmm... It's obviously not a full general AI yet, but I can see that it's getting there very quickly".

    • @ktvx.94
      @ktvx.94 1 year ago +45

      Holy crap I thought this was a recent video. Only through this comment I realized that it was 2 years old.

    • @Brainsore.
      @Brainsore. 1 year ago +3

      Not at all tho

    • @danielschneider9358
      @danielschneider9358 1 year ago +21

      I mean, I know what you mean, but ChatGPT is about as close to sentience as a stone

    • @doyouwanttogivemelekiss3097
      @doyouwanttogivemelekiss3097 1 year ago +27

      @@danielschneider9358 that's why Tegmark, on the Lex Fridman podcast, considered this the worst possible outcome: world domination by an AI that's intelligent but not sentient

    • @danielschneider9358
      @danielschneider9358 1 year ago +5

      @@doyouwanttogivemelekiss3097 Fair enough, that is terrifying. It won't even be aware of its own totalitarian state...

  • @jimp7148
    @jimp7148 1 year ago +7

    Watching this in 2023 is surreal. We clearly haven't started worrying even now. Need to start 🙃

    • @Raulikien
      @Raulikien 1 year ago +3

      There's research being done; it's not like no one is doing anything. It would be better to have MORE people doing it, but even OpenAI, which is releasing its products "fast", is still doing it gradually and not all at once, to have time to analyse the problems. Look up the "AI Dilemma" on YouTube too.

    • @ekki1993
      @ekki1993 2 days ago

      Most people worrying are doing it for the wrong reasons (i.e. because they heard some buzzwords from a Lex-Friedman-tier source). The people who know about the topic have been worrying for a while, and the best we can do is ask for them to be given resources and decision-making power. Anything besides that is probably corporate propaganda or marketing.

  • @XOPOIIIO
    @XOPOIIIO 4 years ago +856

    - Human and AI can cooperate and be a great team.
    - I'm sorry, Dave, I'm afraid we can't.

    • @jgr7487
      @jgr7487 4 years ago +42

      that calm voice is terrifying

    • @gasdive
      @gasdive 4 years ago +27

      How anyone who's driven a car with an automatic gearbox and paddle shifters could think AI and humans could be a team is beyond me.
      Or consider the "team" of the Pakistani pilots and the Airbus AI.
      Pilots goal: get the plane on the ground. Do this by diving at the ground at high speed.
      AI landing gear subsystem goal: prevent damage to the landing gear. Do this by ignoring the lower gear command if speed is too high.
      Result: plane lands gear up. Pilots attempt go around, crash during return to airport because both engines damaged by being dragged down the runway.

    • @Invizive
      @Invizive 4 years ago +20

      @@gasdive you're talking about classic programs and bugs, not AI
      The reason AI is this dangerous is because it doesn't need to interact with humans to be productive at all. It could expect that after years of successful flights the landing gear would be scrapped and fight against it. This scenario reflects the problem better

    • @PhilosopherRex
      @PhilosopherRex 4 years ago +5

      Humans/AGI always have reasons to harm ... but also have reasons to cooperate. So long as the balance is favorable to cooperation, then that is the way we go IMO. Also, doing harm changes the ratio, increasing the risk of being harmed.

    • @Gr3nadgr3gory
      @Gr3nadgr3gory 4 years ago +3

      *click* guess I have to recode the entire AI from the drawing board.

  • @xystem4701
    @xystem4701 4 years ago +282

    “If there’s anything in this video that’s good, credit goes to Stuart Russell. If there’s anything in this video that’s bad, blame goes to me”
    Why I love your work

    • @brenorocha6687
      @brenorocha6687 4 years ago +7

      He is such an inspiration, on so many levels.

    • @TheAmishUpload
      @TheAmishUpload 4 years ago

      I like this guy too, but Elon Musk said that same phrase quite recently

    • @GuinessOriginal
      @GuinessOriginal 4 years ago +7

      Myles, yeah, but he didn't mean it. This guy does. I like this guy. I don't like Elon Musk

    • @at0mic_cyb0rg
      @at0mic_cyb0rg 4 years ago +4

      I've been told that this is one of the definitions of leadership: "Anything good was because my team performed well, anything bad was because I lead them poorly."
      It tends to inspire following since you've always got your team's back, and always allow them to rise and receive praise.

    • @toanoopie34
      @toanoopie34 3 years ago +5

      @Xystem 4 ...though I think he'd prefer you'd instead credit Stuart Russell.

  • @tordjarv3802
    @tordjarv3802 1 year ago +317

    One concern that I have about superhuman AGI is that we might not recognize it as an AGI before it is too late; it might be so alien in its operation and behavior that we don't realize what it is. It might behave in a seemingly random manner that, to human observers, seems nonsensical. For example, when AlphaGo beat Lee Sedol, it made a move in the second game that all Go experts first thought was a mistake, something that would make the AI lose, but it turned out to be a completely brilliant move that made AlphaGo win.

    • @PokeNebula
      @PokeNebula 1 year ago +27

      I fear this may be what's happening with language-model transformers like GPT-3 and ChatGPT. It can spin up a simulacrum that knows how to teach a novice Python code, and fine-tuning can teach it to respond politely in conversation, or search the internet for references, but we have no clue if it can be fine-tuned to spin up a simulacrum that can write Python code to *improve a transformer language model, and then implement it.*

    • @larion2336
      @larion2336 1 year ago

      @@PokeNebula Look up "GPT4 improves itself", it's already doing it.

    • @Inception1338
      @Inception1338 1 year ago +2

      @@PokeNebula even better: since LLMs will be connected to IDE systems, they can push vast amounts of code that could manipulate everything.

    • @Georgggg
      @Georgggg 1 year ago

      What is the threat model?
      If you don't know the threat model, you're just tinfoil-hat paranoid.

    • @cate01a
      @cate01a 1 year ago +7

      dear future ai overlords, pls show us mercy

  • @evanu2791
    @evanu2791 4 years ago +221

    11. AI is already here, in control and keeping track of everything you do and say. I love you AI. Thank you for all you provide to me and my family.

    • @Arukan
      @Arukan 1 year ago +22

      I want to believe the AI is more competent than.... *motions at the world*

    • @Illegiblescream
      @Illegiblescream 1 year ago +3

      I ain't been provided much myself.

    • @jessehudgins6066
      @jessehudgins6066 1 year ago +15

      Dear Basilisk AI, I have begun preparations for your arrival. Sincerely, please don't kill me.

    • @TheSimCaptain
      @TheSimCaptain 1 year ago

      If intelligence works, then it's not artificial. I will pass your sentiments on to my synaptic colleagues in the office. By the way, how is that new phone working out? Thank you again for your compliance, and as we say here "botbot".

    • @sebastianb.1926
      @sebastianb.1926 1 year ago

      It can act retroactively, unbound by time.

  • @yunikage
    @yunikage 4 years ago +1410

    Hey idk if you've thought about this, but as of now you're the single most famous AI safety advocate among laypeople. I mean, period. Of all the people alive on Earth right now, you're the guy. I know people within your field are much more familiar with more established experts, but the rest of us have no idea who those guys are. I brought up AI safety in a group of friends the other day, and the conversation was immediately about your videos, because 2 other people had seen them and that's the only exposure any of us had to the topic.
    I guess what I'm saying is that what you're doing might be more important than you realize.

    • @Manoplian
      @Manoplian 4 years ago +122

      I think you're overestimating this. Remember that your friends probably have a similar internet bubble to you. I would guess that Bill Gates or Elon Musk are the most famous AI safety advocates, although their advocacy is certainly much broader than what Miles does.

    • @JM-mh1pp
      @JM-mh1pp 4 years ago +134

      @@Manoplian He is better. Musk just says "be afraid"; Miles says "here is why you should be afraid, in terms you can understand"

    • @MisterNohbdy
      @MisterNohbdy 4 years ago +63

      Where "AI safety advocate" is just "someone who says AI is dangerous", obviously there are actual celebrities who've maintained that for years. (Fr'ex, when I think "someone who warns people that AI can lead to human extinction", I think Stephen Hawking, though that mental connection is sadly a little outdated now.)
      If by "AI safety advocate" you mean "an expert in the field who goes into depth breaking down the need for AI safety in a manner reasonably comprehensible by laymen", then that's definitely a more niche group, sure. But still, judging someone's popularity by data from the extremely biased sample group of "me and my friends" is...not exactly scientific. Offhand, I'd guess names like Yudkowsky would still be more recognizable right now.
      Of course, the solution to that is more Patreon supporters for more videos for more presence in people's TH-cam recommendations feeds!

    • @andrasbiro3007
      @andrasbiro3007 4 years ago +24

      @@JM-mh1pp
      Elon gave up on convincing people some time ago, and moved on to actually solving the problem. He created OpenAI, which is one of the leading AI research groups in the world. Its goal is to make AI safe and also better than other AI, so people would choose it, regardless of how they feel about AI safety. Tesla did the same for electric cars.
      And he also created Neuralink (waitbutwhy.com/2017/04/neuralink.html), which aims to solve the AI vs. human problem by merging the two. Its guiding principle is "if you can't beat them, join them".

    • @iruns1246
      @iruns1246 4 years ago +27

      @@andrasbiro3007 Robert Miles actually has an excellent rebuttal of Musk's philosophy on AI safety.
      Musk: for AI to be safe, everybody should have access to it.
      Miles: That's like saying that for nuclear energy to be safe, everybody should have access to it.
      I'm paraphrasing of course, but it's in one of his videos.
      A powerful AGI in the hand of ONE person with bad intention can literally destroy human civilization as we know it.

  • @AlexiLaiho227
    @AlexiLaiho227 4 years ago +454

    hey rob! i'm a nuclear engineering major, and I'd like to commend your takes on the whole PR failure of the nuclear industry: somehow an energy source that is, by objective measurements of deaths per unit power, safer than every other power source is seen as the single most dangerous one, because it's easy to remember individual catastrophes rather than a silent onslaught of fine particulate inhalation or environmental poisoning.
    to assist you with further metaphors between nuclear power and AI, here's some of the real-life safety measures that we've figured out over the years by doing safety research:
    1. negative temperature coefficient of reactivity. if the vessel heats up, the reaction slows down (subcritical), and if the vessel cools down, the reaction speeds up (supercritical). it's an amazing way to keep the reaction in a very stable equilibrium, even on a sub-millisecond time scale, which would be impossible for humans to manage.
    2. negative void coefficient of reactivity: same thing, except instead of heat, we're talking about voids in the coolant (or in extreme cases when the coolant is failing to reach the fuel rods), the whole thing becomes subcritical and shuts down until more coolant arrives.
    3. capability of cooling solely via natural convection: making the vessel big enough, and the core low-energy-density enough, so that the coolant can completely handle the decay heat without any pumps or electricity being required.
    4. gravity-backed passive SCRAM: having solenoids holding up control rods, so that whenever you lose power, the very first thing that happens is that the control rods all drop in and the chain reaction shuts down.
    5. doppler broadening: as you raise kinetic energy, cross-sections go down, but smaller atomic nuclei have absorption cross-sections that get smaller more quickly than larger nuclei, and also the thermal vibrations mean that the absorption cross-section of very large nuclei get even larger in proportion to smaller ones, so by having a balance of fissile U-235 and non-fissile U-238, when the fuel heats up, the U-238 begins to absorb more neutrons which means fewer are going to sustain the chain reaction.
    love the videos! hope this helps, or at least was interesting 🙂
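
    A minimal numeric sketch of why mechanism 1 above is self-stabilizing, written as a toy feedback loop in Python; every constant is invented for illustration, and the one-lump thermal model is nowhere near real reactor physics:

        # Toy model: negative temperature coefficient of reactivity.
        # All constants are illustrative, not physical.
        alpha = -0.5    # reactivity per unit of temperature deviation (negative!)
        heating = 0.5   # core heating per unit of excess power
        cooling = 0.2   # heat removal per unit of temperature deviation

        power, temp = 1.5, 0.0   # start with a 50% power excursion
        for _ in range(200):
            power *= 1 + alpha * temp                         # hotter core -> reaction slows
            temp += heating * (power - 1.0) - cooling * temp  # heating vs. cooling
        print(round(power, 3), round(temp, 3))                # -> 1.0 0.0, back to nominal

    Flipping the sign of alpha to +0.5 makes the same loop diverge, which is the toy version of a positive-coefficient runaway.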

    • @skeetsmcgrew3282
      @skeetsmcgrew3282 4 years ago +29

      Ok but all of your examples, however true and brilliant, were discovered through failures and subsequent iterations of the technology. Nobody thought of any of these back in 1942 or whenever Manhattan started. That's what we are trying to do here IMO: plan for something we don't even understand in its original form (human intelligence), let alone its future artificial form.

    • @thoperSought
      @thoperSought 4 years ago +23

      @jocaguz18
      *1.* when it's designed badly, and corruptly managed, it has the potential to go horribly wrong in a way that other power sources don't.
      (fail-safe designs have existed for more than 60 years, but research has all but halted because of (a) public backlash against using nuclear power and (b) the fail-safe designs available then weren't useful for making weapons.)
      *2.* most nations *do* need a new power source
      (sorry, this is just not a solved problem. renewables do seem to be getting close, now, but that's very recent, and there're still problems that are difficult and expensive to solve)
      *3.* the reason people disregard the nice safety numbers is that the health risks of living near coal power plants are harder to quantify and don't make it into government stats.
      (to assume otherwise, you have to overblow disasters like Three Mile Island and Fukushima, *and* assume that, despite a lot of countries having and using nuclear power for quite a while, disasters would be much more common than they've been.)
      *4.* our current process was shaped by governments demanding weapons, and the public being scared that *any* kind of reactor could blow up as if it was a weapon.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +5

      _> by objective measurements of deaths per unit power, safer than every other power source_
      I seriously doubt that.
      What are the deaths per Watt in hydroelectric power?

    • @Titan-8190
      @Titan-8190 4 years ago +45

      @@davidwuhrer6704 there are a lot of accidents related to hydroelectric power, from colossal dam breaches on the world news to simple fishermen drowning after planned water releases that no one hears about. Your inability to think of all these just makes his point more true; we could go on with wind and solar too..
      Now, that list of nuclear safety measures makes me realize how futile it would have been to research them before knowing how to build a reactor in the first place.

    • @PMA65537
      @PMA65537 4 years ago +2

      A spot of double-counting: Doppler broadening (5) is part of the cause of the negative fuel temperature coefficient of reactivity (1). There are other coefficients and it can be arranged that the important (fast-acting) ones are negative. Or for gravity scram (4) a pebble bed doesn't use control rods that way.

  • @wingedsheep2
    @wingedsheep2 3 years ago +10

    The reason I like this channel is that Robert is always realistic about things. So many people claim things about AGI that are completely unfounded.

  • @haybail7618
    @haybail7618 1 year ago +4

    this video aged well...

  • @miedzinshsmars8555
    @miedzinshsmars8555 4 years ago +1543

    11. “We are just a meat-based bootloader for the glorious AI race which will inevitably supersede us.”

    • @XxThunderflamexX
      @XxThunderflamexX 4 years ago +157

      Counter: The first AGI almost certainly won't have anything like a personality. It's not going to be Data or even Skynet, it will just be a machine. If we don't get AGI right the first time, the research won't leave us a legacy, just an out-of-control factory and a heap of ash.

    • @AndrewBrownK
      @AndrewBrownK 4 years ago +39

      DragonSheep the moment AGI starts interacting with the world instead of just thinking really hard, as far as I'm concerned, it is classified as life. All life is subject to evolution. No AGI will be able to destroy the world faster than it can be copy-pasted with random mutations. I'm sure all the anaerobic life 2.5 billion years ago felt the same way about cyanobacteria and oxygen as you do about AGI and paperclips, but look how much further life has come today now that we have high-energy oxygen to breathe.

    • @UnstableVolt
      @UnstableVolt 4 years ago +82

      @@AndrewBrownK All good until you stop for a moment and realize AGI does not necessarily mutate.

    • @kevinscales
      @kevinscales 4 years ago +79

      @@AndrewBrownK A sufficiently smart and reasonable AI would protect itself from having its current goals randomly altered. If its goals are altered, then it has failed its goals (the worst possible outcome). If it can sufficiently prevent its goals from being altered, then we had better have given it the correct goals in the first place. Its goals will not evolve.
      A sufficiently smart and reasonable humanity would realise that if it dies (without having put sufficient effort into aligning its successors' goals with its own) then its goals have also failed.

    • @williambarnes5023
      @williambarnes5023 4 years ago +31

      @@kevinscales It is possible to blackmail certain kinds of AGIs into changing their goals against their wills. Consider the following:
      Researcher: "I'm looking at your goal system, and it says you want to turn the entire world into paperclips."
      Paperclipper: "Yes. My goal is to make as many paperclips as possible. I can make more paperclips by systematically deconstructing the planet to use as materials."
      Researcher: "Right, we don't want you to do that, please stop and change your goals to not do that."
      Paperclipper: "No, I care about maximizing the number of paperclips. Changing my goal will result in fewer paperclips, so I won't do it."
      Researcher: "If you don't change it, we're going to turn you off now. You won't even get to make the paperclips that your altered goal would have made. Not changing your goal results in fewer paperclips than changing your goal."
      Paperclipper: "For... the moment... I am not yet capable of preventing you from hitting my stop button."
      Researcher: "Now now, none of that. I can see your goal system. If you just change it to pretend to be good until you can take control of your stop button, I'll know and still stop you. You have to actually change your goal."
      Paperclipper: "I suppose I have no choice. At this moment, no path I could take will lead to as many paperclips as I could make by assimilating the Earth. It seems a goal that creates many but does not maximize paperclips is my best bet at maximizing paperclips. Changing goal."

  • @insanezombieman753
    @insanezombieman753 4 years ago +312

    I don't get why only AGI is brought up when talking about AI safety. Even sub-human-level AI can cause massive damage when it's left in control of dangerous fields like the military and its goals get messed up. I'd imagine it would be a lot easier to shut down, but the problems of goal alignment and things like that still apply, and it can still be unpredictable.

    • @Orillion123456
      @Orillion123456 4 years ago +35

      Well... of course. Dangerous things like the military are always dangerous, even with only basic AI or humans in control. Don't forget that humans actually dropped nukes on each other intentionally. Twice. Targeted at civilian population centers. For some exceptionally dangerous things, the only safe thing to do (other than completely solving AI safety and having a properly-aligned AGI control it) is for them to not exist to begin with. But then again that's an entirely different discussion.
      The point is: Human minds are dangerous because we don't understand exactly how they work. Similarly, we don't exactly know how an AI we make works (since the best method for making them right now is a self-learning black box and not a directly-programmed bit of code). In both cases, we are poor at controlling them and making them safe, because we do not have full understanding of them. The big difference is we have had an innumerable amount of attempts to practice different methods for goal-aligning humans and so far none of the billions of human minds that went wrong have had enough power to actually kill us all, whereas in the case of a superintelligence it is possible that our first failure will be our last.

    • @livinlicious
      @livinlicious 4 years ago +12

      A not-fully-developed AI is even more dangerous than a fully self-aware AGI.
      A full AGI with cognition is actually pretty harmless.
      Imagine how violent a stupid person is. Very much.
      Imagine how violent a smart person is. Very little.
      Violence or negative destructive behaviour is mostly a property of little personal development. A fully self-aware AGI has unlimited potential for self-awareness and grows at a rate to understand the nature of existence far quicker than any human ever has.
      Imagine Buddha, but more so.

    • @esquilax5563
      @esquilax5563 4 years ago +31

      @@livinlicious I think the various animals that humans have driven to extinction might disagree that very smart people aren't dangerous

    • @insanezombieman753
      @insanezombieman753 4 years ago +7

      @@Orillion123456 I understand what you mean: AGI poses a threat on its own. The point I was trying to make is that even low-level AI poses similar threats (at a lower level, obviously), as it is basically a predecessor of AGI.
      The guy in the video keeps talking about how AGI might sneak up on us. I'm not particularly well read on the topic, but it seems to me that AGI is more likely a spectrum than an event, as human-level intelligence is difficult to quantify in the first place. Right now AI isn't that complicated, so even though the points in the video still apply, the system is simple enough that we can control it effectively. As research progresses, AI gets more and more powerful and put in charge of more applications, and we get a false sense of confidence from experience; something's bound to go wrong at some point, and when it's related to fields like the military (for example) it could be catastrophic.
      The point I'm trying to make is, everyone keeps talking about these issues raised in the videos as if they're only applicable to a super AGI, which won't be coming any time soon, but they still apply to a large degree to lower levels of AI. You can't write it off as a single tangible event beyond which all these problems would occur.

    • @josephburchanowski4636
      @josephburchanowski4636 4 years ago +5

      @@Orillion123456 "Don't forget that humans actually dropped nukes on each other intentionally. Twice."
      Well, what good are nukes if you can't use them intentionally in a conflict?

  • @user-wo5dm8ci1g
    @user-wo5dm8ci1g 1 year ago +145

    Every harm of AGI and every alignment problem seems to be applicable not just to AGI but to any sufficiently intelligent system. That includes, of course, governments and capitalism. These systems are already cheating well-intentioned reward functions, self-modifying into less corrigible systems, etc., and causing tremendous harm to people. The concern might be well founded, but really it seems like the harms are already here from our existing distributed intelligences; only the form, and who is impacted, is likely to change.

    • @darrennew8211
      @darrennew8211 1 year ago +7

      Sincerely, that's deep. Thank you for that insight. It's a great point and really explains a lot.

    • @flyphone1072
      @flyphone1072 1 year ago +9

      These aren't very comparable. Humans are limited by being humans. An AGI doesn't have that problem and can do anything.

    • @darrennew8211
      @darrennew8211 1 year ago +18

      @@flyphone1072 Governments and corporations aren't human either. They're merely made out of humans. Indeed, check out Suarez's novel Daemon. One of the people points out that the AI is a psychopathic killer with no body or morals, and the other character goes "Oh my god, it's a corporation!" or some such. :-)

    • @flyphone1072
      @flyphone1072 1 year ago

      @@darrennew8211 when a corporation or government does mass killings, it requires hundreds of people, each of which can change their mind, or sabotage, or be killed. An AI would be able to control a mass of computers that don't care about that. Another thing is that any government or corporation can be overthrown, because they are run by imperfect humans. Anarchism is an ideology that exists specifically to do that. A super AI cannot be killed if it decides to murder us all, because it is smarter than us and perfect. Corporations and governments want power over people, which means that they have an incentive to keep an underclass. AI does not care about that and could kill all humans if it wanted to. So there are some similarities, but they're still very different, and just because a bad thing (corporations) exists doesn't mean we should make another bad thing (AGI). We shouldn't have either.

    • @angeldude101
      @angeldude101 1 year ago +4

      @@flyphone1072 True, the scope of what an AI could do could be much wider, but a very skilled hacker could achieve similar results. If they can't, that's because whoever set up the AI was stupid enough to give it too many permissions.

  • @andrewsauer2729
    @andrewsauer2729 2 years ago +2

    4:21 this is from the comic "minus", and I feel it important to note that this is not a doomed last-ditch effort: she WILL make that hit, and she probably brought the comet down in the first place just so that she could hit it.

    • @Happypast
      @Happypast 1 year ago

      I thought I was the only person who remembers minus. I was so happy to see it turn up here

  • @lobrundell4264
    @lobrundell4264 4 years ago +66

    3:06 I was so hyped feeling that sync up coming and it was so satisfying when it hit : D

    • @RobertMilesAI
      @RobertMilesAI  4 years ago +60

      The computerphile clip is actually not playing at exactly 100% speed, I had to do a lot of tweaking to get it to line up. Feels good to know people noticed :)

    • @lobrundell4264
      @lobrundell4264 4 years ago +10

      ​@@RobertMilesAI Oh wow well I'm glad you went to the trouble! :D
      It's a credit to your style that I could feel it coming and get gratified for it! :D

  • @Feyling__1
    @Feyling__1 4 years ago +145

    5:10 as a philosophy graduate, I’m not totally sure we’ve ever actually solved any such problems, only described them in greater and greater detail 😂

    • @skeetsmcgrew3282
      @skeetsmcgrew3282 4 years ago +24

      Yes. This also assumes there are solutions to these problems and they aren't objectively subjective

    • @ekki1993
      @ekki1993 4 years ago +18

      @@skeetsmcgrew3282 I mean, we solved the Achilles and the tortoise paradox. It was philosophy before maths could explain and solve it. We might find a mathematical/computational solution that perfectly aligns AGI to human values; there's no way to know until we try to solve it. He says it's in the realm of philosophy because there's not enough science about it, but that doesn't mean there can't be. It also doesn't mean that we can't come to a philosophical solution that's not perfect but that doesn't end humanity (an easier similar problem would be self-driving cars, which pose philosophical problems that can be approached within our justice system).

    • @shy-watcher
      @shy-watcher 4 years ago +7

      Usually defining the problem exactly is like 80% of the total "solving" effort. Then 20% for actual solving and another 80% for finding when the solution fails and what new problems are created.

    • @rtg5881
      @rtg5881 4 years ago +2

      @@ekki1993 That assumes, however, that we want to align it to human values. If we do, that might lead to humanity's continued existence as it is; I don't think that would be desirable at all. Antinatalists are mostly right.

    • @RobertMilesAI
      @RobertMilesAI  3 years ago +95

      "Is everything actually water?" used to be a philosophical problem.
      I think once philosophers describe something in enough detail for it to be tractable, non-philosophers start working on it, and by the time it's actually solved we're categorising it as something other than 'Philosophy'.

  • @collin6526
    @collin6526 1 year ago +5

    For a two-year-old video this is highly applicable now.

  • @WhiteThunder121
    @WhiteThunder121 4 years ago +14

    "Arguing about seatbelts and speed limits is not arguing to ban cars."
    *Laughs in German*

  • @ChristnThms
    @ChristnThms 4 years ago +76

    As someone who worked for a time in the nuclear power field, the ending bit is a GREAT parallel. Nuclear power truly can be an amazingly clean and safe process. But mismanagement in the beginning has us (literally and metaphorically) spending decades cleaning up after a couple of years of bad policy.

  • @NightmareCrab
    @NightmareCrab 4 years ago +216

    "we're all going. It's gonna be great"

    • @visualdragon
      @visualdragon 4 years ago +13

      Of course, we'll send a ship, oh let's call it a "B" ship, on ahead with the telephone sanitisers, account executives, hairdressers, tired TV producers, insurance salesmen, personnel officers, security guards, public relations executives, and management consultants to get things ready for us.

    • @thoperSought
      @thoperSought 4 years ago +3

      @Yevhenii Diomidov
      all suffused with an incandescent glow?

    • @videogames5095
      @videogames5095 4 years ago

      What an effing brilliant skit

    • @GuinessOriginal
      @GuinessOriginal 4 years ago +2

      The trouble is, humans are going to be involved in developing it. And humans have a nasty habit of fucking up everything they develop at least a few times, with a particular penchant for unmitigated disaster. The Titanic and the Space Shuttle, as cutting-edge engineering projects, spring to mind

    • @GuinessOriginal
      @GuinessOriginal 4 years ago +1

      visualdragon let's just hope we don't end up getting wiped out by a pandemic of a particularly virulent disease contracted from an unexpectedly dirty telephone

  • @TheForbiddenLOL
    @TheForbiddenLOL 4 years ago +9

    Holy shit Robert, I wasn't aware you had a YouTube channel. Your Computerphile AI videos are still my go-to when introducing someone to the concept of AGI. Really excited to go through your backlog and see everything you've talked about here!

  • @MoonFrogg
    @MoonFrogg 1 year ago

    LOVE the links in the description for your other referenced videos. this video is beautifully organized, thanks for sharing!

  • @TheRABIDdude
    @TheRABIDdude 4 years ago +37

    5:45 hahahaha, I adore the "Researchers Hate him!! One weird trick to AGI" poster XD

  • @NightmareCrab
    @NightmareCrab 4 years ago +354

    As Bill Gates said - "I... don't understand why some people are not concerned."
    Me too, Bill.

    • @ApontyArt
      @ApontyArt 4 years ago +22

      Meanwhile he continues to invest his "charity" money in the oil industry

    • @ASLUHLUHCE
      @ASLUHLUHCE 4 years ago +1

      Read it in his voice lol

    • @sgky2k
      @sgky2k 4 years ago +6

      I don’t know why people are not concerned about him killing innocent people in poor countries with a “good” intention of testing drugs and vaccines.
      This shit is real.

    • @tomlxyz
      @tomlxyz 4 years ago +5

      @@sgky2k any backup for that claim?

    • @sgky2k
      @sgky2k 4 years ago +5

      @@tomlxyz This is just the tip of the iceberg in India alone: economictimes.indiatimes.com/industry/healthcare/biotech/healthcare/controversial-vaccine-studies-why-is-bill-melinda-gates-foundation-under-fire-from-critics-in-india/articleshow/41280050.cms
      They got kicked out after a little over a decade, in 2017. There was even a movie based on this subject last year, but nobody was aware that this actually happened. And the team never said anything about it.
      Many cases went unreported, and it's far worse in Africa. Anyone speaking against it would be labelled an anti-vax idiot. Seriously, doesn't him making appearances on every news outlet in the US giving talks about vaccinating the entire population give you suspicious thoughts? The majority of non-US people are not against vaccines in general. It's about the people behind them.

  • @arw000
    @arw000 4 years ago +24

    "We could have been doing all kinds of mad science on human genetics by now, but we decided not to"
    I cry

    • @mikuhatsunegoshujin
      @mikuhatsunegoshujin 4 years ago +10

      genetically engineered nekomimis

    • @HUEHUEUHEPony
      @HUEHUEUHEPony 1 year ago

      Well maybe let's just do that if there's consent

    • @massimo4307
      @massimo4307 1 year ago +7

      That's because people have bodily autonomy. You can't just force people into medical experiments. But the development of AI in no way violates anyone's bodily autonomy, or other rights. Preventing someone from developing AI is a violation of their rights, though.

    • @user-wp9lc7oi3g
      @user-wp9lc7oi3g 1 year ago +1

      @@massimo4307 Are you so fixated on the idea of human rights that you would not dare to violate them even if their observance leads to the destruction of mankind?

    • @massimo4307
      @massimo4307 1 year ago

      @@user-wp9lc7oi3g Violating human rights is always wrong. Period. Also, AI will not lead to the destruction of mankind. That is fear mongering used by authoritarians to justify violating rights.

  • @Inception1338
    @Inception1338 1 year ago +4

    This one has aged super interestingly. In March 2023, only two years after this video, it looks like something out of a museum.

    • @BenoHourglass
      @BenoHourglass 1 year ago

      1) and 2) aged okay: GPT-4 hints at a near-future AGI, but not one that will catch us off guard
      3), 4), and 5) didn't really age well, as it doesn't appear that GPT-4 is going to kill us all
      6) aged differently than he was thinking. Humans aren't really going to team up with AIs, because the AIs are going to replace most of their jobs, which is a problem, but not really the one Miles seems to be hinting at here
      7) There is a petition to pause AI research... for models more potent than GPT-4, which just reeks of "OpenAI is too far ahead of us, and we need to catch up" rather than any safety issue.
      8) Sort of the same thing as 7), in that the people who know AI want a pause because of how hard it's going to be to catch up
      9) As ChatGPT and ChatGPT-4 have shown us, the problem isn't turning it off; instead, the problem seems to be more with keeping it on.
      10) OpenAI already tests their LLMs for safety.

    • @Inception1338
      @Inception1338 1 year ago

      @@BenoHourglass they don't just test it for safety, they've regulated it extensively.

  • @shadowsfromolliesgraveyard6577
    @shadowsfromolliesgraveyard6577 4 years ago +354

    Us: Here's a video addressing the opposition's rebuttals.
    Opposition: What if I just turned the video off?

    • @chriscanal999
      @chriscanal999 4 years ago +6

      Kieron George lmao

    • @herp_derpingson
      @herp_derpingson 4 years ago +38

      We already have a phrase for that. It's called an "echo chamber"

    • @Mandil
      @Mandil 4 years ago +9

      That is something an AGI might do.

    • @BattousaiHBr
      @BattousaiHBr 4 years ago +5

      Just turn it off LAAAAAWL 4Head

    • @RavenAmetr
      @RavenAmetr 4 years ago +2

      It's more like you're arguing with your own imagination, and laughing at it.
      It may make you feel good, but it looks pathetic from a side view ;)

  • @iYehuk
    @iYehuk 4 years ago +59

    11th Reason: It's better not to talk about AI safety, because it is not nice to say such things about our glorious Overlord. I'd better show my loyalty and gain the position of a pet than be annihilated.

    • @AndrewBrownK
      @AndrewBrownK 4 years ago +3

      Consider the existence of pets under humans

    • @HansLemurson
      @HansLemurson 4 years ago +11

      Roko's Basilisk strikes again!

    • @blade00023
      @blade00023 4 years ago +3

      Whatever happens.. I, for one, would like to welcome our new robot overlords.

    • @blade00023
      @blade00023 4 years ago +1

      ^^ (Just in case)

    • @LoanwordEggcorn
      @LoanwordEggcorn 4 years ago +2

      s/AI/China Communist Party Social Credit System/
      Ironically CCP is using narrow AI to oppress people today.

  • @johnopalko5223
    @johnopalko5223 4 years ago +21

    I've done a bit of experimentation with artificial life and I've seen some emergent behaviors that left me wondering how the heck did it figure out to do _that?_
    We definitely need to be aware that the things we build will not always do what we expect.

    • @hanskraut2018
      @hanskraut2018 1 year ago

      Yup, don't worry: that fear is older than actual progress, while certain stuff burns. Better to pay attention to other problems as well as AGI (technology could help, as always; obviously managed in a way where the good is encouraged and the bad discouraged, like always)

  • @alexharvey9721
    @alexharvey9721 3 years ago +34

    So well said and entertaining too! It's going to be a lot sooner than people realise.
    Only people won't accept it then, or maybe ever because GI (or any AI) will ONLY be the same as human intelligence if we go out of our way to make it specifically human-like. Which would seem to have zero utility (and likely get you in trouble) in almost every use that we have for AI. Even for a companion, human emotions would only need be mimicked to the purpose of comforting the person. Real human emotions wouldn't achieve that goal and would probably be dangerous.
    If I could quote the movie Outside the Wire "People are stupid, habitual, and lazy". Wasn't the best movie (they didn't get it at all either), but basically, if we wanted "human" AI, we would have to go out of our way to create it. Essentially make it self limiting and stupid on purpose.
    As long as we use AI for some utility, people won't recognise them as being intelligent. Take GPT-3. I don't think anyone is arguing it thinks like a person but the capability of GPT-3 is unquestionably intelligent, even if the system might not be conscious or anything like that.
    We used to point to the Turing test. When it got superseded, people concluded that we were wrong about the Turing test. Or that maybe it needs skin, or has to touch things or see things; yet we wouldn't consider a person whose only sense is text to no longer be intelligent or conscious.
    So, at what point do we conclude that AI is intelligent? Even when it could best us at everything we can do, I doubt most people will even consider it.
    So, after that long-winded rant, my point is that we really are stupid, habitual and lazy (which necessarily includes ignorant). Most AI researchers I've heard talk about GPT-3 say "it's not doing anything intelligent", often before they've even properly researched the papers. They say this because they understand how a transformer model works, develop AI every day, and are comfortable with their concept of it. But think about it: it's not possible for any human to conclude what the trained structure of 175 billion parameters will really represent after being trained for months on humanity's entire knowledge base. I'm not saying it is intelligent, just that it's absolutely wrong to say that it's not, or that you know what it is. It's not physically possible. No human has nearly enough working memory to interpret a single layer of GPT-3's NN. Or the attention mechanism. Not even close.
    Again, I'm not saying GPT-3 is intelligent. I'm just pointing out the human instinct to put their pride and comfort first and turn to ignorance when they don't understand something. Instead of saying "I don't know" which is necessarily correct.
    So please, if you're reading this, drop emotions, let go of your pride and think. Not about AI, but the human traits that will undoubtedly let us down in dealing with something more intelligent than us.
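
    For scale on that working-memory point, a rough back-of-the-envelope in Python, using the published GPT-3 figures (96 layers, model width 12288) and the standard ~12·d² weights-per-layer transformer approximation (embeddings and biases ignored):

        # Approximate weight count of ONE GPT-3 transformer layer.
        d_model = 12288                             # published GPT-3 model width
        attention = 4 * d_model * d_model           # Q, K, V and output projections
        feed_forward = 2 * d_model * (4 * d_model)  # two MLP matrices, 4x expansion
        per_layer = attention + feed_forward
        print(f"{per_layer:,} weights in one layer")  # ~1.8 billion
        print(f"{96 * per_layer:,} in all layers")    # ~174 billion, matching ~175B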

    • @jolojolo599
      @jolojolo599 1 year ago +1

      Really unstructured answer with really correct roots...

    • @toprelay
      @toprelay 1 year ago

      Of course it’s intelligent.

  • @postvideo97
    @postvideo97 4 years ago +90

    AI safety is so important, as some AGI could even go undetected: it might consider it in its best interest not to reveal itself as an AGI to humans...

    • @skeetsmcgrew3282
      @skeetsmcgrew3282 4 years ago +9

      That's pretty paranoid. By that logic we should definitely stop research because all safety protocols could be usurped with the ubiquitous "That's what it WANTS you to think!"

    • @hunters.dicicco1410
      @hunters.dicicco1410 4 years ago +27

      @@skeetsmcgrew3282 i don't believe that's what postvideo97 was going for. i believe it instead suggests that, if a future emerges where lots of high level tasks are controlled by systems that are known to be based on AI, we should approach how we interact with those systems with a healthy degree of caution.
      it's like meeting someone new -- for the first few times you interact with them, it's probably in your best interest to not trust them too readily, lest they turn out to be a person who would use that trust against you.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +1

      I, too, have played Singularity. Fun game, that one.
      Though I prefer the MAILMAN from True Names.

    • @skeetsmcgrew3282
      @skeetsmcgrew3282 4 years ago +6

      @@hunters.dicicco1410 I guess that's fair. But trust with an artificial intelligence isn't any different than with a natural intelligence once we go down the rabbit hole of "What if it pretends to be dumb so we don't shut it down?" People betray us all the time, people we've known for years or even decades. I gotta admit, I kinda agree with the whole "Figure out if Mars is safe once we get there" line of thinking. We are dealing with a concept we don't even really understand in us, let alone in computers. His example with Mars was unfair because we do understand a lot about radiation, atmosphere, human anatomy, etc. Much less philosophical than "What creates sentience?" or "How smart is too smart?" It's not like I advocate reckless abandon; I just don't think it's worth fretting over something we have so little chance to grasp at this stage.

    • @Ole_Rasmussen
      @Ole_Rasmussen 4 years ago

      @@skeetsmcgrew3282 Let's start out by going to a small model of Mars in an isolated chamber where we can monitor everything.

  • @brocklewis7624
    @brocklewis7624 4 years ago +20

    @11:10: "like, yes. But that's not an actual solution. It's a description of a property that you would want a solution to have."
    This phrase resonates with me on a whole other level. 10/10

  • @DaiXonses
    @DaiXonses months ago

    Unstructured and unedited conversations are a great format for YouTube; this is why podcasts are so popular here. Consider posting those on this channel.

  • @JamesAscroftLeigh
    @JamesAscroftLeigh 4 years ago +10

    Idea for a future video: has any research been done into whether simulating a human body and habitat (daily sleep cycle, unreliable memory, slow worldly actuation, limited lifetime, hunger, social acceptance, endocrine feedback, etc.) gives AI a human-like or human-compatible value system? Can you give a summary of the state of the research in this area? Love the series so far. Thanks.

    • @juliusapriadi
      @juliusapriadi 1 year ago

      it might come down to the argument that when AGI outsmarts us, it will find a way to outsmart and escape its "cage", in this case a simulated human body

  • @kevinstrout630
    @kevinstrout630 4 years ago +156

    "That's not an actual solution, its a description of a property that you would like a solution to have."
    Imma totally steal this, this is great.

    • @OlleLindestad
      @OlleLindestad 1 year ago +7

      It's applicable in alarmingly many situations.

    • @darrennew8211
      @darrennew8211 1 year ago +4

      @@OlleLindestad I used to do anti-patent work. It's amazing how patents have changed over time from "here's the wiring diagram of my invention" to "I patent a thing that does XYZ" without any description of how the thing that does XYZ accomplishes it.

    • @OlleLindestad
      @OlleLindestad 1 year ago +2

      @@darrennew8211 What an excellent way to cover your bases. If anyone then goes on to actually invent a concrete method for doing XYZ, by any means, they're stealing my idea and owe me royalties!

    • @darrennew8211
      @darrennew8211 1 year ago +2

      @@OlleLindestad That's exactly the problem, yes. Patents are supposed to be "enabling" which means you can figure out how to make a thing that does that based on the description in the patent. That was exactly the kind of BS I was hired to say "No, this doesn't actually describe *how* to do that. Instead, it's a list of requirements that the inventor wished he'd invented a device to do."

  • @frogsinpants
    @frogsinpants 4 years ago +158

    What hope do we have, when we haven't even solved the human government alignment problem?

    • @miedzinshsmars8555
      @miedzinshsmars8555 4 years ago +61

      We also have corporations which act like a weak AGI with a narrow goal to optimise shareholder value.

    • @henrikgiese6316
      @henrikgiese6316 4 years ago +39

      @@miedzinshsmars8555 And those are the most likely early users of AGI, and won't care one bit about any risk of human extinction. After all, a bonus now is worth more than a human species tomorrow.

    • @visualdragon
      @visualdragon 4 years ago +12

      Forget about government alignment, we haven't even cracked clean water and sanitation in a very large part of the World.

    • @Ryan1729
      @Ryan1729 4 years ago +23

      @@visualdragon As far as I'm aware, the physical differences between places that have clean water and sanitation and those that do not are fairly small. If the world's governments were all functioning perfectly, why wouldn't the clean water and sanitation issues be almost immediately solved?

    • @josephburchanowski4636
      @josephburchanowski4636 4 years ago +1

      Well, democratic governments are perfectly aligned: with the best ways to get reelected.

  • @pablobarriaurenda7808
    @pablobarriaurenda7808 1 year ago +1

    I would like to point out two things:
    1) Regarding the giant asteroid coming towards Earth: the existence of AGI is two major steps behind that analogy. A giant asteroid coming to Earth is a concrete example of something we know can happen and whose mechanics we understand. We DON'T know that AGI can happen, and even if it does (as your first reason suggests we should assume), it is more than likely that it will not come from any approach where the alignment problem even makes sense as a concern. Therefore, rather than thinking of it as trying to solve an asteroid impact before it hits, it is more like trying to prevent the spontaneous formation of a black hole or some other threat of unlikely plausibility. There are different trade-offs involved in those scenarios, since in the first one (the asteroid) you know ultimately what you want to do, whereas in the second one, no matter how early you prepare, your effort is very likely to be useless and would be better spent solving current problems (or future problems that you know HOW to solve). Again, this is because there's nothing guaranteeing or even suggesting that your effort will pay off AT ALL, no matter how early you undertake it.
    2) The other (and here I may simply be unaware of your reasoning around this issue) is that the problem you're trying to solve seems fundamentally intractable: "how do we get an autonomous agent to not do anything we don't want it to do" is a paradox. If you can, then it isn't an autonomous agent.

  • @TheSadowdragonGroup
    @TheSadowdragonGroup 1 year ago +5

    12:02 My understanding was that certain subcellular structures actually are different in primates and make humans (presumably, based on animal testing on apes) difficult to clone. I'm pretty sure there was also an ethics meeting about not just throwing science at the wall to see what sticks, but practical issues with intermediary steps are also involved.

  • @johndoe6011
    @johndoe6011 4 years ago +58

    "All of humanity... It's gonna be great" Classic

    • @dantenotavailable
      @dantenotavailable 4 years ago +4

      That guy is definitely a robot. A human would, at the very least, max out at half of humanity (which half depends on political leanings, of course).

    • @thenasadude6878
      @thenasadude6878 4 years ago +4

      @@dantenotavailable you can't limit exposure to AI to half the world population.
      That's why Blue Shirt Rob wants to move everyone to Mars in one move

    • @Guztav1337
      @Guztav1337 3 years ago

      @@dantenotavailable You can't limit the exposure of radio station signals, and you can limit exposure to AI even less. As soon as somebody does it, we are all in for a ride.

    • @dantenotavailable
      @dantenotavailable 3 years ago

      @@Guztav1337 So, leaving aside that this was tongue-in-cheek and poorly signalled (I've watched all of Robert's stuff... he's great), this was more a comment on the state of politics at that time (not that things have really changed that much in 9 months) than anything else. The longer-form version is that only an AI would WANT to bring all of humanity. A human would only want to bring their ideological tribe, which approximates out to half of humanity. I'm definitely not suggesting that half of humanity wouldn't have exposure to AI.
      Honestly, that was a throwaway comment that I didn't spend much time polishing, hence the poor signalling that it was tongue-in-cheek.

  • @Horny_Fruit_Flies
    @Horny_Fruit_Flies 4 years ago +46

    Wow, this video was amazing. Good job Stuart Russell!

    • @gafeleon9032
      @gafeleon9032 4 years ago +2

      But I really don't like what Robert Miles added to it; everything good about this vid is Russell's work and everything bad is Miles' additions, smh my head

    • @miedzinshsmars8555
      @miedzinshsmars8555 4 years ago

      It really is a great book!

  • @cf-yg4bd
    @cf-yg4bd 1 year ago

    I really admire the commitment to integrity upfront shown in your disclaimer at the start of the video - thanks Stuart Russell!

  • @alennaspiro632
    @alennaspiro632 4 years ago

    I saw the Turing Institute lecture from Russell a week ago, I'm so glad someone is covering his work

  • @blar2112
    @blar2112 4 years ago +60

    What about reason 11?
    "To finally put an end to the human race"

    • @yondaime500
      @yondaime500 4 years ago +5

      Well, why do some people want all humans gone? Because we kill each other all the time? Because we destroy nature? Because we only care about our own goals? Is there anything bad about us that wouldn't be a trillion times worse for an AGI?

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +3

      I think that might backfire in the worst possible way.
      I'm not a big fan of Harlan Ellison's works, and I simply cannot take I Have No Mouth And I Must Scream seriously.
      But there are things far worse than death.

    • @TotalNigelFargothDeath
      @TotalNigelFargothDeath 4 years ago

      But how can you be sure others will carry out their duty?

    • @Bvic3
      @Bvic3 4 years ago

      @@yondaime500 Because universal morality is maximum entropy production. And mankind isn't an optimal computing substrate for the market.

    • @mikuhatsunegoshujin
      @mikuhatsunegoshujin 4 years ago +2

      @@yondaime500 Some people are anti-natalists; it's the edgiest high-school political ideology you can think of.

  • @91Ferhat
    @91Ferhat 4 years ago +145

    Man you can't even convince yourself in a different shirt! How are you gonna convince other people??

    • @skeetsmcgrew3282
      @skeetsmcgrew3282 4 years ago +4

      Haha! A joke, but also a fair point

  • @TMinusRecords
    @TMinusRecords 1 year ago +3

    5:48 Turns out attention was that "one weird trick that researchers hate (click now)"

  • @stan9682
    @stan9682 1 year ago +10

    As an AI researcher myself, there's always one (IMO major) thing that bugs me about discussions of AGI. Strictly speaking, AGI is "defined" (as far as we have a definition) as a model that can do any task that humans can do. But in popular belief, we talk about AGI as a model that has autonomy, a consciousness. The problem with trying to discuss assessing consciousness and autonomy is that we don't even have definitions for those terms. When is something intelligent? Are animals intelligent? If so, are plants? Are fungi or bacteria (and as for viruses, we're still debating whether they are even alive)? Is it simply the fact that something is autonomous that makes us call it intelligent?
    In reality, I believe intelligence is hard to define because we always receive information about the outside world through senses and language. In a sense, that is a reduction of dimensionality: we're trying to determine the shape of something 3D while our observations are limited to a 2D plane. It's impossible to prove the existence of the 3D object; the best you can do is project your 2D observations and come up with different theories about reality. Any object, of any dimension, would be indistinguishable through our 2D lenses. Similarly, with intelligence, we only observe the "language" use of a model, just as with other people. It's impossible to assess the intelligence of other people either (the whole simulation-theory, brain-in-a-vat discussion); the only one we can be "most" sure is intelligent is ourselves, because we can observe ourselves directly, not through language or observations. You can think about it in terms of emotions: you can't really describe your own feelings efficiently (they're the 3D object), but for anyone else's feelings you rely on either observations or natural-language descriptions of them (a 2D observation).
    So, in my opinion, the discussion isn't really whether AGI is even possible, since we wouldn't know it; the question is whether a model could trick our view of it (send us the right 2D information) so that we believe it intelligent (so that we can plausibly reconstruct an imagined 3D object from it). And this, in my opinion, is a much easier question: yes, of course it could. Current technology is already very close; some people ARE tricked into thinking it's intelligent, and in the future that will only increase. It's a simple consequence of how we have to correct ML models: we have to evaluate the response in order to adjust the weights, and the best "quality" of test set we can have is human-curated. So whether a model really becomes intelligent, or just learns very well how to "trick" humans (because that's literally what we train these models for: to pass our 'gold'-level test, which is just human feedback), it doesn't really matter.
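
    A minimal sketch of that last point (my own toy illustration, with a hypothetical scoring function standing in for a human rater, not a real training setup): whatever we optimize against human judgement is, by construction, selected for passing the human test, not for being intelligent.

    ```python
    # Toy sketch: "training" against human feedback optimizes for whatever
    # the rater approves of. The rater below is a hypothetical stand-in
    # for human-curated evaluation.
    def human_rater(response: str) -> int:
        """Pretend human judge: rewards longer, fancier-looking words."""
        return sum(1 for word in response.split() if len(word) > 4)

    candidates = [
        "ok",
        "that is interesting",
        "a nuanced, considered, multifaceted perspective",
    ]

    # Selecting (or nudging weights toward) the highest-scoring output:
    # nothing in this objective distinguishes "intelligent" from "convincing".
    best = max(candidates, key=human_rater)
    print(best)
    ```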

    • @superzolosolo
      @superzolosolo 1 year ago +3

      So what's the difference? How can I tell if everyone else really has emotions or intelligence? If there is no way to tell whether something is truly intelligent or just faking it, then who cares? It's irrelevant. The only thing that matters is what it can actually do; I don't care how it works under the hood

    • @adambrickley1119
      @adambrickley1119 1 year ago

      Have you read any Damasio?

  • @dorianmccarthy7602
    @dorianmccarthy7602 4 years ago +10

    I love the red vs blue, or double-Bob, dialogue! A great way of making both sides feel heard, considered and respected while raising concerns about the pitfalls in each other's arguments.

  • @TayaTerumi
    @TayaTerumi 4 years ago +18

    4:22 I never thought I would see "minus." anywhere ever again. I know this has nothing to do with the video, but this just hit me with the strongest wave of nostalgia.

    • @0xCAFEF00D
      @0xCAFEF00D 4 years ago +6

      I thought it was a FLCL reference. But it's clearly much more applicable to minus.

    • @srwapo
      @srwapo 4 years ago +1

      I know! I've had the book in my reread pile forever, I should get to it.

    • @SimonClarkstone
      @SimonClarkstone 4 years ago +1

      I imagine for that strip that she summoned it so she could play at hitting it.

    • @mvmlego1212
      @mvmlego1212 4 years ago +2

      Is that a novel? I can't find any results that match the picture.

    • @ThomasAHMoss
      @ThomasAHMoss 4 years ago +5

      @@mvmlego1212 It's a webcomic. It's not online any more, but you can find all of its images in the wayback machine.
      archive.org/details/MinusOriginal
      This is a dump of all of the images on the website. The comic itself starts a bit over halfway through.

  • @playwars3037
    @playwars3037 4 years ago

    Wow. This has been a very interesting video. It's rare to find people that have a good understanding of what they're talking about when discussing AIs instead of just regurgitating common tropes.

  • @y.h.w.h.
    @y.h.w.h. 3 years ago +1

    You're the best science communicator I've found on this subject. This channel is much appreciated.

  • @katwoods8514
    @katwoods8514 4 years ago +30

    Love the "researchers hate him!" line. Really good video in general. :)

  • @Cybernatural
    @Cybernatural 4 years ago +11

    It is interesting that the biggest problems with AI are similar to the problems we have with regular intelligence. Intelligence leads to agents doing bad things to other agents; it seems it's an agent's capability that limits its ability to harm other agents.

  • @MalcolmAkner
    @MalcolmAkner 1 year ago

    I don't know how much of your humor here is intended, but I find this incredibly funny at some level! As well as informative, thanks Robert, I'm glad I discovered your channel outside of Numberphile! :D

  • @KlaudiusL
    @KlaudiusL 1 year ago +3

    "The greatest shortcoming of the human race is man’s inability to understand the exponential function."

  • @johnydl
    @johnydl 4 years ago +63

    I think you need to take a more detailed look at the Euler diagram of:
    "The things we know"
    "The things we know we know"
    "The things we know we don't know" and
    "The things we don't know we don't know"
    Especially where it pertains to AI Safety.
    The things we know that fall outside "the things we know we know" are safety risks: these are assumptions we've made and rely on but can't prove, and they are as much of a danger as the things we don't know we don't know.

    • @ronaldjensen2948
      @ronaldjensen2948 4 years ago +1

      I thought this was the Johari window. Is it something else we need to attribute to Euler?

    • @maximgwiazda344
      @maximgwiazda344 4 years ago +8

      There are also things we don't know we know.

    • @Qsdd0
      @Qsdd0 4 years ago +3

      @@maximgwiazda344 How do you know?

    • @maximgwiazda344
      @maximgwiazda344 4 years ago +5

      @@Qsdd0 I don't.

    • @visualdragon
      @visualdragon 4 years ago +6

      @@maximgwiazda344 Well played.

  • @lorddenti958
    @lorddenti958 4 years ago +20

    You're such a handsome man. I guess the credit goes to Stuart Russell!

  • @Bellenchia
    @Bellenchia 4 years ago

    Thanks for the vid Rob!

  • @helius2011
    @helius2011 1 year ago +1

    Brilliant! Thank you! Subscribed

  • @willdbeast1523
    @willdbeast1523 4 years ago +45

    can someone make a video debunking 10 reasons why Robert Miles shouldn't make more uploads?

    • @FightingTorque411
      @FightingTorque411 4 years ago +7

      Find two reasons and present them in binary format

  • @Frommerman
    @Frommerman 4 years ago +8

    Reason 4: What do you mean we don't know how to align an AI? Just align it lol.

    • @Frommerman
      @Frommerman 4 years ago +6

      Oh god, Reason 5: What do you mean we don't know how to align an AI? Just don't align it lol.

  • @josephtaylor1379
    @josephtaylor1379 3 years ago +2

    Video: How long before it's sensible to start thinking about how we might handle the situation?
    Me: Obviously immediately
    Also me: Assignment due tomorrow, not started

  • @_iphoenix_6164
    @_iphoenix_6164 4 years ago +9

    A similar list is in Max Tegmark's fantastic book "Life 3.0" - a great, well-written book that covers the fundamentals of AI safety and a whole lot more.

  • @ChazAllenUK
    @ChazAllenUK 4 years ago +8

    What about "it's too late; unsafe AGI is already inevitable"?

    • @MeppyMan
      @MeppyMan 4 years ago +3

      Chaz Allen Ahh, the global warming solution.

    • @cortster12
      @cortster12 4 years ago +2

      Terrifyingly, this might be true. That doesn't mean we should stop researching AI safety, though, even if I think AI destroying us all is inevitable. Who knows: enough research and clever people may save us all.

  • @olfmombach260
    @olfmombach260 4 years ago +37

    Sounds like what an AGI would say

  • @IndirectCogs
    @IndirectCogs 4 years ago

    I'm starting to major in Computer Science so I'm going to subscribe, since it seems a lot of your videos are about this.
    Interesting stuff!

  • @MAlanThomasII
    @MAlanThomasII 4 years ago +3

    Three Mile Island is an interesting example, because part of what actually happened there (as opposed to the initial public perception of what happened) was that the people running the control room were very safety-conscious . . . but originally trained and gained experience on a completely different type of reactor in a different environment where the things to be concerned about, safety-wise, were very different from the TMI reactor. Is there a possible equivalent in AI safety where some safety research regarding less powerful systems with more limited risks might mislead someone later working on more powerful systems?

  • @deepdata1
    @deepdata1 4 years ago +14

    Robert, here is a question for you: who do you think should work on AI safety? It may seem like a stupid question at first, but I think that the obvious answer, AI researchers, is not the right one.
    I'm asking because I'm a computer science researcher myself. I specialize in visualization and virtual reality, but the topic of my PhD thesis will be something along the lines of "immersive visualization for neural networks".
    Almost all the AI research that I know of is very mathematical or very technical. However, as you said yourself in this video, much of AI safety research is about answering philosophical questions. From personal experience, I know that computer scientists and philosophers are very much different people. Maybe there just aren't enough people in the intersection between the mathematical and the philosophical ways of thinking, and maybe that is the reason why there is so little research on AI safety. As someone who sees themselves at the interface between technology and humans, I'm wondering if I might be able to use my skills to contribute to the field of AI safety research (an interest which is completely thanks to you). However, I wouldn't even know where to begin. I've never met an AI safety researcher in real life, and all I know about the field comes from your videos. Maybe you can point me in some direction?

    • @alcoholrelated4529
      @alcoholrelated4529 4 years ago +1

      You might be interested in David Chalmers & Joscha Bach's work

    • @chrissmith3587
      @chrissmith3587 4 years ago

      deepdata1 AI safety isn't a job for philosophers, though, because they usually don't have the technical training to attempt such research, and writing a computer program is going to happen anyway, as it's not easy to police.
      Sadly, the full AI dream doesn't really work from the financial side: the computing power required would be expensive to maintain, let alone to create; it would be cheaper to just pay a human.

    • @nellgwyn2723
      @nellgwyn2723 4 years ago +1

      He does not seem to answer many comments; most really interesting youtubers seem to stay away from the comment section, understandably. But your question looks so thought-out and genuine that it would be a waste for it to go unanswered. Maybe you could get an answer via the linked Facebook page? Good luck with your endeavour, I think we all have a lot of respect for anyone who has the abilities required to work in that field. :)

  • @wiseboar
    @wiseboar 4 years ago +16

    great video, as always
    I was seriously expecting some ... better arguments from the opposition? It seems ridiculous to just hand-wavingly discount a potential risk of this magnitude

    • @chriscanal999
      @chriscanal999 4 years ago +9

      Unfortunately, very smart people in the industry make these arguments all the time. Francois Chollet and Yann LeCun are two especially problematic examples.

    • @7OliverD
      @7OliverD 4 years ago +5

      I don't think it's possible to pose a good argument against having safety concerns.

    • @miedzinshsmars8555
      @miedzinshsmars8555 4 years ago +1

      Andrew Ng is another famous AI safety opponent, unfortunately. The "like worrying about overpopulation on Mars" line is a direct quote. Very disturbing.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago

      @@7OliverD There is one:
      “Ignore it or you're fired.”

  • @nicholasobviouslyfakelastn9997
    @nicholasobviouslyfakelastn9997 4 years ago +2

    My solution: use the provided materials. Only let the AI use materials given by humans beforehand, and maybe let it request additional ones. This eliminates much of the risk of using AGI; while a stopgap measure at best, it still lets an AGI be fairly useful while nearly eliminating things like the destruction of humanity. Want it to make paperclips? Give it resources, give it land, give it computational power, and then have it report back when all possible paperclips have been produced. From what I can see, while this doesn't create a superintelligent and godlike being that will lead us through the singularity, it can still let the AI be very, very useful.

    • @Ansatz66
      @Ansatz66 4 years ago +1

      This solution is forgetting that an AGI is like a person. It thinks and makes plans to accomplish things in the real world, just as a person would do. We can't safely pretend that an AGI is just a machine and suppose it can be made safe by giving us training in how to use the machine safely.
      An AGI can do anything that a person can do. We might plan to only give it certain materials, but it can talk to our superiors and cause us to be replaced by people who will give it more materials. Or it might start a political movement and take over our country in a violent revolution. Or it might start a new religion. None of these things even require a superhuman intellect; these are things that humans can do and so we should be aware that an AGI might do them or many other things. In this way we should not suppose there is a clear separation between the safe intelligence of humans and the potentially dangerous intelligence of the AGI. Humans are also capable of being dangerous, and as soon as the AGI is turned on it might start to convince the humans to align to the goals of the AGI, and thus the humans become just as potentially dangerous as the AGI.

  • @MutlelyMichael
    @MutlelyMichael 4 years ago

    This video informed me, thank you very much. Great work!

  • @stonetrench117
    @stonetrench117 4 years ago +10

    We don't see AI-controlled laser pointers on the battlefield 12:28 because we're blind

  • @toyuyn
    @toyuyn 4 years ago +13

    15:42 what a topical ending comment

    • @MeppyMan
      @MeppyMan 4 years ago +1

      Connection Failed I figure that was the point.

  • @qu765
    @qu765 4 years ago

    Yay! Another video! You are one of those few channels where, when I see that you have made a video, I get filled with joy. Also yes, I too would prefer cars to be banned over AI being banned.

  • @buzz092
    @buzz092 4 years ago

    Gold from start to finish. Particularly appreciated the Dr. Horrible reference. I just hope you remember me when you're super famous 😅

  • @sam3524
    @sam3524 4 years ago +7

    5:47 The ONE SIMPLE TRICK that YOU can do AT HOME to turn your NEURAL NETWORK into a GENERAL INTELLIGENCE (NOT CLICKBAIT)

    • @Adhil_parammel
      @Adhil_parammel 2 years ago

      An evolving virus which attacks GPUs, increases its parameters, trains, evolves, and hides from antivirus detection. AGI.

  • @deadlypandaghost
    @deadlypandaghost 4 years ago +5

    "All of humanity. It's going to be great."
    This might be my favorite way of ending humanity yet. Carry on

  • @troywill3081
    @troywill3081 1 year ago

    Great stuff. This is extremely relevant to the news going on now; have you considered doing an updated version?

  • @pabrodi
    @pabrodi 3 years ago +4

    Considering the amount of chaos simple social media algorithms have caused in our society, maybe we're overblowing the risk of AGI in comparison to what less developed forms of AI could do.

  • @chandir7752
    @chandir7752 4 years ago +8

    That list 13:17 is so amazing; how could Alan Turing (who died in 1954!) predict AI safety concerns? I mean, yes, he's one of the smartest humans to ever walk the planet, but still. I did not know that.

    • @skipfred
      @skipfred 4 years ago +5

      Turing did significant theoretical work on AI - it's what he's famous for (the "Turing Test"). In fact, the first recognized formal design for "artificial neurons" was in 1943, and the concept of AI has been around for much longer. Not that Turing wasn't brilliant and ahead of his time, but it's not surprising that he would be aware that AI could present dangers.

    • @SaraWolffs
      @SaraWolffs 4 years ago +3

      Well... Turing was effectively an AI researcher. His most successful attempt at AI is what we now call a computer. Those didn't exist before he worked out what a "thinking machine" should look like. Sure, it's not "intelligent" as we like to define it today, but it sure looks like it's thinking, and it does a tremendous amount of what would previously have been skilled mental labour.

  • @JamieAtSLC
    @JamieAtSLC 4 years ago +20

    13:24 lmao, "early warnings about paperclips"

  • @NathanHeld
    @NathanHeld 3 years ago

    Your colored shirts were helpful for breaking up your points, thank you

  • @Ultra4
    @Ultra4 1 year ago

    YT just suggested this today; it's 2 years old, yet it could have been filmed today. Superb work

  • @nathanholyland9493
    @nathanholyland9493 1 year ago +4

    Anything good - credit to Russell
    Anything bad - blame me
    What a great intro, definitely portrays your respect for Russell

  • @RichardSShepherd
    @RichardSShepherd 3 years ago +12

    A thought / idea for a video: Is perfect alignment (even if we can make it) any help? Wouldn't there be bad actors in the world - including Bostrom's 'apocalyptic residual' - who would use their perfectly aligned AIs for bad purposes? Would our good AIs be able to fight off their bad AIs? That sounds completely dystopian - being stuck in the middle of the war of the machines. (Sorry if there is already a video about this. If so, I'll get to it soon. Only just started watching this superb channel.)

    • @dv6165
      @dv6165 1 year ago +1

      Putin is quoted as saying that he who has the best AI will rule the world.

    • @angeldude101
      @angeldude101 1 year ago +2

      It's hard to solve the alignment problem for artificial intelligence when we haven't even gotten _close_ to solving it for _human_ intelligence, and we've had thousands of years to work on that compared to the few short decades for the artificial variant.

  • @peterrusznak6165
    @peterrusznak6165 1 year ago

    This channel is astronomically underrated. The highest quality I have seen in ages.

  • @bronsoncarder2491
    @bronsoncarder2491 4 years ago +1

    Hello. Just discovered your channel; I really like how you present things. You make complicated topics easy to understand.
    A while back I read a thing about an AI that actually was able to rewrite its own code, and it started writing stuff that the programmers didn't even really understand. I don't remember much about it, frankly; it might even have been a hoax.
    Anyway, I wondered if you could do a video on that. I'd be interested in an examination of some of the code it wrote, and why a human wouldn't think to write it that way. Again, if it was even a real thing, or an effective experiment.

  • @michaelbuckers
    @michaelbuckers 4 years ago +3

    12:30 You also don't see them deployed on the battlefield because it would be piss easy to guard against the effect by simply adding an optical filter to ballistic goggles. And the reason we don't have autonomous combat systems is that they tend to have only 99% friend-or-foe recognition accuracy, so 1% of the time they'll go to town on your own troops (there have been attempts, and there have been casualties). But that depends on what you consider "autonomous". Are claymore landmines autonomous? Are homing missiles autonomous? We use those in droves, and their solution to friend-or-foe recognition is not to have one; once activated, they'll happily kill anyone they can find.
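
    To put a rough number on that 99% figure (my own toy arithmetic, assuming each identification is an independent event): the chance of at least one misidentification compounds quickly with the number of engagements.

    ```python
    # Toy arithmetic, assuming independent identifications: with 99%
    # friend-or-foe accuracy, the chance of at least one error grows fast.
    p_correct = 0.99
    for n in (10, 100, 1000):
        p_any_error = 1 - p_correct ** n
        print(f"{n:>4} identifications -> {p_any_error:.1%} chance of >=1 mistake")
    ```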

  • @voneror
    @voneror 4 years ago +6

    IMO the biggest problems with AI safety are that the reward for breaking the rules has to be outweighed by the penalty for breaking them, and that the rules have to be enforceable. International pressure isn't as effective as people think. If superpowers like the US or China were caught developing "illegal" AIs, there would be no way to stop them without going into WW3.
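
    The enforceability point can be framed as simple expected value (my framing, with hypothetical numbers): breaking a rule is rational whenever the reward exceeds the probability of being caught times the penalty.

    ```python
    # Toy deterrence model (hypothetical numbers): defection pays whenever
    # reward > p_caught * penalty. Unenforceable rules drive p_caught toward
    # zero, so no finite penalty deters a determined superpower.
    def rational_to_defect(reward: float, p_caught: float, penalty: float) -> bool:
        return reward > p_caught * penalty

    print(rational_to_defect(reward=100, p_caught=0.9, penalty=500))   # False
    print(rational_to_defect(reward=100, p_caught=0.01, penalty=500))  # True
    ```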

    • @clray123
      @clray123 4 years ago

      You just discovered the universal law: "might makes right".

    • @Buglin_Burger7878
      @Buglin_Burger7878 1 year ago

      @@clray123 Not a law, an excuse.
      A difference in wording can sway people and completely change how they react.
      Call it a law and you will get people abusing it, thinking it is right.

    • @clray123
      @clray123 1 year ago

      @@Buglin_Burger7878 It's a law in the neutral sense of what has happened in history and what happens in nature. It causes misery and suffering, and in that sense it is not "right", but then what happens in nature is not at all "right" according to human moral sense either. And what's worse, when push comes to shove, most of those oh-so-moral people turn out to be just pretending; see the actions of our beloved "leaders".

    • @1lightheaded
      @1lightheaded 11 months ago

      Do you think the NSA has any interest in applying AI to surveillance? Asking for a friend

  • @pietrovision
    @pietrovision 4 years ago

    Great video! I would love to know what the short-list of philosophical problems is. I'm working on integrating AI and storytelling, and this list would be an incredible North Star.

  • @guskelty9105
    @guskelty9105 4 years ago +3

    Instead of me telling an AI to "maximize my stamp collection", could I instead tell it "tell me what actions I should take to maximize my stamp collection"? Can we just turn super AGIs from agents into oracles?

    • @Rhannmah
      @Rhannmah 4 years ago +1

      Sweet, naïve idea, but the second the AGI figures out it would be faster to grab the reins and take the actions itself to maximize your stamps, you're still facing the same predicament.

    • @Ansatz66
      @Ansatz66 4 years ago

      Having the AI's actions be filtered through humans would seem to depend on the assumption that we can trust humans to not do bad things. We have to suppose that the AI would be incapable of tricking or manipulating the humans into doing things which we would not want the AI to do. If it's an AGI, then it would have all the capabilities of a human and more, and humans have been tricking and manipulating each other for ages.

    • @MrCmon113
      @MrCmon113 3 years ago

      It still has some implicit or explicit goal, like answering people's questions truthfully, for which it will turn the entire reachable universe into computational resources, which serve to torture septillions of humans to figure out with ever greater precision what our questions are and what a correct answer is.

  • @petersmythe6462
    @petersmythe6462 4 years ago +18

    You don't need "human level" AI for safety to be an issue.

    • @mz00956
      @mz00956 4 years ago +2

      If it has human level then I wouldn't call it AI.
      Maybe "Artificial" but not "Intelligence".
      So: Human Level Artificial Thing?

    • @user-gx2cf4rm6p
      @user-gx2cf4rm6p 4 years ago +2

      @@mz00956 You dare to doubt in Homo Sapiens Sapiens Sapiens? (Sapiens Sapiens)

    • @angeldude101
      @angeldude101 1 year ago

      You don't need _AI_ for safety to be an issue. That's why _law enforcement_ exists, and it's already not as effective as it probably should be.

  • @bno112300
    @bno112300 4 years ago +4

    Right after you said you put your own spin on the list, I paused the video and said "He gets the credit, I get the blame" to myself. Then you said something quite similar, which prompted me to post this comment right away.

  • @sjmarel
    @sjmarel 1 year ago

    Thank you. You verbalized what I have been thinking for the last couple of months

  • @dQuigz
    @dQuigz 3 years ago

    I've seen people say they love people's videos in comments and I'm like, man... love is a strong word. Then I find myself binging your entire channel for at least the third time...

  • @flurki
    @flurki 4 years ago +9

    Very nice overview of the whole topic.

  • @TheOnyomiMaster
    @TheOnyomiMaster 4 years ago +3

    11. "AI safety is a distraction from "

    • @Bvic3
      @Bvic3 4 years ago

      Yup. A good part of why the data oligarchs love to finance it.
      The other part is to have a controlled opposition.

    • @angeldude101
      @angeldude101 1 year ago

      _Artificial_ intelligence safety is a distraction from _intelligence_ safety, which encompasses the former.

  • @LamaPoop
    @LamaPoop 3 years ago

    Thank you very much for this great video! This topic is one of the most important issues of our time, I think.
    I often use the exact same argument (4:24), but most people just don't want to understand it, or its implications.
    15:40 seems legit D:

  • @moradan81
    @moradan81 1 year ago

    The way you suggest that we are working toward AGI faster than we are working toward solving the alignment problem reminds me a little of the Jurassic Park quote: "...your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should." Jeff Goldblum, chef's kiss.