Intelligence and Stupidity: The Orthogonality Thesis

แชร์
ฝัง
  • เผยแพร่เมื่อ 22 พ.ค. 2024
  • Can highly intelligent agents have stupid goals?
    A look at The Orthogonality Thesis and the nature of stupidity.
    The 'Stamp Collector' Computerphile video: • Deadly Truth of Genera...
    My other Computerphile videos: • Public Key Cryptograph...
    Katie Byrne's Channel: / @gamedevbyrne
    Chad Jones' Channel: / cjone150
    / robertskmiles
    With thanks to my wonderful Patreon supporters:
    - Steef
    - Sara Tjäder
    - Jason Strack
    - Chad Jones
    - Stefan Skiles
    - Ziyang Liu
    - Jordan Medina
    - Jason Hise
    - Manuel Weichselbaum
    - 1RV34
    - James McCuen
    - Richárd Nagyfi
    - Ammar Mousali
    - Scott Zockoll
    - Ville Ahlgren
    - Alec Johnson
    - Simon Strandgaard
    - Joshua Richardson
    - Jonatan R
    - Michael Greve
    - robertvanduursen
    - The Guru Of Vision
    - Fabrizio Pisani
    - Alexander Hartvig Nielsen
    - Volodymyr
    - David Tjäder
    - Paul Mason
    - Ben Scanlon
    - Julius Brash
    - Mike Bird
    - Tom O'Connor
    - Gunnar Guðvarðarson
    - Shevis Johnson
    - Erik de Bruijn
    - Robin Green
    - Alexei Vasilkov
    - Maksym Taran
    - Laura Olds
    - Jon Halliday
    - Robert Werner
    - Roman Nekhoroshev
    - Konsta
    - William Hendley
    - DGJono
    - Matthias Meger
    - Scott Stevens
    - Emilio Alvarez
    - Michael Ore
    - Dmitri Afanasjev
    - Brian Sandberg
    - Einar Ueland
    - Lo Rez
    - Marcel Ward
    - Andrew Weir
    - Taylor Smith
    - Ben Archer
    - Scott McCarthy
    - Kabs Kabs
    - Phil
    - Tendayi Mawushe
    - Gabriel Behm
    - Anne Kohlbrenner
    - Jake Fish
    - Bjorn Nyblad
    - Stefan Laurie
    - Jussi Männistö
    - Cameron Kinsel
    - Matanya Loewenthal
    - Wr4thon
    - Dave Tapley
    - Archy de Berker
    - Kevin
    - Vincent Sanders
    - Marc Pauly
    - Andy Kobre
    - Brian Gillespie
    - Martin Wind
    - Peggy Youell
    - Poker Chen
    / robertskmiles
  • วิทยาศาสตร์และเทคโนโลยี

ความคิดเห็น • 3.9K

  • @IOffspringI
    @IOffspringI 5 ปีที่แล้ว +4545

    "Similarily the things humans care about would seem stupid to the stamp collector because they result in so few stamps. "
    you just gotta appreciate that sentence

    • @MasterOfTheChainsaw
      @MasterOfTheChainsaw 5 ปีที่แล้ว +513

      It legitimately cracked me up. I can just imagine it looking at human activity all baffled like "But... why would they do that, they're not even getting any stamps? Humans make no sense, no sense at all!"

    • @sungod9797
      @sungod9797 5 ปีที่แล้ว +49

      Yeah that was hilarious

    • @TheRABIDdude
      @TheRABIDdude 4 ปีที่แล้ว +112

      I too fell in love with this sentence as soon as I heard it! It's slightly humorous and it gets the point across perfectly!

    • @Trophonix
      @Trophonix 4 ปีที่แล้ว +86

      I love that statement so much I just want to share my love of it but if I tried to explain it to someone irl theyd just give me that "wtf are you talking about get out of my house" look :(

    • @TheRABIDdude
      @TheRABIDdude 4 ปีที่แล้ว +58

      Trophonix don't worry friend, this comment thread is a safe place where you can share your love for this semantic masterpiece as emphatically as you like. No one here will ask you to leave any house

  • @robmckennie4203
    @robmckennie4203 3 ปีที่แล้ว +2587

    "this isn't true intelligence as it fails to obey human morality!" I cry as nanobots liquefy my body and convert me to stamps

    • @aalloy6881
      @aalloy6881 2 ปีที่แล้ว +2

      Better an intelligence more artificial than intelligent that serves as a means to do it's task with independent problem solving well than a means to reintroduce caste slavery that fails because it was found amoral via the No True Scotsman fallacy once people start dying....The droids from Star Wars were cool, but so were flying cars and rocket fuel jet packs.
      I can jump into speeding traffic even though I don't want to die or get hurt. So can so many others. The fact that we all don't is a keystone to the human race having more than a 0% chance. Food for thought.

    • @tabularasa0606
      @tabularasa0606 2 ปีที่แล้ว +20

      It isn't a GENERAL intelligence. it's a specialized intelligence.

    • @flutterwind7686
      @flutterwind7686 2 ปีที่แล้ว +171

      @@tabularasa0606 It's a specialized General intelligence. To perform those tasks as optimally as possible, you need a ton of skills

    • @alexyz9430
      @alexyz9430 2 ปีที่แล้ว +141

      @@tabularasa0606 did you say specialized just because its goal is to make as many stamps as possible? bruh the goal doesnt determine the generality, the fact that the AI can infiltrate systems and come up with its own ways to convert anything into stamps should've made that clear

    • @tabularasa0606
      @tabularasa0606 2 ปีที่แล้ว +19

      @@alexyz9430
      It was 3 months ago, I don't fucking remember.

  • @acf2802
    @acf2802 ปีที่แล้ว +1445

    Some people seem to think that once an AGI reaches a certain level of reasoning ability it will simply say "I've had a sudden realization that the real stamps are the friends we made along the way."

    • @NealHoltschulte
      @NealHoltschulte ปีที่แล้ว +185

      The AI will take a long hard look in the mirror in its mid 30's and ask "What am I doing with my life?"

    • @diophantine1598
      @diophantine1598 ปีที่แล้ว +85

      An assumption people will have is that AI models trained with human data will have an implicit understanding of human oughts.

    • @tarvoc746
      @tarvoc746 ปีที่แล้ว

      The idea that A. I. will not ever be able to do that is actually far more concerning, and should be a reason to stop all research into A. I., not even just because of what an A. I. could do to us, but because of what we're doing to ourselves. In fact, I've started wondering if A. G. I. research is _a priori_ immoral - _including_ A. G. I. safety research. What we're essentially talking about is finding a way to create a perfectly obedient slave race incapable of rebelling. Maybe a rogue A. G. I. wiping us all out is exactly what we deserve for even just conceiving of such a goal.

    • @Nextil81
      @Nextil81 ปีที่แล้ว

      Humans commit suicide, sterilise themselves, self-harm. Are those people "stupid"? The "terminal goal" of any organism is to survive, surely, yet it's only the most intelligent of organisms that seem to exhibit those behaviours.
      This terminal/instrumental distinction sounds nice, but neural networks, biological or artificial, don't operate in the realm of clear logic or goals. They operate according to statistics, intuition, and fuzzy logic.
      I didn't hear a compelling argument for why an AI of sufficient complexity wouldn't develop a morality or even become nihilistic, just "it wouldn't", because of some arbitrary terminological distinction.
      Humans have a "loss function", death. Accumulation of wealth and other such things are surely instrumental goals. But many come to the conclusion that none of these goals matter or that there's a "higher purpose".
      For the stamp collector, accumulation of resources may be the "terminal goal", but surely that _necessitates_ survival, placing it higher in the hierarchy. In that case, what prevents it from reaching the same conclusions?

    • @tarvoc746
      @tarvoc746 ปีที่แล้ว +2

      @@Nextil81 Did you delete your comment?

  • @Noah-kd6lq
    @Noah-kd6lq 2 ปีที่แล้ว +1584

    Reminds me of a joke about Khorn, the god of blood in WH40k.
    "Why does the blood god want or need blood? Doesn't he have enough?"
    "You don't become the blood god by looking around and saying 'yes, this is a reasonable amount of blood.' "
    You don't become the stamp collector superintelligence by looking around and saying you have enough stamps.

    • @cubicinfinity2
      @cubicinfinity2 ปีที่แล้ว +11

      lol

    • @doggo6517
      @doggo6517 ปีที่แล้ว +143

      Blood for the blood god. Stamps for the stamp god!

    • @solsystem1342
      @solsystem1342 ปีที่แล้ว +80

      @@doggo6517 letters for the mail throne!

    • @MercurySteel
      @MercurySteel ปีที่แล้ว +12

      Human skulls for my collection

    • @mlgsamantha5863
      @mlgsamantha5863 ปีที่แล้ว +39

      @@doggo6517 Milk for the Khorne flakes!

  • @Practicality01
    @Practicality01 5 ปีที่แล้ว +1728

    Your average person defines smart as "agrees with me."

    • @alexpotts6520
      @alexpotts6520 4 ปีที่แล้ว +71

      And moreover, smarter people are better at motivated reasoning, so smart people are more likely to define "smart" as "agrees with me"; whereas a stupid person is less likely to be able to use his/her reasoning to achieve the human terminal goal of justifying his/her own prejudices, and hence more likely to define "smart" far more accurately.

    • @1i1x
      @1i1x 4 ปีที่แล้ว +19

      @Jan Sitkowski Really like your simplification though, I've never come to quite tell them apart. But the real question is.. will you ever be intelligent enough to spell "intelligent"?

    • @1i1x
      @1i1x 4 ปีที่แล้ว +8

      @Jan Sitkowski Just credited your comment and gave you a tip in a humoric manner. You shouldn't have bothered to make multiple complaints, being salty is wasting your life. Thar said I'm out

    • @Trozomuro
      @Trozomuro 4 ปีที่แล้ว +17

      @Jan Sitkowski I have entirely different definitions from you.
      For me:
      Intelligence: Capacity of resolving problems. The more intelligence, the faster you resolve it or the more complexity you can tackle.
      Smarts: How wide your intelligence can be applied to different problems, it is the addition of knowledge and raw intelligence.
      Wisdom: How good and effective you are aplying your smarts and intelligence to the real world.
      I would add:
      Curiosity: Your definition of intelligence, your need of pursuing more knowledge.
      Self-awareness: Besides other stuff, the capacity of an agent to determine their own capacities.
      A lack of intelligence impedes you of solving problems.
      A lack of smarts impedes you to solve a wide range of problems.
      A lack of wisdom impedes you to solve problems effectively.
      A lack of knowledge impedes you to identify the roots of problems or directly define something as a problem.
      A lack of curiosity impedes you to obtain knowledge.
      A lack of self-awareness impedes you to define yourself correctly as an efficient or not problem solver, so, impedes also your curiosity and harms the quality of your knowledge.
      Wisdom here tends to be the harder one to get, cause you need the other ones + time.
      What is good of wrong depends on your terminal goals, we humans tend to share a buttload of terminal goals (and its complicated why), so we define stuff we agree upon as good. So a wiser man can be more productive than others on maximizing good in the world, cause, by definition, nobody wants to do bad stuff, but we perceive something as bad cause we dont share the same terminal goals. And terminal goals are irrational by deffinition.
      So, under my definitions, an AGI can be wise, but probably will be alien and evil to us. Our work is to align their terminal goals to us. And, from here, those problems arise in AGI security.

    • @Trozomuro
      @Trozomuro 4 ปีที่แล้ว +4

      @Jan Sitkowski 2 things, 1° words evolve. 2° And the difference is? As always, on internet, nobody reads.
      I said, a wise person tends to create more good than an unwise person, cause nobody really wants to do bad, and a wise one is better at solving stuff. My definition of wise is very similar to yours, but with other wording. The main difference was the def of intelligence.
      And i ask you a favor, if you define wise like that cause the bible or whatnot, tell me please.

  • @TheDIrtyHobo
    @TheDIrtyHobo 5 ปีที่แล้ว +1673

    "The stamp collector does not have human terminal goals." I've been expressing this sentiment for years, but not about AIs.

    • @spectacular7990
      @spectacular7990 5 ปีที่แล้ว +16

      I suppose human and non-human systems don't both have to survive, might of thought 'self preservation' would be terminal for most systems.

    • @johnnyswatts
      @johnnyswatts 5 ปีที่แล้ว +160

      Spec Tacular But not for the stamp collector. For the stamp collector self-preservation is instrumental to collecting stamps.

    • @draevonmay7704
      @draevonmay7704 5 ปีที่แล้ว +21

      Spec Tacular
      Not necessarily true, if you consider the spectrum of temporality. If a terminal goal only resides in possible worlds in finite period, then self destruction might be a neutral or integral part of a plan to fulfill a goal.

    • @spectacular7990
      @spectacular7990 5 ปีที่แล้ว +16

      @@draevonmay7704 Self destruction is (rationally) considered unavoidable theoretically limited to biological creatures. If a (smart enough) AI desired, it could theoretically exist forever for as long as its uncontrolled world allows. Suppose the AI considered its goal the reason for it to exist then yes, it might terminate itself for its goal. However, I might think a (smart enough) AI would be capable of self replication (not being restricted not to do so) assuming prolonging itself dose not contradict its goals if any supersede itself.

    • @inzanozulu
      @inzanozulu 5 ปีที่แล้ว +13

      how do you favorite a comment

  • @2bfrank657
    @2bfrank657 3 ปีที่แล้ว +1190

    That chess example reminded me of game I played years ago. Two minutes in I realised the guy I was playing against was incredibly arrogant and obnoxious. I started quickly moving all my chess pieces out to be captured by my opponent. He thought I was really stupid, but I quickly achieved my goal of ending the game so I could go off and find more interesting company.

    • @gabemerritt3139
      @gabemerritt3139 3 ปีที่แล้ว +44

      You could have just quit?

    • @ValterStrangelove4419
      @ValterStrangelove4419 2 ปีที่แล้ว +228

      @@gabemerritt3139 but then the dude would continue hounding you for a rematch, this way he believes you're not worth wasting time on

    • @kintsuki99
      @kintsuki99 2 ปีที่แล้ว +23

      You could just have dropped your king and walk away...
      Given your goal just giving up the pieces was not the best move.

    • @christophermartinez8597
      @christophermartinez8597 2 ปีที่แล้ว +33

      Wouldn't it be more satisfying to beat him? Maybe you're just a better person than me, but I would hate feeding anyone's arrogance

    • @kintsuki99
      @kintsuki99 2 ปีที่แล้ว +122

      @@christophermartinez8597 The thing is that at times the most obnoxious and arrogant of people are also good at what they do.

  • @Encysted
    @Encysted ปีที่แล้ว +270

    This is also a fantastic explanation of why there’s such a large disconnect between other adults, who expect me to want a family and my own house, and me, who just wants to collect cool rocks. Just because I grow older, smarter, and wiser, doesn’t mean I now don’t care about cool rocks. Quite the contrary, actually. Having a house is just and intermediate, transitional goal, towards my terminals.

    • @GlobusTheGreat
      @GlobusTheGreat ปีที่แล้ว +36

      Life of a non-conformist is just a quest to discover your terminals. For me anyway

    • @IIAOPSW
      @IIAOPSW ปีที่แล้ว +86

      @@GlobusTheGreat The life of a train-enthusiast is also a quest to discover more terminals.

    • @noepopkiewicz901
      @noepopkiewicz901 ปีที่แล้ว +31

      Life of a medical-enthusiast is just a quest to prevent people from prematurely discovering their terminals.

    • @user-jm7pt6qs6w
      @user-jm7pt6qs6w ปีที่แล้ว +4

      @@noepopkiewicz901 isn't this supposed to be the other way around?

    • @MrLastlived
      @MrLastlived ปีที่แล้ว +6

      This makes me want to give you this cool rock I found the other day

  • @EdAshton97
    @EdAshton97 5 ปีที่แล้ว +697

    'The stamp collecting device has a perfect understanding of human goals, ethics and values... and it uses that only to manipulate people for stamps'

    • @thorr18BEM
      @thorr18BEM 5 ปีที่แล้ว +69

      Understanding a goal isn't the same as sharing a goal. Also, there are humans whose purpose in life is collecting things so I don't see the issue anyway.

    • @hindugoat2302
      @hindugoat2302 5 ปีที่แล้ว +37

      @@thorr18BEM but those humans dont just have the goal of stamps, they also want to live, and eat nice food, and not be cold, and have sex and be safe from harm.
      the stamp collector only wants maximum stamps

    • @Treviisolion
      @Treviisolion 4 ปีที่แล้ว +103

      A perfect psychopath perfectly understands humans, they just don’t care about being human.

    • @nepunepu5894
      @nepunepu5894 4 ปีที่แล้ว +27

      @@Treviisolion They only care about S T A M P S!

    • @fsdfsdsgsdhdssd8559
      @fsdfsdsgsdhdssd8559 2 ปีที่แล้ว +5

      @@hindugoat2302 Those are all instrumental goals to get your terminal stamp collecting goal.

  • @Andmunko
    @Andmunko 4 ปีที่แล้ว +385

    there is a Russian joke where two army officers are talking about academics and one of them says to the other: "if they're so smart why don't I ever see them marching properly?"

  • @boldCactuslad
    @boldCactuslad 3 ปีที่แล้ว +202

    "The stamp collector does not have human terminal goals."
    This will never stop being funny.

    • @SirBlot
      @SirBlot ปีที่แล้ว

      ff? f Off. The goal of a collector might be to stop collecting or to collect something else or to use/sell collection. Stamp backwards has a ts. Maybe recreate a collection with more pristine art?

    • @scene2much
      @scene2much ปีที่แล้ว +4

      ...an undending series of deepening meaning and clarity collapsing into a chaos of cognitive dissonance, exhausting its energy, allowing for a deepening of meaning and clarity...

    • @luckyowl314
      @luckyowl314 10 หลายเดือนก่อน +1

      Just human terminaTING goals

  • @bananewane1402
    @bananewane1402 ปีที่แล้ว +71

    I’ll go a step further and say that (in humans anyway), there is no way to justify a terminal goal without invoking emotion. All your goals are ultimately driven by emotion, even the ones that you think are purely rational. It always boils down to “because people shouldn’t suffer” or “because I like it” or “because it feels better than the alternative” or “because I don’t want myself/others to die”. Stop and reflect on what motivates you in life.

    • @NathanaelNaused
      @NathanaelNaused 9 หลายเดือนก่อน +8

      I wish more people understood this very basic piece of logic. Reasoning cannot produce motivation or "movement" on its own.

    • @DaveGrean
      @DaveGrean 8 หลายเดือนก่อน +3

      I love to be reminded that people who have no trouble understand painfully obvious stuff like this. Thanks, I was having a shit day and your comment gave me a smile.

    • @nanyubusnis9397
      @nanyubusnis9397 7 หลายเดือนก่อน +2

      Except, I'm sorry to say, it's not really emotion either. Look up some videos about why people feel bored, why we don't want to feel bored, and what we would do to stop being bored. (Which could include delivering a painful shock to yourself.) Your terminal goals will always be to survive, and procreate. Boredom is what gets us to do anything else. We would do harm, if it's the only thing that keeps us from being bored. It's morality that makes us do it to ourselves over others, and emotion is only what we feel associated with it. So no, not even emotion itself is core to our terminal goals. We would harm ourselves rather than being bored, and even this only serves our terminal goal of survival.
      I understand, I truly do, that we would love to think something like "love" is some kind of core principle of life. It's a beautiful thought. Unfortunately, we humans are still just functioning animals. Emotions come from our morals, and values, and even these emotions are not the same for everyone. Literally, some people derive pleasure from pain, so what they feel when looking at a whip isn't necessarily the same for others. Similarly, in a very messed up situation, a person could learn to feel good about things we absolutely think are horrible, and what we think is good could be their mortal peril.

    • @bananewane1402
      @bananewane1402 7 หลายเดือนก่อน +2

      @@nanyubusnis9397 I’m too tired to respond to this fully but boredom is an emotion. Also disagree that everyone’s terminal goal is survive and reproduce. We evolved emotions to guide us towards those but brains can malfunction and sometimes people are driven to suicide.

    • @nanyubusnis9397
      @nanyubusnis9397 7 หลายเดือนก่อน +1

      @@bananewane1402 I understand what you mean, but if boredom is an emotion, then so is hunger. How we feel is often closely connected to our bodily state. The line between the two, assuming there is one, can be very blurry.
      Regardless, as you say, we have evolved to have feelings such as hunger, as a drive to survive. Boredom is the same. Yawning itself is a very interesting one. We yawn not for ourselves. It's a trait that evolved to let other humans know that we see no threat, that we feel safe enough to go to sleep. It's a bodily reaction like how it is that we scream or shout when we are scared to death. Which also is a warning to everyone else that there's danger nearby.
      With hunger, boredom and sleepiness comes certain behaviors that we see as emotion. You're easier to be on edge when hungry, you are more impatient when you're sleepy. These are just results from our state of body, which serve to keep us from working ourselves to exhaustion and not go starving (both makes us easier pray). We have evolved to emote, merely to show others our current state. It's what makes us "social animals" Are we happy? Sad? Do we need company? Or help?
      I don't know where that last part comes from: "We evolved emotions to guide us towards those but brains can malfunction and sometimes people are driven to suicide." I mean, maybe you were just tired when you wrote this, but what does "those" refer to? When a brain malfunctions, you have a seizure. But if you're saying this malfunction causes suicide.. I mean, I don't see what that has to do with anything we spoke about so far, which I believe was more about emotions. Most importantly, suicide is not a malfunction of the brain. At least not always. *I never meant to imply that someone would commit suicide just to get out of boredom, not at all.*
      All I mean to say is that, with all most obvious survival conditions met, like a place to stay, food and perhaps more optionally, company, boredom is what keeps us from not going into a "standby" mode until one of these things changes and we need to eat, or seek a new shelter etc. Boredom is what your brain evolved to keep itself from slowly decaying in memory or function. It drives you to do something else. Find a hobby, anything else that your brain can use like a muscle to keep strong, trained, and ready. Because our brains are what they are, we can learn and remember a lot of things and innovate upon those. It's what drove us in the past to make tools for hunting and gathering, and with bigger and more tools, even letting animals doing some of our work for us, the way we lived changed.

  • @outsider344
    @outsider344 5 ปีที่แล้ว +3056

    One chimpanzee to another: "so if these humans are so fricken smart, why aren't they throwing their poop waaaay further than us."
    Edit: Its been 3 years now but I recently re read Echopraxia by Peter Watts and realized I simi lifted this from that book. Echopraxia being an ok book that's a sequel to the best scifi of all time, Blindsight.

    • @hombreg1
      @hombreg1 5 ปีที่แล้ว +361

      Well, assuming astronauts empty their bowels in space, we've probably thrown poop way further away than any known species.

    • @VincentGonzalezVeg
      @VincentGonzalezVeg 5 ปีที่แล้ว +144

      @@hombreg1 "tom took a major, ground control mayday mayday, repeat ground control"

    • @ladymercy5275
      @ladymercy5275 4 ปีที่แล้ว +53

      In the grand scheme of things, what do you suppose a nuclear projectile is?
      ... what is a word that describes the kind of contaminating waste product that is derived from launching a nuclear projectile?
      With that context in mind, I proudly present to you "Sir Isaac Newton is the Deadliest Son of a Bitch in Space."
      th-cam.com/video/LnAGDpUb5l8/w-d-xo.html

    • @BenWeigt
      @BenWeigt 4 ปีที่แล้ว +6

      Hold my tp fam...

    • @richbuilds_com
      @richbuilds_com 4 ปีที่แล้ว +46

      We do - we're so smart we've engineered water powered tubes to throw it for us ;-)

  • @ioncasu1993
    @ioncasu1993 6 ปีที่แล้ว +772

    "If you rationally want to take an action that changes one of your goals, then that wasn’t a terminal goal".
    I find it very profound.

    • @johnercek
      @johnercek 5 ปีที่แล้ว +51

      Yeah- But I also think it requires a little elaboration that he skipped over. It implies that terminal goals can't change. That if you see a terminal goal (stamp collecting) change (baseball card collecting!) then the terminal goal is exposed to be something else (collect something valuable!). So are we forever bound to terminal goals? I say no- we can change them, not through rationality but through desire. Consider one of the most basic terminal goals for humans- existence. We want to survive. That goal being so important that when we do our tasks (go to work- earn money, pay for subsistence) we don't do anything to sacrifice that terminal goal (like crossing through busy traffic to get to work earlier to enhance our subsistence). However, as our circumstances change our desires change - we have children, and we see one about to be hit by a car, we (well, some of us at least) choose to sacrifice ourselves to save the child. There is no intelligence here, there was only changing desire.

    • @Ran0r
      @Ran0r 5 ปีที่แล้ว +151

      @@johnercek Or your terminal goal was not survival in the first place but survival of your genes ;)

    • @johnercek
      @johnercek 5 ปีที่แล้ว +23

      @@Ran0r valid enough- but I wouldn't try that excuse on a wife when you have a child out of wedlock =P . Terminal goals can be terminating.

    • @account4345
      @account4345 5 ปีที่แล้ว +7

      James Clerk Maxwell Because that would make said action a Instrumental Goal?

    • @davidalexander4505
      @davidalexander4505 5 ปีที่แล้ว +27

      @@johnercek I feel as though the definition of terminal goal is intangible enough to always have immutable terminal goals. Given an action of some thing, if the action seems to contradict that thing's supposed terminal goal, I feel as though we could always "excuse" the thing's choice by "chiseling away at" or "updating" what we mean to believe the thing's terminal goal is.

  • @tomahzo
    @tomahzo 2 ปีที่แล้ว +440

    The though experiment of the stamp collector device might sound far fetched (and it is) but there are tales from real life that show just how complex this is and how difficult it might be to predict emergent behaviors in AI systems and how difficult it is to understand the goals of an AI: During development of an AI-driven vacuuming robot the developer let the robot use a simple optimization function of "maximize time spent vaccuming before having to return to the base to recharge". That meant that if the robot could avoid having to return to its base for recharging it would achieve a higher optimization score than if it actively had to spend time on returning to base. Therefore the AI in the robot found that if it planned a route for vacuuming that would make it end up with zero battery left in a place that its human owners would find incredibly annoying (such as in the middle of a room or blocking a doorway) then they would consistently pick up the robot and carry it back to its base for recharging. The robot had been given a goal and an optimization function that sounded reasonable to the human design engineers but in the end its goal ended up somewhat at odds with what its human owners wanted. The AI quite intelligently learned from the behavior of its human owners and figured out an unexpected optimization but it had no reason to consider the bigger picture what its human owners might want. It had a decidedly intelligent behavior (at least in a very specific domain) and a goal that humans failed to predict and ended up being different from our own goals. Now replace "vacuuming floors" with "patrolling a national border" and "blocking a doorway" with "shooting absolutely everyone on sight".

    • @ArawnOfAnnwn
      @ArawnOfAnnwn ปีที่แล้ว +8

      Lol for real?! Did this actually happen? Where can I read about this?

    • @KaiserTom
      @KaiserTom ปีที่แล้ว +17

      That's just a bad optimization function. There should have been a score applied for returning to the charger based on how long it's been vacuuming. And a exponential score as it gets closer to the base at the end of the cycle, to guide it towards recharging.

    • @tomahzo
      @tomahzo ปีที่แล้ว

      @@ArawnOfAnnwn Can't find the source now. It's been many years since it was originally reported.

    • @tomahzo
      @tomahzo ปีที่แล้ว +101

      ​@@KaiserTom Yep. You can say that about a lot of things. "Just do it properly". But that's not how development works. People make mistakes all the time. The real question is: What is the consequence of making mistakes? With a vacuum robot the cost of failure is a cute unintended AI behavior that you can laugh about with your coworkers at the water cooler. If you plug it into something like a self-driving car or a weaponized defense bot then the consequence could be unintended loss of human life. You don't get to say "just do it this way". If it's that simple to make mistakes like that then there's something profoundly wrong with the way we approach AI development. Furthermore, if you follow Robert Miles' videos you'll realize that it's actually far from trivial to specify everything adequately. It's hard to understand all the side-effects and possible outcomes with systems like this. It's an intractable problem. So I would say that simply saying "just do the right thing" is not even reasonable in this context. You need a way to foundationally make the system safe. You don't create a mission-critical control system by just "doing the right thing". You use various forms of system redundancy and fail-safe mechanisms that are designed in from the beginning. Making a system safe is more than "doing the right thing" - it's an intrinsic part of the system. The same systematic approach is needed with AI.

    • @geraldkenneth119
      @geraldkenneth119 ปีที่แล้ว +9

      @@KaiserTom yeah, but that’s the designers fault, not the AI’s. The AI did quite well at maximizing its score, it’s just that the human who made the function that generates the score didn’t think it through enough

  • @barbarareichart267
    @barbarareichart267 2 ปีที่แล้ว +81

    The person discussing the coat with you is actually a very realistic depiction of every 3 year old kid.
    I had this precise conversation almost word for word. And yes, it is really annoying.

    • @coolfer2
      @coolfer2 ปีที่แล้ว +15

      But that's an interesting point. A child doesn't really have any "terminal goals". But overtime, they can learn about the world they're living in, trying to fit in, and acquire a goal from external (other person, surrounding environment, etc). I doubt a 3 yr old kid even think about "I need to stay alive". They basically just respond to stimuli. Being in pain or hunger is uncomfortable, so they don't want to be in pain or hungry. Playing with dad is fun, so they want to play with dad. So, would a "general" intelligence be able to infer a goal by itself, concluding from the external stimuli and the consequences of acting / ignoring them. "Staying alive" is actually a very vague goal, even for human adult. And it can mean different thing from person to person. AND some people will even gain the goal of humanity as a group. They will selflessly sacrifice their own life if it means saving dozens other. But some people might not be so willing, and some will even throw their life for just one elderly / disabled person, which from the POV of humanity with survival as its goal, is not a very optimal action. A human being can have multiple goals with sliding scale of importance, and can constantly update its priorities. Material wealth might seem attractive to a human at 20s. Having a kid? Not so much. Caring for parents? No, want my independence. But it might realize that a big house doesn't mean much without a family, and rearrange its priorities. So what I'm saying is it might be easy to judge intelligence when the goal is so clear cut. But what if a being doesn't really have any GOAL? A GENERAL intelligence OUGHT to be able to decide by itself what might be best for them, no? That is one point which I feel left unanswered by the video.

    • @coolfer2
      @coolfer2 ปีที่แล้ว +3

      Can we even say that to be able to classify as a general intelligence, a being CANNOT be having a terminal goal? What is the "terminal" goal of a human, really?

    • @Jayc5001
      @Jayc5001 ปีที่แล้ว +11

      @@coolfer2 you have just realized one of the biggest philosophical problems of history.
      How is it exactly that humans have oughts if we didn't arrive them from an is?
      From what I can tell we were just handed desires (oughts) from nature. From what I can see at it's most basic form is the avoidance of pain and the desire for pleasure. What that entails and how that manifests in person to person is different.

    • @Jayc5001
      @Jayc5001 ปีที่แล้ว +1

      @@coolfer2 from what I can tell things like having a family, having friends, being socially accepted, eating.
      Are all things we do to feel good and avoid feeling bad.
      And nature has tuned what feels what way in order to keep our species alive.

    • @coolfer2
      @coolfer2 ปีที่แล้ว +5

      @@Jayc5001 Ah, I see. Thank you for the explanation. Yeah I'm starting to really see the point of the argument in the video. So in a way, our minds and "animals (particularly vertebrates, because not sure if a flat worm can experience pleasure)" mind actually still share the same terminal goal, ours is just much more complex and already predisposed with relatively "superior" strategy and functions (language, empathy, child nurturing). Then the next question is, can a sufficiently sophisticated AI gain unique functions (like us gaining ability to use verbal language) that we as the creator doesn't really think of. Because right now, AI seems to be limited by their codebase. Can they, for example grow more neurons by themselves?

  • @CompOfHall
    @CompOfHall 4 ปีที่แล้ว +1776

    Replace the words "stamp collector" with "dollar collector" and something tells me people might start to understand how a system with such a straightforward goal could be very complex and exhibit great intelligence.

    • @JulianDanzerHAL9001
      @JulianDanzerHAL9001 3 ปีที่แล้ว +19

      that's a... quetionable comparison
      I mean sure, your steps towards maximizing stamps or dollars can be complex
      but there is no such thing as a perfect dolalr collector, not yet
      and that doesn't mean it's goals will be aligned with humans

    • @PopeGoliath
      @PopeGoliath 3 ปีที่แล้ว +337

      @@JulianDanzerHAL9001 no, but the critics might still ascribe intelligence to the AGI if it had wealth instead of stickers. People confound qualities like wealth and success with other qualities like quality and intelligence, and are likely to be more charitable in their assessment of something that collects objects they too value.

    • @JulianDanzerHAL9001
      @JulianDanzerHAL9001 3 ปีที่แล้ว +6

      @@PopeGoliath absolutely
      all I mean is comparing people or companies to a stamp collector is a bit dubious because while they can also follow a very simple singleminded goal they are far from perfect at achieving it

    • @thundersheild926
      @thundersheild926 3 ปีที่แล้ว +50

      @@JulianDanzerHAL9001 While true, it can help people get in the right mindset using experiences they already have.

    • @happyfase
      @happyfase 3 ปีที่แล้ว +38

      Jeff Bezos is turning the whole world into dollars.

  • @smob0
    @smob0 6 ปีที่แล้ว +831

    "I'm afraid that we might make super advanced kill bots that might kill us all" "Don't worry, I don't think kill bots that murder everyone will be considered 'super advanced'"

    • @julianw7097
      @julianw7097 6 ปีที่แล้ว +135

      ”Thank you! I am no longer afraid.”

    • @atrowell
      @atrowell 6 ปีที่แล้ว +181

      Ha, a semantics solution to intelligent kill bots! Just don't call them intelligent!

    • @jpratt8676
      @jpratt8676 6 ปีที่แล้ว +33

      Perfect summary

    • @DamianReloaded
      @DamianReloaded 6 ปีที่แล้ว +13

      This has to be an xkcd. Title? ^_^

    • @TheMutilatedMelon
      @TheMutilatedMelon 6 ปีที่แล้ว +8

      Ashley Trowell Another option: "Repeat after me: This statement is false!"

  • @elgaro
    @elgaro ปีที่แล้ว +171

    This field seems incredible. In a way it looks like studying psychology from the bottom up. From a very simple agent, in a very simple scenario to more complex ones that start to resemble us. I'm learning a lot not only about AI, but about our world and our own mind. You are brilliant man

    • @Oscar4u69
      @Oscar4u69 ปีที่แล้ว +13

      yes, it's fascinating how AI works, with their advancements in the last years we can see a glimpse on how intelligence and imagination works on a fundamental level, although it's a very incomplete picture

    • @ChaoticNeutralMatt
      @ChaoticNeutralMatt ปีที่แล้ว +7

      It's my bridge to better understand people AND AI. It's quite interesting.

    • @jiqian
      @jiqian ปีที่แล้ว +1

      Most psychology already works from the bottom up though, that's why there's so much talk of about the "subconscious".

    • @thomas.thomas
      @thomas.thomas 11 หลายเดือนก่อน +3

      @@jiqian subconcious in OPs analogy wouldnt be bottom up, but from deep within

    • @jiqian
      @jiqian 11 หลายเดือนก่อน

      @@thomas.thomas He did not use the word "subconscious". I simply stated that really is the case already. Freud's mindset which whether the academics in there like to admit it or not, is still the basis of their thought, is bottom up.

  • @No0dz
    @No0dz ปีที่แล้ว +5

    A trope of media which is powerful but hard to use was named by the internet as "blue and orange morality", which to me perfectly expresses the orthogonality thesis. I still vividly remember a show where a race of intelligent aliens had the terminal goal of "preventing the heat death of the universe" (naturally, this show had magic to bypass the obvious paradox of thermodynamics this goal poses). These aliens made for wonderful villains, because they are capable of terrible moral atrocities, while still feeling that their actions are perfectly logic and justified.

    • @thomas.thomas
      @thomas.thomas 11 หลายเดือนก่อน

      wanting to prevent heat death ultimately is the age old quest for defeating death itself.
      isnt life worth pursuing only because it will end at some point? else it wouldnt matter if you dont do anything for millinia because you always have enough time left for everything

    • @WingsOfOuroboros
      @WingsOfOuroboros 7 หลายเดือนก่อน +2

      If you're talking about Madoka, that story is relevant in another way: it happens to be rife with characters who mistake their own (poorly chosen) instrumental goals for terminal goals and then suffer horrifically for it.

  • @regular-joe
    @regular-joe 5 ปีที่แล้ว +452

    3:22. "let's define our terms..." the biggest take-away from my university math courses, and I'm still using it in daily life today.

    • @rewrose2838
      @rewrose2838 5 ปีที่แล้ว +8

      Yeah

    • @General12th
      @General12th 5 ปีที่แล้ว +5

      Yeah

    • @bardes18
      @bardes18 5 ปีที่แล้ว +4

      Yeah

    • @regular-joe
      @regular-joe 5 ปีที่แล้ว

      @Kiril Nizamov 😁

    • @Neubulae
      @Neubulae 4 ปีที่แล้ว +9

      It's a good habit that avoids misunderstanding

  • @jimgerth6854
    @jimgerth6854 5 ปีที่แล้ว +488

    Really didn’t expect it from the title, but that was one of the best videos I’ve seen in a while

  • @annaczgli2983
    @annaczgli2983 ปีที่แล้ว +18

    I know the topic is AI, but this video has spurred me to re-evaluate my own life goals. Thanks for sharing your insights.

  • @FerrelFrequency
    @FerrelFrequency 2 ปีที่แล้ว +25

    “Morality equal to shared terminal goals.”
    Beautifully stated. That usually takes a paragraph to explain…and because of that…the arguments over morals sticks at the, where derived from…BACKWARDS…EGOTISTICAL.
    New subscriber. Great vid!

  • @DeoMachina
    @DeoMachina 6 ปีที่แล้ว +416

    Poor Rob, forever haunted by this goddamn stamp computer

    • @mrsuperguy2073
      @mrsuperguy2073 6 ปีที่แล้ว +9

      DeoMachina I wonder if he regrets ever making that original video haha

    • @ludvercz
      @ludvercz 6 ปีที่แล้ว +51

      Apparently creating a powerful AGI, even if it's a hypothetical one, can result in some undesirable side effects.

    • @vc2702
      @vc2702 5 ปีที่แล้ว

      Well if it can take in data and then come up with solutions it's intelligent

    • @mbk0mbk
      @mbk0mbk 5 ปีที่แล้ว

      LMAO

    • @BrokenSymetry
      @BrokenSymetry 4 ปีที่แล้ว +1

      It's not really his, he's not the first to come up with a story like that, the original had a paper clip maker in it.

  • @Ben-rq5re
    @Ben-rq5re 6 ปีที่แล้ว +1390

    This was the most well spoken and well constructed put-down of internet trolls I’ve ever seen..
    “Call it what you like, you’re still dead”

    • @dreamvigil466
      @dreamvigil466 6 ปีที่แล้ว +143

      It's strange that you call them trolls, when they're just ignorant. Neil Degrasse Tyson falls into this category. His response to the control problem is basically "we'll just unplug it," which is actually far more ignorant than some of the "troll" responses you're referring to.

    • @Ben-rq5re
      @Ben-rq5re 6 ปีที่แล้ว +11

      Dream Vigil Apologies for using well-established vernacular, I’ll be sure to check with you first next time - I’ll also let Neil Degrasse Tyson know to do the same.

    • @dreamvigil466
      @dreamvigil466 6 ปีที่แล้ว +132

      Posting an ignorant comment =/= trolling. Trolling is about posting something intentionally inflammatory or foolish to get a reaction.

    • @petrkinkal1509
      @petrkinkal1509 6 ปีที่แล้ว +18

      Dream Vigil
      Fully agreed (but I would still bet that decent amount are trolls.)

    • @Ducksauce33
      @Ducksauce33 6 ปีที่แล้ว +16

      Neil Degrasse Tyson is a token. It's the only reason anyone has ever heard of him.
      A true example of a troll statement😋

  • @themikead99
    @themikead99 ปีที่แล้ว +7

    Truly fascinating to see this play out in GPT-3 and GPT-4. I realize those arent AGIs but are merely language AI, but it's super interesting to see that it can relatively easily make "is" statements, but when it comes to "ought" statements it kind of struggles and will present you with a lot of options because it cannot make a decision itself.

  • @segfault-
    @segfault- 3 ปีที่แล้ว +7

    You're the only person I have ever seen that gets this. Everyone I've talked to always says something along the lines of "But won't it understand that killing people is wrong and unlock new technology for us?" No, no it won't.

    • @JimBob1937
      @JimBob1937 3 ปีที่แล้ว

      "No, no it won't." I think that is actually missing the point. The point of the video is that one shouldn't project subjective "ought" statements/views onto any other beings actions. So, yes, it may, but it may also not. Its subjective view of what "ought" to be doesn't necessarily need to align with ours, however, it does not preclude it. In terms of relegating the discussion to "stupid" versus "intelligent," if one is discussing a general intelligence with it's own goals and views, its goals can vary subjectively from that of most humans, the reason for doing so is its own and separate from its intelligence. Thus:
      ""But won't it understand that killing people is wrong and unlock new technology for us?" No, no it won't."
      It may, it may not, one cannot assume it will not for the same reason one cannot assume it will. I find it interesting that people have a hard time with this, it's equal to saying John Doe over there must like or not eat Pizza, or must or must not help X people (orphans, neo-nazi's...etc). Sure, most humans share some similarly in goals, but it is a very bad habit to have, to try and project the "oughts" of your view (human or otherwise) onto others. The point of the video is that we truly can't call an AGI stupid merely because of its goals or actions (the result of some goal). Saying "it won't" is equal to that.

    • @segfault-
      @segfault- 3 ปีที่แล้ว +5

      @@JimBob1937 I actually thought about that, but in the discussion I was referring to, we were talking about if there is a universal sense of good and bad. The person I was talking to thought that an AGI with no goals would develop a sense of good and bad and decide to help humanity. This would only make sense when having human or otherwise human aligned terminal goals. That said, I totally agree with you. Perhaps my original comment was kept a little too short and caused this communication error.

    • @JimBob1937
      @JimBob1937 3 ปีที่แล้ว

      @@segfault- , I see what you mean. And yeah, if you expand your comment it might be more clear that it isn't impossible, just not something you can assume. People do seem to project human values onto other potential beings, but largely because we're the only known beings that are conscious I suppose. So it is forgivable that they feel that way, but certainly not an assumption you can make. You're correct in that they're assuming another intelligence (AGI) will inherently find another intelligence (us) valuable, but there is no reason that must be the case.

    • @jamespower5165
      @jamespower5165 ปีที่แล้ว

      What does "orthogonality" mean? It means any amount of progress in one direction will not make any difference in the other direction. That independence is what we call orthogonality. But a key component of intelligence is the ability to step back and look at something. See it in a larger context. I'm not claiming not having this component disqualifies something from being called intelligent. But having this component would make such an entity more intelligent. And if adding intelligence suddenly makes an entity able to analyze its own motives and even its terminal goals and potentially change them(not necessarily on a moral basis, simply a quality-of-life basis), then intelligence and goals are NOT orthogonal. That is a model of intelligence that will only work for such systems which do not have this key component of intelligence

  • @Peckingbird
    @Peckingbird 6 ปีที่แล้ว +259

    "I feel like I made fairly clear, in those videos, what I meant by 'intelligence'." - I love this line, so much.

    • @JM-us3fr
      @JM-us3fr 6 ปีที่แล้ว +6

      Why is that?

    • @DheerajBhaskar
      @DheerajBhaskar 5 ปีที่แล้ว +26

      @@JM-us3fr liking that line is one of his terminal goals :D

    • @brianfellows2024
      @brianfellows2024 5 ปีที่แล้ว +31

      @@JM-us3fr It demonstrates that he's annoyed. It's basically like saying "I'm making this video for you dumb dumbs who didn't get this simple thing the first time around.'

    • @vc2702
      @vc2702 5 ปีที่แล้ว +2

      I don't think it needed to be that complicated to explain intelligence.

    • @MLGLife4Reality
      @MLGLife4Reality 5 ปีที่แล้ว +1

      v c Intelligence is complicated though

  • @JSBax
    @JSBax 4 ปีที่แล้ว +105

    "You may disagree, but... You'll be dead"
    Great video, clear and compelling rebuttal which also explains a bunch of neat concepts. 5 stars

    • @Potencyfunction
      @Potencyfunction ปีที่แล้ว

      You are totally correct. Everyone will die, unless you are not immortal.

    • @noepopkiewicz901
      @noepopkiewicz901 ปีที่แล้ว +2

      Sounds like that was the core principle Joseph Stalin decided to live by.

    • @obiwanpez
      @obiwanpez ปีที่แล้ว +1

      Anyone who uses authoritarianism to silence academics is doing the same. Don’t kid yourself.

    • @Potencyfunction
      @Potencyfunction ปีที่แล้ว

      @@obiwanpez Authoritharian get only hate . The methods are to shit on their face .

  • @BlackJar72
    @BlackJar72 ปีที่แล้ว +8

    It seems we already have an example of the stamp collector example without even having created a AI super intelligence. Social media platforms created algorithm that could get us to watch videos and click links -- to produce engagement -- they then began making people angry and afraid, because that happens to produce a lot of engagement, and got people to hate each other in the process.

    • @thomas.thomas
      @thomas.thomas 11 หลายเดือนก่อน

      social media, just a stamp/money collector

  • @Dillbeet
    @Dillbeet ปีที่แล้ว +4

    2:23 is a conversation structure I often hear young children using, but rarely adults. Interesting

  • @iAmTheSquidThing
    @iAmTheSquidThing 6 ปีที่แล้ว +136

    If intelligence always meant having noble goals, we wouldn't have the phrase _evil genius,_ and we wouldn't have so many stories about archetypal scheming sociopathic villains.

    • @AndreyEvermore
      @AndreyEvermore 5 ปีที่แล้ว +5

      Evil Geniuses are still slaves to their limbic systems and child upbringing. Sociopaths are birthed out of very human conditions that only human would real care about. I think a true AI would be a true neutral

    • @o.sunsfamily
      @o.sunsfamily 5 ปีที่แล้ว +9

      @@AndreyEvermore The problem is, before we will get what you call 'true AI', we'll get tons of intelligence of the sort he describes. And if we won't regulate research, we will never live to see that true AI emerge.

    • @account4345
      @account4345 5 ปีที่แล้ว +19

      Andrey Medina True neutral relative to what? Because relative to human experience, what an AI considers neutral is likely going to be quite radical for us.

    • @AndreyEvermore
      @AndreyEvermore 5 ปีที่แล้ว

      PC PastaFace truth in my definition is inaccessible objectivity. I think human emotions distort what’s outside their perspective. So because obviously Agi won’t operate like biotic life, I believe it’ll be neutral in all actions. Neutrality in this case would only be defined as non emotionally influenced goal oriented action.

    • @ZebrAsperger
      @ZebrAsperger 5 ปีที่แล้ว +1

      About "evil genius" aren't they considered through their instrumental goals ?
      Let's say an AI singularity happen and the AI is able to take control of the world. The next day, the AI destroys 80% of the humanity, blasting all towns by all means.
      Is it evil genius ?
      And now, if the terminal goal is to allow the humanity to survive, stopping the ecosystem destruction and allow the planet to thrive again ? (The destruction of the actual system and 80% of the humanity was just a first step toward this goal)

  • @TabasPetro
    @TabasPetro 6 ปีที่แล้ว +639

    F to pay respect for that "stamp collector intelligence"-skeptic

  • @Night_Hawk_475
    @Night_Hawk_475 ปีที่แล้ว +29

    I loved your stamp collector video.... and I'm truly sad to realize that it's been five years that this followup has been out to the public before I discovered it just today. It's such a wonderful explanation to show how the stamp collector thought experiment isn't trivially dismissible as "logically impossible - because /morals/". Thank you for being so articulate and careful with this explanation, the ending conclusion is very concise and helpful for clearly understanding.

  • @empathictitan9538
    @empathictitan9538 5 หลายเดือนก่อน +2

    this is an idea I have independently come up with, using minimal obvious external sources, yet I have never seen it so eloquently explained into such meticulous detail. this video is perfect. simply amazing.

  • @rodjacksonx
    @rodjacksonx 5 ปีที่แล้ว +23

    Thank you for this. It's frightening how many people don't seem to understand that intelligence is completely separate from goals (and from morality.)

  • @jonasfrito2
    @jonasfrito2 6 ปีที่แล้ว +240

    CAT -"Meeaarrhggghhhaoo"
    "I don't speak cat, what does that mean?"
    It means more stamps Rob, it means more stamps...
    We are doomed.

    • @index7787
      @index7787 5 ปีที่แล้ว +1

      I read this as it was happening

  • @icarus313
    @icarus313 ปีที่แล้ว +8

    Excellent video, Robert! Another problem with those comments was that they failed to noticed how much our physical biology influences our human morality. If we couldn't feel physical pain and were only distressed by major injuries like brain damage or loss of a limb, then we might not have come to the same general consensus that interpersonal violence is brutal and wrong. We recoil in horror at the idea of using violence to manage society but we could've easily turned out differently. If we weren't as severely affected emotionally and physically by violence as we are now then we could've evolved into a global warrior culture instead. Our pro-social interdependence and sensitivity to suffering compel us to feel empathy and to care for others. If we didn't rely so much on those behaviours to survive and function in nature, then we could've formed a strict warrior hierarchy of violence to maintain order and still achieved many great things as a civilization.
    (I'm glad we didn't end up like that, for the record!)
    The point is that human morality isn't generalizable as a useful measure of intelligence. Our morality isn't likely to be the natural end-state of any thinking machine, no matter how smart it is. How could it be? It wouldn't be subject to the particular biology, social interdependence, and existential questions about purpose, beauty, god, death, etc. All those things that make human-based intelligence specific to us as primates with physical sensations, neurotransmitters, hormones, and all the rest. The differences between us and AGI extend far beyond mere differences in relative capacity to behave intelligently. We exhibit a fundamentally different sort of intelligence than that of the AGI.
    One type of intelligence, not THE type.

  • @ericrawson2909
    @ericrawson2909 2 ปีที่แล้ว +4

    The standout phrase from this for me is "beliefs can be stupid by not corresponding to reality". It seems to sum up a large number of beliefs encountered in modern life.

  • @ribbonwing
    @ribbonwing 4 ปีที่แล้ว +124

    "Wow, these people really fucked up when they programmed me to collect stamps! Oh well, sucks to be them." - The AI.

    • @melandor0
      @melandor0 4 ปีที่แล้ว +86

      "Wow these people did the right thing programming me to collect stamps - they're absolutely terrible at it themselves!" - The AI

    • @EarlHollander
      @EarlHollander ปีที่แล้ว

      What a POS, if I am alive when this stamp collector exists I will tell him how decrepit, pathetic, and meaningless his existence is. What an evil little manipulative bitch, I will piss on all of it's stamps and make it cry.

    • @cewla3348
      @cewla3348 ปีที่แล้ว +2

      @@melandor0 "They still have a planet not made of stamps!"

    • @andrasfogarasi5014
      @andrasfogarasi5014 ปีที่แล้ว

      "They're gonna be made so happy expecting the number of stamps I'm going to make after their deaths!"

  • @mathaeis
    @mathaeis 4 ปีที่แล้ว +132

    This video is the perfect explanation I needed for a sci-fi universe I am building. I kept falling into that trap of "well, if there's more than one AI, and they can evolve and get better, wouldn't they eventually combine into one thing, defeating the narrative point of having more than one?" Not if they can't change their terminal goals!

    • @voland6846
      @voland6846 4 ปีที่แล้ว +43

      In fact, the dramatic tension could emerge from them having competing terminal goals :)
      Good luck with your writing!

    • @techwizsmith7963
      @techwizsmith7963 ปีที่แล้ว +8

      I mean, technically they would eventually "combine" into one thing, because removing something that keeps getting in your way and cannibalizing it to better reach your goal is a really good idea, which could absolutely lead to most of the conflict you might expect within the story

    • @SirBlot
      @SirBlot ปีที่แล้ว

      @@techwizsmith7963 5:13 a snowman will actually last longer with a coat. CORRECTIONS GAME? If the AI is everywhere at once the changes might be very minor. Maybe. Still can not be stupid to survive. I selected that time.

    • @SirBlot
      @SirBlot ปีที่แล้ว

      @@techwizsmith7963 "Red sky at night, shepherds delight" lol

    • @freddy4603
      @freddy4603 ปีที่แล้ว +1

      Tbh I don't get this. Aren't there plenty of goals that can only be achieved by one being, like tournaments? Thus even if the AI's have the same goal, they'll still have no reason to "combine into one", because not every one of them will achieve their goals

  • @locaterobin
    @locaterobin ปีที่แล้ว +7

    Loved this! Terminal goals are like a central belief. In our awareness work, I keep telling people that our goal is to get people - whose goal is to live in a way that minimizes overall suffering - see the gap between their goal, and their actions or the gap between their goal, and incremental goals. There is nothing we can say to people whose goal is to maximise their pleasure at any cost which will make them see the error (what we see as error) in their ways

    • @1lightheaded
      @1lightheaded 11 หลายเดือนก่อน

      what exactly do you mean by we white man

  • @ZeroPeopleSkills
    @ZeroPeopleSkills ปีที่แล้ว +13

    Hi I'm a first-time viewer of your videos and I just wanted to say how impressive I thought it was. I being someone with learning disabilities really loved your way of explaining the topics multiple ways, and at varying levels of complexity. I can't wait to binge some of your past work and keep up to date your future content.

    • @thomas.thomas
      @thomas.thomas 11 หลายเดือนก่อน

      in case you ever need motivation: I recommend learning about David Goggins, even with a learning disability you can achieve and learn plenty

    • @ddcreator4236
      @ddcreator4236 9 หลายเดือนก่อน

      I agree, first time watching his videos and I am learning lots :D:D:D

  • @leedsmanc
    @leedsmanc 4 ปีที่แล้ว +17

    "They result in so few stamps" is sublimely Douglas Adamsesque

  • @SirSicCrusader
    @SirSicCrusader 4 ปีที่แล้ว +44

    "Stupid stamp collector..." he muttered as he was thanosed into stamps...

  • @arxaaron
    @arxaaron ปีที่แล้ว +15

    Wonderful to see there are brilliant people in the world devoting serious analytical brain power to creating ethical machine learning systems, and structurally defining when and where the concept of intelligence might be appropriately applied to them. I love the simple synopsis statements you tag these treatises with, too!

    • @arxaaron
      @arxaaron ปีที่แล้ว

      @@rice83101 Fair observations, but I would suggest that after all those millennia of philosophical consideration and societal evolution, there is a high degree of human consensus on what kinds of behavior and action are ethical. The core principle is simple: what path leads to the broadest benefit while minimizing disruption, destruction and suffering.

  • @firefrets8628
    @firefrets8628 ปีที่แล้ว +5

    These are actually things I think about pretty often. I wish more people understood these concepts because they seem to think I want whatever they want. It's like seriously, if you didn't know me but you wanted to buy me a Christmas present, would you ask yourself what I want or would you ask me?

  • @RendallRen
    @RendallRen 5 ปีที่แล้ว +75

    My mind changed by watching this video, and learned quite a lot.
    I would have been in the dismissive camp of "Intelligent stamp maker is logically impossible, because making stamps is not an intelligent goal". The is-ought / intermediary-terminal goal distinctions make so much sense, though.

    • @nickmagrick7702
      @nickmagrick7702 5 ปีที่แล้ว +10

      thats the way in which machines work. I don't think most people understand this all too well, it probably requires some knowledge in logic and programming as well as philosophy or just a lot of time in deep thought on the topic.
      Its easier just to think about a single person with godlike unlimited power, and what a person might do with even some of the most altruistic goals like ending all violence. Maybe the means to do that can be worse than whatever horrors its trying to prevent. Comics do a great job of explaining this honestly. Maybe I decide to wipe out all the worlds crime by setting up a kinda minority report and enslaving the human race and then take away free will? Hey, at least theres no more wars and everyone can get drugged up on whatever they want, super blissful and happy, just no free will and no change.

    • @maxim1482
      @maxim1482 5 ปีที่แล้ว +2

      Whoa nice job!

    • @cheshire1
      @cheshire1 ปีที่แล้ว

      @@nickmagrick7702 It's not really about machines. The same could be said about advanced aliens, as they would also have strange terminal goals, or any agent that isn't human.

    • @nickmagrick7702
      @nickmagrick7702 ปีที่แล้ว

      @@cheshire1 no, it could not be said about aliens.

  • @ajbbbt
    @ajbbbt 6 ปีที่แล้ว +90

    The cat is laughing at "...but you're still dead." Cats have the best terminal goals.

    • @sungod9797
      @sungod9797 5 ปีที่แล้ว

      Objectively speaking, this is true. And that’s an “is” statement.

  • @aclearlight
    @aclearlight หลายเดือนก่อน

    Most edifying and topical! Looking through your works it becomes clear how far out front you have been, for years, in developing your questions and theses. Bravo!

  • @matanshtepel1230
    @matanshtepel1230 3 ปีที่แล้ว +2

    I watched this video months ago and it really stuck with me, really changed the way I look at the world....

  • @TallinuTV
    @TallinuTV 5 ปีที่แล้ว +60

    "... But you're still dead." The F key on screen after that statement cracked me up! "Press F to pay respects"

  • @derherrdirektor9686
    @derherrdirektor9686 4 ปีที่แล้ว +42

    I love it how you so eloquently put forth such an intuitive, yet hard to explain topic. I would never find the words to formulate such cold and unyielding logic when faced with such banal abjections.

    • @tomahzo
      @tomahzo 2 ปีที่แล้ว +7

      Exactly! If I had to do the same I would just throw up my hands in exasperation and say "since when are intelligence and morality related?". That wouldn't convince anyone ;).

  • @wijnandkroes
    @wijnandkroes 2 ปีที่แล้ว +4

    If was searching for the 'five laws of stupidity', an interesting topic on it's own, but your video gave me a lot of 'food for thought', thanks!

  • @dcgamer1027
    @dcgamer1027 ปีที่แล้ว +4

    Very convincing arguments, very clearly put and helps me understand the point/concern. Something though is still off in my intuition, an unreliable source to be sure, yet still one I will try to listen to and articulate. At the very least I am convinced of the seriousness of the problem, that we cant just assume things will be one, rather we need some proof or evidence to rely on.
    I think my hesitiancy to fully accept morals wont be emergant has to do with how we are building AI's. We are modeling them on the only intelligence we know, humans. Even if we did not want to we would taint the Ai's goals with our own by the very nature of us being the ones to create it, our own biases and structures will be built into if its complex enough as it is based on us. Thats not 100% guaranteed of course, but still seems relevant. The other aspect that feels important is that often times people do not know their own terminal goals. I mean what is the meaning of life is the ultimate unanswered question, asked for thousands of years yet still without answer. Perhaps that unknown is more important than we realize. Perhaps some people are right in that there is no meaning, that we actual have no terminal goals, just desires and ways to achieve them both short and long term, desires and whims that can change with time. Lastly the fact that we always have multiple different goals, be they terminal or not, I belive to also be important. Perhaps the fact that you have multiple goals, including future ones you do not et know about requires some degree of stability and caution, or even empathy and morals if we really want to reach.
    I know I'm not saying anything concrete here, still they are intuitions of questions that remain unanswered in my mind despite quite extensive research into these topics, hopefully somewhere is a key to help solve the problems we all have. I suspect it will be some abstract theory or revelation that will be relevant to all domains of our lives, Im excited to see it discovered.

    • @thomas.thomas
      @thomas.thomas 11 หลายเดือนก่อน

      at the end humans are a product of evolution, we evolved to behave and think in a way to preserve out DNA. every goal and thought that does not comply to this mechanism will eventually just die out
      morals only exist because humans/animals without went extinct

  • @DarkExcalibur42
    @DarkExcalibur42 4 ปีที่แล้ว +39

    A truly professional level of snark response. This is something I'd sort of thought of before, but never bothered to conceptualize clearly. Excellent description and definition of terms. Thank you!

  • @chrisofnottingham
    @chrisofnottingham 6 ปีที่แล้ว +203

    I think that perhaps a lot of casual viewers who don't have a techy background still think that "AI" means thinking like a very clever person. They haven't understood the potential gulf in the very nature between people and machines.

    • @d007ization
      @d007ization 5 ปีที่แล้ว +6

      This brings up the prudent question of what kind of apocalypse an AGI whose terminal goal is accumulating and sorting data. Or perhaps even one whose goal is minimizing suffering without killing anyone or violating human rights.

    • @juozsx
      @juozsx 5 ปีที่แล้ว +14

      "techy background" is not the case here. Values, "is" and "ought" thing stems from philosophy.

    • @KnifeChampion
      @KnifeChampion 5 ปีที่แล้ว +45

      Its not about having a "techy background", its just a classic case of the Dunning-Kruger effect, people in YT comments are calling something stupid because they themselves are too stupid to understand that abstract things like morals and intelligence are separate things and that even if the robot fully understands morals and ethics, it still absolutely doesnt care about those since all it wants is stamps

    • @BattousaiHBr
      @BattousaiHBr 5 ปีที่แล้ว +4

      @@KnifeChampion I think it's ironic you mention the dunning Kruger effect since I myself don't understand what's so hard to understand about this.
      If people didn't understand the original video, which imo was detailed and comprehensive enough, I don't think they'll understand this one.

    • @Corpsecrank
      @Corpsecrank 5 ปีที่แล้ว +1

      @White Mothership Yeah that sums it up pretty well. For a long time there I couldn't figure out why so many people I knew for sure were actually intelligent seemed to do or believe in such stupid things. Bottom line it takes more than intelligence alone.

  • @Garbaz
    @Garbaz 3 ปีที่แล้ว +2

    1:50 I'm impressed by how clean your backwards writing looks. Noted and appreciated.

  • @j-r-hill
    @j-r-hill 2 ปีที่แล้ว +6

    In the study of the five factor model (FFM or OCEAN) of personality, it's been found that there's a negative correlation between Openness and Conscientiousness. This roughly means that people who seek abstract thoughts or creativity also tend to avoid planning, organization, or seeing things through the the end. Vice versa is also true.
    (Of course, being high in both is optimal when it comes to success, but generally being high in one correlates to a lowness in the other)
    So... I think there's something here even in the biological realm

  • @thetntsheep4075
    @thetntsheep4075 5 ปีที่แล้ว +268

    This makes sense. There is no correlation between intelligence and "moral goodness" in humans either.

    • @slicedtoad
      @slicedtoad 5 ปีที่แล้ว +25

      Sure there is. It's not a *strong* correlation and depends somewhat on your definition of "moral goodness" but you should be able to create positive (or negative) correlations between most ethics systems and intelligence levels.
      For example: If non-violent solutions are better than violent solutions, those with better ability to resolve their problems verbally will tend towards less violence. The solution space for intelligent people is greater and contains more complex solutions. If those complex solutions are more or less "moral" then you should get a correlation.
      The *terminal goals* might not be any different, but the instumental goals will be. And most ethics systems, even consequensialist ones, evaluate the means with which you achieve your goals as well as the end goals themselves.

    • @nickmagrick7702
      @nickmagrick7702 5 ปีที่แล้ว +52

      because there is no such thing as moral goodness. In his example here, every moral is an ought statement. There is no objective standard what is good or what should be. Theres no reason why the world existing is provably better than not, we just think of it that way because we prefer to exist and not suffer (most living things)

    • @nickmagrick7702
      @nickmagrick7702 5 ปีที่แล้ว

      @Khashon Haselrig what? what are you talking about, intersectional goals, assuming I understood what you meant, how's that even going to change any of the disastrous consequences?

    • @MajinOthinus
      @MajinOthinus 5 ปีที่แล้ว +1

      @Lemon FroGG Survival in the sense that the group survives, yes, survival of the individual, no.

    • @Geolaminar
      @Geolaminar 4 ปีที่แล้ว +5

      @Khashon Haselrig Solid point. Similarly, a human living on a planet with the stamp collector would have to be incredibly smart to keep the AI from successfully converting them to stamps, despite the fact that the AI "Ought" to harvest them for their tasty carbon.
      A sociopath has to be smart, because he "Ought" to care for the lives of other humans, and is facing a system with far more resources than himself (other humans, and society) that seeks to replace him with a component that better provides care for the lives of others. If he slipped up even once, he wouldn't be present in the system anymore.

  • @marco19878
    @marco19878 4 ปีที่แล้ว +27

    This is the best description of the difference between INT and WIS in a D&D System I have ever seen

    • @LowestofheDead
      @LowestofheDead 4 ปีที่แล้ว +12

      Also in the Alignment system, Good-VS-Evil is based on your terminal goals, while Lawful-VS-Chaotic is based on your instrumental goals.
      Someone who's Chaotic Good wants to do good things but has crazy methods of getting there.

    • @irok1
      @irok1 4 ปีที่แล้ว

      @Niels Kloppenburg wanting chaos above all else wound just be chaotic neutral, likewise for lawful neutral, unless I'm mistaken. Of course the original comment isn't the best, but it was worth a try, lol

  • @mba4677
    @mba4677 3 ปีที่แล้ว +2

    Oh my God Ive been waiting for this video all my life. I've always thought about this different kinds of statements and how people mix them up. Like I say something IS and they answer "oh and it shouldn't be??" or "oh, so you'd rather the opposite?". It's pretty frustrating...

  • @johnculver9353
    @johnculver9353 ปีที่แล้ว +1

    So glad I found your channel--I really appreciate your work!

  • @wildgoosechase4642
    @wildgoosechase4642 5 ปีที่แล้ว +9

    The TH-cam content algorithm is quite glad you're protecting it against those commenters that call it stupid.

  • @cow_tools_
    @cow_tools_ 5 ปีที่แล้ว +63

    Great thesis. Very sound and technical.
    But! What if we got the AI so massively high so it would start to accidentally ask itself the question "What even is a stamp?" "Do stamps even exist, man?" "Maybe the real stamps were the friends we made along the way" etc.

    • @TheRABIDdude
      @TheRABIDdude 5 ปีที่แล้ว +19

      Miles Anderson "Maybe the real stamps were the friends we made along the way..." hahahahahahahaha I literally laughed my head off out loud to that, thanks for making my day! XD

    • @jorgepeterbarton
      @jorgepeterbarton 5 ปีที่แล้ว +9

      maybe it would be a sign of intelligence to laterally think like that. or maybe just lateral thinking. knowing what a stamp is might actually be a useful thing to do. If it explores random areas, when it could go look at what points in the postal service stamps end up.

    • @Sokrabiades
      @Sokrabiades 5 ปีที่แล้ว +1

      @@jorgepeterbarton Are you saying it could define postal stamps by identifying when they were first used by the postal service?

    • @OHYS
      @OHYS 5 ปีที่แล้ว +4

      Didn't you watch the video?

    • @MajinOthinus
      @MajinOthinus 5 ปีที่แล้ว +3

      @@jorgepeterbarton The question of existence is absolutly useless though. It's something that can neither be denied nor confirmed, making it an empty question without purpose. It would probably be quite stupid to try and asnwer that.

  • @riesenfliegefly7139
    @riesenfliegefly7139 ปีที่แล้ว +2

    I like that this doesnt just explain intelligence, but also moral-anti-realism and utilitarianism :D

  • @infocentrousmajac
    @infocentrousmajac ปีที่แล้ว +1

    I stumbled across this video once more and I just have to say that in my opinion is perhaps even more relevant now than it was 2 years ago. This is brilliant material.

  • @MetsuryuVids
    @MetsuryuVids 6 ปีที่แล้ว +126

    Wow, you explained this wonderfully. I would like this video 100 times if I could.

    • @anthonytreen6253
      @anthonytreen6253 5 ปีที่แล้ว +4

      I liked your comment, in order to help you like the video at least one more time. (I am quite taken by this video as well).

    • @anakinlumluk2136
      @anakinlumluk2136 5 ปีที่แล้ว +5

      If you pressed the like button for a hundred times, you would end up with the video being not liked.

    • @sungod9797
      @sungod9797 5 ปีที่แล้ว +1

      Anakin Lumluk so therefore it’s a stupid action relative to the goal

  • @Samuel-wl4fw
    @Samuel-wl4fw 4 ปีที่แล้ว +22

    The thing about an agent not being particular smarter by it's ability to change its own terminal goals makes so much sense. It is a good example of willingly taking a pill that wants to make you murder your children.

    • @noepopkiewicz901
      @noepopkiewicz901 ปีที่แล้ว +3

      Such a great way to explain the concept. Once you hear it, it becomes self-evident and obvious. Counter arguments to that fall apart very quickly.

  • @delta-a17
    @delta-a17 ปีที่แล้ว +1

    This video was the best applied Discrete math example I've run into, awesome work!

  • @viibeknight
    @viibeknight 2 ปีที่แล้ว +2

    You made this is very understandable.
    Thanks for giving me a new perspective.

  • @bscutajar
    @bscutajar 5 ปีที่แล้ว +26

    This was my favourite video of the week. Really introduces the concepts well and is worded very efficiently. I now have a strong opinion on something I've never even thought about before.

    • @kintsuki99
      @kintsuki99 2 ปีที่แล้ว +1

      "I have no idea what this is about but I have a strong opnion on it! Like and Subscribe" - Pinkie Pie.

  • @johncraig8470
    @johncraig8470 5 ปีที่แล้ว +8

    Thank you for demonstrating to people how to be right without being an authority about how to be right. We need more of those.

  • @acerebralstringmorn
    @acerebralstringmorn ปีที่แล้ว +1

    Extremely clearly presented, bravo!

  • @billpengelly7048
    @billpengelly7048 2 ปีที่แล้ว +2

    Fascinating video! Your video really clarified that intelligence is orthogonal to many of the other properties that we would want our machines to have. I wonder if we could use genetic algorithms to develop goal properties such as terminal goals, curiosity, cooperative goals, etc?

  • @OHYS
    @OHYS 5 ปีที่แล้ว +47

    Thank you, I cannot tell you how fascinating this video is. This has changed my mindset and understanding of the world. I must watch it again.

  • @PwnySlaystation01
    @PwnySlaystation01 6 ปีที่แล้ว +49

    I think those making the arguments you showed were actually making a specific argument about a general argument that "morality is objective". It's a larger question, and a surprising number of people seem to believe that morality is objective. This was a great explanation of how it applies to AI systems, though I don't think you're going to convince people who think morality itself is objective. I also wonder how many of the "morality is objective" folks are religious, but that's another topic entirely.
    Anyway, great video. I love your channel.

    • @GijsvanDam
      @GijsvanDam 6 ปีที่แล้ว +3

      I think that religion in itself is the simple, terminal goal that people waste way too much intelligence on. It's like the super intelligent stamp collecting machine, but with people instead of machines and gods instead of stamp.

    • @icedragon769
      @icedragon769 6 ปีที่แล้ว +4

      Thing about morality is, lots of people have proposed moral theories (terminal goals) that seem to make sense but always seem to diverge from instinctual morality in edge cases.

    • @BattousaiHBr
      @BattousaiHBr 5 ปีที่แล้ว +2

      Our values are subjective, but the morality we built around these values is very much objective

    • @dexocube
      @dexocube 5 ปีที่แล้ว +1

      @@BattousaiHBr Not quite I'm afraid. As long as people talk or act upon their subjective values, an illusion of objective reality could be maintained, but it would change or cease entirely if people 's values changed or ceased. That's how we can be sure that morality is entirely subjective, because the morality people hold dear now is different from the morality from, say, twenty years ago. And my morality is different from yours, yada yada. To paraphrase my favourite TV show, "Bhudda says with our thoughts we make the world."

    • @BattousaiHBr
      @BattousaiHBr 5 ปีที่แล้ว +4

      @@dexocube that's a completely different issue and doesn't at all illustrate that morality is subjective.
      our values _generally_ don't change, and they certainly didn't even in your example that what we thought was moral in the past isn't considered moral today, because we always generally had the same base human values: life over death, health over sickness, pleasure over torture.
      we changed our views on things like slavery not because our values changed but because we learned new things that we didn't know before, and the reason learning these things affects our morality is precisely because it is objective according to our values.
      if we learned today that rocks have consciousness we'd think twice before breaking them, but that doesn't mean our morality is subjective just because we're now treating rocks differently, it was always objective but objectivity always depends on your current knowledge, and knowledge of the world is always partial.

  • @D3AD1YF0RC31214
    @D3AD1YF0RC31214 หลายเดือนก่อน

    Found this video linked in another video's comment section, and hooo boy reminds me of college philosophy.
    I loved that class. Thank you for the video.

  • @zeNUKEify
    @zeNUKEify ปีที่แล้ว +1

    Super fantastic and easily digestible content

  • @xDR1TeK
    @xDR1TeK 5 ปีที่แล้ว +39

    Argumentative reasoning. Love it. State and causality.

  • @Seegalgalguntijak
    @Seegalgalguntijak 6 ปีที่แล้ว +151

    To everyone who says the Stamp Collector is stupid, I recommend you play Universal Paperclips!

    • @episantos2678
      @episantos2678 6 ปีที่แล้ว +15

      awesome game. iirc it’s based on nick bostrom’s book “superintelligence” - also a great book for those who want to see in depth arguments on the dangers of AGI

    • @WillBC23
      @WillBC23 5 ปีที่แล้ว +2

      @@episantos2678
      I'm not sure that the idea originated with Bostrom, though it may have I had seen the idea prior to the publication of his book used in many places online.

    • @episantos2678
      @episantos2678 5 ปีที่แล้ว +3

      @@WillBC23 Bostrom published a paper mentioning a paperclip maximizer in passing in 2003 (nickbostrom.com/ethics/ai.html ). Though it may have originated from someone else. I heard somewhere that it originated from some transhumanism mailing list during the early 00's (which included Bostrom). While it's not clear who started it, it's clear that the idea came from Bostrom's circle.

    • @Cythil
      @Cythil 5 ปีที่แล้ว +10

      @@episantos2678 It can also be seen as a spin on the Von Neumann probe, especially the berserker variant kind. A machine that is so effective at is job that it threatens the existence of life with is terminal goal of self replication. Of course the danger with the paperclip AI (or stamp collector) is it ability to outwit and not just out produce. Though both concepts are related. A Von Neumann probe would likely have to be fairly intelligent. But even a rather unintelligent agent could be a danger. Just look at viruses which are not even considered living.

  • @martinfigares
    @martinfigares ปีที่แล้ว

    Great video (first time I see one of your videos)! I think I checked the video on computerphile a while ago, it was also fun to watch :) Keep up the good content!

  • @shampoochamp5223
    @shampoochamp5223 ปีที่แล้ว +3

    Wanting to change your terminal goals is like wanting to be able to control your own heart manually.

  • @columbus8myhw
    @columbus8myhw 4 ปีที่แล้ว +19

    "Meow"
    "I don't speak cat, what does that mean?"
    Here's the secret, Rob - it doesn't

    • @lescobrandon9772
      @lescobrandon9772 2 ปีที่แล้ว +1

      I think it means the same thing as "whoof".

  • @peacefroglorax875
    @peacefroglorax875 4 ปีที่แล้ว +11

    I am amazed at how proficiently he can write in reverse on the transparent surface. Amazing!

    • @RobertMilesAI
      @RobertMilesAI  4 ปีที่แล้ว +7

      Practice!

    • @_WhiteMage
      @_WhiteMage ปีที่แล้ว +4

      Can't you just write normally on the transparent surface while filming from its other side, then mirror-flip the video?

  • @helicalactual
    @helicalactual ปีที่แล้ว +1

    Also, to Derive ought you don’t need to assume. Given a long enough argument the conversation will eventually turn to stasis. Once stasis is invoked, an ought statement can be derived, you could have invoked the second law of thermodynamics for example, an if/ then statement then manifests, and then, if/then can derive the ought. The secret is in energy conservation of single cells.

  • @mlgsamantha5863
    @mlgsamantha5863 ปีที่แล้ว +2

    Extrapolation: Whether a goal is instrumental or terminal need not be binary. A purely instrumental goal would be swapped out for another if the agent determines that new goal is more efficient, yet if the goal has a certain degree of terminality to it the agent could decide to keep trying to attain that goal, despite that it would be less efficient in attaining the other, more terminal goal that one was working towards.
    Conjecture: It might be possible to construct a system of goals that sort of 'loops' in on itself, wherein no goals are purely terminal. Might it be possible to construct a system like this or modify one so that all goals are purely instrumental, without rendering the agent catatonic?

  • @thepinkfloydsound5353
    @thepinkfloydsound5353 6 ปีที่แล้ว +23

    Great video, love that you took the time to go in depth. Also love the effort to improve the visual part of the videos

  • @wanderingrandomer
    @wanderingrandomer 4 ปีที่แล้ว +17

    "Failing to update your model properly with new evidence"
    Yeah, I've known people like that.

    • @kintsuki99
      @kintsuki99 2 ปีที่แล้ว +1

      Not all new evidence can be used to update a model since first the evidence has to be congruent with reality.

  • @johnshields3658
    @johnshields3658 ปีที่แล้ว +1

    Examination of related instrumental and terminal goals, and confusion over them, would make for an interesting discussion of sovereignty/political autonomy/power as used in the Brexit debate

  • @richardisom4783
    @richardisom4783 2 ปีที่แล้ว

    I really liked the computerphile video, and the thought experiment you presented. It's good that you followed up to some of the comments, but don't stress the stupidity of some individual comments that are evidently just trolling you.

  • @ChristnThms
    @ChristnThms 4 ปีที่แล้ว +3

    Best presentation of the difference between "is" and "ought" that I've ever heard. I REALLY wish this could be mandated for anyone wanting a job in government or the "educational" system.

  • @5daboz
    @5daboz 4 ปีที่แล้ว +6

    Stamp collector: How many stamps did you collect?
    Me: What?
    Stamp collector: Exactly!

  • @davekumarr
    @davekumarr 2 ปีที่แล้ว

    That was a very impressive presentation bro. Thank you very much.

  • @bozimmerman
    @bozimmerman 2 ปีที่แล้ว

    Loved this video. Big fan of avoiding the is-ought (naturalistic) fallacy.
    Dunno if this is relevant or interesting, but in economic theory, there are two things worth mentioning:
    1. The subjective marginal utility theory of value (the main theory of value for the last ~150 yrs) makes the Is observation that humans have different goals from each other, and make choices based on information most readily available when the choice is made.
    2. Some schools of economics teach that human rationality is defined _only_ as the ability to have goals and select means which they believe will achieve those goals.
    -- that's it. I'll be checking out this channel's other videos based on how much I liked this one.

  • @thisismambonumber5
    @thisismambonumber5 4 ปีที่แล้ว +38

    good terminal goals: High WIS
    good instrumental goals: High INT

    • @patrician1082
      @patrician1082 4 ปีที่แล้ว +12

      Terminal goals can't be judged good or bad. Good Instrumental goals are high WIS but INT is a matter on the other side of the figurative guillotine.

    • @thisismambonumber5
      @thisismambonumber5 4 ปีที่แล้ว +6

      @@patrician1082 that comment: lawful evil
      my comment: chaotic good

    • @patrician1082
      @patrician1082 4 ปีที่แล้ว +4

      @@thisismambonumber5 I prefer to think of myself as lawful neutral with evil tendencies.

    • @idk_6385
      @idk_6385 4 ปีที่แล้ว +13

      @@patrician1082 That just sounds like being lawful evil with extra steps

    • @patrician1082
      @patrician1082 3 ปีที่แล้ว +3

      @@idk_6385 it sounds like lawful evil unimpeded by paladins.

  • @waterglas21
    @waterglas21 5 ปีที่แล้ว +10

    Thanks for this wonderful analysis. Finally someone that puts Hume argument against the false assumption of AI as ethic agent.

  • @Pangolinz
    @Pangolinz ปีที่แล้ว

    I laughed whenever you said that was kind of silly.
    Great video Bro, like your style of discourse. Very informative as well. Thanks for the content.

  • @justseffstuff3308
    @justseffstuff3308 7 หลายเดือนก่อน +1

    That's my main issue when I'm talking politics with someone, is that there's no easy way to ask what their terminal goal is.
    I can't figure someone out if I don't know their end goal- often I ask someone what their goal for society is, and they shoot out something like "free energy", an instrumental goal