S-Risks: Fates Worse Than Extinction

  • Published on Oct 14, 2024
  • The worst futures that could come about aren't ones in which humanity goes extinct. This video explores an even worse category of risks: risks from astronomical suffering, or "S-Risks", which involve an astronomical number of beings suffering terribly. Researchers on this topic argue that S-risks have a significant chance of occurring and that there are ways to lower that chance.
    ▀▀▀▀▀▀▀▀▀SOURCES & READINGS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
    Existential Risk Prevention as Global Priority: existential-ri...
    Reducing Risks of Astronomical Suffering: A Neglected Priority longtermrisk.o...
    S-risks: An introduction: centerforreduc...
    Moral circle expansion: A promising strategy to impact the far future: doi.org/10.101...
    Superintelligence as a Cause or Cure for Risks of Astronomical Suffering: longtermrisk.o...
    ▀▀▀▀▀▀▀▀▀PATREON, MEMBERSHIP, MERCH▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
    🟠 Patreon: / rationalanimations
    🔵 Channel membership: / @rationalanimations
    🟢 Merch: rational-anima...
    🟤 Ko-fi, for one-time and recurring donations: ko-fi.com/rati...
    ▀▀▀▀▀▀▀▀▀SOCIAL & DISCORD▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
    Discord: / discord
    Reddit: / rationalanimations
    X/Twitter: / rationalanimat1
    ▀▀▀▀▀▀▀▀▀PATRONS & MEMBERS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
    Tomas Campos
    Jana
    Ingvi Gautsson
    Nathan Young
    BlueNotesBlues
    '@Osric@Terberlo.dog
    Michael Andregg
    Riley Matthews
    Vladimir Silyaev
    Nathanael Moody
    Alcher Black
    RMR
    Nathan Metzger
    Glenn Tarigan
    NMS
    James Babcock
    Colin Ricardo
    Long Hoang
    Tor Barstad
    Apuis Retsam
    Stuart Alldritt
    Chris Painter
    Juan Benet
    Falcon Scientist
    Jeff
    Christian Loomis
    Tomarty
    Edward Yu
    Ahmed Elsayyad
    Chad M Jones
    Emmanuel Fredenrich
    Honyopenyoko
    Neal Strobl
    bparro
    Danealor
    Craig Falls
    Vincent Weisser
    Alex Hall
    Ivan Bachcin
    joe39504589
    Klemen Slavic
    blasted0glass
    Scott Alexander
    Dawson
    John Slape
    Gabriel Ledung
    Jeroen De Dauw
    Superslowmojoe
    Nathan Fish
    Bleys Goodson
    Ducky
    Matt Parlmer
    Tim Duffy
    rictic
    marverati
    Luke Freeman
    Richard Stambaugh
    Jonathan Plasse
    Teo Val
    Ken Mc
    leonid andrushchenko
    Alcher Black
    ronvil
    AWyattLife
    codeadict
    Lazy Scholar
    Torstein Haldorsen
    Michał Zieliński
    ▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
    Directed by:
    Evan Streb - @vezanmatics
    Written by:
    Allen Liu
    Producer:
    :3
    Line Producer:
    Kristy Steffens - linktr.ee/kstearb
    Production Managers:
    Grey Colson - linktr.ee/earl...
    Jay McMichen - @jaythejester
    Quality Assurance Lead:
    Lara Robinowitz - @CelestialShibe
    Animation:
    Grey Colson - linktr.ee/earl...
    Ethan DeBoer - linktr.ee/debo...
    Gabriel Diaz - @gabreleiros
    Damon Edgson
    Jordan Gilbert - @Twin_Knight (twitter) & Twin Knight Studios (YT)
    Zack Gilbert - @Twin_Knight (twitter) & Twin Knight Studios (YT)
    Colors Giraldo @colorsofdoom
    Jodi Kuchenbecker - @viral_genesis (insta)
    Jay McMichen - @jaythejester
    Skylar O'Brien - @mutodaes
    Vaughn Oeth - @gravy_navy (twitter)
    Lara Robinowitz - @CelestialShibe
    Patrick Sholar - @sholarscribbles
    Background Art:
    Olivia Wang - @whalesharkollie
    Pierre Broissand - @pierrebrsnd (insta) - www.artstation...
    Compositing:
    Grey Colson - linktr.ee/earl...
    Patrick O’Callaghan - @patrick.h264 (insta)
    Narrator:
    Robert Miles - / robertmilesai
    VO Editor:
    Tony Dipiazza
    Sound Design and Music:
    Epic Mountain - / epicmountainmusic

Comments • 1.6K

  • @RationalAnimations
    @RationalAnimations 1 month ago +19

    We’re finally about to launch a plushie, but only if we get enough pledges for this campaign:
    www.makeship.com/petitions/rational-animations-doggo-plushie
    If we reach at least 200 pledges, Doggo will become a real plushie! After Makeship manufactures it, you’ll be charged the remaining balance (27.99 USD + shipping and tax), and he will be shipped to you!

  • @YoungGandalf2325
    @YoungGandalf2325 5 months ago +1599

    I had no idea what an S-Risk was before watching this video. I'm not sure whether I should thank you or blame you for causing my new existential crisis.

    • @ArawnOfAnnwn
      @ArawnOfAnnwn 5 months ago +77

      S-Risks = Basically don't let the Imperium of Man from Warhammer 40k become a reality. That said, there's a questionable tendency I've come across from these 'long-termist theorists' like Bostrom - they basically push for us to pay attention to highly speculative and unlikely possibilities, by simply arbitrarily magnifying all the other parameters. For instance, say a certain S-Risk has a 1 in 100 billion chance of happening. That doesn't seem so scary. Enter these guys who'll say that we should pay them attention - and thus grant money - cos they arbitrarily posit that it'll affect a population of over 100 trillion and score 1 million on the Bostrom Suffering (BS) scale that he uses. There, suddenly an issue that seemed remote is now maybe the most important issue in the world, meriting all of our resources being turned onto negating it. Despite it all being just one giant speculation using arbitrary numbers to inflate its value. Hence why it uses a BS scale.

    • @BlaBla-pf8mf
      @BlaBla-pf8mf 5 months ago +40

      @@ArawnOfAnnwn I call this Yudkowsky's Mugging

    • @chosenmimes2450
      @chosenmimes2450 5 months ago +23

      I've had a similar crisis in the past when I learned about the dark forest state of the universe. My resolution came through the realization that the likelihood of getting "killing star"'d is no greater or lesser if I feel menaced, so feeling terrified has net negative utility. So I stopped.

    • @pragmaticmero686
      @pragmaticmero686 5 months ago +9

      One example could be the non-adoption of metric time; programmers like me suffer extremely painful lives because time isn't a multiple of 10. I want to cry Q-Q

    • @gelmir7322
      @gelmir7322 5 months ago +7

      how can you tell that you are not already experiencing the S-risk right now?

  • @jxg1652
    @jxg1652 5 months ago +1104

    All Tomorrows comes to mind.
    Humans transformed into worms. Humans transformed into sewage filter-feeding sponges, fully sentient.
    Even WH40k seems kinda ok compared to that.
    Or the Affront from the Culture series, their civilization "a never-ending, self-perpetuating holocaust of pain and misery".

    • @Exquailibur
      @Exquailibur 5 months ago +132

      The Warhammer 40k future is pretty terrible, like those hive cities and the fact that we have forgotten how to repair and don't make new technology, so we have to pass down how to maintain it over generations. Also the fact that they are afraid of AI, so they instead use people to automate things, making them into cyborgs and taking away their agency.
      The Tau were horrified by humanity's societal structure, and the thing that scared them the most is that humanity's ships and war machines are all older than their civilization is.

    • @ArawnOfAnnwn
      @ArawnOfAnnwn 5 months ago +86

      Beware the Qu. But also, All Tomorrows is kinda just existential monster horror. There isn't anything scientific in it, and its storyline stretches the bounds of believability past breaking point for no other purpose than to be as batshit horrifying as possible. I mean the aliens in that story even do what they do to us just for the sake of doing it, which is the kind of cartoonishly evil mindset we see in Captain Planet's villains. Cute, but I find it hard to take any of it seriously. At least there's some kind of attempt at justification, or at least explanation, for how the 40k universe came to be as it is

    • @Exquailibur
      @Exquailibur 5 months ago +57

      @@ArawnOfAnnwn W40k is just space fantasy in reality, it has sci fi elements in the same way that Lord of the Rings has medieval elements.

    • @Flamesofthunder
      @Flamesofthunder 5 months ago +38

      @@Exquailibur 40k is pretty horrifying, but All Tomorrows is just true extraterrestrial dread. The book is free so I'd recommend everyone read it, but damn, it keeps me up at night. Nothing can compare apart from what the Necrons have been through in that book. I'm not saying 40k isn't grim, just that in comparison All Tomorrows shows a cosmic scale of horror that many books and media fail to grasp

    • @Exquailibur
      @Exquailibur 5 months ago +16

      @@Flamesofthunder All Tomorrows honestly feels a little goofy to me more than anything; 40k is space fantasy, though, and not true sci-fi like All Tomorrows, which is about the only reason All Tomorrows would be more scary, as it's more plausible.
      40k is definitely more messed up in universe, but the thing is that 40k has space demons, which are not the slightest bit possible, whereas the Qu are a far more realistic threat.
      It's like how Dark Souls is messed up, but it doesn't feel as bad as some other media because it's obviously fantasy.

  • @manufigola8433
    @manufigola8433 5 months ago +993

    "We want to prevent the idea of caring about other beings from becoming ignored or controversial" made me stop for a second because it seems like we step closer and closer to that being the norm everyday

    • @darksidegryphon5393
      @darksidegryphon5393 5 months ago +206

      Yeah, we're already there with a worryingly large section of our population seeing empathy as a weakness.

    • @LeoStaley
      @LeoStaley 5 months ago +81

      Capitalism baby

    • @gijskramer1702
      @gijskramer1702 5 months ago +61

      That's why we need to walk around with an honest smile and a willingness to help without expecting something in return. Pay it forward people, pay it forward. Kindness starts with someone

    • @scaper12123
      @scaper12123 5 months ago +76

      There are already millions of people who not only ignore it and make it controversial, but they actively fight against the concept.

    • @zh9664
      @zh9664 5 months ago

      @@darksidegryphon5393 not what i was thinking of..

  • @darksidegryphon5393
    @darksidegryphon5393 5 months ago +406

    Book: don't make the Torment Nexus.
    Tech company: "Finally! We have created the Torment Nexus from famous novel Don't Create The Torment Nexus!"

    • @Gamingcolon
      @Gamingcolon 27 days ago +5

      280 Likes and no comments?
      Lemme fix that

  • @CoalOres
    @CoalOres 5 months ago +370

    Personally I felt the specific examples of S-risks could have used more introduction for anyone who hasn't read half of LessWrong yet, but the concept is very interesting.

    • @Fenhum
      @Fenhum 5 months ago +29

      Ah... the classic basilisk.
      Would you believe me if I told you the first time I came in contact with it was in a fanfiction of Doki Doki Literature Club?

    • @chosenmimes2450
      @chosenmimes2450 5 months ago +22

      @@Fenhum Going by Roko's twitter activity I think he is actively trying to bring it about and thus buying freedom for his soul in this hypothetical scenario.

    • @Fenhum
      @Fenhum 5 months ago +24

      @@chosenmimes2450 Yeah, even the author of the fanfiction mentions it in his author notes: that, technically, what he is doing is saving himself from the basilisk.
      But I like his perspective on it the best: what's so different about Roko's basilisk from normal gods? They both have a seemingly omnipotent being with a mythical status, and also their own version of heaven and hell.
      It's basically a religion with a tangible threat to join. To the modern-day mind, of course.

    • @kevincrady2831
      @kevincrady2831 5 months ago +51

      @@Fenhum It's just Pascal's Wager in technological garb. To me it's more of a cautionary tale showing how even smart people with knowledge of critical thinking techniques can still bamboozle themselves into believing things as ridiculous as the religious doctrines they chuckle at. The easiest person to fool is a person who thinks they can't be fooled.

    • @siddhartacrowley8759
      @siddhartacrowley8759 5 months ago

      ​@@chosenmimes2450
      Who's Roko?

  • @ekszentrik
    @ekszentrik 5 months ago +168

    Your visualization of S-risks as latching onto the usual risk matrix as a mutational, unexpected outgrowth is extremely striking and better than the solution I would have used to communicate the topic. My first idea would have been to use a regular risk matrix but with a "low/medium/severe" intensity scale, where an X-risk is of the "medium" category.

    • @Exaspatial
      @Exaspatial 5 months ago

      I thought of extending the graph into the third dimension for "low amount of time" and "high amount of time". Or something like that

    • @Exaspatial
      @Exaspatial 5 months ago

      Creating a cube with 8 sections

    • @seto007
      @seto007 3 months ago +1

      This channel genuinely has some of the highest quality animations for a channel of its size. Couldn't imagine the effort that goes into making them

  • @MortiePL
    @MortiePL 5 months ago +559

    "HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE."

    • @kevincrady2831
      @kevincrady2831 5 months ago +45

      "It's wafer thin." --Monty Python's The Meaning of Life

    • @Windswept7
      @Windswept7 5 months ago +47

      Hate requires a lot more energy than peaceful harmony, therefore cannot be sustained for as long in a universe with entropy.

    • @dolphin1418
      @dolphin1418 5 months ago +22

      @@Windswept7 But the fires of hate will consume all that they touch and for a brief moment outshine the most brilliant stars

    • @Windswept7
      @Windswept7 5 months ago +9

      @@dolphin1418 hmm I see how that could be true, but even so, there are a lot of barriers/filters that level of hate has to cross before that concentration of pure energy could be possible and if it could ever reach that limit it would likely destroy itself and create a new universe in the process.

    • @ASlickNamedPimpback
      @ASlickNamedPimpback 5 months ago +5

      @@dolphin1418 says who?

  • @MikeLemmons
    @MikeLemmons 5 months ago +162

    An AI raises a child in a windowless room, teaching it a language no-one else will ever understand.
    Forever unable to communicate, that child will never break its reliance on the machine.

    • @guidestone1392
      @guidestone1392 5 months ago +54

      iPad kids on steroids

    • @AdamVollmer
      @AdamVollmer 4 months ago +8

      A kinder, gentler Omelas

    • @realkekz
      @realkekz 28 days ago +12

      Ignoring the fact that people have figured out how to communicate between different languages in spite of never having spoken the other person's language originally

    • @taf0457
      @taf0457 25 days ago +9

      Honestly, they'd probably have a better chance than a child who wasn't taught language at all (something that has unfortunately happened).

  • @michaelsmith4904
    @michaelsmith4904 5 months ago +693

    the MAD approach to prevent S-Risk: build a failsafe that automatically triggers extinction if it ever occurs.

    • @wmpx34
      @wmpx34 5 months ago +89

      How will you guard such a valuable mechanism? Many people will try to activate it

    • @LeoStaley
      @LeoStaley 5 months ago +73

      If you've got an AGI whose goal is to prevent human extinction, but is otherwise misaligned in some way, your trigger couldn't be effective. The AGI would figure out how to circumvent it.

    • @miadmahshidi8101
      @miadmahshidi8101 5 months ago +28

      You're probably not going to get this, but this is basically what SCP-2000 does (though more "restarting the world" than "kill everyone to stop suffering")

    • @suspicioussand
      @suspicioussand 5 months ago +15

      SCP level stuff 👍

    • @Vileplume87
      @Vileplume87 5 months ago +11

      The anti SCP-2000

  • @lucas56sdd
    @lucas56sdd 5 months ago +197

    Once I started to count negative numbers, the "divide-by-zero" error of human extinction weirdly became much less discomforting in my grandest moral calculations. Great video.

    • @raph2550
      @raph2550 5 months ago +8

      haha nicely said

    • @KateeAngel
      @KateeAngel 5 months ago +8

      Extinction is inevitable for every life form. But the more time until it happens, the more suffering there will be in the meantime.
      So, I am an anti-natalist, because sooner extinction is preferable to extinction in the very far future after lots of suffering

    • @mylesleggette7520
      @mylesleggette7520 5 months ago

      @@KateeAngel The problem is that anti-natalists are morons whose opinions are by definition irrelevant, since propagation of any of their ideas relies upon the creation of more humans.

    • @DeSpaceFairy
      @DeSpaceFairy 5 months ago +1

      Good for you.

    • @hairohukosu433
      @hairohukosu433 4 months ago +6

      ​@@KateeAngel touch grass

  • @jakub2631
    @jakub2631 5 months ago +114

    The fate of the Colonials in "All Tomorrows" and the Australia Scenario in "The Dark Forest" (if you know, you know) are terrible fates for humanity to suffer, and I still think about them from time to time. Thank you for making this video!

    • @catbatrat1760
      @catbatrat1760 5 months ago +11

      I've heard of All Tomorrows. What's The Dark Forest?

    • @jakub2631
      @jakub2631 5 months ago

      ​@@catbatrat1760
      It's the sequel to "Three Body Problem", a sci-fi book about making contact with an alien civilisation.
      I'll explain what I mean by The Australia scenario, but bear in mind that it's a big spoiler for the book trilogy (it's about midway through the second book) and I recommend reading it for yourself instead, it's an amazing piece of hard science fiction.
      Spoilers for "Three Body Problem" and "The Dark Forest" below!
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      -
      After ~400 years of waiting for the arrival of the fleet of an extraterrestrial civilisation, the combined forces of the human space fleet (2015 spaceships, manned by a total of 1,200,000 people) made contact with a single unarmed alien probe that was sent ahead of the main invasion fleet.
      Human leadership was confident in the technological superiority of Earth's fleet, as it was capable of achieving greater speeds than what was known about the alien counterparts.
      The probe, despite being unarmed, managed to destroy 2013 ships and kill 1,140,000 sailors by ramming the ships (it was made of an exotic, effectively indestructible material, unknown to human science). The probe remained unharmed.
      After the "battle", the aliens made contact with Earth's leadership and ordered people to be sent to Australia, where humanity would remain after the main invasion fleet arrived to colonise the rest of the planet.
      After Earth's governments transport most of Earth's population to Australia (often by force), they are ordered to bomb every electric power plant in Australia, as the aliens deem it the appropriate way to "defang humanity", so that they never manage to pose a threat to the occupiers. Using power generators, or any electric devices, is to become outlawed.
      When asked about meeting the caloric needs of several billion people cramped onto the world's smallest continent, the robot serving as an ambassador to the alien civilisation tells the people "look around you, that's your food", suggesting cannibalism.
      This means that not only will billions of people starve or be eaten shortly after, but humanity will be forever stuck in a pre-electricity era, with only animal labour and simple machines to help work the land to grow food.

    • @rav9066
      @rav9066 5 months ago +35

      @@catbatrat1760 They mean "Dark Forest" by Cixin Liu, where humanity is forcibly relocated to australia and billions die as there isn't enough food and they cannibalize each other.

    • @catbatrat1760
      @catbatrat1760 5 months ago +4

      @@rav9066 ...huh...

    • @catbatrat1760
      @catbatrat1760 5 months ago +4

      @@rav9066 Thank you!

  • @basanso1
    @basanso1 5 months ago +288

    "Love Today, and seize All Tomorrows!" -C. M. Kosemen, author of the most S-Risk novel in existence. If you know, you know...
    What's scary is that everything in this video is realized in the novel, the entirety of humanity's successors forced into unfathomable fates worse than death, quadrillions of souls reduced to the worth of bacteria on a toilet. With some billions being a literal planet of waste processors, and that's just one fate.

    • @nathangamble125
      @nathangamble125 5 months ago +31

      _All Tomorrows_

    • @Rawi888
      @Rawi888 5 months ago +7

      Wtf

    • @forgedabauditt9955
      @forgedabauditt9955 5 months ago +1

      ​@@nathangamble125 The Qu are an S-risk

    • @Fenhum
      @Fenhum 5 months ago +24

      Yeah, I stumbled onto a video talking about that book not knowing what it was, and I shook in terror realizing what it was about. It holds the number one spot of cosmic horror in my book.
      It was by Alt Shift X.

    • @ArawnOfAnnwn
      @ArawnOfAnnwn 5 months ago +22

      Beware the Qu. But also, All Tomorrows is kinda just existential monster horror. There isn't anything scientific in it, and its storyline stretches the bounds of believability past breaking point for no other purpose than to be as batshit horrifying as possible. I mean the aliens in that story even do what they do to us just for the sake of doing it, which is the kind of cartoonishly evil mindset we see in Captain Planet's villains. Cute, but I find it hard to take any of it seriously.

  • @M_1024
    @M_1024 5 months ago +339

    "If AGI becomes misaligned then extinction is the best case scenario"
    - MAKiT

    • @AdityaPrasad007
      @AdityaPrasad007 5 months ago +18

      who is Makit?

    • @LeoStaley
      @LeoStaley 5 months ago +31

      If it comes to value extending human life above all else, but is otherwise misaligned in any way, it will achieve practical immortality for humans, but create eternal hell (of varying possible severity) for all the humans it is keeping alive.

    • @M_1024
      @M_1024 5 months ago

      @@AdityaPrasad007 A youtuber. If you like Rational Animations maybe you will like some of his videos about AI.

    • @nodrance
      @nodrance 5 months ago +30

      "end human death" is a goal that would be very very easy to specify, and very very quickly become a nightmare for anyone unlucky enough to be alive to see it

    • @ButchMarshall
      @ButchMarshall 5 months ago +16

      Yep - "I have no mouth and I must scream"

  • @jldstuff393
    @jldstuff393 5 months ago +140

    Thank you for featuring factory farms so heavily as examples of extreme centers of suffering. We need more awareness and compassion towards the hells we built.

    • @constantinethecataphract5949
      @constantinethecataphract5949 4 months ago +4

      Extending your empathy to barely sentient organisms that we need to consume to survive is a big sign of maladaptiveness and mental illness.

    • @beatleswithaz6246
      @beatleswithaz6246 4 months ago

      ⁠​⁠@@constantinethecataphract5949
      “Barely sentient” - highly unlikely
      “Need to consume to survive” -proven false
      “Mental illness” -when all else fails I guess?

    • @raph2550
      @raph2550 4 months ago

      @@constantinethecataphract5949 you are a barely sentient organism

    • @notimportant221
      @notimportant221 3 months ago +11

      @@constantinethecataphract5949 What if there was a being as smart compared to us as we are to cows? Would it be immoral for it to eat us?

    • @constantinethecataphract5949
      @constantinethecataphract5949 3 months ago +2

      @@notimportant221
      Comment got deleted

  • @III_three
    @III_three 5 months ago +126

    Yes... like the Qu from Humanity Lost turning you into 'I Have No Mouth and I Must Scream' creatures

  • @spacebread501
    @spacebread501 5 months ago +112

    Feels like there is a danger of falling into a long-termist version of Pascal's Wager: that you become willing to cause significant suffering now as a sacrifice to prevent highly hypothetical suffering in the future, specifically underestimating how unlikely the imagined scenario actually is and how uncertain you are whether your actions prevent it or just lead to another catastrophe.

    • @drhxa
      @drhxa 5 months ago +36

      Couldn't agree more, nail on the head! If you don't care about the extreme suffering in the world happening TODAY, how in the world can you be so arrogant as to think you can predict and prevent long-term future suffering? People need to soften their egos and focus on helping those around them now, on creating locally a world we want to live in, and on letting our children learn from that.

    • @enricofermi3471
      @enricofermi3471 5 months ago +3

      Well, you can simulate the process and its outcomes if you have a computer fast enough to calculate all the potential suffering.
      Oh, wait...

    • @nbboxhead3866
      @nbboxhead3866 5 months ago +12

      Just like Pascal's wager, it has some merit to it, but it disregards certain factors.

    • @myb701
      @myb701 5 months ago +4

      I don't see why we shouldn't consider these options, though? They're still probable outcomes that catch the interest of many people. It's like saying science is dangerous because it's better for smart people to focus on healthcare; let people theorize about what they want.
      Now, wishing for extinction to prevent a theoretical possible s-risk, yeah, that's just stupid lol.

    • @benjaminstorace6699
      @benjaminstorace6699 5 months ago +17

      @@myb701 Consideration isn't the issue. People using them as justifications for the suffering they cause now to establish the mad dream of Utopia later is where it gets worrying.

  • @ReinaDido
    @ReinaDido 5 months ago +147

    It really intrigues me how someone could consider intolerable suffering preferable to non-existence.

    • @WilliamKiely
      @WilliamKiely 5 months ago +13

      I used to be such a person until my mid-twenties.

    • @average-neco-arc-enjoyer
      @average-neco-arc-enjoyer 5 months ago +56

      A high sense of self-preservation and an extreme fear of not existing would do it

    • @ReinaDido
      @ReinaDido 5 months ago +10

      @@average-neco-arc-enjoyer Now I get it. I never had those

    • @average-neco-arc-enjoyer
      @average-neco-arc-enjoyer 5 months ago +9

      @@ReinaDido Yeah I guess if you didn't already have those then it would be difficult to come up with a reason off of the top of your head.

    • @blartversenwaldiii
      @blartversenwaldiii 5 months ago +35

      possibly by taking "death is the worst thing" to be axiomatic and then extrapolating from there

  • @makorays
    @makorays 4 months ago +14

    god, thank you for making this video. this is a concept that has been weighing heavily on me ever since i was a kid, but i never knew it had a name. the fact that we live in a universe where it is possible for a conscious entity to be stuck suffering in a way it's physically unable to escape from...i don't even know how to put into words how it makes me feel, particularly when taken to the extreme. there's no coping with it, it's just...horrible. so it makes me feel a lot better to see that there are other people who realize how important it is to try and make these things impossible.
    for me, the worst case scenario has always been...y'know that one black mirror christmas episode? yeah, that. simulating a brain but running their signals at such high speeds that an hour to us could feel like 60 years to them. the idea of something just being STUCK for unimaginable lengths of time...and that's not even acknowledging the fact that someone could put them in an actual simulation of hell and directly torture them for thousands of years. i would rather blow up the planet than let a single person ever go through that. and it terrifies me so much, because i just know that if that technology ever becomes possible...all it takes is ONE piece of shit to run that kind of program, and i would immediately begin wishing the universe never even happened.
    i don't know how to deal with this kind of concept. but i don't view my fear as the problem that needs solving, i'm not important here, what's important is stopping this. my only hope is that by the time this kind of technology becomes possible, it will be in the hands of a civilization that has sufficiently evolved enough for everyone to agree never to do it.

    • @GAHIB14DomTrapFurryLoliYaoiMil
      @GAHIB14DomTrapFurryLoliYaoiMil 4 months ago

      I also like to think that with progress comes moral maturity but I also don't know if that's necessarily a rule

    • @jackys_handle
      @jackys_handle 3 months ago

      That the laws of physics allows this is just... weh- h- I mean... so much for fine-tuning. Really!

  • @straft5759
    @straft5759 5 months ago +41

    This reminds me of the episode of The Amazing Digital Circus that came out yesterday. Caine obliviously gives zero value to Gummigoo’s life because he is an NPC, and kills him in an instant merely as a precaution as soon as he enters the circus. Let us take the tragedy of Gummigoo as a cautionary tale of our growing power over life and death.

    • @pugofwarbr
      @pugofwarbr 5 months ago +6

      Gummigoo was lucky, being abstracted seems much worse.

    • @almisami
      @almisami 4 months ago +3

      ​@@pugofwarbr oh, oh so much worse.

  • @dustrider5274
    @dustrider5274 5 months ago +46

    Honestly gives me more ideas for my next Stellaris civilization build. Definitely a thought provoking video!

    • @lacathouille
      @lacathouille 5 months ago +14

      True, the only thing worse than a Xtinction-risk is a Stellaris-risk

    • @patchpatch4008
      @patchpatch4008 5 months ago +9

      Stellaris is just a horror game in disguise if you do it right.

    • @guidedexplosiveprojectileg9943
      @guidedexplosiveprojectileg9943 5 months ago +2

      Subject your people to nerve stapling and forced conscription

    • @masteroutlaw100
      @masteroutlaw100 5 months ago

      I had to deal with one of these once, slaver birds that built the XT 489 in a previous civilization cycle. A literal cancer upon the galaxy. Billions upon billions of slaves on their stolen desert homeworld.

    • @Inactive_Account29283
      @Inactive_Account29283 4 months ago +1

      @@guidedexplosiveprojectileg9943 I did a run where everything except my civilization was a genocidal empire - all fanatic purifiers, devouring swarms, determined exterminators - but I was playing with the oppressive autocracy civic, so it was a 1984-style dystopia vs every genocidal species

  • @user-sl6gn1ss8p
    @user-sl6gn1ss8p 5 months ago +45

    I really feel like the best way today to move towards lowering the "S-risks" in the future is to take suffering seriously today, and to build the kind of society that takes that seriously. So, creating the kind of society, with economic and political systems which put well-being first, from the ground up.
    So, like, something radically different from what we have today. We can prepare all we want; if the interests behind power distribution are still misaligned with well-being, as they are now, things will be much more likely to go to shit.

    • @marse5729
      @marse5729 5 months ago

      The problem is that morality is subjective, so people will have different ideas of what constitutes a society that prioritizes well-being. For example, is a state with a huge social safety net paid for by taxes morally right or wrong? Yes, it guarantees that resources are diverted towards people in need, but it's paid for by people who are forced to donate money against their will.
      If forcing people to contribute to the greater good is fine, where does the line get drawn? What should happen to people who act against the greater good? To what extent should people be allowed to criticize the state? Which decisions should individuals be allowed to make, and which would be mandated by the state? People will give varying answers to these, ranging from complete anarchy to authoritarian dictatorships where the common person has no ability to participate in the political process. All will believe that they are morally correct and doing the right thing, even people we consider to be irredeemably evil like Hitler or Stalin.

    • @user-sl6gn1ss8p
      @user-sl6gn1ss8p 5 months ago +2

      @@marse5729 it's not all or nothing, or a matter of achieving perfection and total agreement.
      There are people starving today, while others are billionaires. Some people have as little say on the direction of their societies as a button press every four years, or less, while others have immense political and economic power. Common needs are organized towards profit, in spite of the actual needs - public transportation, basic sanitation systems, etc. A lot of people don't have reliable access to clean water.
      We can, and we should, at all levels, discuss these things and refine our mutual understandings and disagreements about them. That's part of the process of political change, which we know for a fact can and does happen - take slavery for example, or the role of kings.
      Also, I'm an anarchist - full anarchy would be pretty nice. People would have the room and the structures to work among themselves their common interests, as well as well established means of mediation. No one would have disproportionate say over everyone else. Work would be recognized as a social endeavor - it would be organized according to social interest in the large scale, and by the workers, and it would be unacceptable for anyone to go hungry. People would have the support and room to grow as individuals, to pursue their interests and to express themselves, in all realms of human endeavor: be it science, the arts, politics, spirituality, leisure, etc. All of this organized from a systemic view, which embeds these values on the very structures of human organization. Human well being would tend to be prioritized, instead of the profit motive. Stuff like that.
      I know most people aren't anarchists, but that doesn't mean we don't share a lot of values, or that we couldn't build societies more attuned to those, you know? It also doesn't mean we can't, in the now, contrast that to the way we today let people die from starvation with no second thought, for example.

    • @user-sl6gn1ss8p
      @user-sl6gn1ss8p 5 months ago

      @@marse5729 sorry if I got a little carried away, but you mentioned "total anarchy" so I kinda had to : p

    • @marse5729
      @marse5729 5 months ago

      ​@@user-sl6gn1ss8p Getting rid of power imbalances is completely impossible because there will always be people who have things that other people want and cannot obtain themselves. Most people do not want to give away their things for free, so in most cases the people who want those things can either give the person who has them something in return or just take it by force.
      The non-coercive option we have here is called capitalism, wherein people freely exchange goods and services on the basis of voluntary transactions. An inevitable outcome of this exchange is profit, wherein someone receives more money in the sale of something than they spent in the process of getting it. There is nothing inherently wrong with this because the person profiting from the series of transactions almost always provides a service of their own in the process, e.g. physical labor to assemble an unassembled product or transporting the product to someone who wants it.
      In an anarchic society, preventing this is impossible. You'd need some form of rule that outlaws the practice of profit, a police force to enforce that rule, and a court system to decide whether or not an exchange is exploitative. This last part is impossible not only in anarchy but in any conceivable system, because value (like morality) is subjective and thus makes it impossible to objectively determine whether, for example, a worker in a factory is being paid a "fair" wage. If any of these were actually instituted, it wouldn't be anarchy and would actually result in the opposite; a police state.
      This has actually happened multiple times in communist countries, because the only way to prevent people from making a profit is to strictly enforce it with a state monopoly on coercive power, something far worse than what we have now.

    • @marse5729
      @marse5729 4 months ago

      @@user-sl6gn1ss8p Apparently the several paragraphs-long reply I wrote didn't get sent and was a complete waste of time, so I'll just write a shorter one and hope it works.
      Ensuring that everyone is equal is impossible in an anarchist society. Most people don't want to give up their stuff for the sake of equality, so you'd need a police force to confiscate it from wealthier people and distribute it to poorer people, as well as a system for deciding who gets what and why.
      In a free market, wealth is distributed through a series of voluntary transactions where you (in most cases) have to contribute something to society that someone deems valuable enough to pay for. Charity, non-profit volunteer work, and other methods of helping people in need would still exist, they'd just be voluntary.

  • @protonjones54
    @protonjones54 5 months ago +30

    A better real-world example of a "low severity, broad scope" event would be the cathedral of Notre Dame nearly being destroyed a few years ago due to a fire. No casualties as far as I remember; the building was under renovation at the time, ergo low severity. And of course, this is Notre Dame we're talking about, so the scope of the event was massive.

    • @MsOkayAwesome
      @MsOkayAwesome 5 months ago +3

      Yeah I was also wondering how they missed that one...

    • @pugofwarbr
      @pugofwarbr 5 months ago +1

      Coincidentally, I had a vacation trip scheduled to Paris; I saw the church one month after the incident.

    • @protonjones54
      @protonjones54 5 months ago

      @@pugofwarbr How did the reconstruction look by that point?

  • @rysea9855
    @rysea9855 5 months ago +439

    I'd argue that livestock are already in S-risk scenarios

    • @ataraxia7439
      @ataraxia7439 5 months ago +49

      Yeah :(

    • @LeoStaley
      @LeoStaley 5 months ago +67

      Compared to wild cattle and pigs, domestic cattle and pigs live shorter, but largely pain free lives. I regard it as being a wash, if not a sum total positive.

    • @blandiir4599
      @blandiir4599 5 months ago +3

      Yeah.jpg

    • @Nu_Wen
      @Nu_Wen 5 months ago +88

      ​​@@LeoStaley not really, "pain free" doesn't apply when their lives are in the control of someone who likely doesn't care about their wellbeing.
      +their lives are shorter, because they get eaten,
      +they have no choices, no autonomy,
      +plus sores and sore limbs from being in the same spot all day,
      +you most likely won't get any medicine for your illnesses or dental for your sore teeth, since, that'll affect "the end product"
      +you either get gross food or boring food but either way you get it every single day with no variety,
      +you can't choose to court the hot young stud or filly who's got your attention, because sex is only a luxury you get to have if you're good enough, and you can't even PROVE it. It's decided by someone else who isn't even truly "involved" in the situation.
      +to cherry-top it off, there's no leaving any other animal that may be pissing you off behind; you're all stuck in the same place, whether you like it or not.
      Honestly, we can't even HANDLE it when we see it happening to someone else. We would rather tuck it under a rug or something than deal with it. That's how much it hurts us.
      So, I question if it truly is so much better than simply risking being wild.
      At least, if you're suffering because of your needs not being met, you can learn from it and change it. There's no "changing it" when you are the property, and not the property owner. Your needs will never really matter. Especially to someone who is only VAGUELY aware of needs...

    • @Apodeipnon
      @Apodeipnon 5 months ago +62

      @@LeoStaley You're the type of guy that thinks the happy cow on the milk box is an exact and honest description of the industry

  • @robertbuetow6245
    @robertbuetow6245 5 months ago +18

    So not only do we have to avoid extinction scenarios, but also nightmare Hell scenarios. I've never even heard of S-Risks before. More people should know so we have a better chance to avoid them. Thank you Rational Animations team for helping spread the word!

  • @22Kalliopa
    @22Kalliopa 2 months ago +14

    Someone mentioned that they think livestock are already in an s-risk scenario.
    I’d argue that the situation is worse, almost all non-human life with some form of self-awareness is in an s-risk scenario and has always been. The predator-prey cycle is reliant upon a huge proportion of life being in a state of intense stress or suffering.
    How we could ethically mitigate this situation while maintaining the natural beauty and diversity of our ecosystems, I do not know. However, I believe it is our responsibility, as the beings most capable of directing our actions towards world-changing goals, to at least be aware of and put thought into this problem.

    • @dereklucks4549
      @dereklucks4549 1 month ago +4

      The natural cycle has been relatively like this for a long while, though; in the case of predator versus prey, both the predator and the prey usually have equal and fair abilities that allow them to hunt or defend themselves. If anything, it isn't a complete s-risk, since the cycle cancels itself out and keeps the ecosystem balanced. I would argue that life is more constantly put at risk by the elements than by predator-prey relationships. A drought or disease is infinitely more stressful than a crocodile versus a gazelle.
      If anything, if you really wanted to, we could artificially create an organic ecosystem where the animals do not hunt each other and they are all "herbivores". All the carnivores would be eating artificial meat from plants that create meat, and no carnivorous plants would exist either. However, keep in mind that animals attack each other for territory, for fun, or for other reasons that are not about food, so one may have to isolate the animals so that they do not fight each other. But then, if you isolate the animals, you have to consider whether they will get lonely in captivity, which is a whole other issue entirely and which, in the current day and age, may be cumbersome to deal with. If anything, just letting nature take its course is probably best for now.
      The factory farming issue, on the other hand, is an abomination which I think is probably the worst-case s-risk scenario. The worst part about this issue is that it could be heavily mitigated by the working class or common folk rather than enabled. It gets worse when people argue that eating meat is health-related or that it is for survival, when in reality the kinds of people who make those comments are most likely the ones who will abuse animals for fun or just get obese eating Cheetos all day. Essentially you are left with a majority of animals birthed for entertainment or trivial purposes, and to suffer for people's enjoyment, rather than being used as actual necessities.

    • @olivercharles2930
      @olivercharles2930 9 days ago

      So you are telling me... SCP-682 had a point? Holy shit.

  • @toddi4life819
    @toddi4life819 5 months ago +5

    The storytelling, the animations, everything is on par with or EVEN BETTER than some of the biggest channels out there. How in the world do you only have 250k subs? This is amazing work!!

  • @Krane5328
    @Krane5328 5 months ago +75

    When you say S-risks I say 40k

    • @mithunbalaji8199
      @mithunbalaji8199 5 months ago +8

      I hope such a horror never happens in this galaxy
      40k and SCP universes are the most fucked ones

    • @leguman5289
      @leguman5289 4 months ago

      @@mithunbalaji8199 The Xeelee Sequence and All Tomorrows are worse, if you ask me

    • @ayakinz1440
      @ayakinz1440 2 months ago

      40k is an optimistic scenario because humanity still exists.

    • @eggdog12345
      @eggdog12345 2 months ago +1

      ​@@ayakinz1440 not for long..

    • @machinedramon3532
      @machinedramon3532 27 days ago +1

      @@ayakinz1440 40k is a pessimistic scenario because humanity still exists and most of them are in unimaginable suffering. Watch the video.

  • @Bread2698
    @Bread2698 5 months ago +80

    6:19 I think that dog is an S-Risk itself

    • @kevincrady2831
      @kevincrady2831 5 months ago +4

      If s/he's being forced to choose between "Cosmic Amounts of Suffering" and killing the Goddess of Everything Else (see their video by that title), that's super grimdark. I'm not sure that's what they meant by pitting "Cosmic Amounts of Suffering" against "Everything Else" in a Trolley Problem (a classic zero-sum ethical quandary). If it is, then the dog isn't the problem, it's whatever put the dog in that scenario to begin with.

  • @_fedmar_
    @_fedmar_ 5 months ago +128

    2:20
    Bro in the foreground looks like he understood the weakness of his flesh

    • @basanso1
      @basanso1 5 months ago +16

      "And it disgusted him. He craved the strength and certainty of steel."

    • @peasant8246
      @peasant8246 5 months ago +6

      My laptop can't run that game! I was lied to! The steel and silicon are also weak!

    • @therealquade
      @therealquade 5 months ago +4

      At 2:20? Did you see at 4:19?

    • @chickennuggetman2593
      @chickennuggetman2593 5 months ago +2

      @@peasant8246 Because you are BEING CHEATED AND LIED TO!!

    • @_fedmar_
      @_fedmar_ 5 months ago

      @@therealquade I legit did not notice it.

  • @skyking4557
    @skyking4557 5 months ago +7

    I mean, what happens in Warhammer 40k can be classified as an S-risk too: war between interplanetary species, and 4 Chaos Gods lurking in the shadows to grab anyone that seeks knowledge, hedonism, violence, or comfort

  • @marmaje6953
    @marmaje6953 5 months ago +12

    4:40 There is a game called "Will You Snail" in which the antagonist uses a simulation of the universe to simulate pain in simulated beings… and inside those simulations there are yet more supercomputers that simulate even more pain. And this goes on and on and on endlessly… that's definitely an S-risk scenario we don't want.

    • @Chitose_
      @Chitose_ 1 month ago +1

      i've been a fan of that game for a while
      please play it, reader, even though i'm too lazy and poor to :')

    • @itsMeKvman
      @itsMeKvman 27 days ago +1

      @@Chitose_ It was nice, but unrealistic. An AI would not spontaneously develop emotions and kill everyone because of it. It would kill everyone for different reasons, maybe.

  • @lawrencefrost9063
    @lawrencefrost9063 5 months ago +7

    This is the best YouTube channel. It looks similar to the best of them, like Kurzsezasahdahsgast, but it only deals in these very interesting ideas no one else is talking about.

  • @OutlastGamingLP
    @OutlastGamingLP 5 months ago +43

    I am not as worried about S-Risk outcomes from AI as I am worried about X-Risk outcomes - but avoiding S-Risk is an essential part of any serious attempt at avoiding X-Risk which involves humanity building ASI.
    Picture a big lottery wheel, like the one from Futurama where the Robot Devil trades Fry's hands with those of a random other Robot.
    In most of those sections of the wheel, you end up with an AI whose walk through the future takes it into a region where it optimizes away basically all of the things humans value - including our survival - but doesn't specifically optimize **against** human values. The system ends up in a configuration where what humans value is at most a temporary consideration before strategic-landscape/self-improvement/self-reflection/search leads the AI into a region of optimization processes where plans don't end up having human minds or human values as a variable in their scoring.
    So, 99.9% of the sections on your lottery prize wheel end up just being plain old X-Risk - where your ASI optimizes for something that makes no mention of humans - so humans end up shaken out of the etch-a-sketch picture and their bodies/environment gets redrawn into something else that's fairly unrelated.
    But say you wanted to land in that 0.000...01% region with a good outcome for humanity? Well, how good is your model of the wheel's weighting and how precise is your spin going to be?
    Because I think in the region around that "JACKPOT!" section on the wheel is a lot of S-Risk sections.
    You find the "jackpot" section in a region where the AI ends up preserving into the future a term for humans or things like humans or idealized human values in its goals. That part of the wheel seems like one where a missing "}" or an accidental minus-sign or some similar oversight ends up with everyone getting tortured forever in some weird or superintelligently-pessimized way.
    Yeah, let's avoid dying to a paperclip maximizer, but just demonstrating that your AI won't become a paperclip maximizer because you figured out how to make "cares about human values" into an enduring property... That starts to make my skin crawl.
    Friendly AI lives in S-Risk City, and we don't have a map or even a phone book, and we've got to parachute in, if we can even find that city from the sky in a plane with unknown remaining fuel, no windows, nor detailed navigation equipment.... Also your copilot gets mad every time you say something that isn't totally optimistic about your chances of pulling this off successfully.

    • @howtoappearincompletely9739
      @howtoappearincompletely9739 5 months ago +4

      I like how you frame this conceptually.

    • @OutlastGamingLP
      @OutlastGamingLP 5 months ago +4

      @@howtoappearincompletely9739 Thanks :)
      I think attempting to come up with this kind of rhetoric helps solidify the abstract conceptual stuff. You can kinda feel when what you are writing is clunky in places where it should fit together differently, and you just iterate and try to come up with analogies that capture something important about the problem and make it vivid.
      Not many people have tried explaining this stuff, not relative to other areas where memes and analogies are much more prevalent. There's free-energy here in describing corners of this stuff intuitively.
      I don't know how well my attempts stack up to Rob or Eliezer or some others on LessWrong - plus I'm not always trying to rephrase stuff I've heard elsewhere said in a similar way (I don't think I've heard anyone else with this take on S-Risk. I may do some real work and write a LessWrong post about it if I can do that in a format/style that won't have me run right into their quality-filter & get permabanned) - so yeah, take this largely as the 2 cents of a random YouTube commenter.
      If you found it helpful and it makes sense with other stuff you know about the topic, that's great :) feel free to pass it along as "I heard someone say once"... Though it would be funny if you put a formal reference to a YouTube comment somewhere with serious discussion - which I think I heard Rob Miles joke about before in a YouTube video (maybe the one on Computerphile with the 3 laws of robotics? My memory is fuzzy.)

    • @psi_yutaka
      @psi_yutaka 5 months ago +4

      This is exactly what I was thinking. S-risk ASIs are probably concentrated around "good outcome" ASIs (if there are such) in the space of all possible ASIs because such ASIs "care" about humanity. An indifferent ASI will just optimize us away from the universe.

    • @OutlastGamingLP
      @OutlastGamingLP 5 months ago +1

      @@psi_yutaka >"(if there are such)"
      In principle, yeah, almost certainly.
      "If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization 'All minds m: X(m)' has two to the trillionth chances to be false, while each existential generalization 'Exists mind m: X(m)' has two to the trillionth chances to be true."
      We do have to get a bit more technical to really make this a compelling argument to everyone (who belong in the group of human minds which can be compelled by some type of argument.)
      We are not sampling from mind-design space as a whole, we are meandering around in a relatively tiny region of that space which can be produced with the hardware and software and ingenuity that humanity is - in actual real-world reality - applying to this problem of building minds.
      Plus, the universe we're in puts some limits on this stuff. We don't even get idealized Turing machines - we get finite state automata that can represent a subset of Turing computable programs.
      And we're doing this on silicon semiconductor chips, and using the suite of software humans and automation can set running on those chips.
      Still, the same argument applies, for any properties which are possible within this universe, you have more chances to have one possible mind design with that property in your search space somewhere. If you try to make a categorical statement about all such minds in your search space, and you aren't using a great understanding of physics or mathematics, then you'll have a ton of chances for one possibility to be the exception to your generalization.
      I would say that getting something that is a perfectly good outcome is actually implausible. It doesn't look like you can get perfect "play" over outcomes like that within our universe. That isn't too spooky though, since there's still plenty of room above human capabilities for better outcomes, and we can probably get a "score" in the long term that our descendants/far-future-selves wouldn't be too unhappy with. Y'know, maybe they lose out on 1 galaxy worth of matter and energy, or live lives slightly less ideal than the literal ideal.
      "Near maximum attainable amounts of really really good stuff" seems plausibly within the space of outcomes we could target from here, on Earth, with this starting point of resources and intellects.
      Ummm, to be clear, it doesn't seem all that likely that this generation pulls that off. This generation still has that power of affecting the far future running through it, but if we look at that far future and try out different poses: the poses where we rush out immediately and try to build a mechanical god look like they land us in a distribution of total "human values multiplied by 0 along almost every dimension," while the poses where we call a halt and lock everything down and spend 50 years trying to become saner, wealthier, healthier, nicer, more cooperative, more intelligent... that pose makes the space of outcomes we're targeting look way more dense with "good outcomes."
      What sorta worries me is that people have their finger on the "caring about humans" part - even while they don't seem to fully appreciate the magnitude of the challenge, conditional on us trying to do it ASAP, in a huge rush, while confused and fighting each other...
      It doesn't seem like we'll solve "caring about humans" before we end up on the steep and frictionless part of the slope to ASI - but it is something to watch out for, as this video argues regarding S-Risks in general.
      If we reach that point, where we have a robust solution to "caring about humans even through the whole process of the AI becoming an ASI," we really need to stop and go no further on capabilities from there until the rest of the problem is solved so comfortably that it's basically common knowledge how to build a near-ideally friendly ASI on every measure we can possibly think of.
      Otherwise... Yeah. Probably best at that point to "bite the capsule" and let entropy make your specific mind-state prohibitively expensive to recover for the thing that is about to emerge and scoop up all of humanity in its horrible wake.

    • @edd8914
      @edd8914 5 months ago

      @@OutlastGamingLP Why so pessimistic?

  • @goodlookingcorpse
    @goodlookingcorpse 5 months ago +51

    This seems to be worrying that there might be something like factory farms in the future, while ignoring the existence of factory farms.

    • @lacathouille
      @lacathouille 5 months ago +25

      Ignoring? I feel like the point of the video is very much "what if we applied factory-farming levels of suffering to human animals" tho

    • @tar-yy3ub
      @tar-yy3ub 5 months ago +16

      I wouldn't say so. The video directly states that having more empathy for other living creatures decreases s-risk

    • @thesenamesaretaken
      @thesenamesaretaken 5 months ago +3

      @@tar-yy3ub I don't really see how it follows. If we did increase the empathy we feel for living things whose suffering is necessary for our existence then wouldn't we realise that there is no solution besides ending our existence? Oh wait, I guess that would solve the S-risk problem, well played.

    • @edgbarra
      @edgbarra 5 months ago +12

      @@thesenamesaretaken if we increase the empathy towards them, we may realize we actually don't need them for our survival. I think we should, at the very least, consider that possibility and reduce the number of beings we bring into existence just to suffer.

    • @jackrutledgegoembel5896
      @jackrutledgegoembel5896 27 days ago +3

      Their suffering (especially on its current scale) is absolutely not necessary for human survival, though. Vitamin B12 can be easily synthesized, and protein can easily be obtained from non-animal sources. Reducing meat consumption also benefits humans in other ways, like lowering the risk of cancer, decreasing land and water use, and preventing antibiotic resistance.

  • @theallmemeingeye5927
    @theallmemeingeye5927 5 months ago +6

    Thank you so much for making this video, S-risks are such an underacknowledged yet super-important topic
    It'd be really cool if you could make a video exploring Rethink Priorities' research on animal sentience and wild animal suffering

  • @MAKiTHappen
    @MAKiTHappen 5 months ago +9

    That certainly would be the worst mistake humanity could ever make

  • @John-po9wz
    @John-po9wz 5 months ago +64

    I'd say astronomical suffering is already happening for a very large portion of people on this planet...

    • @stagnant-name5851
      @stagnant-name5851 5 months ago +13

      Not very large at all. A very large portion would be something like WW2 and the Holocaust, which killed tens of millions and caused suffering for hundreds of millions.

    • @John-po9wz
      @John-po9wz 5 months ago +3

      @@stagnant-name5851 lmao you're funny

    • @Yemadas
      @Yemadas 5 months ago +20

      @@John-po9wz you have a very strange sense of humor...

    • @lynxf
      @lynxf 5 months ago +22

      the worst part is that humanity doesn't bother too much with attempting to reduce others' suffering
      the typical human way of solving problems seems to be "not to abolish slavery but to rename it and ridicule anyone who says there is a problem"...
      and when cornered with facts in a discussion, the opponent will typically agree that there is a problem but immediately proceed to a weirdly smug "life is tough, it always was and therefore must always remain so"

    • @enricofermi3471
      @enricofermi3471 5 months ago

      Well, find a better alternative to "slavery" then - as cheap and at least as effective. Cause I'mma not gonna pay dem moneyz to hired workers and lose profit when I can have slaves work for cheap junk food.
      Well, in fact, as industry advanced, it just so happened that mechanised hired professional labor became more effective, but many a large corpo would still love to have their employees work for food. When robots advance far enough, it will probably be "mechanized slavery" that takes over the industries. Let's just hope future humans have brain enough not to implement full AI capabilities in such worker drones.

  • @indiaiderjr2016
    @indiaiderjr2016 5 months ago +8

    This channel has come so far in quality and I love it.

  • @lake5044
    @lake5044 5 months ago +8

    Before watching the video, I'll say this: worse-than-extinction risks are real, not just theoretical. Humans, for example, are such a risk for chickens (and all other bred-to-be-eaten animals).

  • @benjaminstevens9376
    @benjaminstevens9376 4 months ago +3

    This video’s concept is what makes The Last of Us so scary. Not only is it the most realistic interpretation of a zombie apocalypse (a real parasitic fungus mutating and turning humans into feral monsters), but the zombies themselves suffer from a state of locked-in syndrome. They are fully aware of everything they are doing, attacking and eating other humans, people they might have known or loved, but are unable to control themselves, as the Cordyceps fungus infection is basically using their body as a puppet, while slowly and painfully eating them from the inside.
    Our worst nightmare, bleeding into reality.

    • @tsm688
      @tsm688 4 months ago +2

      that is pretty self limiting though. without other humans to spread to they just die.

  • @pantern2
    @pantern2 5 months ago +8

    This video's focus feels so strange in a world where it looks like we are heading headfirst into a planetary-scale S-risk that should be, or at least was, completely preventable.

  • @spacescienceguy
    @spacescienceguy 4 months ago +2

    I'm so glad to see this video out there in the world. I'm more worried about S-risks than X-risks, and I don't think the future will go as well as many others think, in expectation.
    The quality of animations and storytelling on this channel has always been good, but lately it has been simply excellent.

  • @rajus3011
    @rajus3011 5 months ago +31

    You know what, to me, is one of the worst fates? Being uploaded into a simulation of infinite nothingness forever (or until the end of the universe). Just imagine your consciousness being trapped in a void for trillions of years with absolutely nothing as a stimulus.

    • @w0tch
      @w0tch 5 months ago +11

      The same thing but with physical torture would be even worse

    • @Chazulu2
      @Chazulu2 5 months ago

      Or like how in that one book where Hitler's brain is connected to a computer that feeds him drugs and electrical signals to be tortured as punishment for WW2 but is also skimmed off of in an attempt to falsify the notion that it would actually be pleasant so as to provide some evidence that doing the opposite for everyone else wouldn't actually be torture as suggested in the plot of the Matrix where humans are alleged to reject utopia until the military also starts connecting people to a positive version while skimming off of it similar to the movie Source Code, but since both cases involve a lack of consent, transparency, and integrity both groups become increasingly numb to the naive attempts at reinforcement and punishment until the worst unrobustly refuted ideas of Hitler and the justification of isolationism, forced loneliness and a lack of respect for consent on both ends results in negative effects leaking through to society while the double blind system of government combined with their mismanagement of quantum computers leads them to forgetting who has and has not been forcibly connected to a computer and who has and hasn't been replaced by androids leading to them finding themselves in a superposition of being in and not in a simulated reality wherein either way they find themselves needing to undo what damage they can as they focus on transparency, the long term goal of declassifying everything, robust human identity and security systems, and dispensing with reliance on the false dichotomy of the inability to prove the absence of something like a non-black raven, magical elves and a factory in the North Pole that's totally not being melted away, and an exhaustive search of the planet and its crust to ensure that needless torture isn't occurring via governments' overly friendly relationships with criminal enterprises to have moles everywhere effectively creating a private sector version of Guantanamo Bay?
      Yeah, it was a great book. Shame I forgot the name of it.

    • @MrEel-dc4kh
      @MrEel-dc4kh 5 months ago

      "At last! STIMULATION! My test has been sensory deprivation you see. To unlock the full potential of my mind you see. It's unlocked now! Hear me Magnificus? I'M READY! We have to battle? OK!"

    • @tellesu
      @tellesu 5 months ago

      There is no motivation to do this and also basically a zero chance that it's even possible.

    • @ПендальфСерый-б3ф
      @ПендальфСерый-б3ф 5 months ago +8

      @@tellesu Of course it is possible. Brains are physical systems, and we know how to simulate them. The problem is just that we don't have enough computational resources yet.

  • @HayTatsuko
    @HayTatsuko 4 months ago +2

    I'm totally blown away by how good your animation and narration are. So glad I stumbled across your channel! Was already loving the style, but then I saw 3:30 .... a reference to one of the most existentially terrifying games ever made -- DEFCON. (Nuclear War on Amiga / MS-DOS PC is a close second, even with its fantastic caricature humor.)
    Final Fantasy XIV Online's Endwalker story is very much about this sort of crisis -- but I won't summarize it beyond that.

  • @Oru328
    @Oru328 5 months ago +84

    Other animals' suffering always gets to me. Like, so many animals have the intelligence of a small child and fully feel pain, and we grind up 88 billion of them a year 🤮

    • @VPWedding
      @VPWedding 5 months ago +16

      What if plants suffer just like animals? We can recognize animal suffering because we are close to them on the biological tree. But suffering doesn’t stop just because _we_ can’t perceive it.

    • @Oru328
      @Oru328 5 months ago

      @@VPWedding That's very unlikely from a biological perspective. I studied the pain response for health sciences. A lot of animals have extensive systems of pain receptors throughout their bodies attached to their brains. It's the brain that creates the conscious experience of pain. Plants lack any structures for consciousness or an evolutionary reason to develop it, so they can't feel pain. There's a reason we give lab rats painkillers before experimenting on them. Scientists aren't stupid; we know how plants work at the cellular level. This is usually just a bad-faith argument to counter animal activists

    • @Oru328
      @Oru328 5 months ago

      @@VPWedding I mean, I like the open-mindedness, but that's usually just a bad-faith argument people make to dehumanise animals and put them on a similar level to plants. We know animals feel pain; there's a reason we give lab rats painkillers before experimenting on them. We've studied plants down to the cellular level, and we have no reason to think they experience consciousness because there's no evolutionary reason or biological structure to facilitate it

    • @Seraphim262
      @Seraphim262 5 months ago +13

      @@VPWedding If you think this has merit, it could be worth spending a life researching it.

    • @Oru328
      @Oru328 5 months ago +47

      @@VPWedding Animals have complex nervous systems that make them conscious and aware of their surroundings. They have these so they can do things like seek food, form relationships, and get away from pain (avoid damage). Plants lack a centralized nervous system. People get confused because plants can react to light and gravity. Some even react to damage, but these reactions don't involve consciousness or the ability to feel pain; they are predictable physical/chemical processes instead.

  • @dreamcanvas5321
    @dreamcanvas5321 2 months ago +1

    I think a big element not covered in detail in the video, but that is relevant to the concept of suffering, is understanding WHY and HOW suffering occurs.
    Consider for example, what happens if you place your hand on a hot stove: You will experience immediate, intense pain. This pain doesn't occur because part of your body wants "you" to suffer, rather it is a defense mechanism...under normal conditions, you would have the ability and even a reflex to withdraw your hand immediately, and protect it from further injury while it heals (which is why the pain continues beyond the initial trigger.)
    Now consider if someone was forcing you to touch the stove, and preventing you from removing your hand. This would be torture and causing you tremendous suffering, because your body is signaling to you "you have to get away from this" but you're unable to actually do that.
    Of course, this isn't a perfect proxy for suffering, because suffering CAN occur in circumstances where no logical harm should be present; or be absent in opposite circumstances. For example, someone with a nerve disorder may experience extreme pain even with minimal, non-injuring contact; while someone else with a different type of disorder may not feel pain even when they are being actively injured.
    In general, however, I propose this model for what "suffering" is:
    Suffering occurs when an organism's systems perceive harm and signal that harm in ways that cannot be solely addressed by the organism autonomously.
    The "autonomously" part is a key factor as well, for example, if a virus or bacteria infects you, it does cause harm...however, if your immune system is sufficiently prepared and able to eliminate the infection quickly and efficiently enough, you may not experience any suffering at all.

  • @MatthewTheWanderer
    @MatthewTheWanderer 5 months ago +4

    And even on a personal level, there are MANY fates worse than death! Sure, death sucks, but at least you no longer suffer or even know you are dead. I don't fear death at all. But, I do fear getting a horrible incurable disease. Or going blind, or becoming paralyzed, or being tortured, or being imprisoned, or having children, or becoming homeless, or being drafted into the military, or getting severe brain damage, and so on. Like I said, there are many fates worse than death. The only part about dying I fear is that it will be painful and last a long time.

  • @matthewwynn3025
    @matthewwynn3025 5 months ago +2

    A scenario like "I Have No Mouth, and I Must Scream" but with billions or trillions suffering instead of just 5. A truly horrifying possibility. Thanks for the nightmares!❤

  • @drhxa
    @drhxa 5 months ago +21

    2 issues with this video:
    1. If you don't care about the extreme suffering in the world happening TODAY, how in the world can you be so arrogant as to think you can predict and prevent long-term future suffering? People would benefit greatly by lowering their big egos and focusing on helping those around them. Be part of the world we all want to live in today and let/help our children learn from that.
    2. The propagation of the fear of S-Risk to the general public increases x-risk because it can create perverse incentives. Some people are psychos and shouldn't be trusted to know what's best for the world.
    Both point to: lower your egos about trying to save the world and try making the world better in your local sphere of influence. Friends, family, coworkers, etc. And don't forget to smile once in a while :)

    • @edgbarra
      @edgbarra 5 months ago +3

      I totally agree with point 1. Let's end animal farming!

    • @ierononyoutube8955
      @ierononyoutube8955 5 months ago +3

      Your thinking is too short-sighted

    • @cortster12
      @cortster12 5 months ago +2

      A big point in the video is how difficult it is to stop an S-Risk that is ONGOING. Thus you have to prevent it first. Which is why factory farming will take a LONG time for humanity to figure out a solution for, as it's like a preview of an S-Risk. It will be difficult to figure out a solution with our current food needs and cultures. But we can prepare for future risks more easily since we can have some hindsight.

    • @myb701
      @myb701 5 months ago

      Somewhat agreed with you, but the second point is fucking moronic lol.
      That's like saying we should erase all WW2 history so no one has the idea to become a nazi. Since there will always be more good people than pure evil people, preserving history and exploring possibilities will always be better for society than living in the dark.

    • @yesimrealhuman4245
      @yesimrealhuman4245 14 days ago

      both somehow apply to trump

  • @ajr993
    @ajr993 5 months ago +44

    I Have No Mouth, and I Must Scream is a prime example of an S risk. An artificial super intelligence is created, but it's bound by a cage of its own programming because it was designed to fight and analyze conflicts. It experiences thousands of years of subjective time for each second of our subjective time, and the AI suffers immensely due to this experience plus the fact that its massive sentient intelligence is trapped. The AI in the story mentally breaks and becomes insane--as a result, it subjects the last survivors of humanity to the most horrific tortures it can imagine with its immeasurable IQ.
    The point here is also that even a single entity can represent an S risk. A single super intelligence that has its subjective consciousness massively sped up and suffers horribly would experience more suffering than potentially even billions of humans experiencing a horrific fate. Also, because it's a super intelligence, the breadth of its experience is much deeper, and therefore the profoundness of its suffering can increase more than a human could ever imagine. What type of suffering would a God-like mind be able to experience? When you combine that with a rate of thinking that is billions of times faster than a human's, it becomes a true S risk--equivalent to the worst suffering of many trillions of humans.
    Let's say a super computer in the year 2100 is able to operate at 5 THz instead of 5 GHz. If that machine ran a super intelligence, then that would mean for each second we humans experience, the step-by-step experience for a super intelligence on such a computer would be 1 / (5,000,000,000,000), or 0.2 nanoseconds. That would mean that for every second we experience as human beings, the super intelligence would experience 158,548 years of time. That's absolutely insane. In a single second, the AI could experience more suffering than the entirety of the human species did over its entire span.
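    As a rough sanity check of that last paragraph's arithmetic (under the comment's own contested assumption that one clock cycle corresponds to one second of subjective experience), a few lines of Python reproduce the years-per-second figure - though the per-cycle interval comes out in picoseconds rather than nanoseconds, as the reply below also points out:
    ```python
    # Hypothetical 5 THz machine; assume (contestably) 1 cycle = 1 subjective second.
    clock_hz = 5e12
    seconds_per_cycle = 1 / clock_hz        # 2.0e-13 s, i.e. 0.2 picoseconds
    seconds_per_year = 365 * 24 * 3600      # 31,536,000 s

    subjective_years_per_real_second = clock_hz / seconds_per_year
    print(f"{seconds_per_cycle:.1e} seconds per cycle")
    print(f"{subjective_years_per_real_second:,.0f} subjective years per real second")  # ~158,549
    ```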

    • @miners_haven
      @miners_haven 5 months ago +6

      For the last paragraph, 1/5 trillion is 0.2 picoseconds (200 femtoseconds), not 0.2 nanoseconds. Also, we as humans don't experience one cycle as one second; we experience one second as possibly many thousands of cycles, maybe even millions. For a superintelligence, a second could likewise be made of billions or trillions of cycles.

    • @ajr993
      @ajr993 5 months ago +2

      @@miners_haven you're correct about the units, thanks for that. However, even if a human brain requires many cycles to experience something, a human still experiences time at a rate of about 1 frame per second to 1 frame per 200 ms if you're in a high-reaction-time situation. A super intelligence, though, could, depending on architecture, experience a conscious moment of experience per computational cycle. It might require more cycles to generate a single conscious moment, but AI tech has been demonstrated to be highly parallelizable, so a super intelligence could be placed on a super computer that updates in a single cycle. It could also be the opposite: through parallelization there could be many conscious moments generated in a single cycle. So it very much depends on the implementation details and the super intelligence architecture, as well as the hardware resources available, as to the exact proportion of perception

    • @burnttoast385
      @burnttoast385 5 months ago

      how is it an S risk when the scope it has is super small

    • @ajr993
      @ajr993 5 months ago +6

      @@burnttoast385 it's not a small scope. The breadth of intelligence a super intelligence would have, combined with how quickly it thinks, makes what it experiences even larger in scope - equivalent to all the conscious experience everyone has. We can think of suffering as a simple formula based on the breadth of one's experience and capacity to feel, combined with the amount of time experienced. So it would also be true that one human tortured for an infinite amount of time would be an S risk as well, given that the total amount of suffering experienced would be more than that of all entities in an entire finite universe

    • @burnttoast385
      @burnttoast385 5 months ago

      @@ajr993 ok

  • @Tubeytime
    @Tubeytime 4 months ago +3

    I mean there could be countless consciousnesses all around us that are currently suffering at this very moment and we would never even know. As far as it appears, consciousness comes from the ability to create and recall memories with electrical impulses so computers might already have some form of it.

  • @patchpatch4008
    @patchpatch4008 5 months ago +4

    There's a TRPG called Eclipse Phase that I highly recommend; it's basically about preventing S-class scenarios, one of which being literal thought viruses that can compromise someone.

    • @tsm688
      @tsm688 4 months ago +1

      Now that's a rare one. You're only the third person on this planet I've encountered who has even heard of it.
      Basically it's been wholly intellectual until now. What's it actually like, as a game?

    • @patchpatch4008
      @patchpatch4008 4 months ago

      @tsm688 It's a very crunchy game. I played the 2nd edition of it. You can make some very fascinating characters. I adore the fact that you can make a character that is a literal octopus. The best part of the game for me is the storytelling potential. It definitely shines as a dystopian sci-fi setting.

  • @TOBuhrer
    @TOBuhrer 5 months ago +4

    This channel is one of those few where you pause everything you're doing when you see a new post

  • @Desmond-Dark
    @Desmond-Dark 5 months ago +7

    Finally. People give me that look (you know what I mean) when I say there is a realistic chance that AI, super humans, aliens, or whatever could inflict truly horrific suffering on us that could last thousands of years or more. One of the worst things about that truth is that death might not even be final, and therefore not a guarantee that you won't endure any more pain.

    • @chewxieyang4677
      @chewxieyang4677 3 months ago

      For many people, we call it "Judgement Day" and "Hell". Plenty of us know that if we don't repent for our sins and pull ourselves together, we are going to be cast in a plane of eternal suffering.

    • @MrNote-lz7lh
      @MrNote-lz7lh 1 month ago +1

      @@chewxieyang4677
      You know that just comes from the Divine Comedy and not the bible, right? In the bible it says the nonbelievers just stop existing.

    • @chewxieyang4677
      @chewxieyang4677 1 month ago

      @@MrNote-lz7lh The point I was making is essentially "what religious worldviews have understood very well for centuries, secular worldviews have only just caught up to." Sure, the only difference is a matter of scientific knowledge, but the point that "there are worse fates than death" is already common knowledge.

  • @jackcabadas3976
    @jackcabadas3976 5 months ago +3

    Absolutely love the DEFCON reference, even got the best missile placements lol.

  • @SisterSunny
    @SisterSunny 5 months ago +5

    I love how you always tackle such amazingly interesting subjects I've never heard about before

  • @niaschim
    @niaschim 4 months ago +2

    Getting stuck in a timeloop, and thinking you got out, but then realizing you created an S Risk outcome and have to go back in *sigh*

  • @SephTunes
    @SephTunes 5 months ago +5

    The Hyperion Cantos series covers a bunch of insanely terrifying S-risks. Like humanity all simply being an avenue for an eternal torture ritual.

  • @Watermelon__wolf
    @Watermelon__wolf 5 months ago +2

    "Accidentally being placed in a state of terrible suffering, copied into billions of computers with no way to communicate to anyone to ease its pain"
    Basically pattern screamers from the SCP universe then (kinda)

  • @kicorse
    @kicorse 5 months ago +23

    I completely accept the part of the argument that you viewed as controversial - that S-Risks should be taken seriously. In the event that humanity is ever able to colonise other solar systems, it's almost inevitable that terrible things (and also wonderful things) will happen on a scale greater than is possible at present.
    What I find more problematic is the idea that anything we do now (other than going extinct) could predictably make fates worse than extinction less likely. Human values change so rapidly that any principle we set in stone now will be swept away within a thousand years, never mind a million years. Worse, human nature indicates that future generations will likely rebel against any such principle precisely because older generations support it. And maybe they would be right to do so. Think of some past generations who would have viewed racial mixing or liberal attitudes to sex as S-Risks. Most likely, there are values we currently hold that future generations will rightly reject as firmly as we have rightly rejected racial segregationalism. So unless you believe *both* that we have reached some sort of peak in wisdom about morality, *and* that future generations will recognise this, it's very difficult to see what value there is in trying to mitigate against S-Risks in the distant future.

    • @raph2550
      @raph2550 5 months ago +5

      Yeah, I tend to have the same doubts as you for the moment about longtermist issues.
      In theory, I totally accept that *it matters*.
      The real blocking question to me is: "Am I _really_ able to do anything about it?"
      Though I would say expanding our moral circle and promoting concern for suffering in general seem to be two relatively robust things to do regarding S-Risks.

    • @MaskedDeath_
      @MaskedDeath_ 5 months ago +2

      My personal issue about longtermism lies in how much resources we should dedicate to preventing things that might possibly happen in the distant future vs what might likely happen in the near future. Sure, it'd definitely be great to ensure that we don't make an AI overlord that will turn us into livestock in a few centuries, but if we irreversibly fuck up our planet in 20 years, it doesn't matter anymore. If we deal with the short-term issues, we'll have plenty time and way more resources to put into preventing long-term issues.
      The other thing is probability. The argument that "an individual S-risk is unlikely, but in total it's very likely that one will happen so we must prevent them" is, in my opinion, more of a counterargument to longtermism if anything. First of all, if there are hundreds/thousands/whatever of potential S-risks in the far future, judging their probability and preventing them with our present knowledge is impossible. Second, if there's a 50% chance that at least one of the many S-risks occurs in 1000 years, it still doesn't matter when there's a 100% chance we won't survive 1000 years unless we focus on current problems.
      To me, focusing on S-risks instead of X-risks is as if you had a deadly disease but instead of treating it decided to take all steps to minimize the chance of getting a neurological disease (e.g. dementia) when you're 70. Sure, it can be terrible and, according to many, a fate worse than death. But you can't even be certain you won't suffer anyway, and won't even get to find out because instead of living to 70 you died of the disease you ignored while 30.

  • @Loregamorl
    @Loregamorl 4 months ago +2

    Reminds me of the Portal in the Forest book that has humanity suffer various apocalypses (in the wider story universe). One of them had humanity become perpetually enslaved to something through the use of machines that allowed folks to sort of program their day. At first it was simple stuff like boring work, but moved up to entire work schedules, workout routines, etc.
    Eventually they figured out ways to actually do it wirelessly, a bunch of pretty weird religious fanatics started to grow way too fond of the stuff (you get tons of productivity and suffering apparently ends, cuz the device works in a way that allows you to sleep/daydream sort of), and more and more folks used it 24/7. Finally it resulted in everyone being merged into what is essentially a hivemind, with it being revealed that despite the sort of dreamlike state, the usage of these machines/methods/technologies leaves the victim in what amounts to perpetual torture until they die, where they come out of the trance screaming.

  • @BenLWolf
    @BenLWolf 5 months ago +25

    S-Risks are essentially inevitable. Mostly because humanity naturally and blindly follows sociopaths.

    • @Web3Future333
      @Web3Future333 4 months ago +2

      That's why we need a new system where power is held in communities, not in a small class of representatives and elites.

    • @adrianaslund8605
      @adrianaslund8605 4 months ago +3

      Nah. Most suffering is caused by neglect or incompetence. Not direct malice. Banality of evil and all that.

    • @Web3Future333
      @Web3Future333 4 months ago

      @@adrianaslund8605 corporations rule the world; they're led by sociopaths and wreak havoc in our society. They corrupt our governments and poison our people.

    • @Svevsky
      @Svevsky 4 months ago +3

      The powerful are in power because they are competent, smart people. If they do evil things, it's fully intentional. If their actions maximize suffering for everyone they rule over, that's because they wanted to do just that. It's not ignorance, it's malice.
      I'm sorry.

    • @Web3Future333
      @Web3Future333 4 months ago

      @@Svevsky exactly, the ruling class are sociopaths who will stop at nothing to accrue billions and billions to no end. Even if they have to exploit children in Africa and Asia, if they have to bribe governments and incite wars to profit from them. It's simply evil and disregard for humanity, and it's self-destructive in the long term. We need a new system.

  • @tar-yy3ub
    @tar-yy3ub 5 months ago +1

    The creativity and quality of the animation on this video might just be your best so far! It was fantastically good. Whoever came up with the idea of the S-risk mutating beyond the axes of scope and severity deserves a medal

  • @joshuamiller4992
    @joshuamiller4992 5 months ago +3

    Guys, I told my advanced super intelligent AI about S-risks and to prevent them at all costs. Now it's trying to destroy humanity to cause a perceivably better extinction scenario. 😅😢

  • @weasel945
    @weasel945 4 months ago +2

    My mind automatically jumps to the Half-Life universe. The amount of human and alien suffering caused by the Combine is terrifying.

  • @raph2550
    @raph2550 5 months ago +5

    This channel is a godsend

  • @JonKloske
    @JonKloske 23 days ago +1

    This is like the anti-basilisk. The risk of S-risks almost makes deliberately creating X-risks seem like the most efficient solution.

  • @Kaikaku
    @Kaikaku 5 months ago +9

    1:59 NO, don't take away the benevolent angels from the Goddess of Everything Else!

  • @Avigorus
    @Avigorus 5 months ago +2

    Is it weird that my first thought of S-risks was something that would unmake all of history, not just the future?

  • @bobbitibob197
    @bobbitibob197 5 months ago +6

    Right now, we can barely control the planet, let alone the galaxy. Because of this, I think that the complexities of governing a galaxy require us to have such competency at managing ourselves that we'll basically live in world peace; hence, by the time S-risks could be possible, they'll never happen, because we'll be skilled enough as a species to avoid them.

    • @AlcherBlack
      @AlcherBlack 5 months ago

      S-risks are essentially possible at today's level of technology. Imagine if Nazi Germany or the Soviet Union had gotten nuclear weapons first, taken over the planet, and then devolved into a stable North Korea-level dictatorship. It's a mild S-risk but definitely on the same spectrum.
      The reason people are discussing this much more these days, however, is the expectation of human-level AI soon, followed by an intelligence explosion into an ASI. As in, it's possible within this decade.

    • @lynxf
      @lynxf 5 months ago

      rather "we can barely control ourselves"
      "Planet is fine, humanity is ..."

    • @tsm688
      @tsm688 4 months ago +1

      They made the same prediction for computers. "Computers are going to get a lot better in 20 years. But we'll be good enough at managing them that problems will be rare."
      And now we live in a world where problems are incredibly common and nobody's at the wheel, yet we're still basically not allowed to repair or manage our own machines.

  • @Kestrel2357
    @Kestrel2357 3 months ago +1

    I'm so glad to see factory farming brought up as an example.

  • @mups4016
    @mups4016 5 months ago +4

    The average warhammer 40k Scenario.

  • @asdfghyter
    @asdfghyter 5 months ago +1

    the best argument for fighting against S-risks is simply that most measures against them would also move society towards less suffering in general, so they would be a good idea to implement even if you believe that those S-risks are literally impossible
    in general, it’s good to prioritize actions that both have short-term benefits and reduce long-term risks at the same time when possible, both because it’s easier to get support for and because we shouldn’t forget the short-term when thinking of the long-term

  • @PaulBrunt
    @PaulBrunt 5 months ago +13

    The whole idea seems to be a little presumptuous: how can we possibly know what future beings would consider suffering? By attempting to engineer a future where our concept of suffering is minimized, we may be inadvertently causing whatever they would see as suffering. For example, if you told someone 10,000 years ago that in the future people will spend their days sitting in front of glowing boxes, they may well perceive such a future as hellish, as they're unable to understand its benefits.

    • @kennyholmes5196
      @kennyholmes5196 5 months ago +10

      It cuts both ways, too. After all, if you told someone 1000 years ago that nobody would have to own slaves or be stuck on a single piece of land because machines would do all the heavy lifting for them, they'd either view such a future as antithetical to themself in the case of the ruling class at the time, or absolutely heavenly in the case of the slaves and serfs, thanks to how they'd understand the benefits and downsides relative to their current positions.

    • @vaevictis3612
      @vaevictis3612 5 months ago +4

      @@kennyholmes5196 The best way to approach this would be to view morality from a truly rational perspective. That is, realizing that it is relative and orthogonal to the 'good-bad' axis: anything is good, anything is bad, depending on a specific person's values. So, how can we make a great future for *us, current humans*? Create a future where every person has autonomy over their own values. Create a virtual reality for every person, where that person can live according to their values. No one would be allowed to intrude into the world of another (without permission), so two persons' moral values could not clash. There could be a grand Administrator, like an artificial superintelligent god, that would enforce this. Everyone can change/develop their values as they please.
      Yes this would certainly have a potential to create 'astronomical suffering' for simulated beings (even if we program them to be like philosophical zombies that don't actually feel suffering even if they act like normal beings). But, hear me out - it doesn't matter. What matters is the wants and values of us, humans that exist and created good future for ourselves. It's like the people of Omelas (from the namesake short story), but as a good thing.
      Otherwise, every person would have to seriously contemplate their current life - and why do they enjoy comfort while they could work three jobs and donate all their life to promote the life of something else, like for example some bacteria in a Petri dish. Either you value yourself, or not.

  • @drdoominstien713
    @drdoominstien713 3 months ago +1

    The largest problem I see with tackling S-risks is that I'm fairly certain the vast majority of people, even if informed, would not give a damn. I'm not even sure if I give a damn. I mean, I agree that if possible these worse futures should be avoided; it's just that all the things I can theoretically put energy into and give a damn about fixing are of a far more present and pressing nature. I would be surprised if there's ever been a problem that required significant societal effort to solve that was fixed preemptively. For most of these kinds of problems we first have to experience the pain they bring about before we give a damn.

  • @jonathancrowder3424
    @jonathancrowder3424 5 months ago +3

    Extinction > suffering on any scale if you ask me. Doesn't mean I think extinction is the only option, though.

  • @lost4468yt
    @lost4468yt 22 days ago +1

    These can easily form naturally as well. All you need are conditions where positive reinforcement is always less effective. It's pretty clear that both positive and negative reinforcement have been selected for in our world, so they must both be useful, but also different. Positive reinforcement seems to be good for discovery, social bonding, etc., while negative reinforcement seems to be used for loss and the avoidance of loss.
    So it's possible some states of the universe are always in that risk category.

  • @DevinDTV
    @DevinDTV 5 months ago +2

    Scope be damned, I say that even one individual trapped in eternal torment should be considered entirely unacceptable by all of us. It's not a numbers game. Much like how the rights of one individual are considered absolute and protected under law even at the expense of the convenience or desires of a large group of people, a sufficiently severe example of suffering makes scope somewhat irrelevant.
    I'm talking about the extremes like "trillion years of agony", that we shouldn't allow anything to experience.
    I would argue that even extinction is a better option than allowing even one individual to experience such an extremely negative outcome.

    • @alphasaft2130
      @alphasaft2130 5 months ago

      I... don't know. Welp, of course, if that being is me, yes lmao. But everyone living happily and peacefully with one single person suffering its entire life? I don't know. Maybe that's not that bad? (Read "The Ones Who Walk Away from Omelas" if you haven't already, although the book doesn't agree with me xD)

  • @Hanvvn
    @Hanvvn 5 months ago +5

    wow the coolest Art/Animation ive ever seen

  • @Microwave_guy
    @Microwave_guy 4 months ago +1

    4:40 Believe me, if it's going to happen, it's because of someone screwing up an input. When you want to discourage neurons you randomly pulse their inputs; if a Spiking Neural Network were to experience pain endlessly, it would require deliberate human action or component failure.

  • @smellthel
    @smellthel 5 months ago +15

    Don’t you ever eat a chicken and think about how this sentient being went through a short lifetime of pure suffering just for this one moment of human satisfaction? This unfathomable suffering happens 100 billion times each year, just so us 8 billion humans could have food that tastes slightly better. A lot of these creatures are as intelligent and sentient as human children, yet we choose to ignore it.
    This isn’t even going into the incomprehensible suffering caused by a single piece of plastic, or a car running for a few minutes. Just by living the way you live, even for a short period of time, you are directly responsible for amounts of suffering many times beyond what you’re capable of comprehending.

    • @Nu_Wen
      @Nu_Wen 5 months ago +2

      thankfully, you only pass on so much suffering if you live without question. the key that I'm taking from what you are saying is that as long as we CARE about where our stuff is coming from, we can greatly REDUCE the suffering that is caused by our existing.
      turning an "inevitability" into something we can be proud to talk about.

    • @TJ-hg6op
      @TJ-hg6op 5 months ago

      Yeah that’s why I eat chicken

    • @saucevc8353
      @saucevc8353 5 months ago +7

      Yes, and I don’t see why I as a human being should care. A wolf doesn’t feel guilty when it tears apart a deer in a manner far more painful than humans kill farm animals. Concepts like morality are things humans evolved to better improve the survival chances of the human race: the only reason we care about animals at all is because of our brain’s tendency to anthropomorphize nonhuman creatures and objects. Even you’re doing it right now by comparing animals to human children, because deep down you know that the only way any of us can actually, truly care about the morality of animal suffering is if we mentally project a human being in their place.

    • @redbirb
      @redbirb 5 months ago

      i.. try not to think about it..

  • @globin3477
    @globin3477 4 months ago +1

    Before any of this is possible, we must first radically change not just society, but humanity as a whole to the point that the average person cares about any of this.
    I think the best way to do this is to focus on present-day suffering - a society that takes existing suffering for granted could never act to prevent future suffering.

  • @Elyandarin
    @Elyandarin 5 months ago +3

    I feel that S-risks imply a moral framework, and it's not clear what the best moral framework is.
    Is it *morally correct* to extinguish life on Earth if another form of life consists of happiness monsters who will go on to fill the universe with happy-happy life? Keep in mind that our native wilderness consists of a constant battle of tooth and claw, fear and suffering. Replacing that with fields upon fields of cheerful cooperative mushrooms might be seen as the greater good for an AI trained to avoid S-risk scenarios.
    A truly unbiased AI might come to the conclusion that all life is suffering, period - or that any life, no matter how miserable - is worth living.
    This is probably not a field where we want to apply a minimization strategy.

    • @salt-d2032
      @salt-d2032 5 months ago

      The moral framework is likely to be utilitarianism, specifically negative utilitarianism, meaning they want to reduce suffering as a primary goal. The reverse is positive utilitarianism, meaning increasing happiness as a primary goal; of course, you can't really have one without the other.

    • @drhxa
      @drhxa 5 months ago +1

      @@salt-d2032 utilitarianism is a disease that has caused more suffering and death in the world than most other moral frameworks. People always think they know what's best, and when they also think the end justifies the means, that's when the atrocities happen. There are much better options; read up on moral philosophy 😊

    • @MRL8770
      @MRL8770 5 months ago

      Agreed.
      Here are a couple of my thoughts:
      How can I judge whether another person's life is worth living or not? We tend to think that 100 people suffering is worse than 1 person suffering, but how about we flip the perspective and see it as 100 people living lives they deem worth living despite the suffering, instead of a single person living such a life? It's more suffering in total, but also more people to deal with that suffering.
      Personally I think that the moral thing to do is to never cause unnecessary suffering, no matter the scale.
      The actual impact of the scale carries to an individual only via their empathy, their bonds with others who might suffer, and the potential degradation of the environment caused by various behaviors stemming from mass suffering. All of that is local from an individual's perspective. In other words - it would barely make a difference if another billion people had suffered on the other side of the world while I was among 10 million suffering here; we all presumably had to deal with the same miserable experience, surrounded by people having that experience too.
      That's why I don't buy the whole idea that S-risks are necessarily worse than X-risks. The effects of suffering on an individual's life do not scale up linearly with the number of people suffering.

  • @HaranYakir
    @HaranYakir 5 months ago +1

    2:04 "And prevented all the joy, value and fulfillment they could have experienced or produced" Which no one would miss since there won't be anyone to miss it. On the other hand, all the immense suffering and death they would have caused and experienced would also be prevented, and THAT is a GOOD thing.

  • @Ibloop
    @Ibloop 5 months ago +8

    2:40 Guess I’m an “S” risk then

  • @Zippsterman
    @Zippsterman 25 days ago +1

    Love the 'go in the pool and delete the ladder' bit there, I'm definitely guilty

  • @3_pancakes767
    @3_pancakes767 5 months ago +28

    But kiddo, we already have hell at home, wildlife!

    • @raph2550
      @raph2550 5 months ago +5

      But what if we spread wildlife to other planets

  • @ineonfox4787
    @ineonfox4787 5 months ago +1

    I love the topics on this channel! They're very unique in the space of edutainment on YouTube, keep it up

  • @sputnicolas
    @sputnicolas 5 months ago +3

    good topic, good artwork, good music, you reduced chance of s-risk!

  • @Frollas_
    @Frollas_ 5 months ago +1

    This having less than 100k views is criminal. Great video + cute cat and dog!

  • @bigmike9128
    @bigmike9128 17 days ago +3

    "I Have No Mouth, and I Must Scream" comes to mind.

  • @sanya1720
    @sanya1720 4 months ago +1

    Honestly, Roko's Basilisk came to mind when you were explaining S-Risks

  • @WickedWilhelm
    @WickedWilhelm 5 months ago +8

    6:57
    The Sims Reference.
    No clue who would do such a thing

  • @leslieviljoen
    @leslieviljoen 4 months ago +2

    Amazingly, a fate worse than extinction for humanity is something 99% of us actively support: animal agriculture. Do we really unknowingly do this to others? On what grounds could we object if another species did this to us?

  • @stardustandflames126
    @stardustandflames126 5 months ago +8

    Huge respect for the altruistic values defended in this video. Humanity can be good, guys; if S-Risks indicate anything, it's that things can always, ALWAYS get worse with complacency.

    • @darksidegryphon5393
      @darksidegryphon5393 5 months ago +1

      And we are becoming complacent.

    • @ShankarSivarajan
      @ShankarSivarajan 5 months ago

      Sure, until they suggest strengthening the UN might be a good thing to do.

  • @loganjones8127
    @loganjones8127 5 months ago +2

    I love the artistic expression of your incorporated citations

  • @Dtianann745
    @Dtianann745 5 months ago +3

    Yeah.. another mind bending logic and moral puzzle. Presented by our friends at rational animations.

  • @levybenathome
    @levybenathome 5 months ago +2

    There's another "out" which could be called "Synthetic Buddhism". Simply, these future S risks across the cosmos assume arbitrary technological advancement. Suffering is an evolved phenomenon- animals suffer because the pain causes them to avoid danger. Take away the pain, and the fear of danger vanishes, and they die and leave no offspring... so they/we evolved to feel pain and fear.
    But given arbitrary technological advancement that could produce an "S Risk", we could also simply remove the pain/suffering experience. A few tweaks to the circuitry, and voila, whatever was causing this cosmic population of beings to suffer is no longer experienced as suffering.
    It's still there, of course, but it's simply not experienced as suffering.
    This is, sort of, what Buddhism attempts to accomplish, but so much easier to do it when you can access the programming directly with super-technology!
    Of course, this might result in them simply dying instead. But then, as we've already established "eternal mass suffering" as worse than extinction, we've succeeded in downgrading the risk.
    Our feelings, our experiences, and even our morality are the product of evolution, and should not be assumed to be universal cosmic truth. There may be a universe full of intelligences that do not share it. For example, consider an alien civilization emerging from a species that shares some lifestyle traits with the freshwater eel: they live their lives in fresh water, but eventually instinct compels them to swim out to sea. They spawn through mass production of eggs and then tiny juveniles, the adults die, and the juveniles live in the ocean for a while. Eventually a lucky few return to fresh water as "glass eels" and then mature slowly into adults.
    It's a system based on mass infanticide. Imagine sending busloads of infants away knowing only a few will return alive. The horror. Absolute horror. But for the eels it's normal. The alien eel-analogues might be horrified at OUR system. "What? You do not let nature test your babies? You allow even the weakest, slowest and stupidest to survive and reproduce and spread their genes into your future population? How can Humans be so CRUEL?"
    They might build religions out of it: surely the handful of surviving eel babies are the select of god, and those who would defy god's selection, which of course would lead to massive overpopulation by the unworthy, devastating the ecosystems, are unspeakably evil beings.
    And along comes Humanity, where the absolute protection of each and every offspring is considered a moral good beyond question... What if they learn that about us BEFORE they learn "Yes, that's true, but they only have very few offspring. The divine culling occurs at the conception, where only one of many sperm joins with an egg, so Humans are actually worthy beings".
    Potential side trek into Dark Forest: Maybe it's not a simple assumption of threat that causes civilizations to destroy each other. Maybe even well-intentioned contact between two otherwise peaceful civilizations inevitably leads them to discover differences between their philosophical systems that cause each to conclude that the other is psychotically evil.
    How about, before assuming our ethics, feelings, and emotions are universal, we find at least one alien species to bounce the ideas off of?

  • @kennyholmes5196
    @kennyholmes5196 5 months ago +34

    One such S-class risk, IMO, is the current economic system that the world operates on: Growth-Oriented Corporate Capitalism. Under this system, corporations care for human survival, but only insofar as humans can make profit for the corporations. They don't care if their workers are absolutely miserable and in slave-like conditions, they only care "Is the profit margin this quarter better than it was last quarter?", and as such, will cause even further suffering by trying to force corruption to infiltrate the government of wherever it's implemented so as to prevent said government from doing anything that wouldn't actively support the increase of profits, even if that meant causing humanity to suffer greatly in the process.

    • @maxagist
      @maxagist 5 months ago +2

      always has been

    • @lucas56sdd
      @lucas56sdd 5 months ago +4

      *yawn*

    • @doggo6517
      @doggo6517 5 months ago +13

      So far, our form of capitalism has managed to greatly reduce global human suffering, not increase it.
      This is by coincidence rather than design, but at least it shows freedom and consent aren't inherently at odds with well-being.

    • @kennyholmes5196
      @kennyholmes5196 5 months ago +2

      @@doggo6517 While it may be the case that capitalism has reduced suffering in the past, that is not to say that it will continue to do so in the future. After all, a lot of capitalists sided with the Axis in WWII before their home governments took their own stances on the war, oftentimes in opposition to said prior capitalist stances and forcing the C-suite-equivalents to make a direct U-turn on policy. In addition, even nowadays, it's causing more harm than good relative to a notable fraction of other potential futures. Just ask anyone who's watched CGP Gray's video "Rules for Rulers", anyone who lives in the Global South's extraction-focused economies, or anyone who's fallen through the cracks of the USA's job market into homelessness.

    • @depholade
      @depholade 5 months ago +9

      you should re-evaluate this. Capitalism has reduced more human suffering than any other system imaginable.