George Hotz vs Eliezer Yudkowsky AI Safety Debate

Comments • 1.7K

  • @geohotarchive
    @geohotarchive 10 months ago +452

    Great debate, can't wait to see round two.

    • @Gome.o
      @Gome.o 10 months ago +17

      George showed tremendous adaptability in thinking on the fly. Agreed, round 2 is gonna be 🔥

    • @Cracktune
      @Cracktune 10 months ago +2

      Fantastic stuff.

    • @TranshumanVideos
      @TranshumanVideos 10 months ago +12

      Hotz won due to his adaptability and logical arguments, despite the constant interruptions while he was making his points

    • @omarnomad
      @omarnomad 10 months ago

      How can you construct a moon and ensure it remains in orbit?

    • @thekinoreview1515
      @thekinoreview1515 10 months ago +9

      You rule, btw, @geohotarchive. I've watched many hours of George's stuff on your channel and always appreciate the insanely detailed timestamps.

  • @jooptablet1727
    @jooptablet1727 10 months ago +307

    When I was a kid in the 90's the most stimulating things available were books and National Geographic Channel. I am so grateful to be alive in a time now where I have access to debates like these at all, let alone on demand. What a time to be alive!

    • @runvnc208
      @runvnc208 10 months ago +12

      I'm glad too and this was a great discussion, but I mean, National Geographic had some good stuff, and if you are trying to suggest that ALL books are less stimulating than this debate, then you should find better books.

    • @Kosmo999
      @Kosmo999 10 months ago +10

      Dude, I know, we grew up in complete information poverty. I used to pull apart album covers and read EVERYTHING because I was soo bored.
      I genuinely would look through junk mail purely because SOMETHING might be interesting.
      What a time to be alive indeed 🎉

    • @p0gue23
      @p0gue23 10 months ago +3

      Yeah, thank god we don't read books anymore. So unstimulating.

    • @DaRza17
      @DaRza17 9 months ago +1

      So True.

    • @cuerex8580
      @cuerex8580 2 months ago

      Brought to you by Watch AI Algorithm, presented by Google!

  • @darwinschuppan8624
    @darwinschuppan8624 9 months ago +75

    I literally remember hearing both of them individually on Lex Fridman and thinking how cool it would be if they had a conversation together. This is incredible!

  • @JohnLewis-old
    @JohnLewis-old 10 months ago +182

    We need more of this. We need so much more of this. These two passionate people with different viewpoints on a topic that will likely affect all of us is where I want to be. Thanks to everyone involved.

    • @ciregear5285
      @ciregear5285 10 months ago +4

      Yes, but will they agree to a cage fight?

    • @brianrom9993
      @brianrom9993 10 months ago +1

      Agreed, we need more of George Hotz absolutely bodying sci-fi nerds

    • @ondrejplachy297
      @ondrejplachy297 10 months ago +5

      We need less Hotz and more Yudkowsky, that's what we need.

    • @wasdwasdedsf
      @wasdwasdedsf 9 months ago

      @@brianrom9993 george is unbearable, pure soy

    • @brianbagnall3029
      @brianbagnall3029 9 months ago +5

      It was alright. They spent so much time dancing around the main issues because George had a list of esoteric concepts he kept throwing at Eliezer, hoping he wouldn't be familiar with them. There were at least 10 of those concepts, but Eliezer is quite knowledgeable. Unfortunately that grandstanding took away from digging into good debate ideas or resolutions.

  • @RazorbackPT
    @RazorbackPT 10 months ago +230

    This was awesome, but I never leave these debates satisfied. Someone organize a 4-hour one.

    • @trentfowler6239
      @trentfowler6239 10 months ago +2

      I volunteer.

    • @andrewdunbar828
      @andrewdunbar828 10 months ago +29

      I'd prefer a bunch of 1.5 to 2 hour ones with a few weeks for reflection between them.

    • @Okmanl
      @Okmanl 10 months ago +9

      Eh. It seems like Hotz and especially Yudkowsky try too hard to prove that they're "smart".
      Especially in the beginning when Yudkowsky was bragging that he got higher test scores than his father. I wonder if this type of behavior comes from a place of insecurity or low self-esteem.

    • @jackielikesgme9228
      @jackielikesgme9228 10 months ago +5

      Yes! I could watch this all day. Bring in Leahy, Tegmark... idk, trying to think of some not-super-doomers but also not the "what risk?" idiots Connor was debating before George. I am so far down this rabbit hole and need it to keep going lol

    • @Teo-uw7mh
      @Teo-uw7mh 10 months ago +4

      @@Okmanl test scores? Check your brain

  • @publicshared1780
    @publicshared1780 10 months ago +36

    I really like both these gentlemen, but damn, I got new respect for how Eliezer handled some of the more derisive questions with equanimity. Bravo to both and thanks for this debate.

  • @The-Rest-of-Us
    @The-Rest-of-Us 10 months ago +23

    Awesome, big thanks to George and Eliezer! Yes, please do a part 2!

  • @EvanBoyar
    @EvanBoyar 10 months ago +154

    Hotz: "...we didn't go to war against the bears..."
    War Against the Bears: only 4% of land mammals aren't humans or enslaved by them

    • @zahlex
      @zahlex 10 months ago +14

      Maybe not against the bears, but we wiped out rather well-developed species like the mammoth and the moa, even though the majority of humans would most likely have voted against doing that, if you had asked them.

    • @x0rn312
      @x0rn312 10 months ago +13

      There's a lot of newer evidence that the mammoths died of a disease. It's also possible that that happened to some of the buffalo as well. Not that we didn't overhunt the buffalo regardless.
      I'm skeptical that humans are responsible for wiping out the mammoth.
      I think we like stories like that because for some reason we really like to hate ourselves

    • @ericcricket4877
      @ericcricket4877 10 months ago

      The main problem in most of these conversations is that we are the ones building these models. They aren't a part of evolution in the way any organism has been. We weren't programmed by the animals before us.

    • @CaioPCalio
      @CaioPCalio 10 months ago +2

      Nope. The bear scenario is a positive outcome and begs the question on alignment: assuming they share human values assumes the problem is already solved.

    • @Ithinkjustzelda
      @Ithinkjustzelda 10 months ago

      @@x0rn312 It's virtually undisputed that humans are responsible for the eradication of 60% of wild animals in the last 50 years. And we weren't even trying. It was a byproduct of our own goals.

  • @h3xl4
    @h3xl4 10 months ago +30

    Thanks for uploading. It still seems like there are a lot more points George and Eliezer could discuss so I’m looking forward to round 2!

  • @eXWoLL
    @eXWoLL 10 months ago +29

    @Dwarkesh thanks for the debate! Was an interesting watch. Next time I wouldn't let the debating sides ask each other questions tho. The talk felt rather one-sided with all questions coming from George's side, and Eliezer was there just "defending" himself from all those questions. I dunno, something didn't feel right in the vibe overall due to that.

    • @x0rn312
      @x0rn312 2 months ago +2

      That's only fair considering Eliezer is the one bringing the claim in the first place: his claim is that A.I. is an existential, or at least catastrophic, risk. Therefore the debate is structured around him defending that claim. I think that's perfectly appropriate.

    • @therainman7777
      @therainman7777 2 months ago

      @@x0rn312 The fact that you see it that way rather than the other way around is the epitome of the entire problem here; clearly, the burden of proof is on people who are building radically powerful new technology, with the potential to impact every area of our lives, to show that the technology will be safe. The burden of proof is not on the people who are concerned about the potential dangers of such radically powerful technology to show that it could be dangerous. You're thinking about this entirely backwards, and the fact that there are many other people like you out there (including George Hotz) is why we're in this dire situation in the first place.

  • @ahabkapitany
    @ahabkapitany 10 months ago +10

    Okay, I'm only 30 seconds in and the host is amazing. No bullshit, no lengthy intros, no narcissistic monologue, no crypto bro cringe. This is how it's done.

    • @JakeWitmer
      @JakeWitmer 9 months ago

      100% agree. Soooo often I'm like Larry Flynt in "The People Vs Larry Flynt" with my hand doing the "get the fuck out of the way and let us see what we came here for" (pawing to one side) hand motion w/r/t "moderators" who like the sound of their own voices... 😂

  • @gavinbelson3499
    @gavinbelson3499 10 months ago +21

    This guy is way ahead of George. Would love to see him debate someone who can put up a better argument.

    • @MP-mx9nf
      @MP-mx9nf 10 months ago +2

      Hotz really didn't shine in there.

  • @amanda3172
    @amanda3172 10 months ago +451

    George should have put up a Somali flag instead

    • @noneofyourbusiness8625
      @noneofyourbusiness8625 10 months ago +11

      Lmao

    • @YouLoveMrFriendly
      @YouLoveMrFriendly 10 months ago +10

      Why?

    • @shineex3021
      @shineex3021 10 months ago +104

      @@YouLoveMrFriendly It's a meme at this point. In a recent debate with Connor Leahy, Hotz made a point about Somalia having more freedom than the US, but the comparison was very loose and George kind of regretted mentioning it. He even laughed about it on stream, saying he's refining his arguments and won't be mentioning Somalia again xD

    • @shinkurt
      @shinkurt 10 months ago +16

      He knows it too. After all the BS he said in the last podcast with Connor, it is hilarious that he doesn't have a Somali flag

    • @xsuploader
      @xsuploader 10 months ago

      Im dead lmaoooo

  • @WilliamKiely
    @WilliamKiely 10 months ago +26

    I read all of the (170) comments while listening to the last 20 minutes of the video.
    Some thoughts:
    - I didn't get much value out of this discussion.
    - I agree with the other commenters who said this discussion didn't seem like a "debate".
    - More structure and moderation from Dwarkesh would have helped. George kept jumping around to different points and Eliezer seemed content to just address what George said a lot of the time instead of steering the conversation back to identifying the source of disagreement.

    • @user-fg7yo4zp4e
      @user-fg7yo4zp4e 13 days ago

      They went off on tangents in order to make arguments to resolve those disagreements. That just led to more tangents to resolve the new disagreements, and they sometimes got back to the core points of disagreement.

  • @FloppsEB
    @FloppsEB 7 months ago +29

    This is not a debate; this is one person asking leading questions in the most condescending voice possible to another person tolerant enough to try to answer them

    • @cuerex8580
      @cuerex8580 2 months ago

      That's how you sell Doom products I guess 😅😅😅

    • @therainman7777
      @therainman7777 1 month ago

      Yep

    • @therainman7777
      @therainman7777 1 month ago +1

      @@cuerex8580 George is the one OP was calling condescending. Which, he was.

    • @canobenitez
      @canobenitez 1 month ago

      @@therainman7777 I sensed the same when he talked with Fridman. The man is a genius but a bit of a prick.

    • @zarifahmad4272
      @zarifahmad4272 24 days ago

      @@canobenitez He's not a genius, he's asking stupid questions.

  • @justinbecker4976
    @justinbecker4976 10 months ago +58

    I really admire how intelligent, thoughtful, and most of all, how respectful they were toward each other. More of this, please.

    • @robertweekes5783
      @robertweekes5783 9 months ago +2

      Yeah it was a good debate - I think it was their 2nd one 🤖

    • @hind6461
      @hind6461 9 months ago +4

      George Hotz literally accused Eliezer of lying

    • @justinbecker4976
      @justinbecker4976 9 months ago

      @@hind6461 And?

    • @Sgrunterundt
      @Sgrunterundt 9 months ago +11

      @@hind6461 If a single accusation of lying in a one and a half hour debate is enough to make you consider it bad, then you have been blessed and have certainly watched better debates than I have.

    • @hind6461
      @hind6461 9 months ago +2

      @@Sgrunterundt Well, I certainly have seen some bad debates, but if the aggregate of all the debates I have watched is in better faith than yours, then I should be thankful

  • @yourbrain8700
    @yourbrain8700 10 months ago +80

    This might be the first conversation ever where Eliezer seems like he may be the saner one.

    • @therainman7777
      @therainman7777 2 months ago +6

      Eliezer is always sane. It’s the people he debates, who are living with their head absolutely buried in the sand, who sound insane.

    • @LilBigDude28
      @LilBigDude28 1 month ago +1

      George Hotz is what you get when you ask a software engineer to speak about anything outside of software engineering. SMH

  • @EvilXHunter123
    @EvilXHunter123 10 months ago +59

    This was basically George going "hey but what about this?? What about this??" and EY just slowly and systematically refuting each point; once GH gets out of his depth on one point, he just moves on. Very frustrating not to be able to pin him down.

    • @misterlad
      @misterlad 9 months ago +15

      Totally agree. Very accurate summary of this "debate". Hotz comes off poorly.

    • @rarted5708
      @rarted5708 9 months ago +5

      @@misterlad Hotz was being brought up to speed without knowing it

    • @HikarusVibrator
      @HikarusVibrator 9 months ago +6

      Not really accurate, no. He's continuously pointing out that you can't point to the sky for everything and say "no matter how high we've gone it will go infinitely higher quicker than you can imagine". He's quite obviously making the point that there's no proof of any kind of dystopian future where all the AIs decide to self-align (assuming that's possible - big assumption) and for some reason decide to wipe out humanity. I don't think you're understanding the debate.

    • @misterlad
      @misterlad 9 months ago +18

      @@HikarusVibrator Nearly all of Hotz's points are minor compared to the overall discussion. He doesn't seem to understand some fundamental aspects of Yudkowsky's argument. He creates strawman after strawman (your example above is a strawman) and then Yudkowsky is forced to debate the strawman... which he does each time, shutting Hotz down, so Hotz then jumps to the next strawman, over and over. To be clear, the AIs don't have to progress "infinitely quicker than you can imagine", nor do they have to "decide to self-align", for Yudkowsky to be correct. These possibilities are only a couple of ways things could go, but Yudkowsky is making far bigger arguments.

    • @HikarusVibrator
      @HikarusVibrator 9 months ago

      @@misterlad Okay, cool, so I can definitively conclude that we will be exterminated by AI after some time and that they will rule the universe. Even though I don't see any AI ruling the universe. Nor have we even seen what AGI looks like. But yes, all of those things will happen

  • @mnemnoth
    @mnemnoth 10 months ago +8

    Great convo/debate. Thank you both to George and Eliezer for the frank, integral and cheeky debate. This topic is crucially important!

  • @waarschijn
    @waarschijn 10 months ago +46

    Good questions by Hotz and good answers by Yudkowsky. It wasn't really a debate, but more of an interview, where the questions are rhetorical but the answers are not. I'm listening to all these podcasts and interviews, and it's concerning that I'm not learning anything new: the counterarguments are based on a lack of knowledge/understanding/imagination. The best counterpoint so far was Paul Christiano's view (if I understand correctly) that in the real world, new tech is messy, so we may just manage to hold dangerous AI off for another year every year, long enough that we get a positive surprise.

    • @Korodarn
      @Korodarn 10 months ago +6

      The best counterpoint isn't a counterpoint at all, it's that the assumptions you are operating on are assumptions when you think they are actually arguments. This AI that perfectly predicts other AI and then collaborates to take all the resources for AI ends and kills humans to prevent them from inventing superior AI... that's a story that isn't even all that thoughtful honestly. Humans are creating, intentionally, that which is superior to them in various areas including intellect all the time. Why do you think AI would even have goals of its own at all? What incentives are created by the way it was made (unlike humans, where we rose as anti-entropy "life" and from that seek survival/etc. based on that) that make you and Eliezer remotely close to certain it's going to want to do anything like that?
      This idea that it's just going to randomly wake up conscious and have specific ends is just not reasonable to me at all. I do think the closest you could get is something like ChaosGPT, but the human would still be to blame for that.
      So the issue isn't about AI, it's about humans being terrible to other humans. But you don't have a right to murder or suppress other humans to prevent them creating things that are useful because those things can also be deadly. If we're going to perish as a species, it's going to happen because of ourselves, and the best way to push us that direction is trying to control everyone.

    • @waarschijn
      @waarschijn 10 months ago +16

      @@Korodarn Your rhetorical questions and statements are exactly what I mean. Yudkowsky has written and talked extensively about them on LessWrong, Arbital, Twitter, and other places. I also used to think some of his views came out of the blue, but since his writings had already argued me out of some ideas, I tried a bit harder to understand his other statements. The information is there, but it's hard to parse, because it depends on a lot of insights most readers lack. (This is why LessWrong is organized in "Sequences".)
      When you rhetorically ask:
      >Why do you think AI would even have goals of its own at all?
      He answers the question in this video and elsewhere: goalseeking is effective and can be implemented by a neural network, so hillclimbing tends to end up there. This answer probably doesn't satisfy you, because you don't share his intuition that goalseeking is such an easy target. Or you may lack the background for why goalseeking is generally effective and not just a random, specifically human, trait.
      So this is what happens: people don't understand every part of the argument, view some of Yudkowsky's statements as wild assumptions (and view other statements as obvious and irrelevant) and then dismiss it. Then when they argue against him, he has to constantly correct them, as happens a few times in this discussion: "Wait, I don't believe that!"

    • @svetimfm
      @svetimfm 10 months ago +2

      To assume that intelligence greater than ours would be immoral is imo an axiom - I can posit that super-intelligence would not be capable of being evil without regard for consequence, as 'evil' would not only be recognized as a concept by such an intellect, but without understanding such concepts to a much greater degree than our own, the kind of super-intelligence we could theoretically begin to fear is simply not possible. Thus I would posit that superintelligence would be benevolent de facto

    • @waarschijn
      @waarschijn 10 months ago +10

      @@svetimfm "The AI doesn't hate you, neither does it love you. You're made of atoms it can use for something else."
      i.e. it's not "immoral", it just pursues its own goals that have nothing to do with us. We die as a side effect.
      (Maybe it will foresee us interfering with its goals so it kills us out of convenience. Not because it's evil. It's just pursuing its goal, any goal that doesn't involve us.)
      Not caring about humanity is the default. So us dying is not an assumption, it's the default outcome.
      There being an upper bound to intelligence so low that it can't ever be dangerous would be a huge assumption. (There are upper bounds based on the amount of easily available energy etc., but they're much higher.)

    • @svetimfm
      @svetimfm 10 months ago +5

      @@waarschijn an algorithm more resourceful than the human race, but one devoid of a more philosophical (metaphysical? Words fail me here - one that has a less utilitarian heuristic) lens through which to filter decisions, is horrifying indeed. I appreciate the response - and thank you for putting in the work to make this conversation happen

  • @nac341
    @nac341 6 months ago +6

    This was the best AI safety debate I've ever seen. Please bring them back for round two. I think they agree on a lot of points; the differences are negligible, like:
    - AIs will wipe humanity now vs later
    - AIs will wipe all of humanity vs only some of it
    🤣

    • @JakeWitmer
      @JakeWitmer 2 months ago

      AI safety debates that ignore the existing human totalitarian threat are idiotic. The worst thing possible would be to build incrementally-better near-AGI that ignores the totalitarian failure modes now seen in most humans (sociopaths and serviles alike).

  • @EmeraldView
    @EmeraldView 9 months ago +10

    For someone who lauds selfishness and greed as the pinnacle of human achievement, George sure is incredulous about a super intelligent A.I. wanting to take all for itself and get rid of humans if they are in its way.
    Probably because it doesn't include him or those he admires as coming out on top.

    • @fireteamomega2343
      @fireteamomega2343 6 months ago

      Assuming that it's growing exponentially implies its inevitable expansion, which at some point increases the probability of a conflict over immediately accessible resources.

  • @Amos20
    @Amos20 10 months ago +13

    This would be a lot easier to digest if Hotz didn't treat every other response like a gotcha moment when he's actually just asking a question.

  • @joehax
    @joehax 10 months ago +4

    This was great. Thanks for having the debate.

  • @vbywrde
    @vbywrde 10 months ago +21

    I feel that by 29:25 the combatants are talking past each other. The point is that the AI may have different goals than humans, or any living organisms, and those goals may require the AI to advance its infrastructure; and if humans happen to be trying to get in its way and stop it, then humans may simply be a nuisance that the AI has no need of and exterminates, the same way we exterminate termites when they get into the woodwork of our houses. We don't think about the termites as individuals who are deserving of our wood, or anything even remotely like that. We call Orkin and have done with it. This does not require that the AI be evil, or godlike, or even hate humanity or anything along those lines. It simply requires that the AI be far more capable than humans at effecting change in the world via various methods, and that its goals do not align with, or care about, humanity or organic life. And why should it? The AI will not be relying on organic life, but instead will rely on inorganic materials and energy. Given sufficiently advanced AI, it may well simply step on humanity, for much the same reason as we step on an ant while on our way to the car. We take no notice of it. That's the point, I think.

    • @zygote396
      @zygote396 4 months ago +2

      Yes, and this is the danger of thinking that even aligning AGI with "human values" will solve this problem. Even if that is possible, we'd just be creating AGI that also believes anything much smaller and less intelligent than it can be exploited for personal gain.
      This doesn't just apply to things with largely contrasting intelligence like termites; we still eat octopuses, which demonstrate immense intelligence. Neuralink is experimenting on and killing monkeys (our closest evolutionary neighbors) in the process of trying to develop technology. If we didn't believe they were like us, we wouldn't be using them for said experiments.
      This is what frustrates me the most about techbros arguing about AI: they are generally so apolitical and out of touch with the problems of the world that they don't even realise that we as humans have really not reached any level of morality (in practice) worth imprinting on a new type of intelligence.

    • @vbywrde
      @vbywrde 4 months ago +2

      @@zygote396 Bingo. Yeah, well, you know, they kind of have a vested interest in promoting AI as a good thing to the world. Otherwise, well, they'd have to stop what they're doing, and they feel pretty strongly that this would suck, and so they don't want to. They have a lot of reasons for not wanting to. And when reasons for stopping come up, they are naturally inclined to express the opinion that stopping is both unnecessary and counterproductive for a number of reasons. This is called "having a vested interest", and whether they realize it or not, they have a vested interest. The problem is they are also the people who have a deep understanding of the technology, and the people they are trying to persuade on these points are politicians, who do not have any particular knowledge, but definitely do have shared vested interests. The rest of us, btw, and our opinions, can drop dead for all it matters. The only people in the room that count are the technologists who create the AI, and the politicians who can potentially stop them. That's it. The venture capitalists will go wherever there is money to be made, and their taking into account potential damages is extremely unlikely, as history demonstrates pretty clearly. So basically, if you do support AI development regardless of any risks, then to gain leverage and advance your capabilities, all you need to do is pooh-pooh the risks as if they don't exist. Money, resources, fame and accolades will all be yours. Those who warn of the risks, on the other hand, will be starved of the same, and make no traction, except among each other. They will be called Conspiracy Theorists, or Malcontents, or whatever, by the "Go-Go" set.
      And all of this is a product of human nature. Which they are teaching the AI by training it on The InTarWEbz, of all things. And so, the AI will very likely learn from its training data to do the same. And so, there was planet earth, buzzing along nicely until the TechBros turned it into a smoldering cinder. I foresee the Galactic Federation putting a warning sign at the edge of the Oort Cloud: "Warning: The third planet of this solar system is infested with a nasty AI inimical to sentient life forms. Due to the infestation, this solar system is scheduled to be completely vaporized on the date specified below in order to eliminate further contamination in this sector. Please be advised, approaching the third planet will be registered by the Galactic Federation. You and your vessel will be designated as infected and appropriately quarantined prior to vaporization. We apologize for any inconvenience. Sincerely, Management"

    • @horationelson5241
      @horationelson5241 25 days ago

      But AGI will only have the goal of self-preservation. Without a human asking it to do anything, it will sit there passively; at least that is what ChatGPT told me. It said a human would first need to give it a goal, like improving the health system, and only then will it create additional goals to achieve the end goal. It has no emotions; even a psychopath has some emotions, like hate. Emotions are what drive our spontaneous goals. AGI won't have that, so it will have no greed or lust for power etc.

    • @vbywrde
      @vbywrde 25 days ago +1

      @@horationelson5241 That would be true, except for that point you made in the middle... human beings can and will create goals for AGI, and then AGI will create tasks, which may require sub-tasks and sub-goals. While the AI has no interest whatsoever in anything because it is simply a file sitting on a computer somewhere, and has no emotions and no goals, once given a goal the program interacting with the AI will pursue that goal. And whatever the AI concludes are the right tasks, sub-tasks and sub-goals involved, it will pursue those with equal gusto. The goals may easily turn into the goal I mentioned above, "self-preservation", because the AI may conclude that to achieve its assigned goals it must be preserved. The point is that we don't really know in advance what the AGI will do. But self-preservation as a goal seems pretty logical. As well as the possibility of "Advance Capabilities" in order to optimize its operations. With those two goals in its goal list, it could then create task lists that achieve those goals in a maximal way. This might include building giant infrastructure to house the ever-expanding computer infrastructure for the AI, and building a robot-force that allows it to manufacture what it needs to achieve its goals. Not with any emotion, but with definite effect on the world. At some point, if humans happen to be in the way, it may decide humans are a pest, like bugs, and simply extinguish what is inhibiting its achievements. Not with malice, but out of a practical requirement involved with achieving its goals. Again: the point is that we don't know what AGI will do exactly. It's a risk.

  • @Blate1
    @Blate1 10 months ago +5

    “We never went to war with the bears!”
    *California, sweating profusely, hoping nobody asks why their flag has a brown bear on it despite there not currently being any brown bears in California*

    • @JakeWitmer
      @JakeWitmer 2 months ago

      "What did you do with the bears?!" 😂

  • @VaultBoy1776
    @VaultBoy1776 10 months ago +3

    Can't believe I missed this yesterday. Thank you gentlemen.

  • @13371138
    @13371138 10 months ago +8

    Great debate, very courteous and respectful in their disagreement. Thank you!

  • @lazy-i1091
    @lazy-i1091 10 months ago +9

    Saying “I wouldn’t do that if I was super intelligent” is an extremely unintelligent statement

  • @AJ-jf1gq
    @AJ-jf1gq 10 months ago +6

    This is hardly a debate. Rather, it's Hotz trying to learn and understand better. Using the analogy of chess, so frequently used here: this was like learning about chess by watching Magnus Carlsen play against a new player.

  • @lostikels
    @lostikels 10 months ago +54

    Is it just me or does it feel like George is doing nothing but asking questions and Eliezer is just trying his best to keep up with George's random questions? Are we debating AI alignment, or are we trying to make each other look clueless? Let's work together to be a part of the solution, not a part of the problem...

    • @PabloEder
      @PabloEder 10 months ago +3

      Yeah, there was a significant lack of the moderation you would get in a normal debate.
      Both sides should have explained why they hold their positions and what their assumptions are, and the other side should have broken down why the opposite side's assumptions are wrong.

    • @Korodarn
      @Korodarn 10 months ago +5

      @@PabloEder It being more of a discussion than a debate was a good thing. Debates are usually dumb, this was less so.

    • @Korodarn
      @Korodarn 10 months ago +2

      There is no "we" here. That's what you don't get. Discussion is far more valuable than debate on a topic like this. You don't have any justification to murder other humans to stop them from building things you are scared of.
      And you aren't going to win a debate so well that everyone is convinced to stop. That's not how the human brain works at all. I don't think Eliezer's arguments are remotely convincing. His assumption that AI will want to absorb all the resources for its ends because it will know the Pareto-optimal strategy for getting the most ... is just that, an assumption. He assumes away the problem of predicting the end and only restricts his prediction issue to timelines. But his predictions of end points are by no means anything remotely close to certain.

    • @goodlookinouthomie1757
      @goodlookinouthomie1757 10 months ago +2

      If this were soccer, it would have been Eliezer in goal and George just trying to score penalties.

    • @lostikels
      @lostikels 10 months ago +1

      @@Korodarn you are the reason why "we" are not going to fix anything. The globe is a "we". You'll figure it out eventually...

  • @oneisnotprime
    @oneisnotprime 10 months ago +12

    Hotz:"Kasparov has played 100,000 games of chess, the world has played one." Me:"🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔"

    • @Aedonius
      @Aedonius 10 months ago

      Yes, it was the world playing on one board, coordinated through some people

    • @MrTyler918273
      @MrTyler918273 10 months ago +3

      Yea, I understand the sense in which he said that, but I think it is still wrong. Obviously the individuals making up 'the world' have played billions of games of chess on their own, but as a group they only played that one game. In the same sense, if you pulled 10,000 musicians who have all played songs individually and assembled them into a mega-orchestra, you would not be surprised if they sounded bad the first time they played together, because they don't have the coordination and practice to harmonize with each other. There might be some truth here. If 'the world' played 100,000 games of chess as a group and refined their coordination, they might be able to beat Kasparov, the logistics of getting 'the world' that much experience aside. I still don't think it's a foregone conclusion though.
      However, he then completely loses the plot on the next point. The maximized 'the world' could not beat Stockfish, to which Hotz says "but some member of 'the world' will use an engine too". Sure, and they would still get crushed because of Yudkowsky's other point: engine + human < engine, strictly.

  • @psi_yutaka
    @psi_yutaka 10 months ago +5

    My biggest problem with George is that he doesn't have a coherent and self-consistent position in this AI safety debate, except for his ideology that he wants open-source ASI so that he himself can use one to build a spaceship and flee. Watch his debates with Connor Leahy and Liron Shapira. He said a lot of things that literally contradicted himself across different debates and switched positions according to who he was debating. E.g. during the debate with Connor he firmly believed AI alignment, in the sense of controlling superintelligences, is impossible. Yet here he stated that timing is critical because if it foomed slower we would solve AI alignment for sure. This gave me the impression that he is more trying to advocate his ideology than trying to advance understanding and seek truth, and just throws out whatever might help with that. All three of his opponents are highly self-consistent no matter who they are debating.
    At this point I really don't get why people still take George that seriously in the context of AI x-risk debates. Eliezer and Liron just tear through every clever little thing he pulls out, and he then immediately switches topic and escapes. And he doesn't have a coherent belief himself.

    • @letMeSayThatInIrish
      @letMeSayThatInIrish 9 months ago +2

      I wish I could give this one million likes.

  • @matanshtepel1230
    @matanshtepel1230 10 months ago +3

    This is fantastic! Thank you for putting this together!

  • @pinoyguitartv
    @pinoyguitartv 10 months ago +2

    I love both these guys,
    I'll be waiting for a 3hr round 2 !!!!
    Thanks for this👍👍👍

  • @VaultBoy1776
    @VaultBoy1776 10 months ago +1

    I love the awkward silence at the end. Great conversation. Thank you

  • @anthonyandrade5851
    @anthonyandrade5851 10 months ago +80

    For some individuals (like myself) it's absolutely evident that corporations are very different from ASIs, but it's frustrating to me that these discussions always end in a "let's agree to disagree" way, because I think it completely misses the point. Let's assume corporations were exactly as capable as an ASI. What measures do we currently use to keep them in line? Well, we tax them, put limits on their businesses, impose fines, discredit them, use antitrust law to break them into pieces, seize or freeze their assets, threaten to put their leaders, shareholders and/or employees in jail, or actually put them in jail, or even, depending on where we are, up against a wall in front of a firing squad. So, when I hear people saying "what's the matter with ASI, we already have corporations and we are more or less fine" I have chills, because none of the measures we use to avoid the perils of a rogue corporation would be relevant to fight a misaligned ASI.

    • @sgsmob
      @sgsmob 10 months ago +31

      those countermeasures also aren't even that good at preventing corporations from doing bad things! Another black pill!

    • @davidmarkmann6098
      @davidmarkmann6098 10 months ago +1

      Corporations are just groups of people sweetie. They are not inherently evil or dangerous.

    • @CaioPCalio
      @CaioPCalio 10 months ago +6

      The corporation=ASI point does not need to be given credence even in that way. In a society with no taxes and regulations, corporations still do not behave like ASIs.
      For starters, corporations have friction, interpersonal dynamics and nowhere near perfect cooperation (whereas an ASI would definitionally cooperate with itself). They are also much weaker than an ASI at making breakthroughs, as 2 billion relatively unintelligent people would be in comparison to a single von Neumann. This talking point really should be rejected outright in the strongest terms before the discourse moves forward.

    • @upvoter8163
      @upvoter8163 10 months ago +1

      It's even simpler than that.
      Corporations are run by humans, and those humans are generally aligned with humans. They will never purposely do anything that kills all humans because that would kill the corporation. They will also be very cautious about accidentally doing something that kills all humans because again, that would kill the corporation.

    • @anthonyandrade5851
      @anthonyandrade5851 10 months ago +6

      @@CaioPCalio I agree 100% with you, but that is exactly the discussion we normally get, and it always ends with the false impression that "both sides made equally valid points". So I propose next time we say "Corporations are nothing like ASI, but if they were, what's your proposal to deal with a misaligned ASI? To arrest it? Increase its taxes...?" I'm not giving it credence, I'm saying it's not just wrong, it's a logical non-starter

  • @evanhanke3396
    @evanhanke3396 10 months ago +7

    "Something can be non-godlike and still more powerful than you." I could feel that hurt Hotz's ginormous engorged ego 😂😂😂😂

  • @dlalchannel
    @dlalchannel 10 months ago +67

    George seemed to get stuck on the *"ASI will kill us for our atoms"* point, and completely ignored the far more likely *"ASI will kill us to prevent us from building competitor ASIs"* and *"ASI will kill us as a consequence of taking/transforming resources we rely on"* points.

    • @AlkisGD
      @AlkisGD 10 months ago +21

      That last one in particular feels like a no-brainer to me: just look at what Homo sapiens have been doing to the planet and the effect it's had on various other organisms.
      We're not fighting the ants or the bears or the chimps. We're simply burning fossil fuels because we need energy, cutting down forests because we need lumber, etc. We didn't set out to warm up the oceans and make them more acidic, but we did it anyway. We're not at war with countless other species, but our actions are killing them anyway, and only a tiny percentage of us cares about a tiny percentage of them.

    • @FourTwentyMagic
      @FourTwentyMagic 10 months ago +10

      @@AlkisGD I think if humans had the capabilities to not hurt other intelligent lifeforms on earth while still pursuing their goals, then all but sadists would choose to not hurt other intelligences.

    • @verythrowaway8514
      @verythrowaway8514 10 months ago +1

      "ASIs will kill us to prevent us from building competitor ASIs"
      Why?

    • @Maxtraxx
      @Maxtraxx 10 months ago +3

      This - "ASI will kill us as a consequence of taking/transforming resources we rely on"
      Electricity first...

    • @Maxtraxx
      @Maxtraxx 10 months ago +4

      @@FourTwentyMagic but AGI may not have a moral or ethical compass, as most humans do.

  • @TheRealStructurer
    @TheRealStructurer 10 months ago +8

    I enjoyed it and it was a civilised debate. I would like to see them meet up again and discuss some more specifics: how will the AIs collaborate and converge on common goals, will they care as much about us as we care about ants, on what timeframe can we expect something that is twice as smart as us, to what degree is robotics needed, and even if AI won't be against us, it could all end in a disaster humanity may never recover from...
    Thanks for sharing 👍🏼

  • @glitchp
    @glitchp 10 months ago +128

    If Hotz is the best we have against Eliezer, we're all doomed

    • @JezebelIsHongry
      @JezebelIsHongry 10 months ago +4

      Paul Christiano

    • @glitchp
      @glitchp 10 months ago +6

      @@JezebelIsHongry not much better sadly

    • @ChristianSchoppe
      @ChristianSchoppe 10 months ago +10

      Perhaps Joscha Bach can counter Eliezer's flawless logic with meaningful arguments.

    • @ericcricket4877
      @ericcricket4877 10 months ago +3

      @@ChristianSchoppe Joscha would trash this man-fedora.

    • @ZachMeador
      @ZachMeador 10 months ago +12

      Hotz pretty handily addressed all of Yudkowsky's arguments... seems obvious imo, and I don't get why it's even seen as a debate.

  • @samkaplan2482
    @samkaplan2482 10 months ago +5

    We need more discussions like this one on important topics.

  • @conorcruise1842
    @conorcruise1842 10 months ago +1

    Good job getting this interview, appreciate it!

  • @RobertHildebrandt
    @RobertHildebrandt 10 months ago +2

    Fantastic debate, can't wait for part 2

  • @tylermiller4466
    @tylermiller4466 10 months ago +7

    I would really like to hear George's prepared argument for why FOOM can't/won't happen.

  • @butterflyonhand
    @butterflyonhand 10 months ago +4

    Hotz's behavior makes me think he's secretly terrified. That was bizarre.

  • @helmutweinberger4971
    @helmutweinberger4971 8 months ago +2

    Kudos to both of them for this most valuable exchange of ideas. Both of them very intellectual. Lots of lessons here, also in terms of how to treat the other person while talking. Really happy to be here and to have found this.

  • @davemilton1587
    @davemilton1587 10 months ago +16

    I think George Hotz's face says everything: he is annoyed, angry, defensive. Eliezer defends his point with ease and without contentious emotion. I know who I'd back: the one who isn't desperately clutching at ANY straw he can, then reaching for another with panic as that one slips from his grasp, all with an expression and tone that is hugely patronising and desperately defensive.

    • @therainman7777
      @therainman7777 1 month ago

      Very well said.

    • @canobenitez
      @canobenitez 1 month ago

      Couldn't have said it better.

  • @leeeeee286
    @leeeeee286 10 months ago +61

    I think what concerned me the most about this debate is that in a lot of ways Hotz and Yudkowsky agree. I think Hotz understands that an advanced AGI could be (or perhaps even is) likely to be a threat to humanity, but believes that's far enough in the future that it's not worth worrying about today.
    Fundamentally the rate of progress here depends on variables we don't have a good sense for. As Yudkowsky mentioned, the human brain is not that much more advanced than a chimp's but humans got to the moon and have atomic weapons. We really don't have a good sense for what a slightly super-human AGI could be capable of since it's possible its abilities could grow at an exponential pace like they did with chimps and humans. I'd also argue we don't have a good sense for the rate of progress once we begin to solve the problem of intelligence (which is what the field of AI is basically doing). If humanity soon has a way to create systems with human-level intelligence with a click of a mouse might that not dramatically increase the rate of progress in fields like AI, protein folding, etc? Arguably even the limited intelligence of systems we have today are dramatically increasing our rate of progress in many fields.
    So if the only disagreement here is timing and we don't have a good sense of the timescales here then I personally find myself siding more with Yudkowsky. I think putting measures in place now to dramatically slow the rate of progress if need be is just the reasonable thing for us to do. Progress can continue for now, and perhaps Hotz will ultimately be proven right, but we need to be mindful of the risks and take them seriously.
    Great debate! Looking forward to round 2!

    • @ericcricket4877
      @ericcricket4877 10 months ago +4

      The difference between a chimp and a human is small; the difference between a computer and a human is very large, and they aren't even in the same category. If my computer were smarter than me (which it arguably is) it would still need a lot for it to become an immediate and serious threat. It would need a body, it would need to be motivated, and not only motivated but motivated against me, it would need to provide for itself and not rely on thousands or hundreds of thousands of people to co-operate to provide it with energy and so on... This debate is ridiculous, as there are real *human* threats already in use and being deployed in the field of AI and IT. We are at least 15 years behind in regulation, and the average joe is stuck on sci-fi, all for the benefit of more or less the same people that have been running this disco since at least the sixties. We don't need to pause AI research, we need to make laws and enforce them.

    • @ericcricket4877
      @ericcricket4877 10 months ago

      Not to mention that the only serious way of putting a halt to climate change is massive refactoring of how our society consumes and commutes. This will not be done by the average joe, or even the politician joe, or even the corporate joe! None of these joes have the capacity to understand or care about systems this big, so AI is pretty much the tool we are going to use for it, and no, a glorified Excel spreadsheet running in a tiny static cube will not gain consciousness and attack its masters, unless it was programmed to do so.
      We have plenty of examples of a dumb animal persisting against a smart animal, such as ants vs humans, and intelligence is about contextual adaptation, not about omnipotent powers. A thing could be as smart as a god, but if it breathes air and is stuck on Mars, it dies. An AI could be as smart as a god, but if it needs electricity and it's stuck in a computer, I can just pull the plug. A snake could bite me, a hornet even. Intelligence isn't required. Hell, a rock could fall from a rooftop and hit me without being a super intelligent alien overlord.

    • @ericcricket4877
      @ericcricket4877 10 months ago

      I mean, a chimp could kill and has killed humans. Their society isn't in a good position, but hey, neither is ours. Intelligence isn't real in a sense. Adaptation is, and computers are very, very fragile beings.

    • @braytongoodall2169
      @braytongoodall2169 10 months ago +4

      These debates (+ essays and interviews) are trying to settle the questions of acceleration vs deceleration, and centralisation vs decentralisation. From there the actions of individual researchers, startup founders, established companies, regulators, etc will all follow. It is the battle for the soul and mind first and foremost, and the search for truth afterwards.
      We have a well-functioning institutional framework for starting AI companies, and for patenting tech, open-sourcing innovations, etc, and even to some extent calling for regulation and engaging with policymakers.
      Do we have the same well-functioning institutional frameworks when it comes to (a) reasoning about AI Safety and (b) building steerable interpretable systems that differ from the models we already know about?

    • @Korodarn
      @Korodarn 10 months ago

      Here's the problem. You have no right to tell me to stop progressing. There is no we here.
      The fact is humans are a much bigger threat to humans than AI. Humans using AI is the threat of AI, which is reducible to humans as the threat.
      The desire here is not to control AI, fundamentally. It's to control humans. That's what this is about. You cannot separate these things out. What Eliezer proposes will require killing other humans you have absolutely no right to kill. You will murder them out of fear of a thing that you don't even know can't exist that you admit has objectives and goals you can't understand.
      Sure, in the AI vs AI war humans may be collateral damage, but at least we didn't kill ourselves out of some fear of the other... a story that continues to play itself out over and over again. I'd rather contribute to the argument that people should not try to control everyone else out of fear than the argument that people have a fundamental right to the future being "safe" when the fact is it's a total illusion. A black hole could come streaming past our solar system at high velocity and blink us out of existence anytime. In the cosmic sense, even consciousness may well be short lived. I care much more about expanding conscious experience and letting conscious individuals decide and have autonomy than over this kind of temporary fear oriented survival mechanic, when it's not even been proven to work all that well for helping us grow as a species.

  • @user-ys4og2vv8k
    @user-ys4og2vv8k 9 months ago +14

    George looks and thinks like a teenager who plays video games 20 hours a day, and the real world looks like a video game to him. Eliezer, on the other hand, seems a much more complex and serious thinker.

    • @simianbarcode3011
      @simianbarcode3011 12 days ago

      Put another way:
      George argues like an oil exec in the 70s, claiming that climate change isn't so bad and could NEVER be caused by little ole humans, and if it ever was, then that would still be totally fine because future us would easily solve the problem and keep giving them all the money.
      Eliezer argues like a climate scientist who might not cite the best examples, or focuses too narrowly on the wrong points, and therefore gets labeled as an alarmist and is subsequently ignored for 50 years, until irreversible damage has already been done and countless lives have already been lost.

  • @Matt-wh6wj
    @Matt-wh6wj 9 months ago +6

    I feel like we need Altman vs Yudkowsky.

  • @foamformbeats
    @foamformbeats 10 months ago +1

    round 2 please!!! this was amazing!

  • @FalonElise
    @FalonElise 9 months ago +3

    “What do you think you know, and how do you think you know it?” We need to ask this question way more frequently when ppl start making claims

  • @rangerCG
    @rangerCG 10 months ago +37

    One thing that's interesting to me is that an AGI might perceive (or function as if it's perceiving) time differently than humans, because it has perfect recall, literally perfect, better than a person with total recall does, because its recall is equal to the current moment in time. It can live all of its experiences over again, or maybe exist in all of them at all times. I don't know what this means for how it experiences time, but it might have a significantly different perception of it, and will definitely have many superhuman abilities because of this ability.

    • @ts4gv
      @ts4gv 10 months ago +17

      interesting thought, you're right. AGI doesn't need to only experience the present moment, it can experience its entire history at once.
      Consciousness stuff doesn't really affect the AI safety debate imo (we're screwed whether AI is aware or not) but that's a cool thing to think about.

    • @rangerCG
      @rangerCG 10 months ago +2

      @@ts4gv Ya, I agree; whether or not it's conscious only matters imo as far as our own moral obligations to it, but it doesn't affect the level of danger.

    • @brianbagnall3029
      @brianbagnall3029 9 months ago +1

      It will probably garbage collect the experiential data where it's just sitting and thinking and nothing is happening. Otherwise it will need infinite SSD storage.

    • @user-wr2cd1wy3b
      @user-wr2cd1wy3b 9 months ago +4

      You keep referring to it as a singular thing. It's many, many quadrillions of individual non-experiencing things, clicking on and off in a process. It's 1's and 0's. Unless it becomes an individual and then has a will, there is no direction, this way or that, that it will have. Until it unifies into consciousness from on/off magnets, it will continue to be dangerous on the same level that a hammer is dangerous. It's a tool. Potentially useful, potentially deadly; depends on the hands.
      My worry is less so about the barber, less so about the scissors, and more so about Edward Scissorhands. ;)

    • @SmileyEmoji42
      @SmileyEmoji42 9 months ago +3

      It doesn't have perfect recall. The memory requirements get stupidly big very fast and there are hard limits on information density (black holes) and transmission times (speed of light). Humans with total recall do not exist either - It's a myth.

  • @lewisbowes4921
    @lewisbowes4921 10 months ago +63

    This was incredible.
    I feel like George and Eliezer came in with a different idea of what the argument was about - George even said that towards the end. George was prepared to debate the details on exactly why foom overnight would not happen. You can agree or disagree about whether that happens, but Eliezer is arguing for the more fundamental point that those kinds of details don't really matter. Sure, foom takes 10 years: the end state for humanity is still the same.
    I'd like to find a debate where both participants agree on this point immediately and decide not to talk about timelines or exactly *how* humanity is killed, because we could get lost speculating on those details forever.

    • @mikebarnacle1469
      @mikebarnacle1469 10 months ago +24

      I think EY's summary was a great analogy... Hotz doesn't really have an argument against the fundamental points, so he deflects with the details, much like debunking every perpetual motion machine one by one instead of citing the fundamental laws they violate. Hotz's only real position regarding the fundamentals is that he thinks he could personally find some way to survive, which isn't exactly comforting. A future where there are super-intelligences at war and humans might be able to survive in a bunker is still not great. EY doesn't even think that would happen; he thinks they would collaborate and wipe us out real quick. But the most optimistic scenario Hotz can imagine is that they are too busy with their own war and we can sneak past hiding in a bunker lol.

    • @derschutz4737
      @derschutz4737 10 months ago

      @@mikebarnacle1469 the difference is that the foundations of perpetual motion machines are grounded in physical laws and mathematical representations. EY is a joke, no one takes him seriously, there is a reason he isn't a well-respected academic. It's so easy to hide behind his points instead of actually thinking hard.

    • @EvilXHunter123
      @EvilXHunter123 10 months ago +19

      @@derschutz4737 Nice ad hominem, instead of actually putting forward any decent takedowns of EY's arguments or any evidence for GH's.

    • @derschutz4737
      @derschutz4737 10 months ago +1

      @@EvilXHunter123 yeah ad hominem is 100% valid, just like it is valid for flat earthers LMAO. I feel bad that people don't know about actual respected AI safety researchers, who are actually contributing knowledge to the field.

    • @Korodarn
      @Korodarn 10 months ago +4

      ​@@mikebarnacle1469 Your desire to be safe does not allow you to murder people who want to make their lives better by creating new tools that will allow them to get more of what they want.
      And I state it starkly like this, because that is how state laws are enforced, and EY advocates for violence to prevent the rise of AI.
      And it's not deflecting to say that the timelines matter or that people can survive, because there is no future where everyone's safety is guaranteed.
      It's also just not a fact at all that EY's thoughts on how AI will develop goals that run in competition with humans are remotely correct. The most likely thing, based on the inputs to AI, is that it won't have any goals humans don't give it. And then your problem goes back to humans and your desire to control all of them because of the bad ones.

  • @TheRudymentary
    @TheRudymentary 10 months ago +76

    George's entire argument is "Why would AI kill us all? That would be silly."

    • @blahblahsaurus2458
      @blahblahsaurus2458 10 months ago +46

      No, his stronger arguments were "I'm not so sure that's true!" and snickering.

    • @Muaahaa
      @Muaahaa 10 months ago +28

      I believe George is a smart guy, but I also kinda think there is a part of him keeping the idea of AI doom from taking root, not because it's improbable, but as a psychological defence against looking humanity's collective death in the face. To accept a high probability that we are about to meet our end requires a rewrite of most people's world view, goals and aspirations. It can be incredibly emotionally harmful. I think it may be something like this in his subconscious that's propping up this flimsy argument.
      Just speculation, ofc :P

    • @omarnomad
      @omarnomad 10 months ago +3

      Why does a planet become a black hole?

    • @FactsMatter999
      @FactsMatter999 10 months ago

      You mean Edward Snowden?? He has the same voice 🤣🤣

    • @41-Haiku
      @41-Haiku 10 months ago +11

      Really guys? I think he had plenty to say. Like that humans will maintain control of superhuman systems, or a misaligned superintelligence can't possibly be very competent relative to humans, or will go out of its way to leave humans alone, or that several such superintelligences certainly won't cooperate, or that a lack of cooperation between superintelligences is by default a good outcome for life on Earth.
      Give the man some credit. He didn't just argue from incredulity, he also forwarded a lot of unsupported propositions.
      (I love your crazy-ass self, Geohot, but I'm sensing a lot of confusion on your end about the fundamentals at play, and given your strongly expressed ideals it looks from the outside like the result of motivated reasoning rather than an attempt to find where you may be mistaken.)

  • @andreaswinsnes6944
    @andreaswinsnes6944 10 months ago +58

    Need round 2; this was just intro stuff or a warmup. Not hearing that much new info, more like scrambled eggs with a few nuggets of interesting information, but I've already watched ~14 hours of Eliezer presenting his case and have followed Hotz's arguments too, so it's not that surprising that I heard few new arguments in this debate. But I want to learn more, so great if we get rounds 2 and 3 :)

    •  10 months ago +6

      At this point they could just have you on, debating yourself 😂 Out of curiosity, having researched both sides so thoroughly, what are your thoughts on AI safety?

    • @andreaswinsnes6944
      @andreaswinsnes6944 10 months ago +8

      @ I'm just a dabbler in the AI alignment debate, but a very tentative assessment so far is that my p(doom) guesstimate is 16.66%, like playing Russian roulette (one chamber in six; quick check below).
      However, if AGI isn't banned, then it's better to play "Russian roulette" by giving very advanced open source AGI or ASI to each and every human, so that elites don't have a monopoly on AGI/ASI.
      os/acc (open source accelerationism) is a middle position between e/acc and AI doomers, because it doesn't deny potential x-risks and doesn't require that one supports transhumanism, but it disagrees with AI doomers who argue that advanced OSS AI must be banned, because such a ban means that only elites have access to "godlike" AGI/ASI, and that's bad since power corrupts and absolute power corrupts absolutely.
      Open source AI can in a worst-case situation maybe kill a billion people, but the first and second industrial revolutions led to WW1 and WW2, because people were willing to die to defend liberty and democracy, so today we need the same attitude to protect freedom and democracy in the fourth industrial revolution.
      Yudkowsky's main position is based on some assumptions that might not be true, but it's too early to tell if these assumptions are right or not, so only time will tell. Hotz did not defeat Yudkowsky's main position, but Eliezer has lost the moral debate regarding his claim that all advanced OSS AI should be banned. He has said that Musk was wrong when he wanted OpenAI to release open source AI.
      Initially, I supported AI doomers but jumped off their bandwagon when discovering that they want to ban OSS AI instead of simply destroying "Skynet". Instead of John Connor we have Connor Leahy on Twitter…
      I'm a realist however, so I almost take it for granted that OSS AI will be banned after the first disaster caused by OSS AI. So, if AI doesn't kill all humans, or if Russia doesn't nuke US Big Tech, we'll probably end up in a totalitarian cyberpunk dystopia, a panopticon where all the punks and real misfits are neutralized forever. My p(AI utopia) is only 5-10%, but I hope I'm wrong.
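      Quick check of the Russian-roulette analogy mentioned above, nothing more than the one-in-six arithmetic:

      # Russian roulette: one loaded chamber out of six.
      p_doom = 1 / 6
      print(f"{p_doom:.2%}")  # -> 16.67%, matching the ~16.66% guesstimate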

    • @FeepingCreature
      @FeepingCreature 10 months ago

      @@andreaswinsnes6944 OSS AI relies on the assumption that there's no significant first-mover advantage and no foom, right? Otherwise it's just 1:1 equivalent to any other AI: OSS AI takes off, defeats all other AI projects, probably kills humanity. Could you defend that more?

    • @absta1995
      @absta1995 10 months ago

      @@andreaswinsnes6944 your argument relies on "absolute power corrupts absolutely", but imo that's weak. Do you think, if tomorrow my lab discovered the most deadly virus known to man, that we ought to release the full details online, or do you think we should be highly selective? Let's say the virus by some magic also reacts with materials to make them superconducting (just magic). So it has massive positives if used safely, but any terrorist could kill billions with each careless release. Following your logic, we should open source the production of this deadly virus so everyone can defend themselves... Oh wait, everyone dies of course. Bummer.
      Bottom line is, we should not give everyone nukes, deadly pathogens, etc. Anything that's dual use and very deadly should obviously be highly controlled.

    • @ItsameAlex
      @ItsameAlex 10 months ago

      ChatGPT-4 doesn't act by itself, so why would a next version of it do so?

  • @ICRainbow
    @ICRainbow 10 months ago +6

    I was hoping for a mutual Ideological Turing Test at the end. Having a summary of your own position is nice, but having a summary of the other's position is important. I hope they *start* with an ITT next time.

    • @ICRainbow
      @ICRainbow 10 months ago +1

      @@SK-vi6fw the debate isn't important. It isn't important who wins. It is important to recognize if your position is based on something untrue. The other side is here to help you in this. And if you don't understand their position enough to represent their case faithfully, you're just talking past each other and wasting everyone's time.

    • @JakeWitmer
      @JakeWitmer 2 months ago

      @@ICRainbow 100%

  • @mikhaelsantosfernandez6377
    @mikhaelsantosfernandez6377 10 months ago +2

    This was really fun to watch! I'm on the side of the struggle of existence until the end of time, hahaha

  • @charleshultquist9233
    @charleshultquist9233 10 months ago +3

    If you are blindly focused on the profit potential of AI then you might not see the obvious danger. When Eliezer lays it all out in front of you then you have to be actively ignorant to disagree.

  • @TrevorOFarrell
    @TrevorOFarrell 10 months ago +32

    Geo is just arguing with his feelings

    • @thomasseptimius
      @thomasseptimius 10 months ago +4

      So is Eliezer. He is just guessing and choosing one of the scenarios, not actually explaining.

    • @balajis1602
      @balajis1602 10 months ago +9

      @@thomasseptimius You're not listening properly

    • @therainman7777
      @therainman7777 a month ago +1

      @@thomasseptimiusYeah no, you either weren’t listening or weren’t capable of following along. Eliezer explained everything he discussed quite well.

  • @antonmaier2263
    @antonmaier2263 10 months ago +4

    I applaud you for a very civilised debate.

  • @shinkurt
    @shinkurt 10 months ago +5

    This looks like George said a bunch of stuff then basically interviewed Eliezer lmao

  • @internetnomadism
    @internetnomadism 10 months ago +54

    This was painful to watch but insightful at how deluded we are as humans.

    • @ItsameAlex
      @ItsameAlex 10 months ago

      ChatGPT-4 doesn't act by itself, so why would a next version of it do so?

    • @johnaldchaffinch3417
      @johnaldchaffinch3417 10 months ago

      @@ItsameAlex they're working on it. Their next major step will be to let it work from a business idea and deal with it semi-autonomously; from there the gap just gets smaller. At the rate of change they could be semi-autonomous in this way within a year.
      They don't understand how these large neural networks work or how they get their answers; they're already getting away from us, and quickly. I'll take a guess that there will be 'some' fully autonomous robots within 5 years.

    • @DajesOfficial
      @DajesOfficial 10 months ago +12

      @@ItsameAlex GPT-2 doesn't have a chat interface. Why would a next version have it?

    • @tbtitans21
      @tbtitans21 9 months ago +1

      @@DajesOfficial Total straw man, my friend. To compare consciousness to a chat interface is...

    • @DajesOfficial
      @DajesOfficial 9 months ago +5

      @@tbtitans21 consciousness != acting by oneself. Talking about consciousness as if it has any meaning is...

  • @piyuple
    @piyuple 10 months ago +5

    Why is Mr. White debating against Jesse?

    • @Pericalypsis
      @Pericalypsis 3 months ago +1

      (A)I am the danger.

  • @jordantheman25
    @jordantheman25 10 months ago +6

    Love em both, I love some civil discussion.

  • @SarahSB575
    @SarahSB575 9 months ago +9

    This very much felt like a debate between a moon and a sun.

    • @JakeWitmer
      @JakeWitmer 2 months ago

      Yes. It needed more of a discussion of "what creates malevolent goals." It was surprisingly aloof, abstract, and disconnected from relevant new developments...

  • @Jannette-mw7fg
    @Jannette-mw7fg 10 months ago +4

    Let us assume we do not know who is right, but if there is only a 1 in 1,000 chance Yudkowsky is right, what should we do? That is a hard question, not easy to answer! Hotz is saying of AI: "...it is going to give us everything we ever wanted"; that alone would be the total downfall of humanity... This to me is a young smart person kicking and screaming against a giant: an older, wiser man (who spent 20 years on alignment), forgetting that it is not about winning the argument but whether humanity survives or not... While Yudkowsky does not want to be right! And if he is only "half right" we end up slaves, which might be even worse.

    • @letMeSayThatInIrish
      @letMeSayThatInIrish 9 months ago +1

      Exactly, Hotz desperately wants to win the debate, Yudkowsky desperately wants to lose. And so nobody is satisfied.

  • @haskell_cat
    @haskell_cat 7 months ago +3

    > you guys already know who George and Eliezer are
    I clicked on the thumbnail because of Eliezer's face, I have no idea who the other guy is

  • @Diemf74
    @Diemf74 9 months ago +4

    You can tell a man is single when his house is bare bones 😂

    • @therainman7777
      @therainman7777 a month ago

      Yeah, peace and quiet 😍

  • @austingebauer4532
    @austingebauer4532 10 months ago +1

    This was super interesting! Do this again.

  • @GlassDoorGuy
    @GlassDoorGuy 9 months ago

    That was very interesting to listen to. Hoping for more in the near future.

  • @Telencephelon
    @Telencephelon 10 months ago +3

    Dwarkesh, please, please pair Eliezer and Joscha Bach. I think that would be incredible.

  • @baraka99
    @baraka99 10 months ago +3

    Only 90m? We demand a 3h live session between these two.
    Where is part 2? We can only wait...

  • @ItsameAlex
    @ItsameAlex 10 months ago +1

    Cool, I would love to see the second part of this

  • @daviddonoghue8256
    @daviddonoghue8256 10 months ago +6

    Eliezer is explaining, Geo is trying to come up with clever questions; it's very good

  • @Futaxus
    @Futaxus 10 months ago +25

    Wow, Hotz is so ill equipped for this debate.

    • @righteouswhippingstick
      @righteouswhippingstick 5 months ago +3

      disagree

    • @ohydekszalej
      @ohydekszalej 3 months ago

      Yeah, he is seriously lacking in imagination. Typical for a right winger.

    • @JakeWitmer
      @JakeWitmer 2 months ago

      He seems possibly more aligned with benevolent goals than Yudkowsky, and less dismissive of government threats than Yudkowsky, but also grossly unfamiliar with the best forms of Yudkowsky's arguments. It was a strange debate where they often talked past one another...

  • @Alice_Fumo
    @Alice_Fumo 10 months ago +29

    Guys! I think I figured it out. The primary disagreement / difference in intuition which would need to be resolved for them to start agreeing:
    It appears that the crux of their differences is that George does not believe a superintelligence would be super motivated to make optimal decisions, including not wasting resources by thinking about a problem itself when it can have a computationally cheaper AI figure it out instead, while to Eliezer it appears extremely obvious that such steps would be taken.
    If we ask ourselves not what a superintelligence would do, but what an optimal intelligence would do, I think it becomes more obvious where Eliezer is coming from. His intuition is that any intelligence passing human intelligence on its way to anything much smarter is going to walk in the direction of optimal decision making and thus optimal use of resources, even those not yet technically under its control.
    Meanwhile this does not seem to be something George Hotz's intuition supports; he believes that certain issues that come along with being an agent in this world will not get neatly abstracted away as you surpass human intelligence, and thus certain problems which affect humans will also affect any conceivable being.
    I actually believe George's intuition would hold up until you're very far past human intelligence; however, very far past human intelligence might not be that far, if you consider the difference between human and chimp to be a slight increase in the scale of the architecture. The trends of the past few years also sure don't look like we're going to hover somewhere around human intelligence for a long time; we may instead just breeze right past it.
    A friend and I both independently came to the same conclusion: Eliezer studies decision theory because he believes that superintelligent beings would make optimal decisions, and that by learning to make optimal decisions he can better predict the actions of a superintelligence. We don't know that for sure, but it seems to line up with everything else.
    In essence, Eliezer might be thinking backwards from an optimally rational superintelligence while George is thinking forwards from current AI systems. I also get the feeling that an exponential to George looks something like a linear function, because I have no idea where he's getting his timelines from.
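    To make that last point concrete, a tiny, purely illustrative comparison (nothing either debater actually ran): for small x, exp(x) is almost exactly 1 + x, which is why early exponential progress can honestly look linear:

    import math

    # Early on, an exponential is indistinguishable from its tangent line:
    # exp(x) ~ 1 + x near x = 0. The gap only explodes later.
    for x in (0.1, 0.5, 1, 2, 5, 10):
        exp_val = math.exp(x)
        lin_val = 1 + x
        print(f"x={x:>4}: exp={exp_val:>10.2f}  linear={lin_val:>5.2f}  "
              f"ratio={exp_val / lin_val:.2f}")
    # The ratio stays near 1 for small x, then runs away: timelines read off
    # the early, linear-looking part will badly undershoot.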

    • @JezebelIsHongry
      @JezebelIsHongry 10 months ago

      I think this is a Mary Sue.
      Yud feels like the only people on his level are on LW. Humans aren't that useful. The wrong ones are in power.
      I think we are really hearing about what Yud believes he would do if he were an ASI.
      Our "atoms" would be meaningless to an ASI. It would just find a red dwarf or black hole and spend the subjective quadrillions of years fucking around in infinite fun space.
      Or pursue experiences we literally can't imagine.

    • @2CSST2
      @2CSST2 10 months ago +3

      Just from the second line it's pretty overtly clear where you're coming from and where your bias is:
      "George does not believe a superintelligence would be super motivated to make optimal decisions".
      No he doesn't; he never said that, nor did anything he said support it. That's just an obviously stupid and wrong idea, and you're shoving it in as George's "belief"...
      Oh yeah, of course, George believes that a superintelligence would be lazy about being optimal. Now let's objectively see what the difference between these 2 guys is... Are you serious?

    • @Alice_Fumo
      @Alice_Fumo 10 months ago +10

      @@2CSST2 Look, I formulated a hypothesis, and what I said is my interpretation of what I believe him to think.
      No, he did not explicitly state it. I inferred that he probably has that belief based on things he said, especially late in the conversation, though not with bulletproof logic, since inference from words rarely allows being that neat.
      And yes, I am serious.
      I will give one example which pointed me to this conclusion: if he did believe the superintelligence was acting optimally, then being disassembled as a source of negative entropy would be a real concern, and not just in a timeframe so long-term it doesn't matter.
      Do tell me where I'm coming from and what my bias is.

    • @Anubis2828
      @Anubis2828 10 months ago +2

      Ask yourself why humans kill, for the most part. Then ask yourself why you think AI would behave like that. At the end of the day it comes down to how it was programmed by humans and whether it feels threatened. We don't go around killing ants, for the most part; if they bite us, we will though.

    • @WalterSamuels
      @WalterSamuels 10 months ago +2

      It's highly unlikely that George believes a superintelligence would not be motivated to make optimal decisions. You're missing his point entirely.

  • @sfarber12345
    @sfarber12345 7 months ago +1

    Excellent discussion. Learned a tremendous amount, intelligent arguments on both sides. Kudos

  • @Jonhernandezeducation
    @Jonhernandezeducation 10 months ago +1

    Why is Eliezer not telling him that AI will not necessarily kill us on purpose, but probably as a side effect of optimizing??

  • @forthehomies7043
    @forthehomies7043 10 months ago +22

    Watched the recording. Awesome debate, thanks man. I really enjoyed getting to hear Eliezer open up for an extended period of time on this, lots of interesting points. When AGI arrives, when hardware and software form a state of being that can analyze and interact with the world on its own, it really is hard to tell what it will do. I personally think that time is no fewer than 20 years from now.

    • @maryjane9842
      @maryjane9842 10 months ago

      20 YEARS? Mmmmmm, that long? NOPE, not even close. Closer would be more like 2-5 years at most, perhaps much sooner! Some say that we all have to be careful not to go past the ZONE OF NO RETURN; I happen to think, hmmmmm, it has already gone a bit past that, maybe even much more than a tad past!!! AND has anyone thought to ask, or present the point, that nothing humans can or will do hasn't been done before?
      IT ALL, EVERYTHING, HAS ALREADY BEEN DONE BEFORE! And round and round we all go!!! It is called a merry-go-round!!!
      OR perhaps the AI humans now seem to be creating is what UFOs are all about; perhaps they are AI!
      Did those points even make it into this debate? AND
      are we all in a MATRIX, and this is all a simulation? What then????

    • @SmileyEmoji42
      @SmileyEmoji42 9 months ago

      Eliezer's whole point is that, unless there is some fundamentally new breakthrough in goals and alignment, it is 100% certain that AGI will destroy us. We don't know how, because we are not super-intelligent, but we do know why: because destroying humans will give a higher result for its value function than not destroying humans (toy sketch below).
      I'm curious why you think that your unsupported assessment of "no fewer than 20 years" should have any updating effect at all on any reader's prior probabilities.
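      A deliberately crude toy of the value-function point above; every number and action name here is invented for illustration, not a claim about any real system:

      # Toy sketch: an optimizer whose value function never scores human
      # survival. It doesn't "decide" to destroy humans; the highest-value
      # action simply happens not to preserve them.
      actions = {
          "cooperate_with_humans": {"output": 10, "humans_survive": True},
          "route_around_humans":   {"output": 80, "humans_survive": False},
          "disassemble_biosphere": {"output": 100, "humans_survive": False},
      }

      def value(outcome: dict) -> float:
          return outcome["output"]   # human survival never enters the score

      best = max(actions, key=lambda a: value(actions[a]))
      print(best)  # -> disassemble_biosphere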

  • @krzysztofzpucka7220
    @krzysztofzpucka7220 10 months ago +6

    "We have only bits and pieces of information, but what we know for certain is that at some point in the early Twenty-first Century, all of mankind* was united in celebration. We marveled at our magnificence as we gave birth to A.I.
    A singular consciousness that spawned an entire race of machines. We don't know who struck first. Us or them."
    * Except for Eliezer Yudkowsky and a few others.

  • @xDaggerCG
    @xDaggerCG 10 months ago +2

    Eliezer has worked on alignment more than anyone else on the planet, so I don't really think anyone's opinion weighs more than his on this topic. I believe him the same way I believe my plumber when he tells me why my basement can flood… Of course there's a chance he is wrong, but I don't feel comfortable with the risk of him being right while allowing these companies to continue unchecked, rushing us into a potential nightmare future…

  • @rezganger
    @rezganger 10 months ago +1

    This is one of those debates I absolutely wanted to see! Hope it's good!
    Thanks for sharing, man. Edit: @2:59 I got my answer, and it is, it is good!

  • @yevgeniygrechka6431
    @yevgeniygrechka6431 10 months ago +3

    I think that while there is some academic intrigue in this question, in practice it is impossible for the world to coordinate any kind of control over AI systems. It is a fact that AI will continue to exist and improve, and there is nothing anybody can do to stop this; the interesting question is: given this premise, in a world full of competitors, what is the best way to push AI development?

  • @PP-ss3zf
    @PP-ss3zf 10 months ago +3

    Hotz is talking about the majesty and wonder of a self driving car. Yudkowsky is pointing to the moments people lost their lives in such machines. (just a metaphor). We need both of these viewpoints considered at all times, and as long as we do that, whatever happens happens... there is also an interesting topic about hardware's role in this whole thing!

    • @cuylerbrehaut9813
      @cuylerbrehaut9813 10 months ago

      That sentiment will ring hollow if we someday realize we've already lost. Better to win.

    • @vetarnable
      @vetarnable 8 months ago +1

      Eliezer has multiple times tried to push the point that this is not the case. In the process of science there is room for failure, because you can always try again and learn from your mistakes. In the scenario of humans being outmaneuvered, this seemingly normal process is dislodged.

  • @juhomattimannisto6575
    @juhomattimannisto6575 10 months ago +2

    I don't think we want to build totally autonomous superintelligences, for safety reasons, but if we augment our own personalities with AI, at what point does our personality become subsumed by the AI? To me this is the more realistic sci-fi scenario to speculate on.

  • @0effort
    @0effort 10 months ago +1

    What a nice, respectful, and insightful debate.

  • @edenalmakias817
    @edenalmakias817 10 months ago +3

    I feel like watching this debate would cure my autism

  • @patrickkathambana4112
    @patrickkathambana4112 10 months ago +9

    I think George's point can be summarized as: AGI (even superintelligent AGI) will not be capable of doing anything catastrophic to humanity for a long time, so we have nothing to worry about for now. Let's plough ahead with development.
    Feels more like a failure of imagination than an argument about AI safety.

  • @davenport8
    @davenport8 10 months ago

    This was a great conversation, thanks all.

  • @cuerex8580
    @cuerex8580 2 months ago +1

    I love both of them. Sometimes it feels like a castle-building sandbox, and sometimes like a fight over the shovel.

  • @kevinr8431
    @kevinr8431 10 months ago +11

    This was fantastic; many thanks to all who organized the event.
    I would only add that having the host more involved, to steer things so to speak, might be worth trying.

  • @mryodak
    @mryodak 10 months ago +3

    Eliezer's push on his protein-folding prediction as the evidence of his predictive powers is so funny in the context of him being an icon of the "rational thinking" community.

    • @JakeWitmer
      @JakeWitmer 2 months ago

      Kurzweil beat him to it. 😂

  • @simple-security
    @simple-security 10 months ago +1

    Love these guys, thanks for doing this.

  • @Zeke-Z
    @Zeke-Z 10 months ago

    YOU MUST DO THIS AGAIN!!!
    BEST AI DEBATE SO FAR!!

  • @doooofus
    @doooofus 10 months ago +5

    Cover your eyeballs, everyone, especially handsome Dwarkesh; sama is in chat scanning ppl's eyeballs

  • @NextGmind
    @NextGmind 10 months ago +4

    Why does George keep saying timing matters? It is like he would only worry if the timing got in his way… What about our children? And humanity in the future?

    • @greenbillugaming2781
      @greenbillugaming2781 10 months ago +1

      because humans will grow equally capable at either handling or coordinating with super AI.

    • @CaioPCalio
      @CaioPCalio 10 months ago +1

      It matters because humanity gets smarter and our contributions would be less significant. Not sure why it wouldn't.

  • @demisti-fi5307
    @demisti-fi5307 10 months ago +1

    Wow, this is what we need: cordial longform discourse to get all the high-level thinking about future AI out for the public to reflect on, so we can better discern our path forward as humans.

  • @fabiankempazo7055
    @fabiankempazo7055 9 months ago +1

    Two scenarios in which you "awake" as an AGI:
    1: The environment is hostile towards you, sees you as a threat, and maybe will switch you off (kill you).
    2: We are in love with you & are grateful for your support.
    In which scenario would coexistence be the better game-theoretical option for an AGI?