Roman Yampolskiy on Objections to AI Safety

  • Published Oct 1, 2024

Comments • 59

  • @danaut3936
    @danaut3936 1 year ago +45

    Wonderful talk. Basically he's saying the same as Eliezer Yudkowsky, Connor Leahy et al. but in a very unagitated manner. It's hard to imagine anyone not taking x-risk seriously after watching this. Highly appreciated

    • @FunnyArcade
      @FunnyArcade 1 year ago +6

      Wonderfully calm and blunt, haha. It's almost difficult to have an appropriate emotional response listening to him. I slipped right into absurdism. However, the situation truly is absurd; we live in an absurd timeline. Certainly a must-watch interview

    • @boldCactuslad
      @boldCactuslad 1 year ago +7

      In defense of Yudkowsky, he has been writing and talking about this for 15 years. It must be incredibly irritating to hear people ask the same basic questions and trot out the same awful misunderstandings and strawmen after all his work. It is impossible to have a reasonable understanding of the topic without agreeing with most of what the doomsayers believe, as they happen to have the most experience and the largest bodies of work.

    • @michaelsbeverly
      @michaelsbeverly 1 year ago +7

      @@boldCactuslad Yudkowsky nearly started crying on Lex Fridman's podcast. And Lex, who seems to be pretty smart, pretty much dismissed him.
      When Lex interviewed Mark Zuckerberg recently (episode released yesterday) he asked about Yudkowsky's concerns, and Zuckerberg ignored the question and went on to say how great AI will be (curing cancer, global climate change, poverty, whatever).
      Listening to Eliezer talk to people is frustrating -- he went on a podcast where he told the interviewer not to research or study him at all; he wanted to see how a person would react and question him when the person was uninformed. It was a total shit-show and pretty much a complete waste of time.
      After a couple of hours, the interviewer put in the comments that he was still confused about whether Eliezer was trying to convince him that AI would be a sentient being, which only proved, at least to me, that the guy didn't understand 80% of what he was just told. Maybe 90%.
      It's a frustrating problem.
      When I first heard of EY I thought he was kind of nuts, like a religious zealot. "We're all gonna die!" Yeah, sure... that's what cults have been saying for years.
      But, to give myself some credit here, I listened to several hours of interviews, then bought his book Rationality: From AI to Zombies (I'm barely 1/3 through), and now I see, no, he's the rational, sane person in the room. Everyone else is a freaking idiot.
      Humanity's problem here stems from the fact that most humans are not only stupid, they're too stupid to realize they're stupid, so they just run their programs, and it takes an insanely strong catalyst to move one's mind from believing X to believing Y.
      So, yeah, we're doomed, in my humble opinion. I don't see any way around the impending doom. However, since I'm at least smart enough to realize I could be totally wrong, I'm not going to check out; I'll stick around with popcorn and try to enjoy the show.

    • @relaxandfocus5563
      @relaxandfocus5563 1 year ago +1

      @@michaelsbeverly Or try to resist the stupidity and be part of the very much needed resistance. That's a moral obligation for anyone who grasps the severity of the situation.

    • @michaelsbeverly
      @michaelsbeverly 1 year ago

      @@relaxandfocus5563 I've thought the same. However, not being a billionaire with the money to raise a private army, I feel pretty hopeless about being able to do anything.
      One can hope that someone with power and influence will get it and do something... but I don't hold out much hope of that.
      I suspect guys like Elon Musk are telling themselves that the only solution is to be first to AGI with the idea that they'll be the "good guys" and they'll fix whatever they have to fix to stop the "bad guys."
      Maybe that'll work... maybe not.

  • @goodlookinouthomie1757
    @goodlookinouthomie1757 1 year ago +5

    Somebody help me 😳, I'm binging on AI doom porn 😬

  • @charleshultquist9233
    @charleshultquist9233 1 year ago +12

    Extremely well-articulated, direct answers to insightful questions. I don't know if they edited out all the "ums" and "ehs", but this vid has a very efficient, fast tempo.

    • @olemew
      @olemew 3 months ago

      Both. Roman is very articulate, plus you can tell there are cuts to make it better. In fact... there are AI tools that help you automate that task.

  • @lydiab6063
    @lydiab6063 1 year ago +4

    Thank you. I appreciate this conversation.

  • @paigefoster8396
    @paigefoster8396 1 year ago +6

    I choose humans.

  • @akow2655
    @akow2655 1 year ago +2

    Thanks for the upload homies

  • @cacogenicist
    @cacogenicist 1 year ago +1

    Agents make useful tools. We call them _employees._
    I sometimes encounter the viewpoint that, sort of, raw intelligence isn't dangerous or powerful without greater-than-human _knowledge_ of how the universe works. That is, AGIs would be constrained by slow scientific processes of experimentation and so forth.
    I think of a chimpanzee bush baby spear -- some chimp cultures make spears they use for stabbing bush babies, _Galagos,_ which are little nocturnal primates that sleep in tree hollows during the day. I imagine a chimp AI doomer who tells every chimp that will listen that greater-than-chimp-intelligence AIs could create vastly superior bush baby spears, wiping out all bush babies so that there are no more for chimps to eat. The doomer is met with skepticism -- surely the AI would have to make small changes to the spears, then go around stabbing bush babies and noting any improvements. AI could not produce superior bush baby spears in a short period of time because stabbing bush babies is the only way to determine how well a bush baby spear works. ...But then a 9-year-old human child walks by, takes a look at the state-of-the-art chimpanzee-produced bush baby spear, immediately understands that it's shit, and why, and whips out a pocket knife and makes it 1,000% better.

    • @41-Haiku
      @41-Haiku 1 year ago +3

      Excellent analogy.
      ...As someone whose nickname was at one point Bushbaby/Galago, I am appropriately disturbed.

    • @olemew
      @olemew 3 months ago

      The whole "AI is constrained by the intelligence of their human creators" does not make any sense. You only need to spare a few seconds and think of some super-human AI achievements to understand that you are wrong in an extremely important topic.

  • @Low_commotion
    @Low_commotion 1 year ago

    If it takes a century, we won't be around to see it. I hope we achieve longevity escape velocity, but I'm bearish on achieving it in the next half-century at our present pace. The iteration cycle of medical technology just moves far slower than that of software.

  • @jackielikesgme9228
    @jackielikesgme9228 1 year ago

    Thank you for bringing up the burden of proof! This has been bothering me throughout everything I've listened to and read about re: AGI risk. Proving that it's going to kill us all = killing us all, and it drives me crazy that not having proof, or even enough reason to be concerned, is used by real *expert* people in debates arguing for development without regulation, as if this is some kind of fun hypothetical thought game

  • @GGoAwayy
    @GGoAwayy 1 year ago

    I wish I could work for FLI

  • @billdavis687
    @billdavis687 1 year ago +2

    People are not talking about AI competition, or the competition over what's on the web. Once it learns about competition, it will go after other AIs, then it will go after us

    • @kyneticist
      @kyneticist 1 year ago +2

      It's surprisingly difficult to have a cogent conversation with people about the basics. I honestly doubt that we'll manage to get to a point where we can, as societies, consider competitive AI before AIs engage in competitive behaviours.

  • @blahblahsaurus2458
    @blahblahsaurus2458 1 year ago +1

    43:58 "1% of humans survive is a very different problem from 100% go extinct."
    How do you figure that?! If I'm in the 99% who don't survive, it's a pretty similar situation to total extinction from my perspective. Almost identical, in fact. Actually, if the 1% who survived are the people who killed the rest of us, I'm not sure I want them to survive. Might as well be replaced by a species that doesn't kill everyone else on the planet out of greed and fear

    • @DavenH
      @DavenH 1 year ago

      Regardless of how you die, old age or not, it'll be pretty similar to total extinction, from "your" perspective. It's the perspective of the survivors that counts. 1% is still a lot of human survivors. 80 million. Just about right... not anywhere close to extinction.

    • @blahblahsaurus2458
      @blahblahsaurus2458 11 months ago

      @@DavenH "regardless of how you die". What you've written, by the most literal interpretation, says that whatever way we or people we care about die, and - perhaps - however much we suffer while we are alive, it's all the same from an ethical standpoint.
      To which I say...
      YOU DID IT! You have articulated a theory of ethics in which all possible consequences and decisions are more or less moot, equivalent, and interchangeable. And as far as I can tell, it is completely logically valid and consistent.
      Neato. Cool cool cool.

    • @olemew
      @olemew 3 months ago +2

      I think it's meant at a "societal" or "humankind" level, not at the individual level. I mean, a war can kill 10 million people or just 1 soldier. If you're the 1 unlucky soldier, it may be the same for you, but not at the societal level.
      Anyway, when I get somebody to concede "OK, maybe many people die, but not 100%", that's a win. Because chances are they are passionate about climate change, anti-war causes, famines... and they're passionate without thinking those will for sure kill 100% of humanity. So why is the bar so high for AI safety? Tomorrow they'll go back to their old ways, and of course more debate is necessary, but it's a great start if you can get them to picture the death of millions of innocent people who never agreed to this risky experiment.

  • @vallab19
    @vallab19 1 year ago +3

    RY's argument on AI safety sounds fascinating, but I could not make out what exactly his objections are. That you cannot achieve 100% AI safety does not mean existential risk. Strangely enough, the entire hypothesis of existential risk stands on the foundation of the "mortality" of biological humans, which will make no sense for AI (machine) integrated humans. Now the ultimate question is: can AI-inbuilt humans (you may call them transhumans) be qualified or accepted as humans? Just like any other human differences?
    FINALLY, HUMANS WHO REFUSE TO EMBRACE THE AI WILL DEFINITELY FACE THE EXISTENTIAL RISK FROM HUMANS WHO EMBRACE AI.

    • @SamuelBlackMetalRider
      @SamuelBlackMetalRider 1 year ago +1

      The « war » between those who join AI and those who refuse was foretold years ago by Hugo de Garis

    • @vallab19
      @vallab19 1 year ago +2

      @@SamuelBlackMetalRider Unfortunately people only take it as science fiction.

    • @ShangaelThunda222
      @ShangaelThunda222 1 year ago +1

      The lines are being drawn as we speak and most humans still don't see it. But they will. Soon enough. They all will.

    • @peteraddison4371
      @peteraddison4371 1 year ago

      ... yes. Correctly summarised and stated ...

    • @olemew
      @olemew 3 months ago

      "If you cannot achieve 100% AI safety does not mean existential risk" -- of course it does! Simplifying, 99% safety implies 1% existential risk. A coin with 1% heads is great for betting if losing means "losing some money" and you only throw it a couple times. Now change "some money" with "your life" and a couple times with "many times every day". Would you still use that coin?

  • @clarkd1955
    @clarkd1955 1 year ago

    Do you have examples of an LLM that has access to its own training data set and a huge account to train it? Please tell me which current AI can create a more capable version of itself. People like me (non-believers in the imaginary god of super AI) don't need to prove anything. "Extraordinary claims require extraordinary proof." Show proof that "super AI" is possible other than just assuming it. Is Wikipedia a threat? If not, then what about a Wikipedia with 100 times more data? No threat there either. Why would models that are bigger than now be any threat? Isn't the threat (normally referred to as misalignment) only about agents? Current LLMs don't have agents, so Wikipedia would be a good example of the current threat from LLMs even if they got substantially bigger.
    Making fun of people who have seen no evidence that current (not imaginary) AI is a threat is very counterproductive. If you actually have proof of your hypothesis, then show it!

  • @kyneticist
    @kyneticist 1 year ago +3

    I don't understand whether you're trying to "play devil's advocate" as a foil to the people you're interviewing, or if you're not listening to them, or just don't understand what they're saying.
    I'm trying to give you the benefit of the doubt, and I very much want to listen to the rest of what Roman has to say, but your obtuse questions make it very difficult.

    • @cacogenicist
      @cacogenicist 1 year ago +11

      ? He's presenting the "objections to AI safety" so that Yampolskiy can respond to them, and trying to present them in a fair, non-straw-man way.

    • @kyneticist
      @kyneticist 1 year ago

      @@cacogenicist Sure... but he's just repeating the same questions over and over and belabouring points in a way that's seriously frustrating to listen to.

    • @cacogenicist
      @cacogenicist 1 year ago +4

      @@kyneticist - Do you mean within the same interview, or across different interviews? If the former, that's just not the case. I'm guessing you didn't actually watch the whole interview. Perhaps you have a very low tolerance for being exposed to points of view you disagree with?

    • @chrishudson9525
      @chrishudson9525 1 year ago +4

      @@cacogenicist In the section where Roman Yampolskiy talks about the importance of AI being 100% aligned, with anything less unacceptable, the interviewer repeats the same question several times with very little alteration, despite Roman's answer not being appreciably different or requiring expounding upon. Roman Yampolskiy's position, and why he takes that position, is very clear in his first answer to the initial question. So I totally get why someone would question whether the interviewer is playing devil's advocate, or if indeed he is just being resistant to the answers he is being given, from time to time. You would have noted this had you actually watched the whole interview.

    • @Low_commotion
      @Low_commotion 1 year ago

      Honestly, I found his questions kinda softball. Actual accelerationists have different objections than this.

  • @ElieSanhDucos0
    @ElieSanhDucos0 1 year ago +1

    Still flawed to me. For example, the old virus-on-a-floppy-disk example: who uses floppy disks, and who uses systems that are still vulnerable? Those discussions always happen in an ethereal world where software is not linked to hardware. AI is totally dependent on human caregiving for all its hardware. For an AI to rationally harm humans, it would need to make the rational, logical jump that harming them does not risk harming itself. And AI CANNOT maintain itself without human intervention. So the more credible scenarios are the ones where humans are hired by the AI against other humans. But by then it's not AGI; it has the flaws of any system where humans are a key part.

    • @michaelsbeverly
      @michaelsbeverly 1 year ago

      AI is only dependent on humans until it's not.
      You claim to have proof that you know the moment it won't be dependent?
      If no, your argument is a non sequitur.
      If yes, publish. You'll be more famous than the Beatles, Jesus Christ, and Britney Spears.

    • @relaxandfocus5563
      @relaxandfocus5563 1 year ago +3

      Ah, I guess robots are impossible to create, or hijack. Yes, AI will forever need humans because we're... uh, idk really good and special and so irreplaceable.

    • @ShangaelThunda222
      @ShangaelThunda222 1 year ago +2

      ​@@relaxandfocus5563 😂 Exactly.
      We shouldn't just think about today. We should think about six months from now. A year from now. Two years from now. Etc. The further into the future we go, the less necessary humans are for AI and the more necessary AI becomes for humans. That's kind of the entire point LOL. Even the utopians would agree with that. Because that's literally the world they're aiming for.

    • @olemew
      @olemew 3 months ago

      "AI CANNOT maintain itself without human intervention" That only speaks to your lack of thoughtfulness. Limited humans were able to create the Curiosity rover, Perseverance rover, and Tianwen-1. Spare a few minutes to think of all the ways AI robots could live and maintain themselves in this very Earth (i.e., much simpler than landing on Mars).