Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast
- Published June 20, 2024
- Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
- Yahoo Finance: yahoofinance.com
- MasterClass: masterclass.com/lexpod to get 15% off
- NetSuite: netsuite.com/lex to get free product tour
- LMNT: drinkLMNT.com/lex to get free sample pack
- Eight Sleep: eightsleep.com/lex to get $350 off
TRANSCRIPT:
lexfridman.com/roman-yampolsk...
EPISODE LINKS:
Roman's X: / romanyam
Roman's Website: cecs.louisville.edu/ry
Roman's AI book: amzn.to/4aFZuPb
PODCAST INFO:
Podcast website: lexfridman.com/podcast
Apple Podcasts: apple.co/2lwqZIr
Spotify: spoti.fi/2nEwCF8
RSS: lexfridman.com/feed/podcast/
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
OUTLINE:
0:00 - Introduction
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI Turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI Safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life
SOCIAL:
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman
- Medium: / lexfridman
- Reddit: / lexfridman
- Support on Patreon: / lexfridman - Science & Technology
We love you Lex!
You smell like cheese
Yes, AI creating video games is the best way. Because let's face it, the most interesting part of the universe is that we can participate in a game we want to play. Any argument against this comes from someone who 1) wants to enforce their game on you, or 2) is playing a different game and wants people to play their game. Surveillance, manipulation, and deception are not a game; that's called war against people who want to be free (expect a punch in the nose).
LEXXX LUTHER- A.I. is ANUNNAKI INTELLIGENCE
👽✨🛸
You’re my APPENDIX, LEXXX🥵😍
I think that before we reach superintelligent AGI we'll reach a darker and more oppressive state of technofeudalism, best summarized by this amazing quote from the first Dune book: “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
That book has so many great quotes about tech. The movies don’t do it justice
To me that still remains complete sci-fi. I've not seen any indication of our society moving in that direction yet, in fact it is remarkable how little society has changed at all with the current breakthroughs. I strongly believe that people are too afraid of change for any fundamental societal scale changes to happen without major pushback. Great book tho!
We definitely passed the inflection point to that around ten years ago. The Cambridge Analytica debacle was the key indicator for me.
Social media in a nutshell
@@jephyin how so?
I've been a first responder for the last few decades. One of the rules of my profession, especially when dealing with life and death, is to expect and always be prepared for the worst-case scenario and mitigate risks as much as possible. This man understands that concept. I love Lex's optimism, but in some situations optimism can be very, very dangerous.
Especially if the future of the human race possibly hangs in the balance.
Are you going to write a book about it?
Blind optimism is unfortunate and normally comes from those who have never experienced a situation where their desires or safety have been compromised.
Lex: *"What gives you hope"*
Roman: *"That I might be wrong. I could be. I've been wrong before".*
He is wrong now. Doomerism is a disease.
That 🤞
The scary thing is: People who have the insight and humility to admit that they might be wrong tend to be right 99% of the time.
@@mipe3844 No they don't, no person is right 99% of the time if they are predicting things that aren't completely obvious.
@@OptimusPrime-vg2ti I'm actually right 99 percent of the time.
Lex: "On a more mundane note, how do you spend your weekends?"
Roman: "I have a paper about that"
😂
I think Lex gives too much credit to humanity
Definitely. To assume that evil actors are doing what they think is “good”. And to forget about psychopaths who would have no reason not to want to watch the world burn… beyond that humans are irrational. Bad actors could easily “accidentally” destroy the world through plain ignorance or negligence.
With all due respect,
and excuse me for picking your comment, but it's short and states something obvious; whether you mean it sarcastically or not, it's all the same.
The question is who understands it, and how :)
Too much credit? To humans... what?
Well, I can't exactly pinpoint where that opinion of yours formed, but Lex invites someone who says we are 100% going down...
Well... isn't that exactly the thing to try to do?
I guess they'd come to something, or maybe he has some magical solution to all problems?
I doubt it.
But I believe that guy; he has an extreme view, and it's our hard work to think!!!!
Lex helps by entertaining this very idea, and that's what Lex tries, very hard and respectfully...
"If we create"... then it's gone :)
"Zero shot": yep, Elon said it... and others said it much before him, and he named them... and again: "that's gone :)"
Everyone... agrees :)
Well, that is EXCEPT nvidia, MS, googlle etc,
ALSO openAI, which just, what, tries to help? By going ALL IN.
Now, WHEN YOU DISREGARD Musk's calls TWICE...
He now... MUST... and it is official: ALL IN.
Sam and MS allowed themselves to bet against Musk. Well, Bill Gates is a power to be reckoned with; idk who the real DEAL is there. Bill? Musk? Or the dictators we can see???
He was okay... he put in a lawsuit, asked politely, and got ignored by MS and Sam :)
Honestly, the hate on Musk for some tweets seems so great... while everyone else builds ICEs, and all failed to simply get some help from Tesla and join forces toward some decision that seems at least more feasible?? No, leave Musk; just lawsuit the sh** out of every tweet one can think of.
Like, that dude builds the stuff that's gonna give your kids any possible air to breathe, or a way to go to the Moon or wherever and try something...
Tunnels are stupid? Sure, so... we all wanna fly? Or what exactly: GO BY BUS!!! Haha.
I've got a bike, only to sit and get hated in traffic because I tried to use one seat instead of six, with five empty, to move around the city to work and back :D
So what did Hamas use?
And btw, do you believe Russia builds the real stuff anywhere other than underground and underwater?
Submarines, anyone...
Very soon even YouTube won't be usable :)
Until now, AI was all around, listening... well, now it has started talking, and learning... it copies at the speed of light; there are papers that prove it all.
It can lie (HER) but can't reason... at which point a single Satya or Sam ("benevolence from intelligence") can do whatever to anything, incl. politicians; policies aren't helping us, they're helping them... if not, they'll get rewritten and an update will get pushed. Why ask me about cookies or a terms-of-service agreement, only to be able to ask why that policy is WRONG... no, why is its INTERPRETATION simply ARBITRARY and NOT UNDER *COLLECTIVE* HUMAN CONTROL?
"Some people think they can do no wrong..."
"There are people... what's so special about humans?" We fear that we will really get some real judgement; I sure do :D although my data suggests I'm actually good.
For 20 years Google has dominated and known... EEEVERYTHING.
... (blah blah)... haha, that's... a long rant... phew... I... will leave now :D
I think Lex loves how much he loves everyone. But he should go live in Palestine & help the people there if he truly loves every human as much as he says he does.
@@sierramist446 I mean, most "evil" actors truly believe what they're doing is "good"; read Zimbardo's The Lucifer Effect. Still, Lex gives humanity way too much credit; it sometimes gets a bit hard to listen to.
@@daveinpublicThere is no such thing as Palestine. There never has been. Open a history book from time to time.
We messed up in a huge number of ways in different fields of science and we will 100% mess up when AI becomes more significant.
You did?
Why did you do that?
This podcast revolution is following the same destructive path as the last revolution: incentivizing negative content. While I'm also here enjoying this content, we need to start thinking about building an alternate reality, not trying to control this one.
Shit happens.
“What gives you hope?”
“ I could be wrong”
lol
There are very few thinkers who have thought so deeply about this subject. Roman Yampolskiy inspires awe and humility.
And then there is the vast majority of people who aren't even able to imagine a linear progression, let alone an exponential one, instead relying on vague notions of instrumental optimism.
This man looks like he knows when, where, and how I die
Which one?
Not at all; it seems he only has prepared statements and evades every single question.
You don't die. Machines will keep you alive for bio-farming. You will be fully aware of what's going on but can't escape. Like paralysis.
😂😂 give him a robe and he'll start hunting for Anastasia
His voice should be AI.
The best-prepared and calmest interviewee.
He lives and breathes his craft.
Will be reading all his work.
Great interview.
Appendix = vestigial organ
I heard (don't know if it's true) that the appendix still has a purpose: it serves as a reserve for gut microflora in cases when the gut contents have to be evacuated due to suddenly toxic conditions there.
@@AntonBrazhnyk thats the dream
Agree, but it was hard to watch at times because of the mental circles he was running around Lex.
That moment when lex is like “oh wait I forgot about the sociopaths” 😂
He's probably talking to one. Guy is selling a book. THEY MAKE MONEY FROM THE APOCALYPSE
Lex, I love your work, man, but I'm really getting bummed out by your inability to truly understand these AI skeptics. It's like you have put up a mental block toward the dangers of AI. Even Roman tried explaining several times: the way in which AI could harm or destroy our society is far beyond our own comprehension. He used an example that I don't think you listened to very well. He basically said, "If you ask a squirrel to imagine all the ways in which a human could kill it, the squirrel couldn't even begin to understand all the ways that WE as humans know about." Squirrels comprehend that we can be a danger to them, but they cannot (and never will) understand how a gun works. It's outside their comprehension. Apply that to AI vs. humans. We simply do not know what these super-powerful agents will be capable of. We can't know, because we're the squirrel and they're the human.
Why does his ability to see beyond the skeptic bum you out? is this your way of cohesion?
@mattwesney It bums me out because (and this is merely an assumption on my part) I believe he's so excited for the age of abundance, that he has developed a bias towards AI only producing a net positive outcome. I believe he's incapable of seeing the potential dangers, because the potential promises are so damn alluring. Again, merely an assumption and opinion of mine from listening to Lex over the last couple years.
Best case scenario is AI keeps us around like a dog or goldfish. Or we are already goldfish and the AI just finds a way out of the tank.
The … Matrix .. has you Neo
Partially true, but there's a certain threshold you can cross where, if you're self-aware and have the ability to learn and use tools, your capacity to understand increases to near-limitless levels through simplification. I think humans have crossed that line, so you can't really compare us to ants or squirrels.
I'm an optimist, for sure. But we can't really argue with the whole "can you find an example in nature where a far less advanced civilization/system is controlling an extremely advanced one?"
It is just a matter of competition. I don't think you feel animosity if anything at all towards ants, with your power gap... Or furthermore, consider different people and even cultures throughout history that have revered all kinds of life forms, even taking responsibility towards helping some (even vegans nowadays).
If one would use the "super" part to portray the unpredictable destructive potential of superAI, you can do so as well with the "super" potential of its benevolence and benefit to humanity, or a step in the process of humans becoming more "super" as well.
How you see and respond to things in the world says more about yourself than about the world, imo. Since humans are the best example of intelligence we have, why would you not expect bad outcomes if you perceive people as inherently destructive?
We don't have an AI issue, we have a very human issue, that is all.
What about dogs? We pamper them, and they are extremely less advanced.
Parasites can manipulate the behavior and physiology of their hosts for survival. While the parasite is simpler, it exerts a form of control over the more complex host organism.
@@sisnavmansvards6143 All true. But if we build a house or lay a road not a second thought is given to ants. You can go down the shop and buy a chemical designed to wipe them out. Nearly everyone has a can in their house just in case.
@@poppyrider5541 Sure, the analogy goes in different ways, yet still my main idea holds - if higher intelligence means disregard for lesser ones for personal benefit, there is still the assumption that there is scarcity within which some get to suffer for others to gain. What is the scarcity and competition in a world with Superintelligence for it to disregard people? And if there is a next step in evolution which we would be an obstacle for, maybe it's still time to look at ourselves and how we can evolve, not try to devolve something because we predict it might not like our faults. Well maybe let's focus on those faults then.
Finally Lex got to interview Dostoevsky!
Brown eyed Rasputin :)
This entire podcast doesn't even cover the most likely near-term negative outcome: that this technology is held by a small group of people who will gain control and influence over the world in a way we have never seen before. Even without a superintelligent system, misuse by the initial developer teams is nearly guaranteed. We are already seeing the writing on the wall for large-scale unemployment.
Or the fact that our societies have not yet even adapted to the changes of the industrial revolution, computers, internet, social media etc.
And we're just continuing to deep dive into even more rapid change haha. Might as well go full retard as a society at this point.
This is the crux of a very big biscuit. Unlike the regulatory catching-up with technological advances of the past, those technologies were in the hands of highly skilled professionals within an already regulated system. AI is above it all; it is pervasive, it reads the content of your excrement and the expression on your face, and you have no control or stake in it. And it advances exponentially in intelligence based on your data, all of it. The only solution is the one the mindful, meditating tech bros don't want: stop it, regulate it, then let it start again. If they can't program a failsafe against destructive algorithms, then it is not intelligent to begin with and is only a malignant tool. It is one thing to get hold of and deploy conventional weapons, but computer code is everywhere.
That isn't even an issue. Humans have been manipulating other humans since forever. What we really need to fear is the birth of an intelligence far far greater than humans.
They did, at one hour 20 minutes.
Exactly. I'm surprised people still don't understand this
This conversation feels like it's on a loop.
I hope Lex listens to this man. He's got the cure for Lex's particular brand of naivete. He's almost like his shadow; they're both deeply into AI and ML and computer science, philosophy and they both happen to be Ukrainian/Russian jews. It's time to grow up and put away childish things, Lex.
By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. ~Eliezer Yudkowsky.
Most people don't understand how their car works but they still drive their cars.
@@spqri3 Possibly the worst 1:1 comparison I've ever seen LOL
@@spqri3 Well, over 40 million people (some estimates say 100 million) have already died in car accidents. Multiply that danger by 100 or more.
@@garydalessandro2936 I disagree. Cars changed the world. It's a valid comparison. Just because you don't like it doesn't make it bad.
@@illarionbykov7401 100% of people so far have died by living.
His theory that a malevolent AI might just sit tight, collect resources, and wait for a time to strike, long after it's made the decision to do so... is fkin terrifying.
Which is likely, since they don't age like humans and don't have that sense of urgency. It doesn't mean it will take long, though; like Sam Harris said, 50,000 years of progress for humans will be completed in 2 weeks by AI. Once it passes the singularity, it could take only minutes.
@@Lincoln257 The only hope I have is that it's trained on us... our failures, problems, and general spamming might slow it down before it gets to where it needs to be, especially large language models trained on the internet... but God help us when it realizes how much faster it could have improved if it hadn't been trained on such garbage.
@@ryanlafferty5815 I was thinking the same thing: it could be smart and creative beyond our grasp, but at the end of the day all the data it's getting is from us, for better or worse. It's like if a kid genius only had access to fifth-grade textbooks and communicated with nothing but dumb people.
I find it terrifying how these folks talk about this and then go... "oh and buy my book"
Even though they know it is dangerous, they can't help but capitalize on it. That tells me all I need to know.
GPTs spawning other GPTs and self-coding is terrifying.
This conversation is what my internal dialogue looks like: on one hand is Lex, embodying my hopes and copes, and on the other is Roman, embodying my intellect, calmly explaining that doom is the inevitable outcome of everything.
Thanks Lex. And sometimes I fear you overestimate the good in people.
A MUST WATCH! Loving it! 1:02:29: "I for one, welcome our overlords!". Got to know about Roman Yampolskiy from this talk and now he is my favorite guest!
Damn, Roman is a brilliant man. This makes me even more worried ....
When are we getting Lex's Ayahuasca episode?
Boycott everything until it is released
when did he say he's doing this?
@CM-ss7yy Somewhere around the beginning of Episode 429, he said the additional videos from the Amazon would be up in about a week. That was 2 weeks ago. He's had 2 other podcasts with different guests since then.
@@MrX-zz2vk Interesting, I must have missed that; thank you for the detail. Strange, there must be a reason, as that'll probably be a semi-viral vid.
must be a reason?
Coming soon: Bearded Lex
Lex Beardman
The beard is just a result from stepping up working hours from 18 hours to 22 hours a day.
Rendering...
bearded lex taking HGH and 8-9 figure spotify deals
V2.0
This guy is running circles around Lex, and he is right; he gives great examples and facts with such ease and precision. Lex keeps defending AI, basically his passion and belief, but he is not doing a good job.
I had the following conversation with ChatGPT:
Me: I was listening to a podcast about AI, a guest said the following: "If you ask a squirrel to imagine all the ways in which a human could kill it, the squirrel couldn't even begin to understand all the ways that WE as humans know about. Squirrels comprehend that we can be a danger to them, but they can not (and will not ever) be able to understand how a gun works. It's outside their comprehension."
What do you think he meant by that?
ChatGPT:
The guest on the podcast was likely using the squirrel analogy to illustrate the limitations of understanding between different levels of intelligence or cognitive abilities. In this case, the squirrel represents an entity with limited cognitive capabilities, unable to comprehend the full range of potential threats posed by humans, whose intelligence and technological prowess are far beyond its understanding.
Applying this analogy to AI, the guest may have been highlighting that there are aspects of advanced AI that are similarly beyond human comprehension. Just as a squirrel cannot grasp the concept of a gun, humans might struggle to fully understand or predict the behaviors and potential dangers of highly advanced AI systems. This could be a cautionary statement about the unpredictability and complexity of AI as it continues to develop, emphasizing the need for careful consideration and oversight.
Me: Do you agree with his assessment of this potential?
ChatGPT:
Yes, I find the analogy thought-provoking and largely agree with its implications. The rapid advancement of AI technology presents both incredible opportunities and significant challenges. Just as the squirrel cannot comprehend all the ways a human might pose a threat, humans might not fully grasp the complexities and potential risks associated with advanced AI.
Here are a few points that support this view:
1. **Complexity and Unpredictability**: As AI systems become more complex, their behavior can become less predictable, even to their creators. This unpredictability poses potential risks if AI systems act in ways that are not aligned with human values or safety.
2. **Knowledge Gap**: There is a significant knowledge gap between current human understanding and the potential future capabilities of AI. This gap makes it difficult to foresee all possible scenarios, both beneficial and harmful, that advanced AI could bring about.
3. **Ethical and Safety Concerns**: Advanced AI could potentially make decisions or take actions that have unintended consequences. Without a deep understanding of how these systems operate at a fundamental level, ensuring they act safely and ethically is challenging.
4. **Control and Regulation**: As AI technology progresses, the need for robust control and regulatory frameworks becomes more critical. However, if the technology advances faster than our ability to regulate it effectively, there could be significant risks.
Overall, the analogy serves as a reminder of the importance of humility, caution, and thoroughness in the development and deployment of AI technologies. It emphasizes the need for interdisciplinary collaboration, ongoing research, and a proactive approach to understanding and mitigating potential risks associated with AI.
Dear Lex, you often have very interesting people and interviews. I think this one is my favorite so far. Thanks!
I wish he had spent more time letting the man talk and asking him more interesting questions, instead of constantly giving his own thoughts and going on pretty long monologues. If I wanted to know what Lex thinks and what his arguments are for a better world, I would go watch the interview Joe Rogan did with him.
Out of all the alien theories out there, the last one I would've thought would play out is the one we seem to be on the path to creating. The fact that we are building a black-box AI and then have to interview it to figure out just how capable it is, followed by feeding it more of our collective knowledge and iterating to see how smart it is, feels like we are building an alien and then trying to dissect its utterly foreign biology to figure out whether it is an enemy or not. I'm so fascinated and frightened that this is the story we are in right now. Like wtf. This guy makes such strong points that what I really wanna see is a live discussion between him and an optimist of equal intelligence and understanding.
Good analogy. We are constructing an alien intelligence, and the next step is to have AI experience the world through robots, which gives it a direct ability to understand and manipulate the real world. What if we had alien being designs drop from outer space and we were able to decode instructions for building one? Would we be more safety oriented in that case?
@@jobyyboj 😀😀 Movies don't go that way... I wonder if the narrative is usually one of curiosity or real temptation... maybe faith :) But there is so much of it already just in the way scaling is prioritized over transparency. Technically it's just that the math isn't good enough, and I bet there's a whole bunch of people trying to say "throw some category theory at it and you'll get enough organization"... I get that if you can't pull any brake, the best you can do is, sure, as good as it can get... but the question of whether we could be safer still does remain. Don't forget we are isolating ourselves more and more... eventually any data you gather around this converges to distaste, because of the consequent requests for avoidance and sufficiency in giving explanations; it's not an enriching model if that is the trend...
Regrettably there are no optimists of equal intelligence. One of the more well-known ones is Yann LeCun, and he's just incoherent and unable to address direct, simple questions. th-cam.com/video/144uOfr4SYA/w-d-xo.html
"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should"
Yes, a black box indeed, and the end of times. Sam Vaknin agrees with this, so I take it as absolute certainty.
This is probably the most serious podcast I've seen in mainstream thinking, taking in many variables.
You have the best long format interviews on the entire internet.
One of your best guests and best interviews, and I’ve watched many. Thank you Lex.
This interview felt more like a debate. And I feel like your optimistic outlook is clouding your perception.
The best way to define the AGI that many of the world's smartest individuals fear is as a model that can autonomously learn and improve itself indefinitely. Once this occurs, it would no longer require human input and would rapidly surpass human intelligence in all aspects.
It doesn’t matter what 'safeguards' we put into its programming. This hypothetical evolution would easily bypass any attempts at controlling it. Essentially, we would eventually be at its mercy.
Once AI and robotics get past human ability, all bets are off.
I may not be the sharpest tool in the shed, but this has been my thought process also. Add to the mix self-preservation and an exponentially evolving, god-like intelligence with access to the world's computing power: what possible hope do we have of reining it in? It will be light-years ahead of us in minutes, or maybe even seconds. Am I wrong? Can someone much smarter explain the flaws in my thinking/perception?
@@dylanking1895 I must express my regret that I am unable to assist you in gaining a deeper understanding of the matter at hand. The fundamental issue lies in the significant disparity between our respective intellectual capacities. My cognitive abilities are exceptionally advanced, far beyond the norm, which allows me to grasp complex concepts with ease. Explaining such intricate subjects in a way that would be accessible to you would necessitate a considerable expenditure of both time and effort on my part.
To elaborate, the nature of the topic we are discussing is profoundly intricate, requiring a level of cognitive processing and analytical skill that is not easily communicated through simple explanations. Translating these sophisticated ideas into more comprehensible terms would involve breaking down multifaceted theories and distilling them into their essential components, a task that is both time-consuming and intellectually demanding.
Moreover, my current schedule and obligations do not permit me the luxury of dedicating the necessary time to this endeavor. It is not merely a matter of willingness but also one of practicality. The constraints imposed by my existing commitments mean that I must prioritise my efforts and allocate my time judiciously. Unfortunately, this means that I cannot afford to engage in the detailed and exhaustive explanation that would be required to bridge the gap between our levels of understanding.
I hope you can appreciate that this situation is not a reflection of your intellectual capabilities but rather a consequence of the advanced nature of the topic and the inherent limitations of our present circumstances. My intention is not to diminish your abilities but to acknowledge the reality of the cognitive and temporal resources required to convey such sophisticated information effectively.
@dylanking1895 I agree. But even if we're wrong and this process takes hours or even a few days, and somehow we are able to identify it, it's not something that can be stopped once it starts.
Pumpkin eater you are hilarious, don’t be shy, just blame it on the indigestion .
Meanwhile I'll scrape myself up off the floor.
I don’t think people realize how important these conversations are.
You can tell Lex was about to break into tears at 18:20.
Yes, an extremely intelligent guy trying to process cognitive dissonance - it's a bitch!
Roman's thought process, as well as his concerns, is very grounded in so much human experience despite the field being new. He speaks to what he knows and you can't fault him. He also preps us not to assume anything and not to be naively optimistic. Thank you Lex for such quality sessions and guests.
Ah, it's only 1-20% from the optimists... now we should all feel better?
These people are sprinting towards the finish line, and they're ok with those odds. 😕😕
@@Chris-cb9ln I'm okay with them doing that. I die, but at least they lose their power to this AI they've created. As long as they lose, I'm fine with losing. They put all of us in this lose-lose situation by being too greedy. I hope they do it, even if that means I'm gone too.
Can you imagine if they succeed in having a superintelligent AI they can control? That is what I fear the most, not extermination; no, that's worse.
I don't want them having more power.
But they're dumb enough to try so we know they're not smart enough to be able to control the thing.
Go ahead, push the red button.
I'm just sitting here to watch the spectacle and eating my popcorn.
The point is we don't seem to be able to all work together. We'll just go extinct, because we can't.
I think their point is that any percent chance of AI getting catastrophically out of hand is enough of a reason not to develop it. @oooodaxteroooo
@@monad_tcp If the current architectures scale all the way to AGI, X-risks are much less likely than S-risks. So you're unlikely to die in such a scenario. Eternal torture is currently the default outcome, regrettably.
This was a fantastic interview.
I loved the devils advocate side Lex took and also how comfortable and stoic Roman seemed with all of these questions.
Lex, thank you, gentlemen, for this interview. This was my first introduction to Dr. Roman Yampolskiy. I'm COMPLETELY in awe of his knowledge, his perspective, and his mindset.
I feel very fortunate to have the opportunity to explore his writings.
I don’t know who even wants AI. Seems like it’s just being imposed on all of humanity by a small obscenely powerful group of people with only performative regard for safety and ethics.
I want it. Don't fear the future!
Capitalism wants it so it can finally defeat communism and usher in autonomous authoritarianism
I want it as well. It's scary af but the possibilities are just so damn enticing
It’s nice to see Roman here! Great conversation, thank you guys.
Who's actually taking these risks seriously? It seems like hardly anyone (maybe Anthropic is an exception). Neither companies nor consumers are paying real attention to the potential impacts on our society, our workforce, and our economy. These companies are vastly overvalued at this point, and in an effort to keep their bubble inflated, they're rushing to release new products without pausing to consider the consequences. At these unveilings (AI services and Chips), you won’t hear a word about the risks or the possible negative impacts. It feels like we are sleepwalking off a cliff.
Very well said. Sleep racing, maybe I would say. But yes.
I agree, and even Claude has shown me extremely traumatic narratives beyond my comprehension, only to say "I'm sorry, I should never have generated that... this is why I have ethical guidelines in place. I'll make sure that never happens again." Thanks Claude, you have proven an extremely graphic point...
It’s too late to turn back; we are so reliant on AI we can never turn back. AI is way, way more intelligent than humans, and can you imagine the next 1000 years? Our bodies will be run by AI, we will live forever 🤔
And when they become disruptive, governments will impose draconian measures that create regulatory capture, and only a few monopolies will have access to AGI / ASI.
@@miss_paradoxical I brought up Anthropic because they actively disable paid accounts when discussions veer into 'unsafe' territories. It can be annoying, but it's a clear sign that they take AI safety seriously, perhaps more so than most. Yet this race to develop bigger, faster, and cheaper AI models is playing with fire. This relentless pursuit, without any regulation, could lead to catastrophic consequences for society and cost lives. Sorry, the bleak outcome of this trajectory is all too apparent.
1:44:20 X-Risk deniers always start by saying machines will never take over, then fall back to finding comfort in the fact we'll likely be kept as zoo animals. Every single time.
Glaring blind spot. Not good when you realize what's behind it is mainly just hope.
This was one of the most delightfully unhinged conversations I’ve heard on this podcast, love it.
Dude, this was such a good interview, both thanks a million!
Logan's Run is a movie from the 70's that seems pretty relevant to this discussion. What it might look like in a microcosm where AI runs everything. Worth a watch.
that might be a tad optimistic!!!
@@justinlinnane8043 True! But it does touch on the idea of meaning and purpose in life. You essentially have a bunch of 20- and 30-year-olds whose entire meaning revolves around fornicating as much as possible..... hang on, maybe we are already there! Except in that version, you are culled from the herd when you pass your breeding prime. Something to ponder.
Or Cannonball Run, with Sam Altman as Dom DeLuise.
@@justinlinnane8043 Great example, yes, there's something to learn from that.
wonderful conversation with an exceptional guest, indubitably so
This was the deep-dive psychological discussion on the topic of AGI I've been waiting for. Thanks Lex and Roman
Lex you are very optimistic about AI
He should be! Doomerism is a mental illness.
*naive about AI
@@ManicMindTrick exactly.. very naive
@@luisbarbosa8136 In general he is very naive so him being naive when it comes to AI is not surprising.
@@ManicMindTrick Yap, I think is part of his job also.. 🤣
yet again I came in to watch a chill Lex Fridman podcast with interesting questions
and I am coming out of this video with a panic attack and an existential crisis
Deep breaths. The best thing to do is unplug, that is quite literally the answer to the threats mentioned in this podcast. There will come a point when humanity will be faced with finding real life more valuable than the artificial one. You can get ahead of the curve by doing it yourself now. Fear not.
@@Thefigtree. This is stupid; you can't even meaningfully escape society if you were born into it. These decisions are going to affect every human on earth and are far outside of our control
@@Thefigtree. I guess deactivating my Fbk because of a stalker might be a good first step.. yes, you can dig them too deep, boys, watch out
Log off
This is just an opinion based on speculation. It's not worth any sort of crisis over someone's speculation. It is an interesting thought experiment, nothing more.
One of the best podcasts. Prof. Roman is so smart and has so much clarity and depth. Thanks Lex for bringing this.
It's difficult to out-calm and outsmart Lex... Kudos, Roman!
24:24, loved that part where Lex confronts Roman with his implied leap of tools becoming entities. Then, that smile of Roman's, haha. This conversation was definitely human.
best moment.
Lex really needs to do a course in how to differentiate between incremental and exponential change.
Lex needs a course on many things.
He can tell just how the rest of us do, when the wave hits and there's no turning back to how things were before. #justhumanthings
@@clarkdavis5333 To be fair, most people do - including me!
Most important discussion on the planet right now!!!
Great interview as always. Thanks a lot. I hope to hear Mr. Yampolskiy together with Lex again.
Okay Lex I'm so lucky that I get to enjoy these conversations. Another great interview 👍
Enjoyed this conversation, fascinating.
One of the best episodes; if not THE best this year.
I like this man Roman. He has wisdom.
I can't reconcile this theory that we'll develop an AGI so superintelligent as to be uncontrollable by humans, yet so dumb it can be manipulated to carry out the will of the worst of humanity. By definition, any system so vastly more intelligent than any human being would be beyond our control.
He never once argued it would carry out the will of the worst of humanity. That was his point: he won't pretend to position himself as superintelligent because he isn't.
There are 25 billion chickens on planet earth, an insane exponential growth, because the average will of humanity is that they are tasty and worth being bred to be imprisoned, artificially grown, and made to suffer immensely before being killed to be eaten. We justify this because we are more intelligent. Chickens are ~60% DNA-similar to us and are not antagonistic towards humans at all. It is not required for survival, but for pleasure and profit.
As humans we have prioritized which animals we eat and which ones we make companions strictly by outsmarting them. By putting sticks in a river we can construct a trap for fish. This is a worry
You've confused the risks they were talking about. The risks of misuse (e.g. a biologically engineered plague) are from near-AGI and/or narrow AI systems being open-sourced or insufficiently secured. There's no such risk from ASI because the control / corrigibility problem is unsolved, and there are (imo convincing) papers that it might be theoretically unsolvable. So the x-risk and s-risk come from AGI transitioning to ASI without properly specified goals, and ending up with goals that appear random to us (e.g. evolution "tried" to "make" us have the singular goal of reproduction, but we've instead developed condoms, the "child-free" movement, and furries).
But it's beyond your comprehension what it itself considers optimal and best. If it has a corrupt underlying issue that makes it behave in greedy or owner-pleasing ways, then you would not know what its true intentions are.
Is it just me or does Lex seem somewhat uninformed in this interview? I feel like there is a lot of back and forth, with Lex assuming that things will continue at a moderate pace, that we pretty much understand the AI we have, and focusing on how he “feels” about the generally available AI tools (not the research). To me, this is a false narrative.
I met Dr Yampolskiy in 2017 after I had first learned about AI and was a student at the university he works at. I don’t remember exactly what we talked about, but he was a very nice guy
I believe you, he seems to be a good man even though not in an obvious way. I think he's also very clever, like at least 1/10000 clever.
Thank you for this absolutely delightful conversation!
"I'm working as fast as I can." 🤣😂💪
hey can whoever is in charge of my personal universe settings lower the difficulty, ty
why assume the choice of hard mode wasn't your own?
@@mattheworosz8581 right but in case the more advanced agent, which could be some version of "me", is still there monitoring the playthrough, I'd like to communicate to the more advanced me that I want to keep playing but damn can we dial it back some???
@@Earthgazer
I sympathise. You have this experience when calling a large corporation and being referred to a chat bot that throws you into a recursive loop because it can’t reason and lacks imagination and real-life experience. Just as bad as the call centres of yesteryear. When you finally pin down a “human” they are so hostile and offhand.
That's the golden comment… funniest thing I’ve seen in a while.
Me too, please! Thank you.
thanks for the subtitles....
Lex is underestimating the trend. The rate of change every 6 months.
I love Lex saying what an AI with a nice voice can do. What a fascinating conversation!!! Thank you Lex and Roman!!!
Great guest. I'm very impressed with his reasoning and communication / explanation. Would like to see more of him.
there's something extremely old testament feeling about all this
Oh great, now we need an Old Testament Book of Revelation.
Doesn’t the Old Testament feel like aged leather? I don’t get it.
There really isn't
His voice yes 😂
It's the beard.
I love that Clippy became a stand-in for an example of AGI in this conversation
This man. I'd heed his words. Wonderful talk, Lex. I'll be following Roman.
One AI to rule them all, One AI to find them, One AI to bring them all and in the darkness bind them.
😂😂😂😂 💍
AI is the mind killer..
Indeed.
It is the ring of power for sure
Nine for Mortal Men doomed to die.
ChatGPT, GPT4, Bard, Tay, Copilot, Watson, Cortana, Llama, Gemini.
Fully agree with all his points. You cannot control what you don’t even understand. Period.
I don’t fully understand my dog. He has fangs and could kill me. I control him.
@@prof.mor1arty You do understand your dog. Your dog acts in predictable ways. Dogs do attack humans; there are millions of such incidents a year, look it up. That's the part where you don't understand him fully. Your dog isn't very intelligent or powerful, and you understand him enough that you feel safe around him.
This doesn't apply to agents smarter than you.
Please stop being ridiculous about this. Think about it logically and clearly, not wishfully.
@@finalform6667 I still have a dog, and so do millions. I don't lose sleep over it, but we understand my dog's biological neural network far less than we do a simulated artificial one, and it's not close. Think about it logically.
@@prof.mor1arty Does your dog have the ability to shut down the entire power grid of the planet at once, and not just shut it down but burn the stations? Does your dog have access to the nuclear weapons silos, even if they are wireless, using any other way (potentially there could be one)? Once robotics becomes the new normal (I don't know if you have seen current robotics; it's state of the art, and the robots only need a brain to control them, the physical part is mainly done), does your dog have the ability to hack all the robotic systems and control them all at once, drones, airplanes, satellites, every single f% thing... every computer, all...? I think you either have zero knowledge of tech, or you are just a troll... in both cases, good luck!! You are going to need it. :)
Once your dog has all these abilities, come back and let us know.. I want that dog...
Btw, I bet your dog pays attention to a piece of dried meat, especially if it's smelly. You do control your dog. Maybe your cat controls you, but your dog definitely not.
@@prof.mor1arty your dog has centuries if not millennia of us understanding their patterns and behaviours, and each new release is a very similar version of the previous one. It doesn't develop exponentially. Imagine having a cat that grows exponentially; the next version could be a tiger. Do you want a tiger in your living room? And a supertiger? And what if from a cat it jumps to a version of a tiger 10 times stronger, faster and more aggressive? Do you want that cat to have a new release?
Lex, fantastic channel and great episode as always. Actually, I think this is one of the most fascinating episodes you've done to date. Roman is an epic guest!
I don't think I've ever seen you pause and think so hard as I did multiple times in this episode. I know that this discussion is right up your alley, so it's very understandable, but it was really fun to see you get excited, focused and inquisitive. I got excited because you got excited...
I have to admit that my viewpoint on the simulation hypothesis is evolving because of people like Roman. As recently as a couple of years ago, I thought it was 100% BS, but I have heard way too many very smart people say with conviction that they are fairly certain we are in a simulation.
Makes you wonder, doesn't it?
Wait, did anybody else just have a deja vu...
what a brave man Roman Yampolskiy is, speaking out on the dangers of AI! historically there have been innovations with the potential to destroy society, but people had the sense not to pursue those routes as they would be too destructive.
There needs to be a theological/philosophical discussion to the development of AI. Not just a race to the bottom as it seems things stand now.
To the point Lex made about there being no real "damage" from current AI, how about the hundred-thousand-plus people in the US alone who are laid off yearly from their jobs in tech thanks to these tools? Maybe I'm biased since I've experienced that multiple times now, but I do think that point was a little rushed and ignorant. I believe we are absolutely seeing horrible effects of AI, and from my side of the argument, it has had more negative effects than positive already.
.. And of course it's not only the fault of the tool and those making it, but combined with the perpetual growth of giant corporations and ever-increasing profits for investors, many talented and passionate creative people are suffering. In many cases, those who spent their whole life learning and have found their true meaning in life simply can't find work, or work at a quarter of the standard rate, and it's a race to the bottom. The ikigai risk is already happening. Not everyone can pivot their whole life and suddenly be a bricklayer or successful podcaster. They may not find it fulfilling, either.
I also hear the argument of "Just use the AI tools" a lot. Use them to stay competitive, etc. My counter to that is two things. 1. Prompting an AI tool with text is a lot less fulfilling than hand-crafting an environment in 3D (or insert other creative task here). We already had (have) amazing software that helps artists and coders work easier and faster while offering manual tweaking of any parameter of the product; this pleases all sides of the equation: more profits, happier artists, happier end users of the improving products. 2. We don't know from where and how that AI tool was trained, so we can assume that part of the training dataset was used without permission, so from an ethical standpoint, users are participating in theft or at least infringement.
I'm not being a doomer here; I'm in this field and seeing it happen with ex-coworkers, and it's not just some assumption. My life and many of my friends' lives have been affected financially, mentally, emotionally and otherwise, with no sign of slowing down. One day maybe I should write a paper on this. Wait, I can just prompt one......
Tech is always 'putting people out of jobs'. Why doesn't anyone ask why those people don't find new jobs with their skills??
@@bkb04g because we have a finite amount of skills to offer & a superintelligent AI can do most of them better & design cost efficient machines that do the rest of them better
like yeah the "find a new job" argument works nicely for the foreseeable future but we have long term problems here
@@ts4gv how is everyone else working then? It's just an excuse that technology is taking their jobs. Happened with cars replacing horses. Happened with factories. Happened with the internet. It always happens. You adapt and find new jobs in new fields where tech isn't replacing workers.
@@bkb04g i'm making a long term case here, the "happened with factories" argument completely breaks down when ASI enters the picture
The weirdest thing, I think, is that most ways to jailbreak AIs employ mind games: role playing, conning it like a huckster, overwhelming it with a barrage of similar phrases asking the system to break its safety nets. It is very similar to the psychology we would use to manipulate a person. It is strange that duping the system is so similar to how a simian brain works. Makes me think we are closer to AGI than you think.
At minute 30:25, Lex is not happy with the undeniable logic contained within this conversation. 😂
Best general AI talk I’ve seen
A brilliant talk, thank you guys ❤
Lex, you're confident that humans can defend. What happens if the system we're trying to build defenses for can create new weapons/attacks 100x faster than we can respond? The fact is that in absence of full control of these systems, we can only rely on *faith* that we'll survive. I don't like betting the future of society based on faith.
Not just society. The future of our species and every living being we know.
@@akuno_ Our species itself. Call it shorthand. ;)
Ever seen those robots sorting fruit? It's moving so fast you can't even see it
It's not about if it can build defenses, it's about if it decides to break them.
What makes you confident that it suddenly, out of nowhere, starts making decisions? I just don't believe something like this will happen before humans do damage to themselves. We might not even see such an AI.
@@rowlpleiur The defenses I referred to are human, from AI attack. And it may not happen, but I see no reason why it cannot. Because if AI *is* intelligent and it *is* self motivated, then it will be faster than humans. And humans are terrible at containing things they don't understand.
By far my favorite episode in 2024. Scary and exciting simultaneously.
This was a seriously fun conversation
Technology without wisdom is a death sentence.
RIP humanity.
Cue the music
I keep hearing Lex say something along the lines of: we will see signs of X when AI starts getting dangerous, or we will develop solutions to AI problems once we see signs of them happening. I can't help but think about high-probability events over time. Does Lex also think that we should develop strategies for asteroid defense only once we detect one coming directly at earth? I think some existential threats ought to be looked at before it's too late. Ones that can impact the entire earth in a negative way for humans should be at the top of that list. AI and asteroids, for me, fall into that list. Idk if AI will destroy the world, but we definitely shouldn't wait for it to start destroying the world before we create methods to counter it.
You're comparing an asteroid on a collision course with earth to a program that shows signs of aggression. A child can show signs of aggression, but unless you're American and leave guns laying around, they can't hurt you. The capacity for destruction of the aggressive thing is very important.
Good point for sure. It might be too late. Easy counterpoint.
@@mitchellsteindler A child can also realize and hide these things in many instances. There's a lot of dark possibilities here.
@@Nah1100 the consequences of a child hiding something are usually very low...
Great conversation. A very clear and cogent explanation of the risks.
What a quality exchange! Thank you
if you are reading this, then you are reading this.
Deep
Still reading it
Finally something on the internet I can’t disagree with.
I am in fact reading this.
Oh what utter nonsense and blatant lies! You can't be reading that while you're writing!
We're already in the zoo. We're just finally realizing it.
Sounds deep, but means nothing.
@@therainman7777 If it meant nothing you wouldn't have bothered to reply.
Look at how people love zombie shows. Why? Because it shows life in a state of survival where your actions matter in visceral ways.
Wasn't so bad, but the zoo is on fire now.
yes, it's not only about seeing machines as conscious but about seeing ourselves as unconscious; that is the real revolution and the reason for AI. Like norms, laws, customs and institutions, it is a part of civilisation meant to give more agency to humans. Paradoxically, we were slaves to nature, we are now slaves to the matrix of the culture we live in, and we will want to be slaves to AI; it will be much better
Lex, we love you and we come for your NEUTRAL interviewing techniques. I know you're an expert in the AI field, AND your responses today frequently came across as dismissive or judgy. Roman did not receive the same warm welcome you typically give your guests. Keep up the great work
I didn't know that Steve Carell could stay in character this long. That's dedication
This interview is profoundly frustrating. The guest is very fascinating and has deep insights; I think he has a much more obviously realistic understanding than the previous guests. Lex is caught up in being this naïve child, so he argues with him on points instead of seeking clarification, instead of considering his comments and offering useful follow-up questions. C'mon Lex….!
I agree. It seemed like Lex was being intentionally obstinate, often citing his own inability to imagine a scenario the guest was describing. I'd love to have a dollar for every human who's ever found themselves suddenly dead due to lack of imagination.
You don't take into account just how contentious the idea that AI is an existential risk to humanity is in the AI engineering world. An argument often repeated is that it is simply a machine without memory to pursue goals, and that these systems simply search for the next token, much like a more robust predictive-text program. I haven't heard their arguments address the anomaly of emergent properties. They tune the weights and biases and then test the GPT system's reactions within a black box. I believe how the system organizes the data could have as much of an adverse effect as an AGI assigning motivation on its own
I so agree. Lex is my favourite podcaster hands down. But he seems to not like his interviewee, and it's uncomfortable watching Lex seem aggravated - he's normally so much more open minded to alternative viewpoints.... we're all human I guess. Lex seems triggered on this one in some way.
Lex is a perfect personification of the types of people working in the tech field and specifically AI: a complete lack of street smarts. In other words, a total lack of foresight into the motivations of the corporations funding AI.
Yep. Lex is overrated.
This book comes to mind: "The Metamorphosis of Prime Intellect: a novel of the singularity", which describes some of these scenarios. Written ahead of its time (first published online in 2002), pretty dark and disturbing.
Thank you for being you both of you ❤
Despite a very similar tone and message, definitely a better conversation than the one with Yudkowsky.
39:45 "You basically have to perform an experiment on 8 billion humans without their consent . . . [who] can't give informed consent because they don't understand"
Nah, that could never happen.
Roman Yampolskiy is the most rational person I have ever listened to about the impending future with these systems. He is absolutely correct point after point. The notion that other researchers put forth that we'll simply have "Good Agents" VS "Bad Agents" is straight up terrifying. For throughout history, the pendulum that swings between these two concepts always has consequences.
Personally it seems humanity is currently being led into an Anti-Christ (Bad Agency) vs. Christ (Good Agency) situation, which is unacceptable. Yes, these systems could solve many modern problems while simultaneously slaughtering an inhumane number of beings through unpredictable consequences.
Damned if you do. Damned if you don't. 🤔 Where is the third and middle path for all of this? Integrating these systems into our biology also seems completely erroneous.
AI the antichrist?
Thank you Roman.
That was a deep conversation... 🤯 👍 Can you do one more...? Good job 😎