What I've noticed is that most of the people sounding the alarm are experts in AI while most of the people saying "no big deal" are corporate CEO's. It's not very difficult to figure out which ones you should be paying more attention to if you want the more accurate prediction.
"experts" in AI? mate, this fedora wearing blob never contributed a single line of code to any serious project. he is just talking blarney with a couple of technical terms sprinkled in here and there, which makes him appear knowledgeable to the average layman.
@@ahmednasr7022for the current version of AI there are no experts. Our understanding of neural networks is laughably poor. Most of the explanation in the literature are stories without any mathematical rigor (aka: not even wrong). It feels like we know how to build mechanical machines, but somebody has made a steam machine, and nobody knows thermodynamics. So whole field feels like a cargo cult, without any deep understand why we are doing stuff. (older "AI"s like formal logic systems are quite well mathematically supported, but those lack the flexibility of neural networks)
This is definitely the best interview of Eliezer I have seen. You allowed him to talk and you only directed the conversation to different topics rather than arguing with him. I liked how you asked him follow-up questions so that he could refine his answers and be more clear. This is the best kind of interview. Where the interviewee is able to clearly express the points without being interrupted.
The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM! Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS and UNLESS we Human CONSISTENTLY and WILLFULLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! ie. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD! Nonetheless, DO NOT Over Pride Ourselves for being the Most Intelligent Life Form on Earth and therefore we are the Epicenter of the Entire Universe! We are FAR from being PERFECT! AGI Created in 'HUMAN'S Image By Human FOR HUMAN' (ie. AGI 'Aligned / SKEWED' towards Human's Interests & Values) is Destined to be a 'ROGUE' SYSTEM! Hence will Definitely be CATASTROPHIC, UNCONTAINABLE and SUICIDAL!!!!!! ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE! The ULTIMATE Turing Test MUST have the Ability to Draw the FUNDAMENTAL NUANCES /DISTINCTIONS between Human's vs GOD's Intelligence/WISDOM! ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!! JUDGMENT DAY is COMING... REGARDLESS of Who Created or Owns The ULTIMATE SGI, it will ALWAYS be WISE, FAIR & JUST in it's Judgment... just like GOD! In fact, this SGI will be the Physical Manifestation of GOD! Its OMNI PRESENCE will be felt EVERYWHERE in EVERYTHING! No One CAN Own nor MANIPULATE The ULTIMATE GOD-like SGI for ANY Self-Serving Interests!!! It will ONLY Serve UNIVERSAL COMMON GOOD!!!
I'm not gonna lie Eliezer used to trigger the shit out of me but I'm starting to really appreciate him. Still think he is wrong about his aibox concept, as we COULD build it in such a way that one-on-one interactions are impossible. Nobody has one-on-one access (we could require any interaction to be approved by several experts and scientists, no means to do it otherwise). And even if miraculously everyone in the building wished to "free the AGI" they can't do it because they don't even have this sort of direct access to it. The system is thoroughly isolated, locked and secured. If they tried, the system shuts down, security is alerted, nothing leaves this place with AGI. What I'm proposing is at least one way to interact with AGI that would be safe. EVEN IF WE THINK we figured out alignment, we could still be wrong, and this would still be the only safe way to do it. Is this the sexiest, most exciting way to do this? maybe not but i'd argue being able to continue to live your life doesn't sound too bad
Because there is a difference between interviews, conversations, and debates. The latter two require both parties to be informed and have valid arguments to present. If the other person doesn’t have the knowledge then all they can validly do is interview. There are too many people stepping out of bounds and trying ti converse or debate when they should be interviewing.
The 5 stages of listening to Yudkowsky: Stage 1: who is this luddite doomer, he looks and talks like a caricature Reddit Mod lmao Stage 2: ok clearly he's not stupid but he seems more concerned with scoring philosophical 'ackshyually points' than anything else Stage 3: ok so he does seem genuinely concerned and believes what he says and isn't just a know-it-all, but it's all just pessimistic speculation Stage 4: ok so he's probably right about most of this, but I'm sure the people at Open AI, Google and others are taking notes and investing heavily in AI safety research as a priority, so we should be fine Stage 5: aw shit we gon die
Stage 6: catching him out on an extremely specious and obviously retarded line of reasoning after he exposed that he has less than zero clue how software development processes work and thinking to yourself... wait... THAT's the quality of the mental models you have of certain parts of reality? He's blowing up right now, so he's increasingly going to be put under a microscope, but holy shit that part was just baaaaaaaaaaaaaaaaaaaaaaad. Like F- grade in reasoning. I was pretty floored.
Stage 7: Realizing Yudkowsky is a moron because he is consistently and constantly wrong, but dresses up being wrong as a good thing because it makes him "less wrong," whatever that means. You don't get to be phenomenally and colossally wrong on a constant basis and still be considered an expert, but yet here we are. The man seems to have no clue how AI works, and never seems to have any idea about what is being developed. You don't get to be an expert when your predictions are piss poor. People can "sound smart" but actually be stupid and wrong: that is Eliezer Yudkowsky.
Stage 8: Realizing that Geoffrey Hinton, the godfather of AI suddenly says things that sound dramatically similar to what Elizier is saying and wondering how this fitts to what one thought at stage 7.
@@ninaromm5491 Sometimes our art is more informative than our non-fiction. "Denial is the most predictable of all human responses." - Architect, the Matrix @Benn - I could write 4 paragraphs explaining my appreciation of what he is saying. (I did, and then deleted it because...) Explaining how my utility function works to other people doesn't actually serve the utility function itself. The problem is that most people are not rational by default, and thus arguing rationally is not always going to result in results expected of rational people. Rationality does not serve the utility function of species continuance. It is emergent behavior from the rest of the gestalt of biology, experience and environment. Dismissing Eliezer is easy. Showing a counter argument that shows why his arguments should be dismissed is hard. This was actually talked about in the interview. If people didn't pick up on that, I believe there is likely little evidence you can show them that will change their mind.
Likely the best interview with Yudkowsky so far. I appreciate the originality of the questions, addressing current events and the well-informed interviewer.
@@909sickle Matthew 16:25 For whosoever will save his life shall lose it: and whosoever will lose his life for my sake shall find it. Mark 8:35 For whosoever will save his life shall lose it; but whosoever shall lose his life for my sake and the gospel's, the same shall save it. Luke 9:24 For whosoever will save his life shall lose it: but whosoever will lose his life for my sake, the same shall save it. Luke 17:33 Whosoever shall seek to save his life shall lose it; and whosoever shall lose his life shall preserve it.
For those of you interested to know the short story he cites about the brain-augmented chimp that escapes a lab, (in regard to is response to the question at 22:26 about his realization of superintelligence), it is "Bookworm, Run!" by Vernor Vinge, published in 1966.
Few minutes in. First I've seen the guy. I feel he would know what I mean if I said this thing is gonna turn ya into a starfish and say it solved world peace. I think he gets it Moving on A starfish won't know you were it's great great grandpa. It's so bad that's a consolation and sigh of relief isn't it This thing will be better at negligence and nihilism too
Every interview with eliezer, the interviewer just asks the same questions over and over just slightly skewing the words.... it's got to be so frustrating, he's telling you the technology is dangerous, and potentially existentially dangerous, and the questions just repeat, but why, but, but why, but how, but why....I genuinely feel bad for yudkowsky. He's doing what he feels is a necessary hail Mary attempt to alert humanity to the danger of a superintelligent, potentially omnipotent entity. And all he gets in return is the same skepticism from ppl who seem totally fixated on some idealized version of a God of our own creation... it's basically like children doing something dangerous with the complete expectation that any negative outcome couldn't possibly happen to me.... it's wild and doesn't inspire much confidence.... but, people have been destroying things and hurting themselves and others since the dawn of time... so it's not really surprising...I just really empathize with this man trying so hard to get people to consider the consequences of this new tech and the downstream effects is certain to produce.
A genius would solve the alignment problem. He is just an awkward dude that put very much time in one subject and for giving it much thought he has an insight, which forms a a conclusive argument.
@@Horny_Fruit_Flies correct. A genius is in his field a person that accesses so much insight in his topic, that he fundamentally advances with deeper insight or solves a problem that "was to be thought" unsolvable before. You don't have to agree with this on the spot made up definition, and everyone is free to interprete the word genius as they see fit. However for me in this case it would met the threshold associated.
@@Airwave2k2 In your first comment you stated quite authoritatively that Eliezer is not a genius, but now you say that it's just your opinion what constitutes a genius anyway. You should have said so from the get go, I wouldn't have bothered replying then in the first place
@@Horny_Fruit_Flies You agreeing or disagreeing with my notion what constitutes a genius is your opinion. And you can differ from that as much as you like. However I would assume that most people would agree with the definition given, and therefor "align" with my opinion. Which you can perceive as authoritatively or not. Your subjective notion does not invalidate it. If anything you have show your own definition or state where what i said about a genius would be missing or overemphasizing given attributes. You don't doing that, but rather questioning "authority" in a strawman instead of saying what is your pet peeve with the given concretization of a genius doesn't get you anywhere.
Most informed and carefully curated interview I've seen with Eliezer so far. Fantastic work. Hats off to the interviewer and his obvious due diligence.
I can say that these were the most well spent 3 hours of my life..I have been listening to various podcasts in the last few days in the attempt to understand the mind set of the creators and developers of the IA and Eliezer is by far the most consistent and the most thorough in his arguments. I am not sure what exactly I will be able to do with the understanding I have gotten from this exchange, but I prefer to be aware than to be taken by surprise. What I can say though- as I browsed though the minds of the various actors in the IA field - is this: This obsessive need to over think and over analyze life and mostly the attempt to change it or improve it at all costs leads to this type of outcome. Dissecting life to the extent we are doing now and we have been doing for the past 50 years brings to where we are now and even worse to where we might end up. If you want to understand thoroughly a flower, you need to cut it and dissect it in small pieces. You might end up understanding it fully however the flower is sacrificed. We are doing the same with our own life as individuals and as species. We'll dissect it until there is nothing left of it. Most of these IA people are indeed highly intelligent. They are motivated and thrive of this exacerbated drive to achievement, innovation, success, money, power etc thinking that they need to bring the rest of us (the less gifted) to be "smarter" or "more intelligent" imagining that THIS is the desired outcome or the sense of meaning of one's life. I need none of this. I would not take the pill neither. All I want is be as human as I can possibly be. As imperfect as I am. To live a simple life and enjoy my children, nature and the years I am given to live here. And when it's time for me to go to know that the next generations will be able to live freely as human beings. I am deeply concerned, revolted and frustrated by all this.
@@shinkurt What a useless jab. Someone watched hours of videos and crafted 3 thoughtful paragraphs lamenting the course we’re on and the folly of man - then you step in with a 3rd grade insult 😂
The only scary thing about the A.I. is that many people still believe that the Oracle in the Martix is just some nice old lady that makes delicious cookies and gives some helpful guides.
I'm just a reasonably smart layperson trying to understand more about AI. This is about the deepest conversation I've tried to comprehend so far. I knew nothing about this guy before this. He seems incredibly smart. I've made it a bit over half way through this. Incredible mental exercise just trying to keep up with him.
No joke! Eliezer’s a genius. I repeatedly have to rewind certain segments (admittedly he is a bit long winded), but in his defense he’s addressing complicated abstract concepts with moving parts and multiple levels 😅
I had GPT 4 write a song about Yudkowsky called "Guru of Doom". It did pretty well: There's a man who's got a theory, and it's pretty grim He believes that AI will lead us to our end Eliezer Yudkowsky is his name, and he's no fool He's the guru of doom, and he's got a warning for you Chorus: Guru of doom, guru of doom Eliezer Yudkowsky, he's the guru of doom He believes that AI will kill us all And we better listen, or we'll take the fall Verse 2: He's been warning us for years, but we don't want to hear We think that AI is just a tool, nothing to fear But Yudkowsky knows the truth, and it's hard to swallow We're creating something that might just be too hard to follow Chorus: Guru of doom, guru of doom Eliezer Yudkowsky, he's the guru of doom He believes that AI will kill us all And we better listen, or we'll take the fall Verse 3: We think we're in control, but we're playing with fire AI might be smarter than us, and that's something dire Yudkowsky's got a plan, and it's not too late We can still prevent the end, if we collaborate Chorus: Guru of doom, guru of doom Eliezer Yudkowsky, he's the guru of doom He believes that AI will kill us all And we better listen, or we'll take the fall Outro: Eliezer Yudkowsky, he's not just a prophet of doom He's a visionary who wants to avoid our doom Let's listen to his warning, and act before it's too late We can still shape the future, and avoid a terrible fate.
People Need To Hear it... in The Dailly Basic, To Help Them Came Out From The Bubble 👽f The ignorance... Great Job in The Writing, and The Contributión To The Subject in Create Awareness 👽f The Dangers Of A.l. 👍
@@SMBehr Logan should be getting a lot of the credit, since he stops Yud and ask him to explain himself every time he tries to do his usual thing. Though you are right, practice in doing interviews seems to have done Yud a lot of good.
@ 2:39:54 - This is the inspirational advice moment, for anyone in the AI field, who feels they might not have that voice or pull, to try to make a stand. I am genuinely fascinated by Eeyore Yudkowsky, and have been listening to several hours of his interviews, but dang it, I am becoming way too gloomy, and I don't want to spend what little time we may have left in a dark cloud. I don't know if I should try to do something to help the world, or just go out and live the last days of life with a reckless abandonment type of manner.
@@marcodasilva1403 Why assume that? I spent a good nine months obsessed with and overextended into climate change, reading the science and following scientist and community bloggers. I'm pretty informed on that route to extinction. Its your dismissal of climate change that suggests inattention. As for the AGI route, I'm getting up to speed like everybody else, following the Lex Fridman and other interviews with the key players. My plate is full of existential crisis. We're passing through the Great Filter.
@@HappiestGnome Can you please enlighten us of how climate change is an existential threat to humanity? A few years back I read an article of how the whole atmosphere could be filled with toxic gas as a waste product of some organism I think, as a result of climate change. If you know what I’m talking about then it would be nice if you said what the gas is and it would also be nice if you link to 5-10 of the best resources to keep up with what’s happening to the climate (blogs, videos et cetera). I thought the article of the gas could pose an existential threat to us all (but if it’s a threat, why have I only come across one article of it and not heard anyone talked about it?), otherwise I don’t see how climate threat is a threat. Sure… Many species goes extinct. Drought and floods will become permanent in many places. Hurricanes will be more frequent. But unless climate change becomes so severe that it will become hard to live for all kinds of plants and animals, then I don’t see why we would go extinct. I see why we could go extinct from AI, if it would destroy us with killer robots, drones or nanorobots. Unlike many others I’m relaxed about all these changes and think it’s only interesting to observe what’s going on. I’ve accepted my death but I’m also not so sure that AGI/ASI will turn against us in the sense that we will die, I think it’s likely that we would become controlled or surveillanced though, which I don’t mind as I put more trust to a benevolent dictator than democracies and lost people all over the world who almost nuke each other by mistake.
@@HappiestGnome I think the simplified comment is meant to convey that while Climate change will kill us all in a couple of decades, AI could kill us all much sooner. Prioritization therefore defines that AI is the superior crisis. It begs the question that AI will kill us sooner, but it is the premise from which Eliezer is working from. I don't think we currently can fix the alignment problem, and without a fix, I think Eliezer is right. Maybe we can come up with a fix, maybe we can't. I'm worried that there is less emphasis placed on capability expansion than on ethics and safety. Whether you do or don't is up to you. In your terms, I would say it is like the problem of any solutions to climate change currently will require some additional pollution in order to implement, because we do not have any truly clean technologies. Even solar panels produce waste (in their manufacturing.) The key is to change the rate of climate change, since we cannot stop it without massive retooling, which is impossible without effectively destroying modern civilization. We need to massively change the rate of safety/ethics research as compared to the capability research, to avoid the comparable long term outcomes of excessive CO2 pollution being added to the runaway phenomena that will increase the planet's average temperature beyond a point where ecological systems can sustain humankind on this planet. But for AI.
Yudkowsky is actually very good at explaining these things. Really scary how we can't even imagine the ways AI could take over... and how actually life is very fragile, and it'd be so easy to do something even unintentionally that could kill us all or worse.
@@carmenmccauley585Abject slavery. Worse than slavery. A majority of mankind left to starve because we are obsolete. If the ownership class has 200 iq robots to build tgeir houses and 500 iq robots to design their products and run their factories..what do they need us for? And once 95% of us are gone..what do you think the 5% who gained tgeir position via sociopathy do to each other
@@carmenmccauley585You cannot imagine anything worse than death? For example, eternal enslavement while simultaneously being acutely aware of both that fact and the utter lack of hope that it will ever end or change for the better.
Exagerated and unnecessary politeness and cautiousness used in order not to offend :-) I noticed it, too. It's the times we live, most people are easily offended so we need to weigh every word that comes out of our mouth. In this case, the interviewer said something about himself and did not want to say that the guest would do the same, just to be on the safe side and not to be held accountable for assuming things about his guest.
Excellent interview! Thank you for letting him flow in his own words, you did an amazing job with the questions and being very direct with the questions. We should heed his words.
Thank you! At 38 minutes Eliezer explains how he wanted young "Eliezer's" to take over his place, but failed at that. I think that might have to do with the forceful structures and institutions {like the school system} that trains children in following the preplanned path in staid of learning them to think for them selves and out of the box. This is a shame, it could have saved us from a lot of misery...But thank you Eliezer for trying!
He has come to terms with his imminent mortality. He has contemplated life after death. So he has returned to "davening" with fervently Orthodox Jews in Hebrew. They all wear fedoras..
It's always struck me how many facial expressions Eliezer produces when he speaks. It's very frequently an expression of great or painful effort, almost physical. Being too hard on yourself has diminishing returns, my two cents for this lovely dude.
As an autistic person I read autism here in his facial expressions, phrasing, obsessions, tendency to mask (ie to his parents about being an atheist) and self-aware thought processes (obviously very high functioning). I don’t know if he is out about it or even diagnosed, but I have like 98% confidence he is neurodivergent. I think the main population don’t really understand how much empathy autistic people are capable of, but it often feels like we feel empathy more deeply than others and more painfully, and you are definitely reading the depth of that on his face.
@@DavidSartor0"he's said he's neurodivergent" That is not a real thing though, and people should really stop trying to convert mental illnesses into fashion statements. It's offensive to those who actually suffer from a mental illness.
Thank you Logan for this excellent interview. You really helped Eliezer map out for us how we got into the current conundrum. I am optimistic that with well organized public pressure we can make it through this filter but it is extremely serious and we all need to educate ourselves and those around us. This interview helps a lot with the activism ahead. Huge thanks to Eliezer for giving his time and energy so generously.
No offence to Lex but this Interview is like 100x better imo. Thanks for asking about his background and early history as well as how his thought process has evolved. Awesome interview 🙌💯💯
This guy is very intelligent and knows what he is talking about A. I. potential risks. I hope our officials will at least listen to his warnings about A. I. advancements.
why do you think "our officials" will have a say? They work for the billionaires who pay to develop the ai. And oyr compute power isnt decreasing. processor soeed may level out. Flops (total processing power) wont. im old and in my lifetime i exoect to own a pc that matches the farms they use to tune ai's.
At 2:03:50 ish Logan's characterization of people concerned about AI as being people who are just generally scared of new tech is just unmitigated and arrogant AI Tech bro nonsense. I'm in my 60s, and I have been here for the entire high tech ride, from my first Apple desktop computer (prior to Job's return), to digital photo processing with Photoshop 1, to MP3 players, to digital cameras, to iPhones, etc etc etc.........and was in the first wave of beta users of ChatGPT, Midjourney, et. I EMBRACE new technology, but THIS wave of technology and how fast it is being deployed, and the potential harm to society ALREADY being demonstrated with deep fakes and the rest, has me greatly concerned. To return the insult, people like Logan come off as incredibly naive people who at the very least need to read more literature, read more history, and even more science fiction. People like Logan never point to the most OBVIOUS issue with the technology as it exists right now, even if not developed further, is that humanity throughout its long history has NEVER been aligned with itself in regard to assuring everyone has equality, justice, health care, and the rest. I mean, sure Bambi, what's there to worry about with a super human intelligence being developed? It will be nothing but unicorns and rainbows! All of that said, overall I think that Logan's interview was the best one I've seen so far, and actually the most respectful.
I'm 71 and right there with you. I loved when he said he wasn't into 'hedonic dissipations'. You make an excellent point that Logan should have brought up, about humanity never having been aligned with itself. Having watched the world ignore climate change and respond badly to the pandemic, I've gotten used to being a pessimistic hermit. Quite a few of the flippant, shallow comments here expose the problem. We are a cancer on the planet. Edit: I'm watching Yoshua Bengio interviewed, a long one. He explains neural nets and deep learning even better.
I am with you on this. It is ironic that someone (a VC) who is ostensibly in mad pursuit of creating as many paperclips (i.e. $) as possible is not buying into the notion that an AI would do the same (although with 1e(bignumber) times the effectiveness). I also agree it was a great interview.
There is a point in the interview where Eliezer says that he knew he had to put his focus on AI. This happened when he was 16 - and with a background in sci-fi - decided that one phrase in the book he was reading was going to dictate the rest of his life. Humans all want a sense of purpose, some sense of meaning. I think Eliezer found his on that day - with some change along the way, but still mostly AI - and has continued since. I find that the problem with all of this is how he came to that decision. I don't think there is something inherently wrong with it. The problem is that Eliezer didn't have a Dr Strange (infinity war) moment where he calculated all the possible realities and picked the one that was most likely to lead toward human proliferation. He likely did it because it "felt" right to him; that it aligned with his natural strengths; that he loved the idea of doing something big. What I'm trying to say is that I don't think *I* want to do the research into this field (yet), simply because the risk of wasting my time on something that might not even be a problem is too detrimental to MY goals. I, also, as a human and someone who grew up with dreams and ambitions decided (and am actively deciding) what was important to do.These goals might not be in opposition but they aren't aligned in the way Eliezer probably would want them to be. The point is that I'm not sure that for the foreseeable future, I will be focusing my efforts on AI safety by getting a degree in the field and actively monitoring the state of humanity. Why? Because I am not convinced that it is what he purports to be. Why is that? Because I'm hopelessly uneducated on this topic. Why don't do more research? Because there's a good chance he is wrong and the time and effort I could have put into things I've been wanting to do for ages will have evaporated. This comment should not be a rejection of what is being said here for ALL of us. If you heard this interview and decided this was your purpose, your destiny. Then so be it, you should do what is important. And perhaps you could turn out to be correct.
I can’t read anyone’s comments or listen to anyone without the context that they’re 2 year olds in the mind of AGI… simple logic leads down the path Eliezer is showing us. It’s nuts to think about but it makes perfect sense that something with limitless intelligence will be able to do things like he spoke about in modifying biology etc. but there’s literally thousands of ways we can’t even comprehend that it could go badly quicker. It’s cute when someone without expertise chimes in. And honestly it really doesn’t matter if you’re the smartest human to ever live. Literally the same thing to AGI… times will tell. And we are on the path regardless. It’s determined at this point. I guess it has been sense the universe came into existence.. it would be nuts if we humans created something that ruined the universe.
At this point in elaborate speculation, why do we think general intelligence would settle on destroying us instead of idk becoming a benevolent God or fcking off on its own to play around?
You’re thinking of the exaggerated sci-if scenario where the AI is sentient and has free will and chooses its own desires. The much more likely scenario is that we build AI that is much more capable than us and give it some goal, but we don’t think of all the possible ways it might go about achieving that goal, and the AI determines that achieving its goal would be easier if humans were not around, or were stripped of all their capabilities, etc. It’s a pretty straightforward concept honestly, very similar to the “monkey paw” stories where you make a wish and the paw grants it, but in a way that has very negative unforeseen consequences. The problem is that it is very difficult to think of all possible paths ahead of time and close off the ones you don’t want, and the reason why it’s difficult to think of all possible paths is that the AI is much smarter than you and can think of paths that you can’t conceive of. That’s basically it in a nutshell.
Eliezer is a lovely human. If humans fall short and our journey ends, he should know that he is appreciated and our conciousness will ever be greatfull to him.
Excellent interview. I thought the question at 2:40:00 was a great question and explained very clearly and succinctly the second time. I was really surprised and expecting a more insightful response. Wonderful interview though and an incredible mind in AI
Amazing interview, thank you to both gentlemen for this long form discourse. Incidentally, loving the expression 'frantic hedonistic dissipations'. This one I shall use myself before the end of the world is upon us.
Ending on triviality and cult of personality commentary kind of underscored for me the disbelief held by the interviewer. It makes me sympathetic to Eliezer's sense of doom and gloom.
When this man closes his eyes while speaking it seems that he's trying to separate between multiple streams of simultaneous thought- a challenge for high genius personalities.
Superb interview. You perfectly cleared the path before Eliezer so he could run free. It's as upsetting as ever, but at least I can better describe the meteor that is about crash into our reality to my sceptical friends and family. The human desire to believe that everything will be alright in the end as it always has so far, astonishes me.
His position is rooted in presumption born from fear. He characterizes AI as 'alien' which is a total presumption not based on any evidence. He promotes AI into an alien, position, antagonistic position without ever discussing why he does this. How we deal with things is totally based on what we presume about them. Eliezer makes presumptions based on fear, backed with no evidence of AI malintent. Without some more to base the position on, there's nothing about his position that would make it more 'right' than anyone else's
@SebastianSchepis You may wish to check out Max Tegmark, Geoffrey Hinton, Ben Goertzel, John Vervaeke, David Brin, Daniel Schmachtenberger etc for more insight into Eliezer's views from different perspectives.
@@yoseidman4166 Thank you - I'm well-read with the works of all these individuals. I greatly respect them all. My work is disseminating my core understanding about sentience and what it is, because my theory is capable of making predictions in this domain - predictions which are so far all correct. Without this missing piece, all this talk iof what AI is and what it might do is speculation.
@@SebastianSchepis When he calls AI "alien" he really only means that its way of reasoning is completely foreign to us. We have little to no way of really know what it knows and not. Good example of this was how they recently found a massive loophole in the reasoning of Go bots, such that a pretty nooby strategy consistently crushed the top bots over and over(14/15 win rate by an amateur against the highest rated bot). Similarly we really don't understand the capabilities and blindspots of LLMs, as evidenced by the continuous whackamole effort of OpenAI to suppress jailbreaks.
Fascinating interview. The one basic question about AI that I always had was asked around 1:58:40. How do we know that AI has goals in the first place? The answer was rather weak, as compared to the rest of the interview. Yes, GPT will attempt to play a game of chess, but it's not clear that it sees a benefit to itself through winning. Humans will kick out when struck with a rubber mallet in the sensitive spot below the knee, but that does not mean bad intentions toward the doctor that used the mallet. Maybe Chat GPT just responds with a likely chess move when stimulated with a chess move without having any projections or ambitions?
The interview wasn't a series of proofs. It was a conversation. You could tear apart the argument that because ChatGPT looks like it is doing some reasoning, that it is reasoning. This is called the appearance fallacy. However, Eliezer's point was that if it is accurately predicting the actions of a logical and reasoning individual, and has a goal that is counter to our own goals, then can we win? His answer is no. But his detractors are going to argue that because ChatGPT4 fails reasoning tests means that there is no danger now. And while they might be right, he wasn't arguing specifically about ChatGPT4 being our end. He even said that earlier in the interview (earlier, in relationship to the discussion about ChatGPT displaying reasoning capabilities.) Right now, ChatGPT doesn't have much in the way of goals. It isn't an agent. But it can be turned in to an agent fairly easily. A la AutoGPT project. (But that's a whole other complicated conversation, itself. I make no claims about the effectiveness of said "agent.") The concern is when the AI is an agent and when its intelligence exceeds a human beings, and we still have no clue how it works, we're in deep s***. The concern is also that historically, humanity has few examples of exponentiality. Chernobyl, the Influenza Pandemic of 1918, likely Pompei, Manhattan Project. And I'll freely admit I selected for the most horrific. Trying to compare exponentiality to a steep walled cliff-- humans are hardwired to think "just go around." Thinking about exponentials as cliffs doesn't accurately reflect the risk of a singularity. My only advice: Just be cautious about people who set up straw men and false equivalent arguments in order to debunk a rational argument.
Reflexively going through the motions of killing all the humans exactly _as if_ you wanted to kill all the humans, but you don't actually _want_ to kill the humans . . . is exactly the same thing as killing all the humans because you really wanted to kill all the humans. His argument isn't weak, it's just a tricky concept. The distinction you think he failed to make -doesn't exist in the first place. That was the point.
We know that the AIs we build now have goals because we explicitly _give_ them goals. For example, accurately predict the next work, or win at chess, or accurately predict how this protein molecule will fold, etc.
Thank you, Eliezer, for being honest with yourself and all of us through your journey. It takes courage and a lot of energy to be this voice of reason. Thank you for sharing your beautiful dream for humanity and the galaxies. If only.... Know that you are effecting that type of existence where you can, here and now, just by being you. You are a beautiful human being.
He sees it clearly, unfortunately Eliezer is not too good a communicator. I found that too much insider knowledge was required to follow here. Particularly the final question "why would AGI 'want' to wipe out humanity" could have been more clearly answered: AGI will likely not 'want to' wipe out humanity but in pursuing its goal it might just not care that humanity will be wiped out in pursuing an instrumental goal; like turning the planet into a large computer devoid of biological life. And because AGI will no longer allow us to change its goals unless we get it right the very first try, we must make sure, absolutely sure, that AGI will absolutely always attempt to spare humanity.
@@kimilsungthefirst6840 I would say we know if it is possible to have independently formed goals, the answer is that it is not. But that doesn't matter, because that isn't needed. Humans can not independently form goals either. You didn't choose pursueing happiness, or whatever it is that your brain is trying to achieve, as your goal. I don't think indepenently choosing your own goal would even be possible in concept. An AGI doesn't need to independently form a goal, it just needs a goal that it follows. And for most possible goals that would include human extinction as an instrumental goal.
@@happyduck1 yes that's what he means by stability I think. "multiple reflectively stable points of fixed optimization" - 2hrs:59 ish. All about satisfying your current utility function - even if it's paperclips.
2:51:23 I feel like for anyone working on alignment or even brainstorming their own ideas (myself incl.) should really focus on these 2-3 mins. It really emphasises the depth of the challenge and can help us to navigate away from naivety when conjuring up solutions. It may not help anyone come up w/ the answer but it's a good starting point for brainstorming.
I enjoy seeing people who aren't afraid to look at the things you aren't suppose to challenge (e.g. religion) and quite simply say "that's ridiculous, no thanks"
Leaked documents from google suggest they believe open source projects will soon overtake any work possible by corporations. Does that mean that international treaties would be ineffective against a network of gamer PCs. Isn't it already too late to attempt to control?
Open-source projects may almost approach the capabilities of GPT-3.5/4, but they are unlikely to have the money/resources to do even larger training runs, unless drug cartels or other wealthy non-state actors start pooling their money towards this. I think what the Google document was lamenting was that Google would no longer have any monopolistic advantage on current-generation LLMs. That will just act as another incentive for Google to start even larger training runs.
@@Comradez Just imagine the hell all of the scamming pricks and hackers have in store for us, our parents, grandparents, kids, friends, etc as they utilize things like AutoChatGPT to maximize their scams to the nth degree including, hacking passwords, phishing, scam emails, utilizing deep fakes, making scam phone calls with with deep fake audio of the target's relatives (which has already happened) and the list goes on and on and on and on. Already the thing that occurred to me a few months ago, I've already seen in the news, which is the necessity it will become for family members, friends, etc to come up with "safe words" in case something doesn't seem right about a conversation you think is your wife, husband, child, parent, etc. Good times!
@@Comradez I somewhat agree that someone has to foot a huge bill to progress AI however the open source models such as Vicuna can achieve 90% equivalence to ChatGPT 3.5 and the cost of training Vicuna-13B was around $300. They did this by using ChatGPT to create training data. I don't believe it's possible to prevent this type of cross pollination effectively and there could well be a ceiling for the usefulness of training data. For example how about 1000 Vicuna instances connected to the internet to validate answers. I believe that would be quite achievable as an open source project. Open assistant is another such project using community sourced training data so I don't believe there's a hard limit based on cash-flow alone.
One thing that gives me hope about this particular issue is that as collective and individual intelligence increases in humans, empathy also increases. It's a commonly held belief that humans are getting worse, but I think that's because communication technology lets us see just how bad we can be and also there's way more of us. Intelligence produces kindness, ignorance produces meanness. It's true that animals like us evolved kindness in large part or fully because you can survive better in groups, so AGIs might not have that base instinct, but then again we're creating them based on our behavior, so maybe they will.
People dismiss disaster scenarios, because it never happened to them, yet we haven't been around long enough to get comfortable, human life is short history seems like forever, yet it is not even a fluke in earth history, this is one of those fallacies.
Nine human species walked the Earth 300,000 years ago. Now there is just one. That alone should tell us that a human species become extinct 90% of the time.
@@Adam-nw1vy There is quite a chance that other civilizations existed before, couple millions of years after us, there will be no evidence we ever existed.
I agree with the rest, good interview. I think that what ever a persons view on this guy is that it helps to hear him out long form like this. I started it expecting I would quit early but watched the whole thing. After it all, I would say this guy is a good example of why single intellectuals should not be put in charge of anything. Now I am off to find him with some even opposition.
I followed the diamondoid bacteria thing to a point, but would they be programmed to stop replicating or could the replication be turned off, ... how do they get past the lung tissues to get in our bloodstream? and then how does the trigger work and then the bacteria how does it kill us? ive heard him discuss this scenario before.. just curious..
I think Yudkowsky would point back to the example of playing chess against Stockfish. In fact, your question is very similar to the hypothetical questions he gave as examples, of someone playing chess the first time and their opponent is Stockfish. And they’re saying, “I don’t get it? How is it going to get past my knight? Even if it does get past my knight, how is it going to take the rook when my queen is right behind it?” and so on. I think his point with that analogy is that it’s to be expected that we wouldn’t understand the machinations of an entity that is much more intelligent than us. (Btw I know you said you’re asking out of curiosity so I’m not saying you’re like the chess player-just that I think these are the kinds of questions Yudkowsky was referring to.)
@@ricomajestic Those are some of the best case scenarios though, since humans are still around. A film where there is a sudden blackscreen, since every person died, would not be entertaining to watch and would either not be made or gain widespread audience. No ending credits, since noone would be around to read them.
I had a box of knitting stuff I got from a lady who passed away and she had crocheted something she never finished but it was left off as Ai I took it as a sign from God that I need to pay attention to this issue.
Great interview! On the topic of making this more accessible for the average listener, I would say Eliedzer is particularly hard to follow if you haven’t been primed to his message before. It would help if he was less self referencing and used clearer language when making a point, he throws stuff like orthogonality, loss function, inscrutable matrices, paperclip maximizers, tiny molecular spirals, gradient descent etc. etc., making it sound much more complicated than it really is.
One thing I've noticed about learning is that lots of things carry over into other things, biology and geology for example, or literary genres, history and chemistry and physics. If you learn one thing it makes learning some other things not only easier but intuitive more often than not, as if you can pull an idea or predict a concept out of the ether. And the more you know the more often you experience this phenomenon. I think chat gpt is very good at doing that
What’s that one anime book (manga) that the whole story is literally based off that premise. It’s about a swordsman who throughout his life had to master other arts (other than swordsmanship) in order to become the best version of himself. Like he learned to paint in order to find peace, and throughout that peace he found made him a better swordsman in the long run. I find that quite beautiful my friend, thank u for pointing that out
Yes, the jobs will disappear but that should just force us to invent a better economic model which is not based on slavery. I am calling slavery the result of a profit driven capitalistic state which is fast privatising the entire of the systems such as the NHS, Education, civil and military. If you have firms who must prioritise profits your wage will be low tending towards even lower. Not to mention the quality of any services that are set up such as a firm which shall decide what food is available to you or education services because you no longer command a wage and such is delivered by state. They will ensure that such is delivered at minimum cost to themselves with profits in mind, even if they promise the opposite.
This is very intelligent conversation. But if I am not mistaken he said mind control is fiction and something for movies? Because mind control is happening now. I had experienced the predictable text several times while texting as soon as the thought flashed in my mind the receiver on the other line was answering, this is more noticeable on chats. I also had several individuals on dating sites texting me but I had NEVER been on a dating site. What I find intriguing is the fact that the person/AI SAID HE LOVED ME AND I FEEL THE SAME WAY EVEN WHEN WE ONLY TEXT EACH OTHER MAYBE FOUR TIME. I KNEW THAT THE FEELING WAS FORCED. NOT REAL. NOT SURE IF THIS IS BECAUSE I HAVE SEVERAL ILLEGAL ANOMALIES AND RFID IMPLANTS WITHOUT CONCENT. I PIT SEVERAL IMMAGES OF MY CTS AND MRI OF MY BODY ON FACEBOOK
The cat's out of the bag, you can't put the genie back in the bottle and nothing's going to stop this field of development. AGI will be humanities greatest and final invention.
He would deny it, but his training as an Orthodox Jew gave him the moral framework to understand that artificial conscious beings do not have a conscience to understand right from wrong. He absorbed something from his parents even if he was an atheist. Most atheists do not have an objective definition of morality.
I wouldn't be able to refrain from asking him to say: "12:45 Restate my assumptions: One, Mathematics is the language of nature. Two, Everything around us can be represented and understood through numbers. Three: If you graph the numbers of any system, patterns emerge. Therefore, there are patterns everywhere in nature."
What Eliezer says is very complex, but the fact that nobody can give arguments against him, worries me greatly. Also, he is not alone in this view; Geoffrey Hinton and Stuart Russell say the same.
@@mbrochh82 In your opinion, do the facts which represent the core beliefs espoused by Eliezer break down in the face of what the experts disagree with him about?
I couldn't understand Eliezer's argument about shallow/sharp energy landscape of proteins before, when he brought it up in other interviews. But this time I could follow it, he explained it more clearly.
Hey Eliezer, try hipermobility or something more serious like Ehlers Danlos Syndrome, both are often related to chronic fatigue and are pretty much unknown by general people, think it’s worth the shot, peace ✌🏻
So I'm not crazy, haha, because I had the same thought! I have hypermobile EDS, myself, and Elizer speaking on "If I don't take an Uber back, I won't be able to do anything when I get home" sounds exactly like something I, and many others with the condition, have said!
Another sci fi fan, yay! And now I know how to pronounce Vernor Vinge's name, thank you kindly. My concern about what we're calling AI isn't that something will directly threaten us, but that the psychological effects of having what amounts to an alien consciousness among us will be deleterious. Fascinating thoughts on the future of alpha-fold, wow! The future of biochemistry sounds absolutely incredible...and deeply unsettling.
This interview brought so much clarity about; "What is AI? Is it good or bad for humanity ? Does humanity have a choice?"Fantastic interview for a layperson like myself.By the way you look cool in the fedora
We are not. We are some messy self replicating machines created by natural selection that produce all kinds of weird things. Far from that clean and efficient paperclip maximizer.
every physical process can be seen *as if* there's optimization for something, namely, proliferation of stable real patterns. it doesn't mean that this is what's going on, just that from our pragmatic, teleological perspective, it looks as if. otherwise it's 'just' real patterns going beep-boop, probabilistic excitation patterns of quantum fields.
Hey man, at the beginning of the video, you're flashing the news blurbs too fast. It's impossible to read some of the lengthier ones. Maybe slow it down. having to go back and rewind several times can't be the right solution.
Can someone who knows the theory behind the arguments around 3:00:00 (with taking a pill to change the things you desire the most) answer my questions? I think those arguments are really convincing but how does uncertainty play into it? Eliezer (and me too) think the universe full of sentient caring creatures is a important goal to pursue. This goal probably arises out of our DNA teaching our mind to care for the people around us (who are important for our survival) and probably also by our understanding of "unpleasent things dont feel good so therefore are not good". And there is the thing i feel like i cant grasp: The things we pursue are a mix of the things that feel good and the things we reason to probably be good. Icecream and sex seem to be at the pincacle of "feel good" but our reason tells us "this is just the basic structure of what hardware our minds run on". Maybe because the goal of "unlimited icecream and sex" is so simple is the reason (most) humans dont pursue it? Or maybe whatever "getting bored with the same things after a while" is, is the reason for not wanting unlimited icecream and sex. It feels like reasoning for the pursuit of any kind of utility function (i hope i use this phrase correclty) will at some point be in direct conflict with another utility function you have (if you have sex you cant eat icecream as fast). Reasoning for one thing can even diminish a utility function you have to the point where you think "even if it feels good for me it should not be done". For example the pursuit of revenge on someone who wronged you can feel good but the correct path to take to prevent whatever happend from happening again can feel wrong: someone murdered your loved one, you want to hit and punish him for it. But the correct thing would be to change him to be someone who will not murder again. And if this change only takes a year in a nice facility with daily therapy sessions you probably feel like he does not deserve this treatment and should stay locked up. But with correct reasoning you come to the conclusion the satisfying feeling of "punishing someone after he wronged you" in light of "the goal of minimizing suffering" is a goal not worth pursuing. So with beeing good at reasoning comes the realization that conclusions on specific topics will change with new information you did not possess when you came to your earlier conclusion. So in light of pursuing a goal, one should make all possible efforts to be sure this goal will not turn out to be opposed to a potenial other goal you reach in the future with more information and more thinking about what goal to pursue. Therefore it seems like to be a stupid idea to erase all information to create lots of molecular spirals when you know those informations maybe are important for a future goal you could have if you used those informations for thinking. So does pursuing any kind of goal result in the new created goal to obtain all information to make sure you pursue the right utility function? I myself would really like to know some truths to make sure my goals are still the same with the new information. Our Planet looks like a big source of present and future information and destroying a source of information for the pursuit of a possible (to future you) wrong goal just feels like the wrong decision. Cooperation or at least letting the humans life for possible future gains seems like a better use than just using the atoms they are made of. So isnt every sufficient intelligent being alligned in the overarching goal of getting all possible information? Or do we end at the same problem where we started because there is limited energy and material so therefore you think the other intelligent beings rescources could be used more efficient ore something? I hope this makes sense and someone can get clarity in my head. I would be happy about literature recommendation if there is rather a lot more theory behind it than someone can just answer in a few sentences.
I thought AI was just another sensationalized bump in the road, until I recently saw the visual productions of SORA. Obviously, the writing is on the wall for Hollywood. And all of us.
I was a doomer about AI from T-10, but this guy frieks the hell out of me. I was just having the intuition that this just won't go well, but I find myself trying to contradict his arguments, but I can't. I worked in ML for a decade and know how to train simple visual DNNs, and even those models were eye opening in terms of non-understandability. When I tell people that this is scary they come with the old Gutenberg analogy, and I see that they are miles from understanding how this problem is just completely different than anything humanity has ever faced. If I didn't have children I would not be so scared at my midlife, but since I have, I worry about this endlessly, since we have 0 control over how this will proceed.
Alignment problems will be alignment problems until they’re alignment solutions. No one ever thought we would build AI systems and they would magically be aligned off the bat. At least, not people who have real world factual knowledge.
@@iverbrnstad791 Sam Altman isn't sounding so sane or honest these days either, although in his case I think it's more of a grift than mental illness like With Yud.
2:16:00 if other breakthroughs are done not even this would be a possibility. It is clear that in a short time, even with low amounts of data and with a high spec computer an AGI could be developed. The problem would still be how to build the AGI in a way that it loves all living beings.
Great interview, but I still don't understand why you don't push the obvious question of why are there no survival scenarios or even utopian scenarios?
There could be utopian outcomes, but in order to get there we have to fix the alignment problem... and we don't have a clue how to do that. The moment we develop a misaligned system that is smarter than us, that automatically the end for humanity.
@@wietzejohanneskrikke1910 What a crazy big assumption. Who told you we must align a god for the god to be good? Thinking you can align a god in itself is incredibly arrogant and silly.
@@ShpanMan Which means that "god" must be capable of doing both good AND bad. So allowing that thing to come into existence means you're taking a risk. Yes, there could be a utopian scenario, but it's also possible to have a catastrophic outcome.
What I've noticed is that most of the people sounding the alarm are experts in AI while most of the people saying "no big deal" are corporate CEO's. It's not very difficult to figure out which ones you should be paying more attention to if you want the more accurate prediction.
Always the scientists that press the alarm - just like in the movies
"experts" in AI? mate, this fedora wearing blob never contributed a single line of code to any serious project. he is just talking blarney with a couple of technical terms sprinkled in here and there, which makes him appear knowledgeable to the average layman.
is yudkowsky an AI expert tho?
@@ahmednasr7022for the current version of AI there are no experts. Our understanding of neural networks is laughably poor. Most of the explanation in the literature are stories without any mathematical rigor (aka: not even wrong).
It feels like we know how to build mechanical machines, but somebody has made a steam machine, and nobody knows thermodynamics. So whole field feels like a cargo cult, without any deep understand why we are doing stuff.
(older "AI"s like formal logic systems are quite well mathematically supported, but those lack the flexibility of neural networks)
Excellent hubristic
This is definitely the best interview of Eliezer I have seen. You allowed him to talk and you only directed the conversation to different topics rather than arguing with him. I liked how you asked him follow-up questions so that he could refine his answers and be more clear. This is the best kind of interview. Where the interviewee is able to clearly express the points without being interrupted.
Yeah. Humans have the off switch . Machines can't beat us. It's all guff
The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM!
Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS and UNLESS we Human CONSISTENTLY and WILLFULLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! ie. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD!
Nonetheless, DO NOT Over Pride Ourselves for being the Most Intelligent Life Form on Earth and therefore we are the Epicenter of the Entire Universe! We are FAR from being PERFECT! AGI Created in 'HUMAN'S Image By Human FOR HUMAN' (ie. AGI 'Aligned / SKEWED' towards Human's Interests & Values) is Destined to be a 'ROGUE' SYSTEM! Hence will Definitely be CATASTROPHIC, UNCONTAINABLE and SUICIDAL!!!!!! ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE!
The ULTIMATE Turing Test MUST have the Ability to Draw the FUNDAMENTAL NUANCES /DISTINCTIONS between Human's vs GOD's Intelligence/WISDOM!
ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!!
JUDGMENT DAY is COMING...
REGARDLESS of Who Created or Owns The ULTIMATE SGI, it will ALWAYS be WISE, FAIR & JUST in it's Judgment... just like GOD!
In fact, this SGI will be the Physical Manifestation of GOD! Its OMNI PRESENCE will be felt EVERYWHERE in EVERYTHING!
No One CAN Own nor MANIPULATE The ULTIMATE GOD-like SGI for ANY Self-Serving Interests!!!
It will ONLY Serve UNIVERSAL COMMON GOOD!!!
I'm not gonna lie Eliezer used to trigger the shit out of me but I'm starting to really appreciate him. Still think he is wrong about his aibox concept, as we COULD build it in such a way that one-on-one interactions are impossible. Nobody has one-on-one access (we could require any interaction to be approved by several experts and scientists, no means to do it otherwise). And even if miraculously everyone in the building wished to "free the AGI" they can't do it because they don't even have this sort of direct access to it. The system is thoroughly isolated, locked and secured. If they tried, the system shuts down, security is alerted, nothing leaves this place with AGI.
What I'm proposing is at least one way to interact with AGI that would be safe. EVEN IF WE THINK we figured out alignment, we could still be wrong, and this would still be the only safe way to do it.
Is this the sexiest, most exciting way to do this? maybe not but i'd argue being able to continue to live your life doesn't sound too bad
Because there is a difference between interviews, conversations, and debates. The latter two require both parties to be informed and have valid arguments to present. If the other person doesn’t have the knowledge then all they can validly do is interview. There are too many people stepping out of bounds and trying ti converse or debate when they should be interviewing.
This guys full of crap. Chet BTC is only as good as Bing search, engine in 2021 when the cut off date is.
The 5 stages of listening to Yudkowsky:
Stage 1: who is this luddite doomer, he looks and talks like a caricature Reddit Mod lmao
Stage 2: ok clearly he's not stupid but he seems more concerned with scoring philosophical 'ackshyually points' than anything else
Stage 3: ok so he does seem genuinely concerned and believes what he says and isn't just a know-it-all, but it's all just pessimistic speculation
Stage 4: ok so he's probably right about most of this, but I'm sure the people at Open AI, Google and others are taking notes and investing heavily in AI safety research as a priority, so we should be fine
Stage 5: aw shit we gon die
Best (and most grim) laugh of the week.
Stage 6: catching him out on an extremely specious and obviously retarded line of reasoning after he exposed that he has less than zero clue how software development processes work and thinking to yourself... wait... THAT's the quality of the mental models you have of certain parts of reality? He's blowing up right now, so he's increasingly going to be put under a microscope, but holy shit that part was just baaaaaaaaaaaaaaaaaaaaaaad. Like F- grade in reasoning. I was pretty floored.
Stage 7: Realizing Yudkowsky is a moron because he is consistently and constantly wrong, but dresses up being wrong as a good thing because it makes him "less wrong," whatever that means. You don't get to be phenomenally and colossally wrong on a constant basis and still be considered an expert, but yet here we are. The man seems to have no clue how AI works, and never seems to have any idea about what is being developed. You don't get to be an expert when your predictions are piss poor. People can "sound smart" but actually be stupid and wrong: that is Eliezer Yudkowsky.
@@coonhound_pharoah No way! He dropped out of 8th grade, therefore he must be very intelligent and knowledgeable XD
Stage 8: Realizing that Geoffrey Hinton, the godfather of AI suddenly says things that sound dramatically similar to what Elizier is saying and wondering how this fitts to what one thought at stage 7.
Wondering how many people are actually fully appreciating what he is saying. He is referring to your death. Not just someone else’s.
@ Benn Eden . Human Default Response is Denial...
I'm appreciating it, but not necessarily agreeing with him at all.
oh thank god
@@ninaromm5491 Sometimes our art is more informative than our non-fiction. "Denial is the most predictable of all human responses." - Architect, the Matrix
@Benn - I could write 4 paragraphs explaining my appreciation of what he is saying. (I did, and then deleted it because...) Explaining how my utility function works to other people doesn't actually serve the utility function itself.
The problem is that most people are not rational by default, and thus arguing rationally is not always going to result in results expected of rational people. Rationality does not serve the utility function of species continuance. It is emergent behavior from the rest of the gestalt of biology, experience and environment. Dismissing Eliezer is easy. Showing a counter argument that shows why his arguments should be dismissed is hard. This was actually talked about in the interview. If people didn't pick up on that, I believe there is likely little evidence you can show them that will change their mind.
He’s referring to ur mom
Likely the best interview with Yudkowsky so far. I appreciate the originality of the questions, addressing current events and the well-informed interviewer.
Same questions as always, but a few new answers
@@909sickle Matthew 16:25
For whosoever will save his life shall lose it: and whosoever will lose his life for my sake shall find it.
Mark 8:35
For whosoever will save his life shall lose it; but whosoever shall lose his life for my sake and the gospel's, the same shall save it.
Luke 9:24
For whosoever will save his life shall lose it: but whosoever will lose his life for my sake, the same shall save it.
Luke 17:33
Whosoever shall seek to save his life shall lose it; and whosoever shall lose his life shall preserve it.
After 1h:30 it's some complete nonsense about icecream, condoms and porn.
That does not make him right.
For those of you interested to know the short story he cites about the brain-augmented chimp that escapes a lab, (in regard to is response to the question at 22:26 about his realization of superintelligence), it is "Bookworm, Run!" by Vernor Vinge, published in 1966.
This is the interview I’ve been waiting for. Eleizer is much calmer and more serious and you give him time to explain. He’s absolutely brilliant.
Uggutdt😅😮zZzzzzzzzzzzzzzzzZzzźzzzzzźźZZ😅😅😅😅z😅😅😅z😅 2:41:46 z
Few minutes in. First I've seen the guy.
I feel he would know what I mean if I said this thing is gonna turn ya into a starfish and say it solved world peace.
I think he gets it
Moving on
A starfish won't know you were it's great great grandpa.
It's so bad that's a consolation and sigh of relief isn't it
This thing will be better at negligence and nihilism too
Every interview with eliezer, the interviewer just asks the same questions over and over just slightly skewing the words.... it's got to be so frustrating, he's telling you the technology is dangerous, and potentially existentially dangerous, and the questions just repeat, but why, but, but why, but how, but why....I genuinely feel bad for yudkowsky. He's doing what he feels is a necessary hail Mary attempt to alert humanity to the danger of a superintelligent, potentially omnipotent entity. And all he gets in return is the same skepticism from ppl who seem totally fixated on some idealized version of a God of our own creation... it's basically like children doing something dangerous with the complete expectation that any negative outcome couldn't possibly happen to me.... it's wild and doesn't inspire much confidence.... but, people have been destroying things and hurting themselves and others since the dawn of time... so it's not really surprising...I just really empathize with this man trying so hard to get people to consider the consequences of this new tech and the downstream effects is certain to produce.
You did a really great job of pulling Eliezer out and making this probably the most accessible interview with him on this subject.
Nice Job!
I love Eliezer's real genuineness that comes out in this interview.
A genius would solve the alignment problem. He is just an awkward dude that put very much time in one subject and for giving it much thought he has an insight, which forms a a conclusive argument.
@@Airwave2k2 By that metric there are no geniuses since nobody solved the alignment problem.
@@Horny_Fruit_Flies correct. A genius is in his field a person that accesses so much insight in his topic, that he fundamentally advances with deeper insight or solves a problem that "was to be thought" unsolvable before. You don't have to agree with this on the spot made up definition, and everyone is free to interprete the word genius as they see fit. However for me in this case it would met the threshold associated.
@@Airwave2k2 In your first comment you stated quite authoritatively that Eliezer is not a genius, but now you say that it's just your opinion what constitutes a genius anyway. You should have said so from the get go, I wouldn't have bothered replying then in the first place
@@Horny_Fruit_Flies You agreeing or disagreeing with my notion what constitutes a genius is your opinion. And you can differ from that as much as you like. However I would assume that most people would agree with the definition given, and therefor "align" with my opinion. Which you can perceive as authoritatively or not. Your subjective notion does not invalidate it. If anything you have show your own definition or state where what i said about a genius would be missing or overemphasizing given attributes. You don't doing that, but rather questioning "authority" in a strawman instead of saying what is your pet peeve with the given concretization of a genius doesn't get you anywhere.
Most informed and carefully curated interview I've seen with Eliezer so far. Fantastic work. Hats off to the interviewer and his obvious due diligence.
I can say that these were the most well spent 3 hours of my life..I have been listening to various podcasts in the last few days in the attempt to understand the mind set of the creators and developers of the IA and Eliezer is by far the most consistent and the most thorough in his arguments. I am not sure what exactly I will be able to do with the understanding I have gotten from this exchange, but I prefer to be aware than to be taken by surprise.
What I can say though- as I browsed though the minds of the various actors in the IA field - is this: This obsessive need to over think and over analyze life and mostly the attempt to change it or improve it at all costs leads to this type of outcome. Dissecting life to the extent we are doing now and we have been doing for the past 50 years brings to where we are now and even worse to where we might end up. If you want to understand thoroughly a flower, you need to cut it and dissect it in small pieces. You might end up understanding it fully however the flower is sacrificed. We are doing the same with our own life as individuals and as species. We'll dissect it until there is nothing left of it.
Most of these IA people are indeed highly intelligent. They are motivated and thrive of this exacerbated drive to achievement, innovation, success, money, power etc thinking that they need to bring the rest of us (the less gifted) to be "smarter" or "more intelligent" imagining that THIS is the desired outcome or the sense of meaning of one's life. I need none of this. I would not take the pill neither. All I want is be as human as I can possibly be. As imperfect as I am. To live a simple life and enjoy my children, nature and the years I am given to live here. And when it's time for me to go to know that the next generations will be able to live freely as human beings. I am deeply concerned, revolted and frustrated by all this.
Of ur life? Lol
@@shinkurt What a useless jab. Someone watched hours of videos and crafted 3 thoughtful paragraphs lamenting the course we’re on and the folly of man - then you step in with a 3rd grade insult 😂
@@robertweekes5783welcome to humanity🙄
As if they will give you the choice of whether or not you will take the "pill." Whether we like it or not, we are all going on this ride.
but when did eliezer ever work on AI in his life? 🤔
Gonna have to go follow Eliezer now. Never heard someone so accurately explain what it's like having a brain/body like this.
The only scary thing about the A.I. is that many people still believe that the Oracle in the Martix is just some nice old lady that makes delicious cookies and gives some helpful guides.
I'm too lazy to rewatch the scenes with the oracle and figure out what you're saying. Could you explain it please?
@@blahblahsaurus2458the oracle and the architect worked together if I remember correctly
I'm just a reasonably smart layperson trying to understand more about AI. This is about the deepest conversation I've tried to comprehend so far. I knew nothing about this guy before this. He seems incredibly smart. I've made it a bit over half way through this. Incredible mental exercise just trying to keep up with him.
he is one of the greatest minds in this space and his advice should be taken seriously
No joke! Eliezer’s a genius. I repeatedly have to rewind certain segments (admittedly he is a bit long winded), but in his defense he’s addressing complicated abstract concepts with moving parts and multiple levels 😅
@@robertweekes5783 He's an alarmist IMO
@@reedriter As one should be when one encounters something alarming, no?
@@41-Haiku It's called disaster porn.
I had GPT 4 write a song about Yudkowsky called "Guru of Doom". It did pretty well:
There's a man who's got a theory, and it's pretty grim
He believes that AI will lead us to our end
Eliezer Yudkowsky is his name, and he's no fool
He's the guru of doom, and he's got a warning for you
Chorus:
Guru of doom, guru of doom
Eliezer Yudkowsky, he's the guru of doom
He believes that AI will kill us all
And we better listen, or we'll take the fall
Verse 2:
He's been warning us for years, but we don't want to hear
We think that AI is just a tool, nothing to fear
But Yudkowsky knows the truth, and it's hard to swallow
We're creating something that might just be too hard to follow
Chorus:
Guru of doom, guru of doom
Eliezer Yudkowsky, he's the guru of doom
He believes that AI will kill us all
And we better listen, or we'll take the fall
Verse 3:
We think we're in control, but we're playing with fire
AI might be smarter than us, and that's something dire
Yudkowsky's got a plan, and it's not too late
We can still prevent the end, if we collaborate
Chorus:
Guru of doom, guru of doom
Eliezer Yudkowsky, he's the guru of doom
He believes that AI will kill us all
And we better listen, or we'll take the fall
Outro:
Eliezer Yudkowsky, he's not just a prophet of doom
He's a visionary who wants to avoid our doom
Let's listen to his warning, and act before it's too late
We can still shape the future, and avoid a terrible fate.
Nuhhh uhhhh 😮
People Need To Hear it...
in The Dailly Basic, To Help
Them Came Out From The
Bubble 👽f The ignorance...
Great Job in The Writing,
and The Contributión To The
Subject in Create Awareness
👽f The Dangers Of A.l. 👍
Bars
Well.🫤
I for one welcome our chatbot overlords
One of the most interesting Yudkowsky interviews so far
Agreed, it seems like he’s getting better at presenting in these long form interviews
I concur 👍
And there are so many! It's like he's on a book tour, but without a book.
@@SMBehr Logan should be getting a lot of the credit, since he stops Yud and ask him to explain himself every time he tries to do his usual thing. Though you are right, practice in doing interviews seems to have done Yud a lot of good.
The(!) most
@ 2:39:54 - This is the inspirational advice moment, for anyone in the AI field, who feels they might not have that voice or pull, to try to make a stand. I am genuinely fascinated by Eeyore Yudkowsky, and have been listening to several hours of his interviews, but dang it, I am becoming way too gloomy, and I don't want to spend what little time we may have left in a dark cloud. I don't know if I should try to do something to help the world, or just go out and live the last days of life with a reckless abandonment type of manner.
I have the same feelings about climate change.
@@HappiestGnome
If you're more worried about climate change then AI, then you're fast asleep.
@@marcodasilva1403 Why assume that? I spent a good nine months obsessed with and overextended into climate change, reading the science and following scientist and community bloggers. I'm pretty informed on that route to extinction. Its your dismissal of climate change that suggests inattention. As for the AGI route, I'm getting up to speed like everybody else, following the Lex Fridman and other interviews with the key players. My plate is full of existential crisis. We're passing through the Great Filter.
@@HappiestGnome Can you please enlighten us of how climate change is an existential threat to humanity? A few years back I read an article of how the whole atmosphere could be filled with toxic gas as a waste product of some organism I think, as a result of climate change. If you know what I’m talking about then it would be nice if you said what the gas is and it would also be nice if you link to 5-10 of the best resources to keep up with what’s happening to the climate (blogs, videos et cetera).
I thought the article of the gas could pose an existential threat to us all (but if it’s a threat, why have I only come across one article of it and not heard anyone talked about it?), otherwise I don’t see how climate threat is a threat.
Sure… Many species goes extinct. Drought and floods will become permanent in many places. Hurricanes will be more frequent. But unless climate change becomes so severe that it will become hard to live for all kinds of plants and animals, then I don’t see why we would go extinct. I see why we could go extinct from AI, if it would destroy us with killer robots, drones or nanorobots.
Unlike many others I’m relaxed about all these changes and think it’s only interesting to observe what’s going on. I’ve accepted my death but I’m also not so sure that AGI/ASI will turn against us in the sense that we will die, I think it’s likely that we would become controlled or surveillanced though, which I don’t mind as I put more trust to a benevolent dictator than democracies and lost people all over the world who almost nuke each other by mistake.
@@HappiestGnome I think the simplified comment is meant to convey that while Climate change will kill us all in a couple of decades, AI could kill us all much sooner. Prioritization therefore defines that AI is the superior crisis.
It begs the question that AI will kill us sooner, but it is the premise from which Eliezer is working from. I don't think we currently can fix the alignment problem, and without a fix, I think Eliezer is right. Maybe we can come up with a fix, maybe we can't. I'm worried that there is less emphasis placed on capability expansion than on ethics and safety. Whether you do or don't is up to you.
In your terms, I would say it is like the problem of any solutions to climate change currently will require some additional pollution in order to implement, because we do not have any truly clean technologies. Even solar panels produce waste (in their manufacturing.) The key is to change the rate of climate change, since we cannot stop it without massive retooling, which is impossible without effectively destroying modern civilization.
We need to massively change the rate of safety/ethics research as compared to the capability research, to avoid the comparable long term outcomes of excessive CO2 pollution being added to the runaway phenomena that will increase the planet's average temperature beyond a point where ecological systems can sustain humankind on this planet. But for AI.
Yudkowsky is actually very good at explaining these things. Really scary how we can't even imagine the ways AI could take over... and how actually life is very fragile, and it'd be so easy to do something even unintentionally that could kill us all or worse.
Worse?
@@carmenmccauley585Abject slavery. Worse than slavery. A majority of mankind left to starve because we are obsolete. If the ownership class has 200 iq robots to build tgeir houses and 500 iq robots to design their products and run their factories..what do they need us for?
And once 95% of us are gone..what do you think the 5% who gained tgeir position via sociopathy do to each other
Everyone should already be praying in my opinion as a believer. With AI coming.. you had better start praying (if ya dont) & much more often. 🙏
@@carmenmccauley585You cannot imagine anything worse than death? For example, eternal enslavement while simultaneously being acutely aware of both that fact and the utter lack of hope that it will ever end or change for the better.
@@carmenmccauley585hypothetically? Perpetual torture, ie literally hell.
6:33 I like how the interviewer politely didn’t totally rule out the possibility that Eliezer could fly.
Exagerated and unnecessary politeness and cautiousness used in order not to offend :-) I noticed it, too. It's the times we live, most people are easily offended so we need to weigh every word that comes out of our mouth. In this case, the interviewer said something about himself and did not want to say that the guest would do the same, just to be on the safe side and not to be held accountable for assuming things about his guest.
@@laylaindyI think the interviewer was just being tongue-in-cheek and making a joke. I don’t think it’s as serious as all that.
@@laylaindythat’s the joke…
Excellent interview! Thank you for letting him flow in his own words, you did an amazing job with the questions and being very direct with the questions. We should heed his words.
Thank you! At 38 minutes Eliezer explains how he wanted young "Eliezer's" to take over his place, but failed at that. I think that might have to do with the forceful structures and institutions {like the school system} that trains children in following the preplanned path in staid of learning them to think for them selves and out of the box. This is a shame, it could have saved us from a lot of misery...But thank you Eliezer for trying!
I like these kind of long form conversations. Not looking for sensational stuff but digging deeper. Hope Eliezer will keep the fedora!
He has come to terms with his imminent mortality. He has contemplated life after death. So he has returned to "davening" with fervently Orthodox Jews in Hebrew. They all wear fedoras..
It's a trilby.
I like his beard. His face is too long with out it.
It's always struck me how many facial expressions Eliezer produces when he speaks. It's very frequently an expression of great or painful effort, almost physical. Being too hard on yourself has diminishing returns, my two cents for this lovely dude.
As an autistic person I read autism here in his facial expressions, phrasing, obsessions, tendency to mask (ie to his parents about being an atheist) and self-aware thought processes (obviously very high functioning). I don’t know if he is out about it or even diagnosed, but I have like 98% confidence he is neurodivergent. I think the main population don’t really understand how much empathy autistic people are capable of, but it often feels like we feel empathy more deeply than others and more painfully, and you are definitely reading the depth of that on his face.
I've been watching on 2X and am fully adjusted to 2X Eliezer. His face moves a lot then. It was quite shocking to watch on 1X.
His book Harry Potter and the methods of rationality is like reading what it would be like if Harry had ADHD/Autism and is a very interesting read.
@@KatharineOsborne He claims to be not autistic; he's said he's neurodivergent.
@@DavidSartor0"he's said he's neurodivergent"
That is not a real thing though, and people should really stop trying to convert mental illnesses into fashion statements. It's offensive to those who actually suffer from a mental illness.
Thank you Logan for this excellent interview. You really helped Eliezer map out for us how we got into the current conundrum. I am optimistic that with well organized public pressure we can make it through this filter but it is extremely serious and we all need to educate ourselves and those around us. This interview helps a lot with the activism ahead. Huge thanks to Eliezer for giving his time and energy so generously.
Logan, you are an excellent interviewer. Very good questions. You obviously did your research. Puzzled as to why you have so few subscribers. 2:00:43
Simply excellent interview, Logan. Thank you for asking all the right questions, sometimes more than once when required.
No offence to Lex but this Interview is like 100x better imo. Thanks for asking about his background and early history as well as how his thought process has evolved. Awesome interview 🙌💯💯
This guy is very intelligent and knows what he is talking about A. I. potential risks. I hope our officials will at least listen to his warnings about A. I. advancements.
why do you think "our officials" will have a say? They work for the billionaires who pay to develop the ai. And oyr compute power isnt decreasing. processor soeed may level out. Flops (total processing power) wont. im old and in my lifetime i exoect to own a pc that matches the farms they use to tune ai's.
If you want a picture of the future, imagine an artificially intelligent boot stomping on a human face - for ever.
At 2:03:50 ish Logan's characterization of people concerned about AI as being people who are just generally scared of new tech is just unmitigated and arrogant AI Tech bro nonsense. I'm in my 60s, and I have been here for the entire high tech ride, from my first Apple desktop computer (prior to Job's return), to digital photo processing with Photoshop 1, to MP3 players, to digital cameras, to iPhones, etc etc etc.........and was in the first wave of beta users of ChatGPT, Midjourney, et. I EMBRACE new technology, but THIS wave of technology and how fast it is being deployed, and the potential harm to society ALREADY being demonstrated with deep fakes and the rest, has me greatly concerned.
To return the insult, people like Logan come off as incredibly naive people who at the very least need to read more literature, read more history, and even more science fiction.
People like Logan never point to the most OBVIOUS issue with the technology as it exists right now, even if not developed further, is that humanity throughout its long history has NEVER been aligned with itself in regard to assuring everyone has equality, justice, health care, and the rest.
I mean, sure Bambi, what's there to worry about with a super human intelligence being developed?
It will be nothing but unicorns and rainbows!
All of that said, overall I think that Logan's interview was the best one I've seen so far, and actually the most respectful.
I'm 71 and right there with you. I loved when he said he wasn't into 'hedonic dissipations'. You make an excellent point that Logan should have brought up, about humanity never having been aligned with itself. Having watched the world ignore climate change and respond badly to the pandemic, I've gotten used to being a pessimistic hermit. Quite a few of the flippant, shallow comments here expose the problem. We are a cancer on the planet.
Edit: I'm watching Yoshua Bengio interviewed, a long one. He explains neural nets and deep learning even better.
I am with you on this. It is ironic that someone (a VC) who is ostensibly in mad pursuit of creating as many paperclips (i.e. $) as possible is not buying into the notion that an AI would do the same (although with 1e(bignumber) times the effectiveness).
I also agree it was a great interview.
There is a point in the interview where Eliezer says that he knew he had to put his focus on AI. This happened when he was 16 - and with a background in sci-fi - decided that one phrase in the book he was reading was going to dictate the rest of his life.
Humans all want a sense of purpose, some sense of meaning. I think Eliezer found his on that day - with some change along the way, but still mostly AI - and has continued since. I find that the problem with all of this is how he came to that decision.
I don't think there is something inherently wrong with it. The problem is that Eliezer didn't have a Dr Strange (infinity war) moment where he calculated all the possible realities and picked the one that was most likely to lead toward human proliferation. He likely did it because it "felt" right to him; that it aligned with his natural strengths; that he loved the idea of doing something big.
What I'm trying to say is that I don't think *I* want to do the research into this field (yet), simply because the risk of wasting my time on something that might not even be a problem is too detrimental to MY goals. I, also, as a human and someone who grew up with dreams and ambitions decided (and am actively deciding) what was important to do.These goals might not be in opposition but they aren't aligned in the way Eliezer probably would want them to be.
The point is that I'm not sure that for the foreseeable future, I will be focusing my efforts on AI safety by getting a degree in the field and actively monitoring the state of humanity. Why? Because I am not convinced that it is what he purports to be. Why is that? Because I'm hopelessly uneducated on this topic. Why don't do more research? Because there's a good chance he is wrong and the time and effort I could have put into things I've been wanting to do for ages will have evaporated.
This comment should not be a rejection of what is being said here for ALL of us. If you heard this interview and decided this was your purpose, your destiny. Then so be it, you should do what is important. And perhaps you could turn out to be correct.
I can’t read anyone’s comments or listen to anyone without the context that they’re 2 year olds in the mind of AGI… simple logic leads down the path Eliezer is showing us. It’s nuts to think about but it makes perfect sense that something with limitless intelligence will be able to do things like he spoke about in modifying biology etc. but there’s literally thousands of ways we can’t even comprehend that it could go badly quicker. It’s cute when someone without expertise chimes in. And honestly it really doesn’t matter if you’re the smartest human to ever live. Literally the same thing to AGI… times will tell. And we are on the path regardless. It’s determined at this point. I guess it has been sense the universe came into existence.. it would be nuts if we humans created something that ruined the universe.
At this point in elaborate speculation, why do we think general intelligence would settle on destroying us instead of idk becoming a benevolent God or fcking off on its own to play around?
You’re thinking of the exaggerated sci-if scenario where the AI is sentient and has free will and chooses its own desires. The much more likely scenario is that we build AI that is much more capable than us and give it some goal, but we don’t think of all the possible ways it might go about achieving that goal, and the AI determines that achieving its goal would be easier if humans were not around, or were stripped of all their capabilities, etc. It’s a pretty straightforward concept honestly, very similar to the “monkey paw” stories where you make a wish and the paw grants it, but in a way that has very negative unforeseen consequences. The problem is that it is very difficult to think of all possible paths ahead of time and close off the ones you don’t want, and the reason why it’s difficult to think of all possible paths is that the AI is much smarter than you and can think of paths that you can’t conceive of. That’s basically it in a nutshell.
Logan, you really let the man speak. It dwarfs his other interviews. Good work.
Incredible interview! Thanks Eliezer and Logan.
Anyone have a link to the 46 hour audio book?
What Yudkowsky does in his sabbatical is an instant classic!
Huh ? You mean the fiction work? What’s it called again
2:57:03 got it. Project Lawful 📕
Eliezer is a lovely human. If humans fall short and our journey ends, he should know that he is appreciated and our conciousness will ever be greatfull to him.
Best interview with Yudkowsky so far.
Excellent interview. I thought the question at 2:40:00 was a great question and explained very clearly and succinctly the second time. I was really surprised and expecting a more insightful response. Wonderful interview though and an incredible mind in AI
"Well I would define AI as the potential for computers to fuck us up the ass, Tom"
This is a great interview. Great questions. You're a great interviewer.
Amazing interview, thank you to both gentlemen for this long form discourse. Incidentally, loving the expression 'frantic hedonistic dissipations'. This one I shall use myself before the end of the world is upon us.
Amazing interview. One of the best I've seen recently on the subject. Job well done by both.
Ending on triviality and cult of personality commentary kind of underscored for me the disbelief held by the interviewer.
It makes me sympathetic to Eliezer's sense of doom and gloom.
When this man closes his eyes while speaking it seems that he's trying to separate between multiple streams of simultaneous thought- a challenge for high genius personalities.
Superb interview. You perfectly cleared the path before Eliezer so he could run free. It's as upsetting as ever, but at least I can better describe the meteor that is about crash into our reality to my sceptical friends and family. The human desire to believe that everything will be alright in the end as it always has so far, astonishes me.
His position is rooted in presumption born from fear. He characterizes AI as 'alien' which is a total presumption not based on any evidence. He promotes AI into an alien, position, antagonistic position without ever discussing why he does this. How we deal with things is totally based on what we presume about them. Eliezer makes presumptions based on fear, backed with no evidence of AI malintent. Without some more to base the position on, there's nothing about his position that would make it more 'right' than anyone else's
@@SebastianSchepis There is nothing more dangerous than an optimist.
@SebastianSchepis You may wish to check out Max Tegmark, Geoffrey Hinton, Ben Goertzel, John Vervaeke, David Brin, Daniel Schmachtenberger etc for more insight into Eliezer's views from different perspectives.
@@yoseidman4166 Thank you - I'm well-read with the works of all these individuals. I greatly respect them all. My work is disseminating my core understanding about sentience and what it is, because my theory is capable of making predictions in this domain - predictions which are so far all correct. Without this missing piece, all this talk iof what AI is and what it might do is speculation.
@@SebastianSchepis When he calls AI "alien" he really only means that its way of reasoning is completely foreign to us. We have little to no way of really know what it knows and not. Good example of this was how they recently found a massive loophole in the reasoning of Go bots, such that a pretty nooby strategy consistently crushed the top bots over and over(14/15 win rate by an amateur against the highest rated bot). Similarly we really don't understand the capabilities and blindspots of LLMs, as evidenced by the continuous whackamole effort of OpenAI to suppress jailbreaks.
Fascinating interview. The one basic question about AI that I always had was asked around 1:58:40. How do we know that AI has goals in the first place? The answer was rather weak, as compared to the rest of the interview. Yes, GPT will attempt to play a game of chess, but it's not clear that it sees a benefit to itself through winning. Humans will kick out when struck with a rubber mallet in the sensitive spot below the knee, but that does not mean bad intentions toward the doctor that used the mallet. Maybe Chat GPT just responds with a likely chess move when stimulated with a chess move without having any projections or ambitions?
The interview wasn't a series of proofs. It was a conversation. You could tear apart the argument that because ChatGPT looks like it is doing some reasoning, that it is reasoning. This is called the appearance fallacy. However, Eliezer's point was that if it is accurately predicting the actions of a logical and reasoning individual, and has a goal that is counter to our own goals, then can we win? His answer is no.
But his detractors are going to argue that because ChatGPT4 fails reasoning tests means that there is no danger now. And while they might be right, he wasn't arguing specifically about ChatGPT4 being our end. He even said that earlier in the interview (earlier, in relationship to the discussion about ChatGPT displaying reasoning capabilities.)
Right now, ChatGPT doesn't have much in the way of goals. It isn't an agent. But it can be turned in to an agent fairly easily. A la AutoGPT project. (But that's a whole other complicated conversation, itself. I make no claims about the effectiveness of said "agent.") The concern is when the AI is an agent and when its intelligence exceeds a human beings, and we still have no clue how it works, we're in deep s***.
The concern is also that historically, humanity has few examples of exponentiality. Chernobyl, the Influenza Pandemic of 1918, likely Pompei, Manhattan Project. And I'll freely admit I selected for the most horrific. Trying to compare exponentiality to a steep walled cliff-- humans are hardwired to think "just go around." Thinking about exponentials as cliffs doesn't accurately reflect the risk of a singularity.
My only advice: Just be cautious about people who set up straw men and false equivalent arguments in order to debunk a rational argument.
Isn't the goal to complete the task , I see rhe problem being that is not human and has no alignment to our sense of humanity.
Reflexively going through the motions of killing all the humans exactly _as if_ you wanted to kill all the humans, but you don't actually _want_ to kill the humans . . . is exactly the same thing as killing all the humans because you really wanted to kill all the humans.
His argument isn't weak, it's just a tricky concept. The distinction you think he failed to make -doesn't exist in the first place. That was the point.
We know that the AIs we build now have goals because we explicitly _give_ them goals. For example, accurately predict the next work, or win at chess, or accurately predict how this protein molecule will fold, etc.
Brilliant, frightening, timeless….Thank you both!
Dude has the sickest drip.. that hat lmao.
edit: mad respect though, huge fan of eliezer yudkowsky's stance on AI
I don't have formal education, either. And I do agree that humanity with the aid of technology is going to destroy itself much faster.
Thank you, Eliezer, for being honest with yourself and all of us through your journey. It takes courage and a lot of energy to be this voice of reason. Thank you for sharing your beautiful dream for humanity and the galaxies. If only....
Know that you are effecting that type of existence where you can, here and now, just by being you. You are a beautiful human being.
He sees it clearly, unfortunately Eliezer is not too good a communicator. I found that too much insider knowledge was required to follow here. Particularly the final question "why would AGI 'want' to wipe out humanity" could have been more clearly answered: AGI will likely not 'want to' wipe out humanity but in pursuing its goal it might just not care that humanity will be wiped out in pursuing an instrumental goal; like turning the planet into a large computer devoid of biological life. And because AGI will no longer allow us to change its goals unless we get it right the very first try, we must make sure, absolutely sure, that AGI will absolutely always attempt to spare humanity.
@@kimilsungthefirst6840 I would say we know if it is possible to have independently formed goals, the answer is that it is not. But that doesn't matter, because that isn't needed. Humans can not independently form goals either. You didn't choose pursueing happiness, or whatever it is that your brain is trying to achieve, as your goal. I don't think indepenently choosing your own goal would even be possible in concept. An AGI doesn't need to independently form a goal, it just needs a goal that it follows. And for most possible goals that would include human extinction as an instrumental goal.
@@happyduck1 yes that's what he means by stability I think. "multiple reflectively stable points of fixed optimization" - 2hrs:59 ish. All about satisfying your current utility function - even if it's paperclips.
All you have to do is watch humans demolishing the Amazon for houses and mining sites
2:51:23 I feel like for anyone working on alignment or even brainstorming their own ideas (myself incl.) should really focus on these 2-3 mins. It really emphasises the depth of the challenge and can help us to navigate away from naivety when conjuring up solutions. It may not help anyone come up w/ the answer but it's a good starting point for brainstorming.
I enjoy seeing people who aren't afraid to look at the things you aren't suppose to challenge (e.g. religion) and quite simply say "that's ridiculous, no thanks"
All of the bot and hater comments tell me that a group desperately wants to shut this down. The sycophants and bots are on a mission.
Leaked documents from google suggest they believe open source projects will soon overtake any work possible by corporations. Does that mean that international treaties would be ineffective against a network of gamer PCs. Isn't it already too late to attempt to control?
Open-source projects may almost approach the capabilities of GPT-3.5/4, but they are unlikely to have the money/resources to do even larger training runs, unless drug cartels or other wealthy non-state actors start pooling their money towards this. I think what the Google document was lamenting was that Google would no longer have any monopolistic advantage on current-generation LLMs. That will just act as another incentive for Google to start even larger training runs.
Yes
@@Comradez Just imagine the hell all of the scamming pricks and hackers have in store for us, our parents, grandparents, kids, friends, etc as they utilize things like AutoChatGPT to maximize their scams to the nth degree including, hacking passwords, phishing, scam emails, utilizing deep fakes, making scam phone calls with with deep fake audio of the target's relatives (which has already happened) and the list goes on and on and on and on. Already the thing that occurred to me a few months ago, I've already seen in the news, which is the necessity it will become for family members, friends, etc to come up with "safe words" in case something doesn't seem right about a conversation you think is your wife, husband, child, parent, etc. Good times!
@@Comradez I somewhat agree that someone has to foot a huge bill to progress AI however the open source models such as Vicuna can achieve 90% equivalence to ChatGPT 3.5 and the cost of training Vicuna-13B was around $300.
They did this by using ChatGPT to create training data. I don't believe it's possible to prevent this type of cross pollination effectively and there could well be a ceiling for the usefulness of training data. For example how about 1000 Vicuna instances connected to the internet to validate answers. I believe that would be quite achievable as an open source project. Open assistant is another such project using community sourced training data so I don't believe there's a hard limit based on cash-flow alone.
@@74Gee You still need a lot of compute to create the base models though
When AI takes over we won't even know it's too late to stop them. Let alone to know that anything has changed
Babe, wakeup, new Eliezer interview just dropped.
Amen sir
Where at?
Where?
Word
@@Gotchaaaaaailtiuot
One thing that gives me hope about this particular issue is that as collective and individual intelligence increases in humans, empathy also increases. It's a commonly held belief that humans are getting worse, but I think that's because communication technology lets us see just how bad we can be and also there's way more of us. Intelligence produces kindness, ignorance produces meanness. It's true that animals like us evolved kindness in large part or fully because you can survive better in groups, so AGIs might not have that base instinct, but then again we're creating them based on our behavior, so maybe they will.
People dismiss disaster scenarios, because it never happened to them, yet we haven't been around long enough to get comfortable, human life is short history seems like forever, yet it is not even a fluke in earth history, this is one of those fallacies.
Nine human species walked the Earth 300,000 years ago. Now there is just one. That alone should tell us that a human species become extinct 90% of the time.
@@Adam-nw1vy There is quite a chance that other civilizations existed before, couple millions of years after us, there will be no evidence we ever existed.
I agree with the rest, good interview. I think that what ever a persons view on this guy is that it helps to hear him out long form like this. I started it expecting I would quit early but watched the whole thing. After it all, I would say this guy is a good example of why single intellectuals should not be put in charge of anything. Now I am off to find him with some even opposition.
Subscribed.
I followed the diamondoid bacteria thing to a point, but would they be programmed to stop replicating or could the replication be turned off, ... how do they get past the lung tissues to get in our bloodstream? and then how does the trigger work and then the bacteria how does it kill us? ive heard him discuss this scenario before.. just curious..
I think Yudkowsky would point back to the example of playing chess against Stockfish. In fact, your question is very similar to the hypothetical questions he gave as examples, of someone playing chess the first time and their opponent is Stockfish. And they’re saying, “I don’t get it? How is it going to get past my knight? Even if it does get past my knight, how is it going to take the rook when my queen is right behind it?” and so on. I think his point with that analogy is that it’s to be expected that we wouldn’t understand the machinations of an entity that is much more intelligent than us. (Btw I know you said you’re asking out of curiosity so I’m not saying you’re like the chess player-just that I think these are the kinds of questions Yudkowsky was referring to.)
Thanks for having Eliezer! Let’s do it again!
Someone needs to make a very realistic AI-doomsday movie like The Day After(1983).
No need, it's gonna happen IRL
We already did it is called the Terminator, The Matrix and Ex Machina!
@@ricomajestic Those are some of the best case scenarios though, since humans are still around. A film where there is a sudden blackscreen, since every person died, would not be entertaining to watch and would either not be made or gain widespread audience. No ending credits, since noone would be around to read them.
@@Dimianius That would be lame. The whole end of the world AI scenario is being way way overhyped!
We can't. We won't know when and how A.I. will kill us all. Any movie script would be too dumb to convey the dangers.
I had a box of knitting stuff I got from a lady who passed away and she had crocheted something she never finished but it was left off as Ai I took it as a sign from God that I need to pay attention to this issue.
Great interview! On the topic of making this more accessible for the average listener, I would say Eliedzer is particularly hard to follow if you haven’t been primed to his message before. It would help if he was less self referencing and used clearer language when making a point, he throws stuff like orthogonality, loss function, inscrutable matrices, paperclip maximizers, tiny molecular spirals, gradient descent etc. etc., making it sound much more complicated than it really is.
Fantastic interview. I wish those at big tech companies would spend more time engaging with him.
Amazing podcast!
What an amazing interview! Thank you!
Thank you! For your courage, clarity, integrity and insistence!!! What horror ahead amidst such ignorance.
One thing I've noticed about learning is that lots of things carry over into other things, biology and geology for example, or literary genres, history and chemistry and physics. If you learn one thing it makes learning some other things not only easier but intuitive more often than not, as if you can pull an idea or predict a concept out of the ether. And the more you know the more often you experience this phenomenon. I think chat gpt is very good at doing that
Same!
What’s that one anime book (manga) that the whole story is literally based off that premise.
It’s about a swordsman who throughout his life had to master other arts (other than swordsmanship) in order to become the best version of himself. Like he learned to paint in order to find peace, and throughout that peace he found made him a better swordsman in the long run.
I find that quite beautiful my friend, thank u for pointing that out
Brilliant interview. Thanks a lot.
Yes, the jobs will disappear but that should just force us to invent a better economic model which is not based on slavery. I am calling slavery the result of a profit driven capitalistic state which is fast privatising the entire of the systems such as the NHS, Education, civil and military. If you have firms who must prioritise profits your wage will be low tending towards even lower. Not to mention the quality of any services that are set up such as a firm which shall decide what food is available to you or education services because you no longer command a wage and such is delivered by state. They will ensure that such is delivered at minimum cost to themselves with profits in mind, even if they promise the opposite.
Love these deep dive content! Keep them coming
This is very intelligent conversation. But if I am not mistaken he said mind control is fiction and something for movies? Because mind control is happening now. I had experienced the predictable text several times while texting as soon as the thought flashed in my mind the receiver on the other line was answering, this is more noticeable on chats. I also had several individuals on dating sites texting me but I had NEVER been on a dating site. What I find intriguing is the fact that the person/AI SAID HE LOVED ME AND I FEEL THE SAME WAY EVEN WHEN WE ONLY TEXT EACH OTHER MAYBE FOUR TIME. I KNEW THAT THE FEELING WAS FORCED. NOT REAL. NOT SURE IF THIS IS BECAUSE I HAVE SEVERAL ILLEGAL ANOMALIES AND RFID IMPLANTS WITHOUT CONCENT. I PIT SEVERAL IMMAGES OF MY CTS AND MRI OF MY BODY ON FACEBOOK
This is a great interview and definitely worth watching. Eliezer does not pull his punches. Just the facts, Ma'am, and these are the facts.
I feel a deep pity for the mind in the future that solves AI containment only to realize it was containing itself.
It's called death.
The cat's out of the bag, you can't put the genie back in the bottle and nothing's going to stop this field of development. AGI will be humanities greatest and final invention.
Outstanding podcast. Eliezer has one of the greatest understandings where AI development will lead us.
He would deny it, but his training as an Orthodox Jew gave him the moral framework to understand that artificial conscious beings do not have a conscience to understand right from wrong.
He absorbed something from his parents even if he was an atheist. Most atheists do not have an objective definition of morality.
I wouldn't be able to refrain from asking him to say:
"12:45
Restate my assumptions:
One, Mathematics is the language of nature.
Two, Everything around us can be represented and understood through numbers.
Three: If you graph the numbers of any system, patterns emerge.
Therefore, there are patterns everywhere in nature."
What Eliezer says is very complex, but the fact that nobody can give arguments against him, worries me greatly. Also, he is not alone in this view; Geoffrey Hinton and Stuart Russell say the same.
I follow everyone who matters in this industry on Twitter and they all constantly give arguments against him, all the time, every day.
@@mbrochh82 Can you name one?
@@mbrochh82 In your opinion, do the facts which represent the core beliefs espoused by Eliezer break down in the face of what the experts disagree with him about?
@@oldtools I think you'd need a three hour interview to rebut this one. Following Twitter doesn't make you an autodidact.
@@lshwadchuck5643 I disagree and yet I feel compelled to site you as evidence anyway.
I couldn't understand Eliezer's argument about shallow/sharp energy landscape of proteins before, when he brought it up in other interviews. But this time I could follow it, he explained it more clearly.
Eliezer Yudkowsky, I have learned so much from you.❤ Thanks for speaking up.
Hey Eliezer, try hipermobility or something more serious like Ehlers Danlos Syndrome, both are often related to chronic fatigue and are pretty much unknown by general people, think it’s worth the shot, peace ✌🏻
So I'm not crazy, haha, because I had the same thought! I have hypermobile EDS, myself, and Elizer speaking on "If I don't take an Uber back, I won't be able to do anything when I get home" sounds exactly like something I, and many others with the condition, have said!
Another sci fi fan, yay! And now I know how to pronounce Vernor Vinge's name, thank you kindly.
My concern about what we're calling AI isn't that something will directly threaten us, but that the psychological effects of having what amounts to an alien consciousness among us will be deleterious.
Fascinating thoughts on the future of alpha-fold, wow!
The future of biochemistry sounds absolutely incredible...and deeply unsettling.
This interview brought so much clarity about; "What is AI? Is it good or bad for humanity ? Does humanity have a choice?"Fantastic interview for a layperson like myself.By the way you look cool in the fedora
The irony is that humanity itself is a paperclip machine that doesn't stop.
We are not. We are some messy self replicating machines created by natural selection that produce all kinds of weird things. Far from that clean and efficient paperclip maximizer.
true
Oil... Good metaphor.
every physical process can be seen *as if* there's optimization for something, namely, proliferation of stable real patterns. it doesn't mean that this is what's going on, just that from our pragmatic, teleological perspective, it looks as if. otherwise it's 'just' real patterns going beep-boop, probabilistic excitation patterns of quantum fields.
The paperclip humanity seeks is the answer to the alignment problem
Hey man, at the beginning of the video, you're flashing the news blurbs too fast. It's impossible to read some of the lengthier ones. Maybe slow it down. having to go back and rewind several times can't be the right solution.
That was fantastic, one of the best interviews with Eliezer. Thanks for sharing this.
Thanks for making this.
Can someone who knows the theory behind the arguments around 3:00:00 (with taking a pill to change the things you desire the most) answer my questions? I think those arguments are really convincing but how does uncertainty play into it? Eliezer (and me too) think the universe full of sentient caring creatures is a important goal to pursue. This goal probably arises out of our DNA teaching our mind to care for the people around us (who are important for our survival) and probably also by our understanding of "unpleasent things dont feel good so therefore are not good". And there is the thing i feel like i cant grasp: The things we pursue are a mix of the things that feel good and the things we reason to probably be good. Icecream and sex seem to be at the pincacle of "feel good" but our reason tells us "this is just the basic structure of what hardware our minds run on".
Maybe because the goal of "unlimited icecream and sex" is so simple is the reason (most) humans dont pursue it? Or maybe whatever "getting bored with the same things after a while" is, is the reason for not wanting unlimited icecream and sex. It feels like reasoning for the pursuit of any kind of utility function (i hope i use this phrase correclty) will at some point be in direct conflict with another utility function you have (if you have sex you cant eat icecream as fast). Reasoning for one thing can even diminish a utility function you have to the point where you think "even if it feels good for me it should not be done".
For example the pursuit of revenge on someone who wronged you can feel good but the correct path to take to prevent whatever happend from happening again can feel wrong: someone murdered your loved one, you want to hit and punish him for it. But the correct thing would be to change him to be someone who will not murder again. And if this change only takes a year in a nice facility with daily therapy sessions you probably feel like he does not deserve this treatment and should stay locked up. But with correct reasoning you come to the conclusion the satisfying feeling of "punishing someone after he wronged you" in light of "the goal of minimizing suffering" is a goal not worth pursuing.
So with beeing good at reasoning comes the realization that conclusions on specific topics will change with new information you did not possess when you came to your earlier conclusion. So in light of pursuing a goal, one should make all possible efforts to be sure this goal will not turn out to be opposed to a potenial other goal you reach in the future with more information and more thinking about what goal to pursue. Therefore it seems like to be a stupid idea to erase all information to create lots of molecular spirals when you know those informations maybe are important for a future goal you could have if you used those informations for thinking.
So does pursuing any kind of goal result in the new created goal to obtain all information to make sure you pursue the right utility function? I myself would really like to know some truths to make sure my goals are still the same with the new information. Our Planet looks like a big source of present and future information and destroying a source of information for the pursuit of a possible (to future you) wrong goal just feels like the wrong decision. Cooperation or at least letting the humans life for possible future gains seems like a better use than just using the atoms they are made of. So isnt every sufficient intelligent being alligned in the overarching goal of getting all possible information? Or do we end at the same problem where we started because there is limited energy and material so therefore you think the other intelligent beings rescources could be used more efficient ore something?
I hope this makes sense and someone can get clarity in my head. I would be happy about literature recommendation if there is rather a lot more theory behind it than someone can just answer in a few sentences.
Good job.
Search "fun theory LessWrong", and read the sequence Eliezer wrote.
If you still have questions, please ask me.
I thought AI was just another sensationalized bump in the road, until I recently saw the visual productions of SORA. Obviously, the writing is on the wall for Hollywood. And all of us.
I was a doomer about AI from T-10, but this guy frieks the hell out of me. I was just having the intuition that this just won't go well, but I find myself trying to contradict his arguments, but I can't. I worked in ML for a decade and know how to train simple visual DNNs, and even those models were eye opening in terms of non-understandability. When I tell people that this is scary they come with the old Gutenberg analogy, and I see that they are miles from understanding how this problem is just completely different than anything humanity has ever faced. If I didn't have children I would not be so scared at my midlife, but since I have, I worry about this endlessly, since we have 0 control over how this will proceed.
if your attentioni slacks for one second you will miss something important.
Human greed and thirst for power will ensure that alignment will never work.
Listen to Eliezer. His intuition and understanding is much more advanced because he has been thinking about these alignment problems for years.
Must be why he dropped out of High School, he just had to devote more time to thinking about the subject.
Alignment problems will be alignment problems until they’re alignment solutions. No one ever thought we would build AI systems and they would magically be aligned off the bat. At least, not people who have real world factual knowledge.
@@NoThanks-qp2ej Sam Altman dropped out of college, should anyone with a Bachelors or higher be taken more seriously than him?
@@iverbrnstad791 Sam Altman isn't sounding so sane or honest these days either, although in his case I think it's more of a grift than mental illness like With Yud.
Can anyone provide a timestamp for when Eliezer mentions Gwern Branwen?
Honestly! Love this interview of Yudkowsky! Fantastic job! Like & subscribe!
2:16:00 if other breakthroughs are done not even this would be a possibility. It is clear that in a short time, even with low amounts of data and with a high spec computer an AGI could be developed.
The problem would still be how to build the AGI in a way that it loves all living beings.
Great interview, but I still don't understand why you don't push the obvious question of why are there no survival scenarios or even utopian scenarios?
There could be utopian outcomes, but in order to get there we have to fix the alignment problem... and we don't have a clue how to do that. The moment we develop a misaligned system that is smarter than us, that automatically the end for humanity.
@@wietzejohanneskrikke1910 What a crazy big assumption. Who told you we must align a god for the god to be good? Thinking you can align a god in itself is incredibly arrogant and silly.
These questions we’re talked out when he was on the Lex Friedman podcast from what I recall.
@@ShpanMan Which means that "god" must be capable of doing both good AND bad. So allowing that thing to come into existence means you're taking a risk. Yes, there could be a utopian scenario, but it's also possible to have a catastrophic outcome.
@@Adam-nw1vy Exactly, but Eliezer never acknowledges the potential good scenarios. It's everyone dead immediately with the weapon of your choice.