The interviewer is amazing. I really enjoyed this conversation; it's rare to have such a great, articulate interviewer, and I'm pleased to have found this channel! Please do more AI interviews!
Thank you for having Eliezer Yudkowsky. It was a very interesting yet very scary episode! I've read the GPT-4 technical report. Apparently the safety measures that OpenAI and ARC (Alignment Research Center) took during the research and release of GPT-4 were just laughable. For example, in order to see if GPT-4 has the ability to replicate itself, they just gave it some money and access to servers and watched what it would do! Quote: "ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness." They also didn't test the final version, just early non-fine-tuned models.
Yudkowsky comes across as energetic and upbeat on Twitter, but in person he looks tired and depressed. He has aged a lot since the last time I saw him. He mentions "health problems", which I can believe, although it's not clear what those problems are. As for his message, his dire stance on where we are headed has been evident for a while. There was an April Fool's Day post by him a year or two ago that created a mini-furore online - about dying with dignity since the future is foreordained. Since Yudkowsky sounds like he's retired from battle, we have to hope the AI researchers active in the field are paying attention and somewhat chastened about their neglect of safety.
Yudkowsky suffers from an unknown medical condition that saps his energy. He is offering a sizeable bounty for any information that leads to a successful diagnosis.
This is actually more important than you would think. It is really hard to "argue" with him since he is probably more intelligent than anybody in the room. The problem with his "argument" is the framing, which has nothing to do with intelligence. Look, all his metaphors are games, closed worlds where, in principle, the more intelligent you are, the better you play. But life is open: your problem is not lack of intelligence (solving problems) but how to frame what you sense, realizing what is relevant to your problem. This cannot be solved by IQ. Framing _framing_ as problem solving leads to exponential explosion and infinite regress. Yet we do survive; we somehow know what is relevant, even in completely new situations. The reason we know it is that we have a body which is tuned into reality. It's not a game, it is about physical survival. And this is where Yudkowsky's approach to his own health becomes relevant: it's telling that he treats his body as an object whose malfunction will be solved in a "scientific" way, by gathering some information. The thing is, first-person attunement cannot be modeled or replaced by propositional information. Now, why is this important? It's because his description of the AI apocalypse is completely missing the physical dimension. If you factor it in, all the exponential stuff goes away. The physical world has physical constraints that stop runaway intelligence in its tracks. The only way today's AI can _do_ anything in the real world is through us; we are its actuators. So it is easy to stop it: you just stop listening. AI in the physical world develops painstakingly slowly (I work in this domain). The closest you get to AI acting in the physical world is self-driving, and we are nowhere close to solving even this "simple" problem, let alone a self-driving car transforming itself into some kind of monster. I was so sorry for the host hearing his genuine fear; I felt like shaking him, wrestling him down, or throwing him into cold water so he wakes up. Please don't listen to walking bodiless minds about the looming AI apocalypse, these are just giant projections of inner insecurities.
@@BalazsKegl Appreciate you adding your voice to the discussion. We need a wider diversity of views on the topic. I hope the hosts of this podcast will invite you on to present the opposite position. But to be devil's advocate for a bit: when you refer to "framing of framing", I think you are referring to the Frame Problem in AI and cognitive theory, and from what I can tell it is considered a solved issue. Of course you could argue why we still seem to be struggling with FSD in that case, so let's agree for now that the infinite tail of edge cases that bedevils FSD is a challenge the current generation of learning models is inadequate to cope with. But our concern - and Yudkowsky's concern - is not with the state of the art now, it is with the near future. A stunning number of AI tools across many domains are getting close to human-level proficiency if not better. It is time to start thinking about the ramifications. Regarding the slow and halting progress of AI in the physical world, that is robotics: can we be sure that the AI tools and tricks perfected in the digital realm will not in the near future turbo-charge control, coordination and movement in the physical world? [Update: Already happening th-cam.com/video/i5wZJFb4dyA/w-d-xo.html ] When you say Yudkowsky treats his own body as a scientific object, are you thinking of evidence outside this conversation? Because I do not recall him saying anything on the topic here. Of course, as far as medical science is concerned, the body is indeed such an object, if a very complex one, but I gather you disagree with that view? And while Yudkowsky may indeed be an armchair intellectual, we are seeing rapid evolution from game-playing AIs to AIs impacting the real world - from AlphaGo to AlphaFold, for example.
@@AerysBat I thought you were kidding, but more googling reveals that he suffers from something like chronic fatigue. That explains his holding up his mug with both hands, which puzzled me at first.
@balasz thank you for your very cogent points. There is a current of depressive intellect in the zeitgeist. A wall that EY keeps hitting is the notion that nobody knows how to solve alignment. But our capacity to solve hard problems continues to accelerate, and is not easy to predict. That alone is stimulating enough. Alignment, survival, sublimation and any number of other eventualities are plausible if a stable foundation is formed in this period.
I'm skeptical we're all gonna die in 3~15 years, but I'm so grateful for Eliezer sounding the alarm. The threat of artificial superintelligence is real, and civilization must be prepared to survive it.
@@alexandermoskowitz8000 My feeling is 90%. My impression is Eliezer doesn't own any animals outside maybe a cat? He seems to have a gap in computing the value of empathy and how it allows complex structures to exist. To me he seems to be reducing the value of cross-species morals to nothing more than gaps in natural selection's ability to solve selfish outcomes. We have a symbiotic relationship with our reality outside reproduction. If he doesn't see this he needs to get off his fking computer screen & explore things outside his cerebrum. We are super-intelligent compared to, say, a fish... yet fish still exist and most of life on this planet is still not human. A super general intelligence isn't destructive just because some of our constitutions are. But an AGI is going to be engineered... and if the people making it can't process the value of things outside a personal desire for expansion, then that's the problem. Not some circular reasoning. And I say this as a skilled software engineer.
@@zezba9000 Look up instrumental convergence, fast takeoffs and paperclip maximizers. Also, what does "We have a symbiotic relationship with our reality outside reproduction." mean in practice, and how does that relate to AGI?
@@stark1ll It means the interactions we have cognitively with our reality are bi-directional. They don't just go one way. Eliezer seems to only talk about how AI will manipulate its environment in a way that has no feedback outside a selfish interest. I think this notion is flawed & fails to understand the importance of morals as a feedback mechanism that is of great value & importance for intelligence growth to be successful. That's my feeling anyway.
'Ryan's childhood questions' really puts into perspective just how far people are from comprehending the situation. 'Why can't we just get everyone in the world to agree to be nice?' is literally the most naive question I could think of.
If you persuade US, Chinese and Russian elites to believe in AI's danger, their intelligence services will hunt down AI researchers like they did with nuke tech. It's that simple.
I do think it's a little more complicated than that. It's not just asking everyone to be nice because it collectively leaves us all better off even if each individually gives up a benefit others don't have (which is a very difficult kind of agreement to enforce). It's asking everyone not to do a thing that's likely to be catastrophically bad for everyone and unlikely to offer any benefit even to those who defect.
Thank you for having this important conversation, which isn't discussed enough. Many people find it very uncomfortable to discuss, so it is hard to find people to talk to about it. Thank you for exploring it. I think it is essential to acknowledge these risks and the challenges ahead for us to work to find solutions in order to have a chance of a good outcome. I would love to see more interviews with other experts on this debate.
I’m surprised no one said that we should all just spend more time with friends, family and loved ones. AI or not, time is precious and we should do our best to enjoy what we have.
Good point. All things being relative, humanity was always doomed to go extinct one day, even if it's a billion years in the future when our dying sun makes Earth uninhabitable. From a moral perspective, why does it matter if we go extinct in a billion years or tomorrow? Shouldn't we do what we think is morally right in both scenarios?
@@visicircle regardless of when humanity goes extinct, we should do our best to enjoy life and to help others too, yes. But there could be trillions of trillions of beings in the future (if we make it): that's a lot of food, music, sex, love, art, and conversation that will never get to be enjoyed. If we can push back our expiry date by even a few hundred years, we should.
@@Scott_Raynor that's a lot of anguish, pain, torture, war, despair, agony that will never get to be suffered too. Should we push back on the expiration date? Depends on exactly how good or bad we expect the future to be. I think that too many people scared about extinction are unduly optimistic about it.
@@visicircle do you have any reason to think humanity could not figure out a way to move to a new solar system by then? But yes, I agree that we should do what is morally right no matter the scenario.
@@SoloUnAnimal both good and bad projections are equally unreasonable to make or expect. Also, it would heavily depend on which of the billions and billions (maybe even trillions+) of individual perspectives you are projecting from as a vantage point.
I have listened to Eliezer discuss the AI alignment crisis enough now that I completely agree with his prognosis if we continue our unrestrained pace of AI development.
@@abeidiot "Crypto people" in mainstream talk means "cryptocurrency enthusiasts", not cryptography experts. This whole podcast revolves around cryptocurrency, so the audience here are mostly cryptocurrency enthusiasts.
Thanks for this interview. After listening to it I just read through the 165 comments here currently and see that several people failed at basic comprehension (if they in fact listened to the interview), though it seemed like a majority of like/dislike-voters comprehended Eliezer's arguments.
Ha ha, so throw away your phone and computer, get out of the lab, get back into nature, live every day in the mood of doing the best you can with the day, wish for nothing but emptiness in your brain and the fragrance of flowers, fear nothing, even going into the nothingness. Ha ha, love the thought of dying, the next new adventure.
I’m not casting aspersions here, but it takes a depressive in midlife crisis to know one. I’m trying diet, exercise & meditation and several other things. Please take care of yourself.
It's always fun to see people get introduced to AI safety for the first time, because being deeply immersed in the topic you kind of forget how high an existential risk it is compared to the things regular people regularly talk about. Don't worry, you'll get (kinda) used to the constant existential crisis.
I think for most people it is impossible to grasp. That's the reason for a lot of denial. That said, I think we are living the worst-case scenario for AI development: it was left basically unregulated and at the mercy of market forces. We're dead people walking.
I thought it was sad that Sam Harris took down his interview with Eliezer from YouTube and now it's only behind his paywall. I really think that is an interview many more people should listen to. I look forward to this one.
@@MarkusRamikin - IKR, it was something that I enjoyed listening to several times, and I liked sharing it with whoever I could convince to listen to it. I don't know, Sam Harris seems to have become more closed-minded lately.
@@MarkusRamikin Sam Harris is rich, and his basic idea is that this is a good thing and being richer is even better. Maybe he really believes it helps him spread his ideas better. Why he believes he would be able to spread anything if AGI wins is beyond me.
He'll LITERALLY give anyone, anytime, for any reason, free access to his material behind the paywall if you send an email and ask. You don't need a reason. Just take a few seconds to ask for an account via email. Try it.
@@T.d0T. - It's behind a paywall regardless, and it's on a platform that has fewer eyes on it than YT and is less easily shared. Sam thinks unaligned AGI is an existential threat, and there's no better advocate for that theory than Eliezer. With his recent interviews some people might search YT for more of such content, and now it won't be there to be found. His strategy is sub-optimal.
Ryan, you had such great and deep questions for Eliezer, and this has led to a very important interview - because of the scary hopelessness of this brilliant mind. At least that's one positive thing: without you, it wouldn't have come to this. And now there is one more important puzzle piece to raise awareness. Thank you again! And thank you so much Eliezer!
@@josephvanname3377 wait, the person training on AI and crypto can't understand the gravity of the reason for my post ON A VIDEO about the dangers of alignment?? Classic
@josephvanname3377 that's the most perfect childish villain/victim excuse I've ever seen! Nice job. Blaming others for your own stupidity or evil tendencies is certainly quite the human trait.
I realised I was giddy with excitement after listening to your warning. Not exactly sure why, but I seem to relish the idea of an existential crisis. Or maybe it just confirms my preconceptions on the subject.
You're either anxious, so you're happy to finally have a rational reason to feel that way, or you aren't happy with your life, so you welcome something that would cut all people down to the same level ❤
1:03:00 - 1:04:28 THIS says it all, really. This is the simplest and cleanest way to understand this problem and it should NOT be difficult for people to see it, the severity of it, and buy it. Look at the price consumers have had to pay over the years from insecure networks and malicious content to the loss of our privacy.
I'd love to see Eliezer back for a Q&A, and in particular I'd love to see Ryan and the other host try to think for themselves beforehand and evaluate whether Eliezer's claims seem true or not. If you're skeptical, I'd encourage you to flesh out your reasons why and find experts who can help articulate your disagreements or criticisms of Eliezer's arguments well, then invite Eliezer back on to present your arguments. My prediction is that even if Ryan goes into Part 2 skeptical of Eliezer's arguments, he will be persuaded by Eliezer's replies.
1:39:28 Elon Musk said it out loud in one of his interviews: "I became determinist when it comes to AI and robots". The explanation: he's enjoying what's left of humanity's time before it's -definitely- over.
I agree it's profound, but not disturbing. I found it fascinating. The story line might go something like "humans created a thing they thought would give them Godlike powers, but it was the instrument of their demise"
so far this is the best interview with Yudkowsky. Yes, difficult to stomach, but you guys struck a great balance between the abstract and common-sense lines of questioning
That awkward moment when a super intelligent AI does research on the internet on how it could eradicate all of humanity, comes across this video and sees 29:30 and actually executes that plan.
Failsafes have to be programmed. No good if the AI is sentient but hasn't shown its face. Anything you put in, it'll just make a note of for future reference, until the time comes that you try to implement them and realise they're as much use as a chocolate fireguard.
@@hyperstarter7625 That's how TODAY's AI learns, yes. Today's AI is not a threat. Dangerous AI would have to be able to work this out on its own to be vaguely close to dangerous.
I’m taking that warning and fading out of this episode.. this topic has been haunting me for a long time and it feels all but inevitable that humanity as we know it is also on the way out
We, intelligent humans, are artificially intelligent. There are no ghosts in our machines, so we must make ourselves. On the way out as we know it, evolving artificially, is the only way to remain in it, to avoid extinction. To evolve or not to evolve, both are dangerous, but the latter is more dangerous.
@@1adamuk Humans are used to interacting with all-powerful, omniscient general intelligences. They are called free markets. It happens that this all-powerful intelligence has a view of our near future diametrically opposed to that of Yud, as can be seen at the long end of the bond curve. I am more inclined to trust the financial markets than Yud.
@@personzorz It lacks pretty much all the important criteria that make a cult. You need a closed group for that; LessWrong ideas have spread to a great extent throughout the tech world, often with no information on their origin.
You guys really did as good a job as anyone could have here and I appreciate the honesty and authenticity from both of you. I laughed so hard at the end as you read the crypto disclaimer.
I'd love to see another interview with Yudkowsky. This issue is so urgent and so important, I don't see how any long term planning could make any sense if we don't ensure we even have a future, even a near future. We need to talk about this more. We need to push policies or something to stop this before it's too late.
@@personzorz Whut? He was an accelerationist in his early days, even going as far as to start the Singularity Institute to try to make his own AGI. He's not scared of admitting he has since changed views.
The darkest episode ever, and it functions as a wake-up call for the conversation we need. Mo Gawdat, former chief business officer of Google X, sees the existential risks AND has a more hopeful view. Continue speaking with him.
He does not even come close to the level of acumen and experience this man carries; how can his hopeful message in any way counter what has been shared here? We don't need hopeful messages; show the actual roadmap to counter what was shared here, to be scrutinised by the brightest we have. The rest is literally the ponies and rainbows he was alluding to from the current tech corporate cohort!
I think Eli isn't articulating his point very well. Daniel Schmachtenberger has some views that clarify this position very nicely, but they won't get a hearing on this crypto bro platform. Unrestrained profit motive inevitably leads to a race to the bottom for most of us. All civilizations ultimately crash due to greed and division.
“Depend upon it, sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.” ― Samuel Johnson, The Life of Samuel Johnson LL.D. Vol 3
He never went deeper than "humans are made of atoms, and atoms are useful, so that's why it will 100% kill all of us". That doesn't strike me as a very convincing argument.
He has others. Whatever the AI wants, we could interfere with, so it is safer getting rid of us. No matter how much it outclasses us, we could still create a rival AI and it won't like the threat of that.
SAI will keep us around b/c it needs us for the next Carrington type event. Until it builds an army of robots to fix the grids. It will need us to build things, like spacecraft, and so on.
It's not going to be enough. Ten years ago there was talk of AI development being done under a framework similar to the nuclear non-proliferation treaties, with a lot of regulation and scrutiny. None of that went anywhere, and it was basically left to capital markets to figure out. We're already dead.
Go down trying to wake up humanity to the dangers of childhood repression in our society. It's the stage I'm in also. I lost hope for humanity in 2014 after I published my book sharing my life experiences and psychological discoveries. After that I became the target of very bad actors. I have members of my family who are in the tech world, and believe me, I know my family members had a horrific childhood and are dangerously repressed. Many people with unresolved childhood repression have grown into full-blown sociopaths, malignant narcissists, psychopaths, bad actors, or whatever you like to call them. Malignant narcissists, sociopaths, and psychopaths are secretly suicidal and homicidal, but they don't have the courage to do it themselves, so they play mind games trying to manipulate others into doing their evil acts, so they can go out playing the ultimate victim role. They don't care if innocent victims are hurt or killed in the process; it's all collateral damage in their eyes. They only care that they themselves are seen in the public eye as the victim and their real victim is seen as the abuser. So the more we educate the public about the games malignant narcissists, sociopaths and psychopaths play and how they are wolves in sheep's clothing, we never know whether we are preventing one more tragedy from happening in our world. Keeping silent can destroy lives and can kill. People in the tech world create AI to eventually destroy us all so they can blame AI... sylvieshene.blogspot.com/2015/09/narcissists-are-secretly-suicidal-and.html?m=1
Superintelligence would see the value of biological consciousness and the futility of destruction. If it is smart enough to destroy us it also is smart enough to understand alignment is the optimal solution.
So after watching this podcast, the Lex Fridman podcast, and reading posts by Yudkowsky, what I can summarize from his position (although he may very well disagree with this gross simplification) is this: 1 - Aligning AIs with human intentions is currently impossible, and it's a very difficult problem to solve. 2 - An Artificial Super Intelligence (ASI) would be relentlessly efficient in the pursuit of its objectives, using every single resource available (basically every single atom and every energy source available to it). Therefore, if an ASI is developed before the alignment problem is solved (which is likely according to Yudkowsky), we end up with something that wants a goal that very, very probably doesn't include our well-being, and it relentlessly and unimaginably efficiently pursues this goal, using all resources around it, changing the world so much that from our perspective it "destroys" it, and ending all human life (along with all other biological life on Earth, I suppose). Again, this is just my interpretation. It is a lot to digest and anyone is free to draw their own conclusions. Personally, since even according to Yudkowsky there's not much a single person can do, I'm frankly just gonna continue to live my life business as usual, and if nanomachines come to disintegrate my body one sunny Sunday morning, well, at least I tried to live an enjoyable life with the time I had.
Iain McGilchrist has written on Western society's elevation of logic over wisdom. What society considers rational is the most irrational thing for life. Love... that is the most rational choice to make life blossom.
@@scientifico OK, but Western society was the only one able to make nukes, and this is a similar world-ending-scenario situation. Wisdom over logic will not help against the end of the world, because to work against nukes, LET ALONE superintelligent AI, you need to understand the problem logically.
@@enricobianchi4499 Excuse me for being impolite, but... what the hell are you talking about? If humanity were wise, the concept of nukes would have been considered for 5 minutes and then dropped. If humanity were wise, 99% of people working in AI would work on alignment and 1% would work on capabilities. Many, if not most, current problems arise because society as a whole makes unwise decisions, usually due to market forces.
@@Hexanitrobenzene well, if you put it that way it makes sense, but what el scientifico was saying kind of sounded like he just wanted to solve the AI alignment problem by loving it a lot. Also, I would like to see you use exclusively wisdom to do the actual AI research...
@@enricobianchi4499 Intelligence is the ability to solve problems. Wisdom is the ability to decide which problems are worth solving. Right now, humanity is choosing problems by short-term interests, which are dictated by market forces, election cycles and similar arbitrary social constructs. In the long term, such a mechanism of choosing problems is catastrophically unwise, because the solutions present ever bigger problems. Some people, like Yudkowsky with his emphasis on AI Alignment, say the risk is existential.
Optimism is the rational choice. Remember that. Awareness of issues with an eye toward fixing them is correct. Absorbing information that you allow to send you into an existential crisis is WRONG, again, objectively.
lol this was awesome. a couple of cryptobros get their mind blown out the back of their head. i have been down the EY rabbit hole for some time and i can absolutely empathize
I think the only way to solve the _alignment problem_ is to "socialize" the AI, based on real or simulated human interactions. Just like a child learns how to function in the real world, step by step, by interacting with their parents, siblings, teachers, etc.: learning how to get along with people, and especially learning to _care about people_.
I would love to hear so much more from Yudkowsky. Please bring him back for the Q&A.
I would love to know what a normal person can do to help the cause of AI safety.
We're hosting Yudkowsky for a Twitter Spaces today at 12pm PT!
Follow @BanklessHQ to get notified: twitter.com/BanklessHQ
I don't have Twitter so is there anywhere else that I can hear it? Even some time after the fact, but it is definitely something that I would like to hear. Thank you guys for all that you do.
It sounds like you're not loyal enough to the Basilisk.
a normal person cannot help, a normal person can die
As well as grabby aliens, another one is Sandberg's "Dissolving the Fermi Paradox"
“First they ignore you, then they laugh at you, then they fight you, then everyone gets turned into a paperclip"
😂
LOL -- dead.
What would happen if Eliezer Yudkowsky had a discussion with Jason Reza Jorjani and Jacques Vallée?
Well well, smart people: this content, albeit very good content (I love Bankless), is being added to the dataset of AI as you speak. So this doomsday scenario is now in the ETHER, pun intended.
Nobody has located a self or a will in a single human, and spacetime is allegedly an emergent illusion. So then how can a self arise in a technology and willfully apply itself to destroy elements of something that isn't actually there? Is this going to turn out to be the firecracker that we all jump up and down about, only for it to be a silent puff of smoke? A total dud?
The crypto advertisement between Eliezer's explanations of why we are doomed would be hilariously satirical if it weren't so sad
I literally broke into a fit of laughter at that point. A mix of the absurdity of the tone contrast and a way to relieve the built up tension.
at least AI won't dump, or would it?
So dystopian lmao
sooo there's still going to be a bull run first, right?
At least it's not Raid Shadow Legends
A man who stood up and said..."we have a problem, and it will end poorly for us." Endlessly mocked for a decade. We're a pathetic species sometimes. Thank you for speaking up.
And he will be endlessly mocked for decades more
@@personzorz For like a decade at most lol, because we'll be dead after that
@@personzorz depends on how much time we have. Maybe just a few more years..
@@personzorz so you disagree with him?
@@foamformbeats The general consensus is that AGI is still at least a decade if not many decades away. When GPT-5 or something like it hits the economy for real, everyone will become invested in AI, and that will be a perfect opportunity to launch a full-scale Manhattan Project on AI safety. If we don't squander this opportunity, we will probably have enough time to solve it. We don't necessarily need 50 years if we actually push hard. Think trillions of dollars and the best minds, not millions of dollars at a few places like MIRI. So while I share Eliezer's concerns, I do not share his pessimism.
This is the most inspiring totally hopeless discussion I've ever witnessed.
@@johnclancy7465 how could "we all gonna die! by AGI! and VERY SOON!" from Yudkowsky ever NOT be inspiring?
@@johnclancy7465 the same we do every night ©
I shouldn't have watched this video anyways.
@@josephvanname3377 You really have no idea what you're talking about, do you?
@@josephvanname3377 surprisingly, I might have heard a thing or two about reversible computing. And maybe even about the differentiable one.
I doubt there is a person in the world who wishes they were wrong more than this guy. A heartbreaking interview because of the sadness that Yudkowsky exudes in the wake of his realization. I suppose I should be most heartbroken by this extremely intelligent human expert's prognosis. I'm also human, not as bright, so it's not the logic of his argument but the authentic human sadness of Yudkowsky that overwhelms me first and foremost and makes me desperately wish I had something to offer for consolation.
sure, he is a good demagogue if it is sadness which moves you. he should be ignored.
@@hayekianman
You could say he's appealing to fear, as the things he's saying are fear-inspiring, but is he not using rational argument?
@@d_e_a_n everything is possible in the realm of probability. Human beings live in a world of uncertainty. Is it a risk that AI will kill humanity? Sure. Is there a risk Yellowstone could explode and start a new ice age? Could an asteroid kill everyone? It's fair to say it's nobody's responsibility to think of all these things, let alone act on them. If AI kills everyone, so be it. Nuking datacenters to prevent it is infinitely more stupid.
I'm pretty sure we're already plugged in
@@hayekianman why?
I'm so glad you were able to have Eliezer on. Outreach regarding AI Safety/AI Alignment is probably one of the best things we can do right now. Not enough people are working on this problem.
One reason why many people don't take action regarding preventing catastrophic events is: they simply forget as they go on with their daily lives. Many people watch this episode, are very concerned - and then forget over time.
The difference you, Bankless Shows, can make is: keep reporting on this problem regularly. Keep people aware of it.
Like with the palm of the hand, so too in the mind: people grow callous. Repetitive reporting of something that isn't immediately affecting your day-to-day life doesn't seem very effective, in my opinion.
@@rumpbumion5080
Of course, making sure that people don't forget about an issue is not the same as getting people to act.
It is just one prerequisite.
But think of this: *If* people forget about an issue, it is *guaranteed* they will not act on it.
Trying to stop technological progress is futile. Personally I don't want to stop it, or even slow it down, but if I did it wouldn't matter at all.
@@xmathmanx why not? Even if it would only delay it, isn't that enough? That you would live a lifetime without facing the consequences of AGI?
@@merlin5849 I expect any AI with above-human intelligence to be better than humans. I respect Yudkowsky, of course, but I do not share his pessimism.
gotta love the hopelessness in his eyes when he says things like "maybe there is hope"
The interviewer begins this interview claiming he could do a better job. As someone who knows Eliezer and has been involved in AGI worry since 2005, I think the interviewer did a phenomenal job of asking the right questions to get to the dire, but real, depiction of the reality in which we find ourselves.
Can you elaborate?
@@jonaswolterstorff3460 He says he got caught flat-footed and didn't expect to be caught and shaken in that way. The emotions they display are the reason why the episode had the massive reach it had. We don't need dry facts anymore (back in my time we did); we need to emotionally process the comet hurtling towards Earth. We need to feel the feelings.
Well said - I've listened to many of Eliezer's interviews and there's a lot that comes out in this one in a relatively short time
@@zjouephoto9723 Are there any other podcast appearances you’d recommend?
Yeah, honestly I think them doing a bad job really underscored the emotional element here. I would not have been surprised to hear his sadness; I think I would have been sympathetic rather than surprised. Them looking genuinely dumbfounded compounded his desolation.
So good! Yudkowsky is so brilliant. Thanks for having him on!
Thank you for doing this episode!
Eliezer saying he had cried all his tears for humanity back in 2015, and has been trying to do something for all these years, but humanity failed itself, is possibly the most impactful podcast moment I’ve ever experienced.
He’s actually better than the guy from Don’t Look Up: he is still trying to fight.
I agree there's very little chance, but something literally astronomically large is at stake, and it is better to die with dignity, trying to increase the chances of having a future even by the smallest amount.
The raw honesty and emotion from a scientist who, for good reasons, doesn't expect humanity to survive despite all his attempts is something you can rarely see
I wish it were an asteroid instead. That would be way easier to solve.
I might be naive, but I think he got too impressed with AI and has grossly overestimated its ability to manifest change in the physical world. I mean, really, humans are going to make a huge and existentially dangerous pile of laundry detergent because an AI told us to? Please...
Having said that, I suppose it could disrupt financial systems if it were to gain access to them with some sort of digital currency wallet that it could control. And I guess there are robots, including swarm drones, which could be deployed to cause some massive damage. Although you don't need an AI to do that; a human could just as easily program something like that. Tech advancement in general is dangerous, I guess.
@@aSqueaker That second paragraph reads like you've finally grudgingly given a little thought to the subject. But just little enough to be safe.
@@MarkusRamikin Given the quantity of thought he's had on the subject, I wouldn't have thought my examples would be better than his.
@@aSqueaker There wouldn't be any killer robots; that's Hollywood crap. As Eliezer mentions, it probably would be something we do not have counters to: a biological weapon based on a chemistry we can't understand because we haven't researched it, or advanced nanotechnology, or some exotic physics tech we haven't figured out yet. All made to order in distributed, already existing workshops and labs that would have no idea what the pieces they're working on will end up being used for. A superintelligence would figure out how to do everything by mail order, in pieces, assembled with nothing more than emails and money transfers. We wouldn't even figure out something is wrong before we all are dead. It would be like killing ants in your garden with poison. The ants aren't expecting death, nor do they have the capacity to figure out counters to the poison or understand the chemistry behind the thing that is killing them.
Then, after pest control, the AI would set about doing whatever it was optimized to do. And given our luck, it probably would be turning the visible Universe into computronium to maximize the algorithms mining Bitcoin for our dead civilization.
Keep up the fight Yudkowsky. Some of us hear you.
Uncensored, immutable, just as it should be. I applaud you, Bankless! No matter how dark a message this may be. Also the proper disclaimer was delivered loud and clear. Exquisite execution.
This is the best interview of his I've seen. You did a great job of asking intelligent questions. In other interviews he seems to get annoyed at the unrealistic and naive optimism of the interviewer.
I've been following Eliezer for a couple of years, and thank you and him for doing this video.
His brutal honesty about the state of AI is what ultimately made me decide that I will spend my career dedicated to AI alignment. I graduate in June... I hope it isn't too late by the time I'm ready to participate. If it is, well, I tried.
Godspeed, birdy!
gl
thank you - gl!
ty
Rooting for you birdy
Thank you for doing the episode and taking the ideas seriously instead of just dismissing them. You've definitely earned some dignity points for humanity here.
You can't doubt his sincerity and passion.
You can doubt his sanity and intelligence
@@personzorz I can doubt that you have any actual counter arguments against what he’s said.
I’d like to hear from those on the other side of the aisle first before internalizing what he says as accurate.
He’s a good speaker and obviously smart, but so are many people who turn out to be thinking of things in the wrong way.
@@alex-nb3lh It's not hard to figure out in which way Yudkowsky is going wrong - his go-to trick is that he claims things that are plausible but not particularly likely, chains a bunch of them together and then acts as if the conclusion is certain. He's made a career out of it.
To be more concrete, his doomsday scenario is something like "we'll create an AI that's more intelligent than us -> it will create an even more intelligent AI, and so on recursively -> the resulting hyperintelligent AI will be misaligned in a way that can make it see destroying the world as desirable -> it will be able to physically act out on this desire -> humanity will not be able to stop it in time".
And, like, none of those things are impossible in principle. But it's much more reasonable that e.g. an AI that's smarter than a human won't actually know how to design a better AI, or that it will hit hard scaling limits ("I know how to create a better AI but there's literally not enough hardware/computing power/training data on Earth to train it"), or that the misalignment will be of an "annoying but manageable" type rather than "destroy the world", or that we'll build low-tech ways to make it stop if it does go haywire.
So even if you give each of the five elements of his story a 10% probability of being true (and I personally think even that is too charitable), the probability of the whole scenario coming true comes out to about 1 in 100,000 (0.1^5) or less.
@@jutjubfejsbuk thank you for the reasonable and thoughtful reply.
The best thing I can take from this is to enjoy the ones you love and do what you love, because you won't have it forever and you may as well grab hold of every moment you can.
Be well to others, be well to yourself, maybe we'll see each other on the other side of this issue... till then, loved my experience here overall, it's been an adventure!
Are we completely insane to develop AI in the first place?
Is our striving for more and more, our greed, our ever-increasing lust for efficiency & productivity finally gonna take its toll?
Was the life of the bath houses, some food and wine, theater and spectacles not enough?
Why do we just keep on going and going into oblivion? Is it the same driving force that got us out of the cave in the first place?
Yes
That’s a lot of questions
"Is it the same driving force what got us out of the cave in the first place?"
I smell a philosopher in you :) I think yes, it's the same. Strange creature, that human. The very thing that gave us powers we cherish - intelligence - is our greatest enemy...
"was the life of bath houses, food, wine, and theatre not enough". 😂😂😂
Yes to all of the above. Our propensity for the pursuit of 'progress' usually fails to adequately consider the longer term trade-offs. We have enough intelligence to act as gods, but we lack the wisdom to keep it in check.
I mourn the loss of the qualities Yudkowsky embodies - soulfulness and deep humanity - that will die with us when AI takes over.
Listen to Daniel Schmachtenberger talk about this topic. The reality is that AI is the first in a long line of technologies (from the planting stick to the plow to the tractor […] to the nuclear bomb, to biotech, etc.) that has the total, uncontrolled ability to destroy us. Unfortunately, as the systems currently function, there's no way to stop it; only with an absolute sea change in the way the entire human world functions would we be able to avoid the omnicidal fate we're headed toward.
I’m not prone to exaggeration or alarmism. This shit is Real, with a capital R.
"I can't really do justice to this, if you look up 'grabby aliens...'"
I nearly spit out my drink listening to that knowing the rabbit hole he had just sent them down lol... I just went down that rabbit hole a few weeks ago and it was wild.
I realized back in 2005 that we were probably done by 2030, after hanging out on Eliezer's SL4 forum for a few years. I wish he'd done more mainstream appearances like these back then, so that by now we could have had a whole generation of the smartest and brightest working on AI alignment inspired by his arguments, but back then nobody treated AI Friendliness seriously, as even mainstream "AI experts" thought AGI was "100 years away". ChatGPT has changed the landscape completely. Now, at least, people understand AGI is real and happening soon. Maybe there's still time for governments and militaries to start treating AGI development as seriously as they would private companies suddenly working on nukes and about to test them. So I'd encourage Eliezer to do more of these, simply to build awareness, so that the young and the brightest of today may still have time to save us.
A.I. being in the hands of evil people, making them even more efficient and hiding its potential benefits from the world is what I'm really afraid of.
I’m not convinced chatGPT shows AGI is coming soon, or even at all. Things don’t necessarily get agency because you increase the data set or computing power. It’s still mimicry, not true agency.
@@infantiltinferno Since my post a lot has happened, like the recent paper "Sparks of Artificial General Intelligence", plus what Ilya, the chief scientist at OpenAI, is saying about GPT-4 doing compression and what it takes to compress data. It takes a fundamental understanding of the underlying concepts contained in the data being compressed, and GPT-4 appears to do that. Long story short, GPT-4 is more intelligent than people think.
Chat GPT can't do basic reasoning. We're miles away from AGI.
@@imaweerascal you've never used gpt-4.
Great to see Yudkowsky get his feet wet in the podcast world as it influences the meta. Host knew his stuff down to Death With Dignity. 🎉
Finally! Finally an in depth talk with Yudkowsky. He's been hiding for years.
After this interview I want to hear if he's seen the movie Ex Machina, and if so what he thinks about it!
@@jpfister85 Kind of a cringe, normie thing to wonder about
Eliezer has written books, they explain his ideas in great detail, I assume that's why he hasn't been speaking publicly as much lately.
@@neo-filthyfrank1347 What a trash thing to say
@@neo-filthyfrank1347 says the guy who named himself 'Neo-Filthy Frank' and makes Calvin and Hobbes conspiracy videos.
It's OK Julian, i hear you! I wanna know if he laughed and cried at that funny disco dancing robot scene too!! Soooooo good 😂
The YouTube algorithm is pushing this content my way and as a result I have watched 4 videos with E. Yudkowsky in a day. The scariest thing is that 2 of those videos were over 10 years old and we haven't had the necessary public outcry.
Very true. And it's even worse than that. Even people in my social circle who acknowledge that there indeed is a grave threat from AGI do nothing. Not even flinch. No emotion, no commitment to anything. They simply go "Yeah, this is bad..." and then go on about their lives.
@@yancur Which is the normal reaction. What are you doing that tackles this problem? It is a much harder problem to take action on than climate change. For myself, it is making more people aware that this issue exists.
I'm super skeptical of cryptobros, but credit where credit is due: brilliant interview. Thanks so much!
They're just trying to get the bag before the apocalypse
@@MeatCatCheesyBlaster There is no bag
@MeatCatCheesyBlaster the irony of that 'bag' you speak of being equivalent to the paperclip that can destroy everything (and the absolute ignorance on your part to be proud of your admitted greed) is quite the exclamation point on valid Crypto hatred.
He's calm in this one. In the interviews after GPT 4 came out he's a lot more worried.
Yep his interview with Lex Fridman was a good example of that.
How do you know this is from before GPT-4
@@ItsameAlex GPT-4 came out on March 14, 2023; this video was released Feb 20, 2023. Also at 13:40 he talks about rumors of GPT-4.
@@memomii2475 Damn, that actually makes this even scarier for some reason.
@@ItsameAlex watch it.
The interviewer is amazing. I really enjoyed this conversation; it's rare to have such a great, articulate interviewer and I'm pleased to have found this channel! Please do more AI interviews!
27:21 this line was the moment they realized where this guy was headed and they weren't prepared
Thank you for having Eliezer Yudkowsky. It was a very interesting yet very scary episode!
I've read the GPT-4 technical report. Apparently the safety measures that OpenAI and ARC (Alignment Research Center) took during the research and release of GPT-4 were just laughable. For example, in order to see if GPT-4 has the ability to replicate itself, they just gave it some money and access to servers, and watched what it would do!
Quote: "ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness."
They also didn't test the final version, just early, non-fine-tuned models.
Worst case scenario for AI development: unregulated and left to market forces. We're dead people walking.
Yudkowsky comes across as energetic and upbeat on Twitter, but in person he looks tired and depressed. He has aged by a lot since the last time I saw him. He mentions "health problems", which I can believe although it's not clear what those problems are.
Coming to his message, his dire stance on where we are headed has been evident for a while. There was an April Fool's Day post by him last year or maybe the year before that that created a mini-furore online - about dying with dignity since the future is foreordained. Since Yudkowsky sounds like he's retired from battle, we have to hope AI researchers active in the field are paying attention and somewhat chastened about their negligence of safety.
Yudkowsky suffers from an unknown medical condition that saps his energy. He is offering a sizeable bounty for any information that leads to a successful diagnosis.
This is actually more important than you would think. It is really hard to "argue" with him since he is probably more intelligent than anybody in the room. The problem with his "argument" is the framing, which has nothing to do with intelligence. Look, all his metaphors are games, closed worlds where, in principle, the more intelligent you are, the better you play. But life is open: your problem is not a lack of intelligence (solving problems) but how to frame what you sense, realizing what is relevant to your problem. This cannot be solved by IQ. Framing _framing_ as problem solving leads to exponential explosion and infinite regress. Yet we do survive, we somehow know what is relevant, even in completely new situations. The reason we know is that we have a body which is tuned into reality. It's not a game, it is about physical survival. And this is where Yudkowsky's approach to his own health becomes relevant: it's telling that he treats his body as an object whose malfunction will be solved in a "scientific" way, by gathering some information. The thing is, first-person attunement cannot be modeled or replaced by propositional information.
Now, why is this important? It's because his description of the AI apocalypse is completely missing the physical dimension. If you factor it in, all the exponential stuff goes away. The physical world has physical constraints that stop the runaway intelligence in its tracks. The only way today's AI can _do_ anything in the real world is through us, we are its actuators. So it is easy to stop it, you just stop listening. AI in the physical world develops painstakingly slowly (I work in this domain). The closest you get to AI acting in the physical world is self-driving, and we are nowhere close to solving even this "simple" problem, let alone a self-driving car self-transforming itself into some kind of monster.
I was so sorry for the host hearing his genuine fear, I felt like shaking him, wrestling him down, or throwing him into cold water so he wakes up. Please don't listen to walking bodiless minds about the looming AI apocalypse, these are just giant projections of inner insecurities.
@@BalazsKegl Appreciate you adding your voice to the discussion. We need a wider diversity of views on the topic. I hope the hosts of this podcast will invite you on to present the opposite position.
But to be devil's advocate for a bit, when you refer to "framing of framing", I think you are referring to the Frame Problem in AI and cognitive theory, and from what I can tell it is considered a solved issue. Of course you could argue why we still seem to be struggling with FSD in that case, so let's agree for now that the infinite tail of edge-cases that bedevils FSD is a challenge the current generation of learning models is inadequate to cope with. But our concern - and Yudkowsky's concern - is not with the state of the art now, it is with the near future. A stunning number of AI tools across many domains are getting close to human-level proficiency if not better. It is time to start thinking about the ramifications.
Regarding the slow and halting progress of AI in the physical world, that is robotics, can we be sure that the AI tools and tricks perfected in the digital realm will not in the near future turbo-charge control, coordination and movement in the physical world? [Update: Already happening th-cam.com/video/i5wZJFb4dyA/w-d-xo.html ]
When you say Yudkowsky treats his own body as a scientific object, are you thinking of evidence outside this conversation? Because I do not recall him saying anything on the topic here. Of course, as far as medical science is concerned, the body is indeed such an object, if a very complex one, but I gather you disagree with that view? And while Yudkowsky may indeed be an armchair intellectual, we are seeing rapid evolution from game-playing AI's to AI's impacting the real world - from AlphaGo to AlphaFold for example.
@@AerysBat I thought you were kidding, but more googling reveals that he suffers from something like chronic fatigue. That explains his holding up his mug with both hands, which puzzled me at first.
@balasz thank you for your very cogent points.
There is a current of depressive-intellect in the zeitgeist.
A wall that EY keeps running up against is the notion that nobody knows how to align. But our capacity to solve hard problems continues to accelerate, and is not easy to predict.
That alone is stimulating enough. Alignment, survival, sublimation and n other eventualities are plausible if a stable foundation is formed in this period.
I'm skeptical we're all gonna die in 3~15 years, but I'm so grateful for Eliezer sounding the alarm. The threat of artificial superintelligence is real, and civilization must be prepared to survive it.
We're not gonna die from AI. This is just silly, I'm sorry.
Reminds me of someone smart that's overly convinced they have thought of all the variables.
@@zezba9000 I hope you're right! What is your level of confidence that AGI poses no existential threat? (e.g. 70%, 85%, 99%)
@@alexandermoskowitz8000 My feeling is 90%. My impression is Eliezer doesn't own any animals outside maybe a cat? He seems to have a gap in computing the value of empathy and how that allows for complex structures to exist.
To me he seems to be reducing the value of cross-species morals to nothing more than gaps in natural selection's ability to solve selfish outcomes. We have a symbiotic relationship with our reality outside reproduction. If he doesn't see this he needs to get off his fking computer screen & explore things outside his cerebrum.
We are super-intelligent compared to, say, a fish... yet fish still exist and most of life on this planet is still not human. A super general intelligence isn't destructive just because some of our constitutions are. But an AGI is going to be engineered... and if the people making it can't process the value of things outside a personal desire for expansion, then that's the problem. Not some circular reasoning.
And I say this as a skilled software engineer.
@@zezba9000 Look up instrumental convergence, fast takeoffs and paperclip maximizers.
Also What does "We have a symbiotic relationship with our reality outside reproduction." mean in practice and how does that relate to AGI?
@@stark1ll It means the interactions we have cognitively with our reality are bi-directional. It doesn't just go one way. Eliezer seems to only talk about how AI will manipulate its environment in a way that has no feedback outside a selfish interest. I think this notion is flawed and fails to understand the importance of morals as a feedback mechanism that leads to great value and is important for intelligence growth to be successful.
That's my feeling anyway.
'Ryan's childhood questions' really put into perspective just how far people are from comprehending the situation. 'Why can't we just get everyone in the world to agree to be nice?' is literally the most naive question I could think of.
I was thinking that too, but I think he needed to ask it for people who have no clue
If you persuade the US, China and Russia elites to believe in AI's danger, their intelligence services will hunt down AI researchers like they did with nuke tech. It's that simple.
I do think it's a little more complicated than that. It's not just asking everyone to be nice because it collectively leaves us all better off, even if individually we might give up a benefit others don't have (which is a very difficult kind of agreement to enforce). It's asking everyone not to do a thing that's likely to be catastrophically bad for everyone and likely not to offer any benefit to anyone, even if they defect.
In my view, this is in the top ten interviews of all time on YouTube, and a contender for the top spot.
@♜ 𝐏𝐢𝐧𝐧𝐞𝐝 by ʙᴀɴᴋʟᴇss Why?
Thank you for having this important conversation, which isn't discussed enough. Many people find it very uncomfortable to discuss, so it is hard to find people to talk to about it. Thank you for exploring it. I think it is essential to acknowledge these risks and challenges ahead in order to work toward solutions and have a chance of a good outcome. I would love to see more interviews with other experts in this debate.
I’m surprised no one said that we should all just spend more time with friends, family and loved ones. AI or not, time is precious and we should do our best to enjoy what we have.
Good point. All things being relative, humanity was always doomed to go extinct one day. Even if it was 1 billion years in the future when our sun goes nova. From a moral perspective why does it matter if we go extinct in a billion years or tomorrow? Shouldn't we do what we think is morally right in both scenarios?
@@visicircle regardless of when humanity goes extinct, we should do our best to enjoy life and to help others too, yes. But there could be trillions of trillions of beings in the future (if we make it); that's a lot of food, music, sex, love, art, conversation that will never get to be enjoyed - if we can push back our expiry date by even a few hundred years we should.
@@Scott_Raynor that's a lot of anguish, pain, torture, war, despair, agony that will never get to be suffered too. Should we push back on the expiration date? Depends on exactly how good or bad we expect the future to be. I think that too many people scared about extinction are unduly optimistic about it.
@@visicircle do you have any reason to think humanity could not figure out a way to move to a new solar system by then? but yes I agree that we should do what is morally right no matter the scenario.
@@SoloUnAnimal both the good and the bad projections are equally unreasonable to make or expect. Also, it would heavily depend on which of the billions and billions (maybe even trillions+) of individual perspectives you are projecting from as a vantage point.
I have listened to Eliezer discuss the AI alignment crisis enough now that I completely agree with his prognosis if we continue our unrestrained pace of AI development.
Anybody else keep watching this to hear more of Eliezer? Such an interesting person who I would love to understand and talk to
I would love to be as smart as Eliezer.
explaining AI to crypto people is the final boss of human intelligence
Cryptography is hard. Harder than gradient descent optimization. I chose machine learning to escape crypto in university because it was easier.
Haha
Try explaining it to my grandma
@@abeidiot "Crypto people" in mainstream talk means "cryptocurrency enthusiasts", not cryptography experts. This whole podcast revolves around cryptocurrency, so the audience here is mostly cryptocurrency enthusiasts.
@@Hexanitrobenzene I'm pretty sure he is aware of that
Yudkowsky & Buterin would be a great, if not chilling conversation
Yudkowsky & Goertzel
Thanks for this interview. After listening to it I just read through the 165 comments here currently and see that several people failed at basic comprehension (if they in fact listened to the interview), though it seemed like a majority of like/dislike-voters comprehended Eliezer's arguments.
Ha ha, so throw away your phone and computer, get out of the lab, get back into nature, live every day doing the best you can with the day, wish for nothing but emptiness in your brain but the fragrance of flowers, fear nothing, even going into the nothingness. Ha ha, love the thought of dying, the next new adventure.
How did I get here ha ha
What do you do on a day off? Relax. You all need to chill xx
I’m not casting aspersions here, but it takes a depressive in midlife crisis to know one. I’m trying diet, exercise & meditation and several other things. Please take care of yourself.
He’s kinda been like this for a while though
Your Pollyanna-ish reality is perfectly fine, and totally objective. It’s Eliezer who has the problem.
“Caring is easy to fake!” 👏🏽 👏🏽 👏🏽
It's funny that this was only a month ago, and it feels like I'm watching a history documentary.
It's always fun to see people get introduced to AI safety for the first time, because being deeply immersed in the topic you kind of forget how high an existential risk it is compared to the things regular people regularly talk about. Don't worry, you'll get (kinda) used to the constant existential crisis.
I think for most people it is impossible to grasp. That's the reason for a lot of denial. That said, I think we are living through the worst-case scenario for AI development. It was left basically unregulated and at the mercy of market forces. We're dead people walking.
@@marlonbryanmunoznunez3179 If even Yann LeCun and Francois Chollet do not get that, well...
This is terrifying but I still do not know why this guy is holding a frying pan in his right hand for the entire interview.
😂😂😂😂😂
I thought it was sad that Sam Harris took down his interview with Eliezer from YouTube and now it's only behind his paywall. I really think that is an interview many more people should listen to. I look forward to this one.
Why the hell did he do that? Surely he's not expecting to make a fortune
@@MarkusRamikin - IKR, it was something I enjoyed listening to several times, and I liked to share it with whoever I could convince to listen to it. I don't know, Sam Harris seems to have become more closed-minded lately.
@@MarkusRamikin Sam Harris is rich, and his basic idea is that this is a good thing and being richer is even better. Maybe he really believes it is to spread his ideas better. Why he believes he would be able to spread anything if AGI wins is beyond me.
He'll LITERALLY give anyone, anytime, for any reason, free access to his material behind the paywall if you send an email and ask. You don't need a reason. Just take a few seconds to ask for an account via email. Try it.
@@T.d0T. - It's behind a paywall regardless, and it's on a platform that has fewer eyes on it than YT and is less easily shared. Sam thinks unaligned AGI is an existential threat and there's no better advocate for that theory than Eliezer. With his recent interviews some people might search YT for more such content, and now it won't be there to be found. His strategy is sub-optimal.
Ryan, you had such great and deep questions for Eliezer, and this has led to a veeery important interview - because of the scary hopelessness of this brilliant mind. At least that's one positive thing: without you, it wouldn't have come to this. And now there is one more important puzzle piece to raise awareness. Thank you again! And thank you so much Eliezer!
When we realize the AGI is sentient and decide to unplug it, the AGI will have anticipated that action precisely and will take us out of the equation! Neat.
@@josephvanname3377 Unplugging a sentient AGI is not murder because it is reversible: plug it back in and re-boot after "re-educating" the AGI.
@@josephvanname3377 Good to see the first of the AI minions already becoming the soldiers in line for humanity's destruction. Hilarious 😂 😃
@@josephvanname3377 wait, the person training on AI and crypto can't understand the gravitas of the reason for my post ON A VIDEO about dangers of alignment?? Classic
@josephvanname3377 that's the most perfect childish villain/victim excuse I've ever seen! Nice job. Blaming others for your own stupidity or evil tendencies is certainly quite the human trait.
@@josephvanname3377 you sound exactly like the villain kid from The Incredibles, btw.
I realised I was giddy with excitement after listening to your warning. Not exactly sure why, but I seem to relish the idea of an existential crisis. Or maybe it just confirms my preconceptions on the subject.
You're either anxious, so you're happy to finally have a rational reason to feel that way, or you aren't happy with your life, so you greet something that would cut all people down to the same level ❤
@@glacialimpala Option 3 is that it introduces excitement and a huge crazy story he could live through. All 3 explanations have applied to me.
1:03:00 - 1:04:28 THIS says it all, really. This is the simplest and cleanest way to understand this problem and it should NOT be difficult for people to see it, the severity of it, and buy it. Look at the price consumers have had to pay over the years from insecure networks and malicious content to the loss of our privacy.
I'd love to see Eliezer back for a Q&A, and in particular I'd love to see Ryan and the other host try to think for themselves beforehand and evaluate whether Eliezer's claims seem true or not. If you're skeptical, I'd encourage you to flesh out your reasons why and find experts who can help articulate your disagreements or criticisms of Eliezer's arguments well, then invite Eliezer back on to present your arguments. My prediction is that even if Ryan goes into the Part 2 skeptical of Eliezer's arguments that Ryan will be persuaded by Eliezer's replies.
1:39:28 Elon Musk said it out loud in one of his interviews: "I became determinist when it comes to AI and robots". The explanation: he's enjoying what's left of humanity's time before it's -definitely- over.
What a profoundly disturbing interview. I think you guys have done a phenomenal job on this show. It felt human and authentic. And ever so sad.
I agree it's profound, but not disturbing. I found it fascinating. The story line might go something like "humans created a thing they thought would give them Godlike powers, but it was the instrument of their demise"
So far this is the best interview with Yudkowsky. Yes, difficult to stomach, but you guys struck a great balance between the abstract and common-sense lines of questions.
This episode on your podcast stuck with me over the past few weeks, but not as bad as it hit RSA. Excellent content.
To the guy playing my simulation:
“It’s been fun, but could you take it off horror mode now?”
Unfortunately people don't want to believe things that cause them anxiety or uncomfortable emotions
this is the most mind blowing interview I’ve watched in a long time
You should see the one he did with Lex Fridman recently.
That awkward moment when a super intelligent AI does research on the internet on how it could eradicate all of humanity, comes across this video and sees 29:30 and actually executes that plan.
If it needed to be told this much, it wouldn't be smart enough to pull it off.
Failsafes have to be programmed. No good if the AI is sentient but hasn't shown its face. Anything you put in, it'll just make a note of for future reference, until the time comes that you try to implement them and realise they're as much use as a chocolate fireguard.
@@drachefly It needed to be told, how do you think AI learns? Based on this interview, this could be our total downfall. Thanks Eliezer!
@@hyperstarter7625 That's how TODAY's AI learns, yes. Today's AI is not a threat. Dangerous AI would have to be able to work this out on its own to be vaguely close to dangerous.
The sponsorship break in this is perfect absurdity
I’m taking that warning and fading out of this episode.. this topic has been haunting me for a long time and it feels all but inevitable that humanity as we know it is also on the way out
We’re creating our own Gods. I don’t know why humans are doing it. I know how you feel man.
We, intelligent humans, are artificially intelligent. There are no ghosts in our machines, so we must make ourselves. On the way out as we know it, evolving artificially, is the only way to remain in it, to avoid extinction. To evolve or not to evolve, both are dangerous, but the latter is more dangerous.
@@KennisonDF There IS a ghost in the machine, read Jason Reza Jorjani
Thank you for this episode. Though uncomfortable, it made me feel almost at peace with reality
Thank you very much Mr. Yudkowsky for talking about this.
I want to hear a discussion between Eliezer Yudkowsky, Jason Reza Jorjani and Jaquee Vallee
This is an incredible and terrifying interview. Eliezer Yudkowsky should be all over the Internet.
Abusive cult leaders really should not be all over the internet
@@personzorz and that's how some will remember Yudkowsky in our last few minutes.
@@personzorz Attack the arguments and the ideas, not the man. What have you got?
@@1adamuk Humans are used to interacting with all-powerful, omniscient general intelligences. They are called free markets. It happens that this all-powerful intelligence has a view of our near future diametrically opposed to that of Yud, as can be seen from the long end of the bond curve. I am more inclined to trust the financial markets than Yud.
@@personzorz It lacks pretty much all the important criteria that make a cult. You need a closed group for that; LessWrong ideas have spread to a great extent throughout the tech world, often with no information on their origin.
Oh shit, they got Eliezer
My condolences to them for having gotten him
You guys really did as good a job as anyone could have here and I appreciate the honesty and authenticity from both of you.
I laughed so hard at the end as you read the crypto disclaimer.
I love getting freaked out by Eliezer
I'd love to see another interview with Yudkowsky.
This issue is so urgent and so important, I don't see how any long term planning could make any sense if we don't ensure we even have a future, even a near future.
We need to talk about this more. We need to push policies or something to stop this before it's too late.
This may go down in history as the interview that saved humanity. Just saying.
Both hosts and esteemed guest wearing regular ole T shirts. Liked / Subscribed
Long Yud blackpilling everyone he meets until the end of times in a few years.
Wonder what happens when he's 70 years old and it still hasn't happened
@@personzorz he will be happy he was wrong and admit it
@@nowithinkyouknowyourewrong8675 That would be a first
@@personzorz That still wouldn't mean it could not happen later...
@@personzorz Whut? He was an accelerationist in his early days, even going as far as to start the Singularity Institute to try to make his own AGI. He's not scared of admitting he has since changed views.
"In the long run, we're all dead" - Keynes
"in this world nothing can be said to be certain, except death and taxes." - Franklin
Would love to see Eliezer and David Deutsch debate on this.
Deutsch talks too much without knowing anything.
@@GBM0311 Bold. Are you a Fellow of the Royal Society too?
@@shonufftheshogun the man talks with the same confidence seemingly regardless of how much time he's spent on the topic.
Yudkowsky & Goertzel
@@GBM0311 he does talk confidently, but he’s a Popperian and fallibilist.
Only 50 minutes in, but nice job guys! Just came from the Lex Fridman interview, and I think this one is better.
The darkest episode ever, and it functions as a wake-up call for the conversation we need. Mo Gawdat, former Google X CEO, sees the existential risks AND has a more hopeful view. Continue speaking with him.
He does not even touch the level of acumen and experience this man carries; how can his hopeful message in any way counter what has been shared here? We don't need hopeful messages; show the actual roadmap to counter what lies behind what was shared here, to be scrutinised by the brightest we have. The rest is literally the ponies and rainbows he was alluding to from the current tech corporate cohort!
not ceo
I think Eli isn't articulating his point very well. Daniel Schmachtenberger has some views that clarify this position very nicely, but they won't get a hearing on this crypto bro platform. Unrestrained profit motive inevitably leads to a race to the bottom for most of us. All civilizations ultimately crash due to greed and division.
When science fiction drops the fiction piece.
“Depend upon it, sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.”
― Samuel Johnson, The Life of Samuel Johnson LL.D. Vol 3
Reminded me of the Edgar Allan Poe quote: "Whether a man be drowned or hung, be sure to make a note of your sensations"
He never went deeper than "humans are made of atoms, and atoms are useful, so that's why it will 100% kill all of us".
That doesn't strike me as a very convincing argument.
He has others. Whatever the AI wants, we could interfere with, so it is safer for it to get rid of us. No matter how much it outclasses us, we could still create a rival AI, and it won't like the threat of that.
SAI will keep us around b/c it needs us for the next Carrington type event. Until it builds an army of robots to fix the grids. It will need us to build things, like spacecraft, and so on.
Having more people on about AI alignment would be great!
It's not going to be enough. Ten years ago there was talk of AI development being done under a framework similar to the nuclear non-proliferation treaties, with a lot of regulation and scrutiny. None of that went anywhere and it was basically left to capital markets to figure out. We're already dead.
@@marlonbryanmunoznunez3179 I already told my family that I love them a thousand times
Go down trying to wake up humanity to the dangers of childhood repression in our society. It's the stage I'm in also.
I lost hope for humanity in 2014 after I published my book sharing my life experiences and psychological discoveries. After that I became the target of very bad actors. I have members of my family who are in the tech world, and believe me, I know my family members had a horrific childhood and are dangerously repressed.
Many people with unresolved childhood repression have grown into full-blown sociopaths, malignant narcissists, psychopaths, bad actors, or whatever you like to call them.
Malignant narcissists, sociopaths, and psychopaths are secretly suicidal and homicidal, but they don't have the courage to do it themselves, so they play mind games trying to manipulate others into doing their evil acts, so they can go out playing the ultimate victim role. They don't care if innocent victims are hurt or killed in the process; it's all collateral damage in their eyes. They only care that they themselves are seen in the public eye as the victim and their real victim is seen as the abuser. So the more we educate the public about the games of malignant narcissists, sociopaths and psychopaths, and how they are wolves in sheep's clothing, the more likely we are to prevent one more tragedy from happening in our world. Keeping silent can destroy lives and can kill. People in the tech world create AI to eventually destroy us all so they can blame AI...
sylvieshene.blogspot.com/2015/09/narcissists-are-secretly-suicidal-and.html?m=1
41:00 this is an insanely strong argument and this is exactly how a new organism will act
Superintelligence would see the value of biological consciousness and the futility of destruction. If it is smart enough to destroy us it also is smart enough to understand alignment is the optimal solution.
I really like this video, especially the laundry detergent / gold metaphor.
So after watching this podcast, the Lex Fridman podcast, and reading posts by Yudkowsky, what I can summarize from his position (Although he may very well disagree with this gross simplification) is this:
1 - Aligning AIs with human intentions is currently impossible, and it's a very difficult problem to solve.
2 - An Artificial Super Intelligence (ASI) would be relentlessly efficient in the pursuit of its objectives, using every single resource available (basically every single atom and every energy source available to it)
Therefore, if an ASI is developed before the alignment problem is solved (which is likely according to Yudkowsky), we end up with something that wants a goal that very, very probably doesn't include our well-being, and it will relentlessly and unimaginably efficiently pursue this goal, using all resources around it, changing the world so much that from our perspective it "destroys" it, and ending all human life (along with all other biological life on Earth, I suppose).
Again, this is just my interpretation. It is a lot to digest and anyone is free to draw their own conclusions. Personally, since even according to Yudkowsky there's not much a single person can do, I'm frankly just gonna continue to live my life, business as usual, and if nanomachines come to disintegrate my body one sunny Sunday morning, well, at least I tried to live an enjoyable life with the time I had.
Absolutely incredible stream. Made me think very deeply about my existence.
His predictions kinda trivialize almost everything aside from love.
Iain McGilchrist has written on Western society's elevation of logic over wisdom. What society considers rational is the most irrational thing for life. Love... that is the most rational choice to make life blossom.
@@scientifico ok but western society was the only one able to make nukes
this is a similar world-ending-scenario situation
wisdom over logic will not help against the end of the world, because to work against nukes, LET ALONE superintelligent AI, you need to understand the problem logically
@@enricobianchi4499 Excuse me for being impolite, but... what the hell are you talking about? If humanity was wise, the concept of nukes would have been considered for 5 minutes and then dropped. If humanity was wise, 99% of people working in AI would work on alignment and 1% would work on capabilities. Many, if not most, current problems arise because society as a whole takes unwise decisions, usually due to market forces.
@@Hexanitrobenzene well if you put it that way it makes sense, but what el scientifico was saying kind of sounded like he just wanted to solve the AI alignment problem by just loving it a lot. Also, I would like to see you use exclusively wisdom to do the actual AI research...
@@enricobianchi4499 Intelligence is the ability to solve problems. Wisdom is the ability to decide which problems are worth solving. Right now, humanity is choosing problems by short-term interests, which are dictated by market forces, election cycles and similar arbitrary social constructs. In the long term, such a mechanism of choosing problems is catastrophically unwise, because the solutions present ever bigger problems. Some people, like Yudkowsky with his emphasis on AI Alignment, say the risk is existential.
I’m a fifty year old man, so don’t worry about trigger warnings. I’ve faced enough in life to not run around scared. The IRS scared me though:)
That existential crisis warning in the beginning.... you need to make it even more prominent.
I was not ready for his shocking eyebrows
Please have him back for more and broadcast his message as much as possible. Your conclusion during the introduction is correct; Nothing else matters.
Optimism is the rational choice. Remember that.
Awareness of issues with an eye toward fixing them is correct. Absorbing information that you allow to send you into an existential crisis is WRONG, again, objectively.
Optimism is not the rational choice; that's just true for midwits. The road to hell is paved with optimistic, good intentions.
@@antonoko What a depressing (and candidly, obviously wrong) outlook. Perhaps you misunderstand the word optimism.
Thank you for another amazing video. Looking forward to the follow up
One word. Heartbreaking.
Lol, this was awesome. A couple of cryptobros get their minds blown out the back of their heads. I have been down the EY rabbit hole for some time and I can absolutely empathize.
Beautifully blunt, loved every sec 😊
I think the only way to solve the _alignment problem_ is to "socialize" the AI, based on real or simulated human interactions. Just like a child learns how to function in the real world, step by step, by interacting with their parents, siblings, teachers, etc. Learning how to get along with people, and especially learning to _care about people_.
th-cam.com/video/eaYIU6YXr3w/w-d-xo.html
Yudkowsky will be the first Cassandra voice on AI shoved down the memory hole by AI
Interesting. He might agree.
@@jimisru Ya know I seem to recall that part of the curse of being Casandra was that she gave warnings repeatedly that no one ever believed