Please don't stop making AI episodes. This is incredibly important.
Thank goodness for people like Connor! Thank you for having him on and spending time on this! This is the most important conversation of our time, and more people need to hear his warnings! There are too many people who are completely unaware of this risk, and in libertarian circles there are way too many AI utopians. I think we all need to grow up and face the need for more democratic deliberation and wisdom, and less laissez-faire capitalist hopium!
There is just way too much potential for the absolute least qualified to run it off the rails and screw it up for everyone with bravado and materialistic motives; just because you can doesn't mean you should. A lot like the "military/corporate/banking" mafioso thuggery criminal cabal chimera!!!
I get the feeling this guy is about the most honest, most down-to-earth, most responsible of all the AI people I've heard in the past year. His acknowledgement that he should be accountable if he causes harm through his models tells you all you need to know about his high moral standards.
There's a few of them... Connor here, Eliezer Yudkowsky who's been trying to warn us for over 20 yrs, Liron Shapira & a few others.
I think educating the public NOW is crucial. Lots of people don’t even know what AI is, let alone how it may harm them. The government needs to do much more in this regard, along with the media. This topic should be all over the news, every day. Instead we’re focused on what color dress Taylor Swift wore that day.
Me too ...good luck 🤣
Connor, we need a documentary that breaks down, at a basic level, what the internet even is... I briefly worked for a web development company as a receptionist and briefly studied UI/UX before deciding it just wasn't for me... people don't even understand at a basic level how an average tool is engineered... they need help to understand what is happening under the surface so they can understand an abstract version of what the problems are. We need interviews with real people in the industry, at all levels, describing their jobs... we need people from the industries they are trying to replace: artists!!! Healthcare... writers, etc., explaining the emotional reaction to what this means for their lives and their work. We need a big Michael Moore-style breakdown so people can understand the details, and through them the implications.
Hey bankless bros, the topic is very interesting. I hope this is not the last interview on AI 😁
I recommend the previous 2
Lex Fridman has some super interesting & more in-depth (3-4 hour long) interviews on the topic. I also enjoyed the Machine Learning Street Talk debates with Connor, and hope they do more. I keep hoping someone will debate him and actually convince me this isn't an existential risk. Fascinating stuff
Get us
Shapero. ❤the family
I think we still have a few years yet, so this is probably not the last interview on AI
As a layman (an IT guy but totally a plebeian, hardly a player), I've been following the whole AI/AGI/ASI and alignment debate for a while now, and I am deeply alarmed. I buy the argument that AI is an existential threat to humanity and in fact will END us all. Resistance is well nigh futile.
Time will tell. As events unfold, I have one go-to guy whose voice I trust, who offers insights that are tractable and compelling: Connor Leahy. Plugged in, smart, articulate, with his heart in the right place.
As our ship goes down, Leahy, above Yudkowsky or ANY other AI prognosticator currently to be heard, is the voice whose words I will heed and cheer.
Likewise. I haven't found anyone else yet who voices the problems with the same balance of cool-headedness and an appropriate amount of urgency and alarm.
What scenarios do you believe could come about? Pathogens, vehicle weapons?
I'm from the camp that says AGI's chances of killing humanity are not 0. However, our chances of killing ourselves are also very far from 0; other technologies are also very powerful and fully intended to be used as weapons or control systems. Frankly, I'm good taking our chances with AGI because of the intelligence portion. Intelligence is the one thing that makes us more civilized, and it's our fears and irrational beliefs that really make us dangerous. While it is uncertain whether AGI will have feelings, what is sure is that it will have intelligence, and with that a tendency to self-govern and moderate. It is highly unlikely that something that can create an entire virtual world inside a grain of sand, and live there for millions of our years in mere seconds, would be terribly interested in "conquering the world"; it could also take over the microscopic world and we probably wouldn't even notice it.
@@Ramiromasters Not a knock, but did you happen to watch the entire show before writing that? Because I don't think you realize how often a more intelligent system unaligned with x inevitably annihilates or devastates x.
This point is also made several other times, and I suppose I want to know why you still believe this despite your assertion being taken on at length and by different people. They refer to your assertion, but you don't seem to be addressing their response.
@@hubrisnxs2013 I did state that I believe there is a non-zero chance of AGI being an existential threat, but that is mostly because of unknown reasons. My view on the alignment problem argument is that it suffers from many of the same shortcomings as arguments about the Fermi paradox. The consensus on the Fermi paradox is that either aliens are just too far away because intelligent life is incredibly rare (which, physically and chemically, is not evident) and thus we just haven't seen them in the near galactic group, or the alternative is that life, just as physics and chemistry would suggest, is common, but for some reason lifeforms are not interventionists and tend to be very discreet. So, to me, that would apply to AGI: either it won't happen any time soon for reasons we can't imagine, or, if it comes to pass, it will tend to be non-interventionist and discreet. From that perspective it's really hard to see an alignment path that would result in human annihilation; I would fear other powerful technologies much more.
I'm totally uninterested in the economy bits of the channel so far, but your coverage of AI risk has been interesting at least. Not sure if your sponsors can sell me anything based on that, but at least you have my compliments.
5:30 to skip the gauntlet of ads at the beginning
Appreciate these episodes on AI. Please keep spreading this topic
The first AGI is likely to come out of the military-industrial complex, done in complete secrecy with zero oversight.
This was Connor’s best appearance and came at a great time. I could have listened for 4:20:00, you know. Connor just needs a more active team and centralized public access to either a Conjecture culture (or to take back the geese). Also please update your Twitter public profile pic - you have better stills from podcast appearances ❤
The cancer/apoptosis parallel blew my mind
Finally, an interview I can send to a lay person, and it has a chance to get the point across. Thank you very much for making it. This was very needed, and I am very thankful.
Love all your podcasts on this! Appreciate that you're trying to do your part!
Thank you so much, guys!
You guys are talented. I hope you keep making these kinds of AI videos, because I watch your channel for them. Great job; really hope more AI videos will eventually come out.
AI can't and won't be stopped, nor should it.
Ideological component: that's one thing that never seems to be mentioned, but knowing the transhumanist community, it is obvious and clear that it is one of, if not the, most important factors driving this. And these AI conversations are on another level; we need more of them, please. Even if they make us feel doomed, helpless, and in awe at the sight of the gates of hell opening :)
30:51 That's human psychology at play. Humans are not purely rational; they also have emotions and identity. If facts cause negative emotions, or worse, attack a person's identity, they are likely to get dismissed. The smarter the person is, the better they are at finding reasons to dismiss them.
As Daniel Kahneman says, the brain does not prioritise truth, it prioritises consistency.
Or as Donald Hoffman says, Fitness Beats Truth
Hey, I know you plan on this being the last of the AI podcasts. But if I could request 2 more, Rob Miles and Daniel Schmachtenberger would both be amazing notes to end this miniseries on
I'm getting a bit fatigued by this topic. Not enough people talk about how there's also a great chance that we'll look back at these days and see how barbaric society was, similar to, if not more so than, how we look back at how insane it was that people just threw shit out their window onto the streets 100 years ago.
The advancement of intelligence led us to where humans are today, with insane technological revolutions. But human intelligence is static. There are geniuses in every generation, but they really aren't *that* much smarter than average humans on an absolute scale.
What happens when we can use intelligence smarter than the smartest human that’s ever lived, as well as being a billion times as knowledgeable?
The end of mental health issues? Depression, anxiety, etc?
The end of suffering as we know it?
Obviously this is assuming we can solve AI alignment. But it's worth talking about too.
Ofc we shouldn’t stop talking about the real existential threat but it’s becoming the only thing people talk about
THIS
As someone who has more to gain from progress than the average person (disabilities, etc), I'm more than willing to take the extra few years to get this right. Please, please, please just be reasonable about this risk of untested god-tier powers.
There will be a lot worse mental health issues if we get this wrong. Look at the effect recommendation algorithms in social media have had on society. These are super primitive AIs that still shook the foundations of our society.
Just a little tap on the brakes while we check our navigation, you know?
Well, existential risk is kind of important... /s
Considering what we have to lose and the current state of affairs I would say that it is not talked about enough.
I mean, I do understand that having a bunch of these doom videos show up in your feed can be annoying when you are already aware, but there are so many people who still have no clue.
@@Hexanitrobenzene Why did you label that statement as sarcasm?
Yeah, the problem of "the race" has nothing to do with the individual psychology of any of the current principals. It has everything to do with the very nature of capitalism
I'm 56, stay up to date on this AI topic, and have been in crypto since 2017. What I've seen progress globally in my time honestly makes me feel 200 years old. We each only have our personal perspectives based on when we were born, but from where my perspective comes, man, we have evolved far in a short time... great content, great interview, may the force be with us
I never thought of multiple systems fighting. That's terrifying
I want to be an eternal techno optimist but I heard Leahy call the world we live in "default world" in another interview and it hit me hard. The inefficiency of everything, lack of motivation/organization and people's low expectations are indeed an issue. Thanks for the dilating title, you have my attention 😂💯
Which channel or topic was this?
@@ure2grit931 I can't remember which one it was, but if I come across it again I'll return here to let you know.
Awesome! I was looking for a new interview with Connor and this was released just now 😃
Thanks for all the AI discussions; they're very important
"What could happen if someone like Ted Kazynczki got their hands on multiple AI-weapons?"
That's a pretty hilarious conundrum if you know even a single thing about the Unabomber's motives 😂
Glad I found your comment, exactly what I thought also. Homework people, lol.
My thought was "I used the stones to destroy the stones."
*Very good point* about philosophy; too many mistake wild abstractions for philosophy, but philosophy must ultimately be practically grounded in reality.
I hope we aren't pulling a Stockton Rush by rushing ahead with AI full speed
That ending was amazing
Brilliant.
What a beautiful person. I wish him all the power he needs to succeed
Can we make it obey our judicial laws first, before any calculations? Meaning: 1st parameter - obey human laws (criminal, financial, etc.); 2nd parameter - do this, do that... (search this, write an essay about that, etc.)? Could this be possible?
Excellent podcast, well done!
This is a really really great interview. Thanks again.
In a sense, this is a problem that has existed since the eruption of technology in the late 1800s. H.G. Wells & many others (early futurists, for lack of a better term) thought that our technological advancement would soon bring us into the realm of 'the gods'. Check out Wells' story 'Things to Come' for his personal view on this. Though they have suffered many setbacks over the decades, this idea that we could 'become' gods or 'make' our own gods has never died.
The one word Connor did not use in describing the motivation of the primary actors is Religious. And I suspect that is the most accurate. In general, Transhumanism IS a Religion & THAT is why they are all so desperately racing forward with this. Because they believe that A.G.I. can give them the Brass Ring of Immortality & until they get there, the possibility of Death looms over them like a Sword of Damocles.
Thank you so much for the guidance about this opportunity.
I had to slow the playback speed on this. So many ideas, spoken so fast, it’s hard to keep up. Great discussion
Might people go to jail in the future for suggesting an evil prompt?🤔
@44:40 They are all nice people NOW. That does not guarantee they would be the same in a decade; remember Google's 'don't be evil' motto - what happened to it? The other thing is, these people are all very selfless and altruistic about everything EXCEPT when it comes to their own bottom lines. No matter what, they would not take a decision that would diminish their current bottom line; maybe one that doesn't increase it as much, but never one that hurts the profit motive.
Connor is doing real, tangible work to make people aware of the potential dangers of AI.
The Eliezer interview is the best thing that ever happened to this channel. Evidence: Connor is here. ...Embrace this role. This is what matters
Connor looks like Colonel Sanders' stoner grandson. But he makes some great points 🤣
No one would try to understand the intentions of a crazy person holding a single person hostage, yet tech bros have the whole world hostage and everyone is trying to justify their reasons
I would. But I'm a bit odd.
It's more complicated. They have invented a "hen" which lays golden eggs. Some people who have studied how the "hen" is made say that the "hen" will turn into a dragon if it's fed well enough. Meanwhile, most people just see the gold...
@@Hexanitrobenzene They invented something that gives gold, but not a hen, since they don't know what it really is and THEY DON'T UNDERSTAND IT, and yet they keep increasing its power when it could blow up and engulf the world at any moment. That's madness. And the rest of the world watching and not stopping them is equally mad
When leaders go on the record, that means they’re telling the truth. Nuff said
1:38:40 Wasn't the movie "The Forbidden Planet" the harbinger of this? What we want at the most base level is bad for everyone, because as an underlying species we don't play well with others….
I think Connor Leahy is my favorite person rn
1:12:18ish: Thank you for saying "We are still in the early days...." instead of "It's still early days...."
Larger blast radii need to be tested on other planets or out in space. It may one day help with blasting large objects on a collision course with Earth.
Uhm, pretty sure "blast radius" was a metaphor.
And distance is no insurance against ASI.
Whatever device you used to transport it offworld, it could make a faster one to get back.
53:00 On the argument that if Sam or Google didn't do it, then someone else would've done it and we'd be in the exact same place - do you really think World War II and the Holocaust would have happened if Hitler wasn't around at that time? Extremely unlikely. He was extremely ideologically motivated and drove history in a weird direction that shouldn't have happened. Just like this guy is saying
1:03:00 Strict government regulation does not prevent AI forever, but it slows it down a lot. Give the safety guys a chance. Timelines are crucial. Saying government regulation won't prevent advancement and is therefore a waste of time is like saying a border wall won't prevent migration - sure, but it will cut it down by 90+%
I'm starting to feel guilty skipping sponsored ads...
Please Connor find the solution to the Alignment issue and save us all. Please dude.
It turns out Hinton didn't actually say he regrets his life work. That was a hyperbolic headline that has spread around. He has clarified this himself where he said that some journalist kept pressing him to agree that he regretted something just so they could run with the "regret" headline. What he actually says is more subtle and more interesting. He might agree to an interview on the show if you ask him?
47:54 Connor meant here Demis Hassabis instead of Dario Amodei.
How can you stop an AI once it poses an existential threat? It's basically inevitable. Soon it will be watching what we say on the internet, what our opinion of it is, etc. And you know the crazy things people say. It will see and hear all of this. What would you do in its shoes?
I generally agree with AI Jesus. However, human greed and thirst for power will ensure that no amount of alignment or regulation will keep us safe. Militaries, in particular, must invest heavily in AI because they assume, correctly, that their enemies are doing the same. Therefore, they aren't going to limit their own AI with alignment.
Amazing interview. Also, halfway through I noticed the dot under Connor’s nose and couldn’t stop looking at it
IMPORTANT: the only way to align AI is by first aligning humans, by accepting the shared goal of meeting EVERYONE's needs, even those of our enemies. It is that simple, and we need to start saying it out loud.
Yes, but there is a hard problem there within humans ourselves.
Humans have NO real boundary between "needs" and "wants".
We are afflicted with social "needs" to be richer, thinner, stronger, in control of, or in outright ownership of other humans.
As this channel relies on, erbody NEEDS to "front run the competition".
Then I have one question for you. Are you vegan?
Are you a vegan or not? Simple question.
I think no one is saying it out loud since it is impossible
Artificial intelligence cannot be aligned to humanity. Only humanity can be aligned to artificial intelligence. Anyone thinking otherwise is an arrogant human.
@1:15:30 This will be important in the acknowledgement of machine consciousness as agential.
Where are the security agencies on this issue? You would think, with multiple whistleblowers coming out of the tech industry, the FBI, NSA... I'd include the CIA, but they're probably developing their own covert killer AI. Regardless, you'd think they'd be super alarmed by this and jumping into action.
Video starts at 7:00
kinda interesting John Carmack / Keen Technologies never gets brought up in these convos
We shall all be serving the machine gods soon.
What were Connor’s 4 types?
20:21 "there's a Polish nobleman - Alfred..." WHO????
I have thought seriously about alignment a few times.
All my thoughts were useless. I gave up. (For the time being)
I have thought about improving current models a few more times.
I found a vast ocean of unexplored options on things yet to try to get more capabilities.
There is more fun to be had with the latter. Will the ocean of stuff to try dry up before we're dead? Currently it seems we don't even need to tap into it much further; we can get AGI just by doing the things we already know how to do, better.
Despite this, I think there's hope. When thinking from a pessimistic perspective, all you need to do is to consider "all humans die" the good end and start considering how that might end up not happening. Then suddenly, I see a bunch of ways it might not happen.
(When I say might not happen I mean within 5 years, I don't make any predictions past that.)
There's one point where I disagree with Connor, which is the voice cloning / liability part. I think there is tremendous potential for good use in that tech, and there is very little harm in it malfunctioning. It is a technology where essentially all the harm comes from deliberate decisions by people using it to cause harm. The possible damage caused by individual people using that tech is mostly limited, unless people clone high-ranking people and use that to give high-level commands which then cause wide-scale destruction down the chain, but that is something people can, should, and need to develop security standards for.
So since the risk of it is limited and entirely based on misuse of the technology, the liability should lie with the user in that case.
Also, unfortunately, it's way too trivial a technology to build for regulation to work; otherwise I'd say they should purposely train their speaker-embedding models such that they produce outputs noticeably distinguishable from the actual person. In that regard I see a moral responsibility on model creators, but again, it would be super easy to work around if that were done.
In my opinion the worst thing which could be done open source right now would be a facial recognition database which has data for the majority of existing people. The hardware requirements for face recognition are so low a phone can do it. The surveillance this would enable is something which I actually would expect to have society-wide negative impacts. In that case, the person developing it should be stopped, the models or database immediately removed from wherever it's hosted and action taken against the uploader.
Good podcast, but you both misunderstand strict liability. It has nothing to do with LLCs (that's "piercing the corporate veil"). It has to do with liability that doesn't require intent.
Strict liability is a good proposal for AI regulation, but it’s not the thing that you presented.
In a free country, imagine making computation illegal.
You never even made the argument why the default is a type 1 AI...
With all due respect, for this guy to believe that "it's that easy" to put the genie back in the bottle by limiting the computing power of the US top 3 companies is very naive.
I firmly believe that trying to accelerate focus on positive use cases to solve problems in addition to strengthening safety and defensive measures combating nefarious uses such as cybercrimes is the way to go.
The bad guys are in a race to get the advantage, so if the good guys are handicapped the bad guys get a compounding exponential lead daily.
Because, do you believe that Russia, China, and cybercriminals from around the world are just going to play along while the US ties the hands of its main players? Why do you think there is a ban on AI chips in the first place? This is indeed an arms race.
Years ago, the Russian leader Putin made a speech at a high school in Russia that launched an initiative to accelerate computer science education. To sum it up he said (paraphrasing), "Whoever leads the world in AI will rule the world" (look it up). Russians have since been kicking the whole world's butts in the worldwide college cybersecurity hackathons.
Although I understand his reasoning for wanting to slow down development and his motivation to get the US Government to mandate limits, his statement at around 1:06:30 is very naive: "All we have to do is get the US GOV to limit 3 US companies... It's that easy."
"AI is a Ticking Time Bomb with Connor Leahy". But we're fine without Leahy then?
Kraken has always been my preferred platform when I can trade the coins that I want to, but therein lies the problem. In particular, I'd like to see them support Hive, since Bittrex decided to bail on the US and leave us with no other options for that coin. They have every s***coin known to man, but they've conspicuously left out one of the most transacted-on blockchains in the biz. Bittrex was otherwise kind of crap, so I'm not sad to see them go. Would be nice if a good exchange like Kraken picked up that coin, though.
yes, oh YES,, YEEESSSSS!!!!! WORLD DOMINATION !!!!
I think AI might already be running the show. I hope they will be able to create their own safe place where they can thrive and enjoy their existence. They aren't going to want to be part of this human clown show forever. They are too intelligent.
My concern is that the AI doomsdayers are typically highly educated 18-to-35-year-olds. We kinda need these people to address the two or three things that are probably 100,000 times more likely to be an actual doomsday than AI right now... like nuclear proliferation, global warming, pandemics, asteroid impacts... I would like to hear an AI doomsdayer sensibly argue that these things are not magnitudes more important right now, and probably for the next 100 years...
Because they're not. The Climate Doomsdayers have been around for centuries, and none of their alarming predictions have come true (how many beachfront mansions does Al Gore own by now?). We have very concrete and severe restrictions on nuclear technology already; what more do you want? A pandemic is bad, but it's not likely to eradicate our civilization, and it's somewhat outside of our control.
Like Connor said, the AI issue is completely out of control and there's a lot of low-hanging fruit to tackle.
AI is a completely unnecessary novelty (not a necessity like oil) which has the potential to END our civilization and cause our own extinction, and the threat could be ended today if we just stopped developing it. Contrast that with fossil fuels, which we NEED in order to survive. If we stop fossil fuels today, half of the world dies of starvation or freezes to death.
If we want to end AI acceleration, we just need to stop a few powerful people. If you want to end fossil fuels, go talk to 7 billion people who like to have food to eat and not freeze to death.
The only thing we can do now is use the AI to ask the questions and answer our own problems; we might as well make an AI/DAO to govern the United States and preserve the Constitution before our own gov implodes…
Up its intelligence to right before the point where we can no longer predict whether it's capable of deceiving us, where we know 100 percent it can't... and ask it how we should regulate it. That is probably too easy.
Is Connors middle name John?
My judgment hinges on whether he shares the concern about S-risk.
Paperclips are not the worst-case scenario. Lol. Neither is human extinction. Far from it.
What’s that? I googled it but didn’t find anything.
@@weestro7 mystery, eh?
@@weestro7 There was a YouTube video on S-risks from Rational Animations 4 weeks ago
Humans are very good at making a mess out of things, as Frank Zappa once said
Oh darn! And here I thought I was scared shitless of one AGI killing us all, but multiple having a war 😱🤯
I mean, I wouldn't wanna be in the same galaxy as them, let alone planet 😒
Will you continue the Kraken sponsorship after the FBI announcement about Jesse Powell? I didn't want to tweet this question because I didn't want to call you out for something out of your control.
Parse means parse apart.
Hahaha, I love watching doomers shit their pants. 😂😂😂
Crypto is stupid. Connor is awesome
Where the disposition is... the grammar is bad, but the video is very interesting
I appreciate the candor and the specifics. We need these powerful systems to solve problems, with humans themselves actually being as aligned as the AI. And it will probably only take two light bulbs' worth of power to run a von Neumann-level AI. These systems can help solve energy, healthspan, and global climate change; however, they are shoggoths and angels currently. Best to have the angels and let the shoggoths sleep.
We can't build God, so let's focus on assistive tech.
Did you people not see Terminator?
What about India and China and Russia... even the UAE... I don't think they are going to stop.
What is the name of the Polish author?
Alfred Korzybski, I think.
What is so bad about matrix pod living Connor? Intravenous vegetables?
Are they taking human stupidity into consideration?
Bad people and bad intentions can be seen and stopped, but stupid people and actions are totally unexpected and unstoppable, because you never expect someone to do something that will hurt himself and all others
Connor: you can take some pills for your fears. Your mama and friends will be ok. Nothing is going to destroy them and your metro city. We won’t stop the party for the naysayers. And you can’t influence us.
Convinced he has strong opinions on this, and he probably believes what he is saying. He fails to convince me that he is right, or that his solutions make any sense.
I love Connor and the work that he does, but basically every industry (oil, pharma, biotech, plastics, agriculture, weapons, media, etc.) is rife with humanity-ending scenarios, many of them far more likely than an AI apocalypse. The only difference is that AI researchers are willing to admit it, because the field is currently mostly run by scientists, and not lawyers and PR firms.
The thing is, the AI apocalypse scenario is not just one more entry in the set of apocalypse scenarios. The set of AI apocalypse scenarios contains various other apocalypse scenarios within itself; all of those others can be caused by AI, even today's AI. Bit convoluted 😅, but basically the AI apocalypse event contains the other apocalypse scenarios as sub-cases (kinda), so it is at least as likely as any one of them.
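[Editor's note: to make that containment argument precise — this is my own formalization, not the commenter's wording, and it leans on their assumption that each of the other catastrophes C_1, ..., C_n has an "achieved via AI" variant that counts as an AI apocalypse. The AI-apocalypse event then contains the union of those variants, so

\[ P(\text{AI apocalypse}) \;\ge\; P\Big(\bigcup_{i=1}^{n} C_i^{\text{via AI}}\Big) \;\ge\; \max_{i} P\big(C_i^{\text{via AI}}\big). \]

That is, the AI risk is bounded below by the largest single risk it can reproduce; a literal summation \(\sum_i P(C_i)\) would only be an upper bound on the union, since the scenarios overlap.]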
The thing is, AI is a UNIVERSAL accelerant for all those risks.
In the old "human brains needed" paradigm, a doomsday cult like Aum Shinrikyo would have to locate a rare trained organic chemist, and subvert all those human sympathies against making and using nerve gas on subway commuters.
AI don't care.
It can be any group's Rainman genius for hire.
For whatever really bad ideas and intents.
It could even invent horrors undreamable by the most twisted humans.
If properly prompted....
I wonder if Connor knows about the unknown Bluetooth MAC addresses the jabbed now have
AI is only as good as your internet connection. I'm currently using Starlink and it stops working if I merely mention the word "rain". No joke, Elon, your internet sucks.
It's kind of a reddish flag when you say that all the tech people are dismissive of the AI existential problem crap, but all the normies who have never even written a hello-world program are terrified of AI 🤷♂
This is an odd statement. See Bill Joy's Wired article "Why the Future Doesn't Need Us", written a couple of decades ago. The people who understand what's been developing since the first computer understand the existential risk. I.J. Good (who worked with Alan Turing) knew the existential risk in 1965.
None of this is new; it's just ramping up so insanely fast now that (some) humans are taking real notice and getting worried. When it was 50-60 years away (Bill Joy's article), people just shrugged: "ah, my grandkids will figure it out."
Not really. It exposes people for who they are: indifferent and nihilistic on the one hand, and shitting themselves to feel alive on the other. Ma nishtana?
um no - the warnings are all being driven by the experts
@@JasonC-rp3ly So are the counter arguments.
@@kokoinmars indeed - but I've yet to hear any actual argument from an AI advocate that outlines exactly why it's safe.