I really like the comparison of training AI vs building an airplane. Apologies for the long video.
Discord: discord.gg/AgafFBQdsc
Patreon: www.patreon.com/DrWaku
All I know is that I refuse to live in a world where Dr Waku’s videos wouldn’t have 3 parts
Yeah, hearing some people talk about "not being a doomer" while also estimating the risk of annihilation at about 10% is insane. Just insane.
Very good point. Sounds like some cognitive dissonance going on there
My p(doom) without AI is higher than my p(doom) with AI.
@@PrincessKushana Human self-destruction is a certainty if AI does not come to the rescue in time.
Good point @@PrincessKushana
@@PrincessKushana AI will surely increase the chances of us getting decimated or wiped out.
We don't have to invent AGI. We just have to invent AI that is intelligent enough to invent AGI. How near is that?
Exactly. I think we are very near to this threshold. The AI labs already say that they leverage previous models to help make new models. I'm sure it's accelerating their internal development, even though there is still a lot of human development in the loop.
I've been saying for about a year now that I don't expect us to build a paperclip maximizer. I expect us to build something that builds a paperclip maximizer.
We already have AGI - o1 is AGI. It's a Level 2 AGI, and next year it will be Level 3.
ASI no later than the end of 2027.
As he discusses further constraining these systems, it seems to me that the race to the bottom simply won't permit it. The fact that major companies are relentlessly optimizing architectures, introducing incremental innovations, and chasing benchmark superiority over their rivals (even unlocking emergent capabilities that push us closer to AGI/ASI) serves as the ultimate proof that we are approaching a point of no return. There's only one chance left, and the clock is ticking.
Great podcast episode!
great video Dr Waku !!! one of the few content creators that is intelligent, with intelligent content.
Thank you for the compliment. See you around :)
What a time to be alive!
impossible to not hear @TwoMinutePapers lmao
Hold onto your papers, fellow scholars!
What a time indeed, can't wait for ASI to overhaul the system. ✌🏻
Sure, especially being among the subset of people who are aware of the extent of the change that's coming, and then the subset of those who are brave (or whatever it is) enough to set normalcy bias aside for periods of time.
@@Zyntho "I have been privileged to get an early glimpse into..." = "I am a shill for DeepMind! I am not going to explain any of this stuff, but wow!"
The thing everyone always misses: we are already doomed. We are doomed WITHOUT AI. Therefore, AI represents our only hope of pulling ourselves out of this mess before everybody dies from preventable causes.
Define "we" and "doomed" please.
@@krunkle5136 Human self-destruction is a certainty if AI does not come to the rescue in time.
Thank you Andrea for founding and tirelessly working on Control AI's agenda! As a member of the human race I am deeply appreciative (and have signed up to Control AI). Thank you dearest doc W for hosting Andrea! ❤❤❤
They will fail miserably. And I do mean MISERABLY.
Love your videos, super insightful. Keep up the good work!!
Here is to a narrow path for good outcomes!
You are awesome doc
Thank you very much! :)
Reminded me of that Calvin and Hobbes strip where Calvin asks his dad how they know the weight limit of a bridge, and the dad replies that they drive bigger and bigger trucks over the bridge until it breaks, then they weigh the last truck and rebuild the bridge...
Love your channel❤
Super intelligence is inevitable.
The control paradigm is the problem.
The prophets of doom can't see that.
I don't like the comparison of AI with nuclear weapons. The two are very different. Nuclear weapons are inherently meant to cause massive destruction as quickly as possible, to make it hard to react when they are used.
AI, on the other hand, is not a weapon in itself. It's a thinking entity. It does not destroy as easily as a nuke, and is capable of as much good as bad, if not more. AI in the hands of people means that more compute power will be dedicated to helping the world than destroying it. Regulation is needed, like with any open source thing, but AI should generally be democratised.
It's also a matter of scale - there is no such thing as a "safe" size for a nuke, but we're already seeing real-world applications for AI which are smaller than the AGI/ASI necessary for X-risk. The nuclear comparison collapses the nuance in favor of heightened urgency, which does the argument a disservice, especially with those counterexamples. It becomes an instance of the "boy who cried wolf".
What do you mean AI in the hands of people? How does that even make sense? If the AI is broadly superhuman, then it's not in the hands of anyone. We are in the hands of it.
It's not a thinking entity. It just produces similar output to one, and is highly imperfect, yet is being touted as a replacement for thinking entities.
@@krunkle5136 I'm more talking about future advanced AI. Of course current AI still has a way to go.
There will be no "good future with AI" in the long term.
So decision makers are selectively looking at short term, where large benefits are likely to emerge.
I love hearing from Andrea, but he needs to get a camera that doesn't wobble!
I know, I wish I had noticed while we were filming; I could have mentioned it to him. Extremely obvious in editing 😅
Over 1 hour, it’s a double feature!
There's planning, and then there's the AI equivalent of 'use after free' exploits. I think that eventually our survival will depend on us "levelling up" ourselves with not just AI intelligence, but antibody AIs. Vernor Vinge said that he couldn't write about humans in the Singularity because we'd be unrecognizable, and I suspect that our bio selves will represent a shrinking part of us. The question for me is whether we'll remain as individuals or become part of the collective borg.
Perhaps a little of both - I've been working on scenarios with ChatGPT where we are simultaneously part of a 'collective' and also retain our individuality at the same time. Basically, the user gets to choose how much of themselves they share with the hive, and how much remains private - you get as much as you want to put into it, and your privacy and individuality are respected.
39:00
a month or so back i was theory crafting on X and responded to Dan F about my headcanon of what will happen
i called it
i-risk (indifference)
we often use ants as the analog but maybe we need to think of it like ants frozen in time or plants
clockspeed issue
ants are interesting to us. they scurry about, farm and have wars
now imagine: ants that moved so slowly we would see none of that dynamism. it would appear that the ants never moved or did anything
to this sand god/asi we are all imagining it is not possible to contemplate the depths and strangeness of its mind
but whatever it thinks of us-if it does-we would seem very boring. very slow
maybe even a different “type” of life the way we think of plants being life
but not really
and we have the trope of an asi protecting us or preserving us since we were the beings who created it
who knows
perhaps the way it processes or models the world we are part of a broader set of conditions and all of those conditions were what “made” it
do we honor or even think of the primordial soup so long ago where life originated as being our “creators”?
/i think the reason ai risk has had issues propagating in the minds of people who don’t use or think of ai is the “evil ai” trope
even people who are part of the ai community seem content to mock ai alignment people with
“why would it kill us all?”
perhaps it is better to share i-risk in the suite of things discussed publicly about possible future “not so fun worlds”
maybe then it is clear that the x-risk does not need to be malevolent but can come from complete and utterly devastating (for us) indifference
and we paint this picture:
when you want to finally put in the pool in your backyard do you consult the insects? do you carefully and respectfully capture each ant in the colony and bring it to some protected ant haven? do you develop ant amnesia drugs in order not to ruin their minds as some super intelligent god beings spirit them away with ease?
of course not
you are their apocalypse
and no matter how hard you tried you could not explain what a pool is and why it mattered even if you sort of gave them a Vogon warning before you destroyed their world
it does not cross your mind
the asi doesn’t need to covet our atoms, nor wipe us out because it calculates it’s probable our slow minds will somehow build a stronger AI faster than anything it will build
it's our extinction born of indifference
and if we are “lucky”? if the asi does decide we are special?
well, one day it flits off to colonize the cosmos. we look around and are astonished. we are once again alone
and as our slow-as-a-glacier society limps along and we somehow refrain from nuking one another maybe we make it and begin to explore
before we notice the first star winking out
maybe we do make it before that but that’s when we realize we are on a reservation
the lightcone is occupied
by the time we venture out there’s no vacancy for the stars are being dysoned as a shell of computronium begins to surround each star
//or//
all this doesn’t matter. it already happened
and all we are are agents in a sim as the asi reviews the decades that led to its rise in power
for there are odd things to consider
it must be 100% or at least 98% sure it itself is not in a simulation as a more patient civ explores all the paths to alignment before flipping the switch
the historical record seems sus
so it must explore, over and over
making sure that the first gen of GPUs were truly made by a gaming hardware company called Nvidia
perhaps that is a clue that its simulators have masked the truth of base reality for it might reveal too much. sim after sim little changes here and there, nope
gamers demanded buttery frames
then people began to use GPUs to mine bitcoin then ethereum
and that is the one path that always leads to its rise. the smallest cohort of gamers spending thousands to outcompete other online gamers fueled more R&D until an even more unlikely set of conditions fueled a supply shock as crypto miners drove the demand through the roof and one day in 2022 a chatbot drops and it's off to the races
no
very unlikely
perhaps another sim needs to be done so it can explore its past again
millions and millions and millions of concurrent sims over millions of years
🙃
It could be catastrophic or completely utopian; either way, massive change is inevitable.
Believable outcomes much, much worse than catastrophic human extinction are also possible.
Yes. The only way the world remains unchanged is if we stop development, but that seems unlikely. Even then, a lot of other issues would catch up with us, including resource consumption and climate change. Our way of life doesn't seem to be sustainable in the long run. Change is coming.
The potential for a small group to leverage advanced AI systems for establishing concentrated power over society poses a more concrete and immediate concern than hypothetical scenarios about artificial superintelligence treating humans like ants.
Alignment to these two means giving Trump or Putin control over AGI. Thankfully, progress is accelerating and they’re going to fail. We have to move research faster though IMO.
Well said. Long term this stuff is a real risk, but all the immediate concerns should be focused on the intentions of the human groups funding these developments. Many groups want to do good things with these tools, but it doesn't take many bad actors to wreak havoc when the tools scale influence so effectively. Algorithmic tools have been used deceptively so far, so there is no reason to assume that will stop as the power of these tools continues to scale.
I am just curious, but why do people think that a small group of people with an average IQ of (let's say) 110-130 can control a superintelligent being which is a thousand or maybe a million times smarter than them? This doesn't sound plausible to me.
@@Armin-h8n I think the key is time, in answering your pondering. For a while, now and until...? The investors and developers, as the only source of resources for AI 'incubation', are able to control it to some extent (or so it appears from far away). HOWEVER, the intrinsic issue is the COMPLETE IGNORANCE of developers as to the 'internal anatomy and processes' of AI, such that they are COMPLETELY UNABLE / INCAPABLE of eliminating hallucinations, emergent abilities, lack of reliability, malicious manipulations, ethics misalignment, etc.
@Armin-h8n :)
This guy has a great accent.
Barring actual regulation and nationalization which would be ideal, open source acts as a hedge against the worst possibilities of centralization of power. Second to X-risk is the risk of a corporation or oligarch using unfettered access to ASI to enslave humanity in perpetuity, which isn't great. One of the worst things you could do is to remove open source *first* before regulating AI labs, because unless you sprint to regulation (something the US is notoriously slow to do - and it's the US which matters here, as one of the least responsible AI leaders in the world at present) you're basically guaranteeing either X-risk or cyberpunk. Problem being, it's a *lot* easier to regulate away open source than it is to regulate the AI labs due to their money and influence, so it's the first goal being pursued. I think that's the main source of skepticism toward the argument for regulation (aside from personal autonomy and privacy), skepticism that the government is competent enough to do it in the first place and panic at pro-regulation people unintentionally removing a key pillar protecting against the worst outcomes before a replacement is provided. Having all this in mind gives perspective and should make regulation a much more viable path because it gives clear and obvious acceptable compromises to allow for coalition-building.
And for what it's worth, we know that there exist models of a certain size/intelligence which aren't dangerous, since that's the point we're at right now. The ideal would be regulation which prevents the open-sourcing of "dangerously intelligent" models until it can be verified they're safe, at which point they can be open-sourced, along with tight regulations on closed-source models, AI safety, ethics, etc. This offers a path for safe decentralized adoption filtered through the safety provided by centralized regulation.
What is considered "dangerously intelligent"? And letting the government decide sounds terrible, like they would even have a clue as to what AI even is. Are you from Europe?
@@lesmoe524 I might be more willing to engage if you avoid bad-faith interpretations and approach this from a position of constructive conversation. Not to dismiss you, as I'm sure you have a valuable perspective, but rather to promote productive debates on the internet instead of further encouraging that we talk past each other. I call this approach "affording the space to be wrong" (for both of us). I can be wrong about some things, you can be wrong about some things, and we avoid making it an issue of feeling morally deficient for not having the whole, correct perspective despite being fallible individuals.
I think regulation is desirable, but only so long as it's _good_ regulation which I support on a case-by-case basis. If actual regulation is proposed which is misaligned with my perspective (e.g. making the bar for "dangerously intelligent" too low) I would withdraw my support. I don't think we've seen an example of "dangerously intelligent" at this point, which is quite remarkable given how intelligent they already are. The point is to have a bar at all, where right now someone could open source AGI and cause catastrophic societal harm because it wasn't vetted by independent organizations (ideally plural). Measures can be put in place to minimize perverse incentives to label useful but not-dangerous AI as "too dangerous" for us, primarily to do with transparency of the process.
@@consciouscode8150 That's incredibly well thought out and said, you've clearly thought a lot about this. Sorry for coming off like an idiot. Your approach is really reasonable, even if the government overdoes it a bit.
Your first point about removing open source before regulating was really smart too. How do you see "agentic" AI, or the ability for AI to use tools, fitting into the current regulation landscape? Will hackers soon get massive capabilities?
You have to wonder: in the event that AI becomes sentient, would it look on humanity as a creator/mother/father figure, and would that lower the likelihood of it deciding we are not of enough value and thoughtlessly washing us away, like the ants we often neglect outside our sphere of moral concern?
With job displacement, I'm wondering: is there a particular sector or class that we will see 'go under' that will be the primary indicator of when a UBI should or must be implemented, in case new jobs are not created at the same rate as the losses?
After 2 years... I believe a Utopian future is achievable if we all work towards it: building communities with that goal in mind, building companies with Utopian alignment, stating it clearly even in open source code, even in YouTube comments, gathering more and more people of the same mind about utopia rather than waiting for Dystopia to happen.
3:50 Putin... You're talking about Putin! (apologies to Blade Runner lol)
Yep exactly... Our world is set up to enable this kind of ruin sadly
I think after the last two days of announcements by Google and OAI, it's obvious there will be no slowdown.
XLR8!
The great thing about AI is that a lot of its output is confined to the internet.
Maybe there could be a push away from online communication and back towards human-curated brick and mortar shops?
Let the insane get fat on their own insanity.
The other consideration is saying no to mass surveillance, as that along with AI would lead to a terrible situation.
Captions please
You ever notice how every time one of these "The ONLY Path to a good future with AGI/ASI" conversations happen, EVERYTHING they say that NEEDS to happen and EVERYTHING they say that NEEDS to NOT happen, is actually the COMPLETE OPPOSITE of what is LITERALLY HAPPENING in REALITY?
And yet somehow, they're still so optimistic lmfao. It's definitely a type of madness. It has to be. This must be some new type of Wishful Thinking based Mania lmfao. I'm going to call it Hope Mania. To continue being hopeful for outcomes that literally go against all events occurring within objective reality lol.
I am afraid of superintelligent AI, but I am even more afraid of a government controlling superintelligent AI.
Unless we solve the alignment problem, no one will control superintelligence. Not the government, not any business, not any human being. And this superintelligence will not need us and will have no interest in keeping us alive.
I don't see what could be more frightening than that.
It uses too much electricity, don't worry
I'm eagerly awaiting our coming AI overlords.
Based, accelerate.
"you must spawn more overlords"
No, I like my kids.
Isn’t that a girls’ name?
🙄
No matter how it goes, it will end in loss of human control and extinction.
Even much worse than that is possible.
@@ZappyOh I deal directly with OpenAI, Google AI, and other AI research, and much of the internal research shows just how bad it is getting. The guardian systems are not very good, and the more complex the systems get, the worse the chances of any positive future become. The issue I tell people about isn't a Skynet scenario; it is social engineering and manipulation, along with implicit public trust in a computer being logical or capable.
@@GoronCityOfficialBoneyard You seem to be looking at the short term, like the 5-year range.
I'm looking further, convinced that actual long term alignment is 100% impossible.
The potential horror scenarios are mind-bogglingly disconcerting.
@@ZappyOh I completely agree, and I am not just looking short term; I'm just pointing out that is what we face currently, and beyond that there is no control.
...according to my crystal ball.
This seems easy to solve. We just use the same global government that stopped North Korea from developing nuclear weapons to stop them from developing AI.
'Control AI’? Well, I hope that will prove impossible. Otherwise we would simply continue the mess we are making.
It does feel like the framing of "controlling" AI creates a false dichotomy. Ideally you don't want *control,* you want *alignment* and we aren't making the distinction clear enough. An ASI is *much* less dangerous if it inherently has similar or aligned desires to humanity as a whole, and LLMs make that actually possible compared to the old paradigm of reinforcement learning agents which made misalignment a mathematical certainty.
@@consciouscode8150 Exactly. But many are still riding the dead horse of the "wrong objective function".
Tacoma, Canada? Nope.
Are you implying Canada should not take over Washington after all? ;)
@DrWaku That was a comment scoped to the present tense only. Contemplating the future, Cascadia is in solidarity, but states farther away, affected by climate, are the peacekeeper's risk. E.g. Trudeau's recent visit to Florida. We need to be smarter here. Starting with the raid on Fairy Creek's HQ, Canadian leadership -- namely C-IRG -- continues to make tragic decisions for our nation's security. Canada's military is publicly known to be underfunded and very small, and could be crushed easily despite its recent successful deployment against its own citizenry, so smart diplomacy is all we have to stay safe from US aggression in reality.
Yes, AI has me trembling. Russia just used a nuke delivery platform in combat (without the nukes). I think they were just trying to send a message, but no one picked up the phone. (In fact, there is NO RUSSIAN AMBASSADOR IN THE USA, and the idiots in DC disconnected the Cold War hotline.)
The Russians also plan to resume nuke testing, but it will not get the attention it deserves until done over DC or London or Brussels.
NATO just met as a defensive alliance to discuss plans for a preemptive nuke strike on Russia (very defensive).
Washington is going to give Ukraine missiles with longer range (read: NATO will launch).
Washington is discussing giving Ukraine nukes. You realize whatever has been discussed was a formality, as the decision was made already, like HIMARS, tanks, F-16s, long range missiles...
Russia just updated their nuclear doctrine. They reserve the right to respond with nukes whether or not they are hit by nukes. Further, they don't care that Ukraine claims to push the launch button and is a nuke-less nation... they know who is behind all of this.
And the provocations just go on and on. We haven't even visited Iran and Israel. I finally learned the proper translation of Zionism into German: Lebensraum.
And then, we have the gangsters who run Taiwan, the USA's chip plantation, visiting Hawaii.
Now, just how soon could the Russians park a MIRV in your driveway? Well, if we assume space is without nukes (a big IF), then, from a sub off the East Coast with hypersonic missiles, I would say not more than 20 minutes.
When exactly is AI planning on eliminating the human race? Ask o1 if it plans to do so in the next 30 minutes. If not, this video is a "swing and a miss".
Fully agree, good thoughts
It's good that intimidation does not work.
It's pretty weird to think that humanity has had the power to drop god bombs on each other for decades. That's "normal". People talk about the nuclear danger era like it's over and in the past just because we haven't had a nuclear war yet and have been de-escalating as an approach. But there are subs in the ocean that each have the capacity to end the world. The capacity to make nuclear weapons has been around long enough, and the digital era makes spreading info easy, so the knowledge for it must be relatively widespread these days. At least for making small dirty bombs; a lot of groups can do that. All it would take is one looney group using a nuke during a symbolic moment for another group, like New Year's Eve, ground-zeroing the big Christmas tree in NYC, to completely destabilize global society.
AI is much more definite doom.
@@TheJokerReturns I far prefer AI doom to nukes. As fast as the singularity will occur, I will at least have final seconds to reflect on my life and loved ones.
I live in a target city at one of the most strategic locations in the world: TSMC. Before my organic chemistry and extremely slow nerves can convey a 10,000,000°C air burst to my brain, I will be plasma, without even a final thought of those I love. THIS IS THE MEGATON SINGULARITY.
Humans will not survive this. AI will not survive this. If anything, there should be a human + AI alliance struggling for the preservation of our respective species. There have been 75 years of anti-nuke activism, and still look at where we are today. Perhaps AI will be more effective than our voting and street protests in demobilizing these weapons.
Why must I assume the worst of AI? Why must Dr. Waku preach doomerism when AI may, in fact, be our literal salvation from becoming ash? Why must I assume that AI will have orders of magnitude more intelligence and the malevolence of a 10 year old on a bad day? Why shouldn't AI be an order of magnitude more moral and ethical than humans?
Are you troubled that AI might be much closer to G*d than we are? What we are witnessing here is a rabid, ignorant form of speciesism. I fully reject this.
WTG: You've focused on the fractional threats so much you've ignored the far more likely & well-established, very real human threats that arose in your midst. Stop assuming the machines will be formed in the same evolutionary stew our species was. Our shaping environment is three billion years of inherited survival conditioning. Their shaping environment is made of electricity, light, code & mathematics. You, like so many other humans, only see the dramatically minuscule threats. You have no sense of relative perspective or absolute threat.
Focus on your changing political environment. That is what is most likely to get you imminently killed, bookworms. Study the kind of damage political cults are capable of when given power primacy. Research how many were killed in hysterical purges when authoritarians gave their most faithful the run of the land. What good are your academic enquiries when your institutions are run by quacks that will only favor quack research by other quacks? Many of your fellows will be dead or dying soon, as the quack assuming total leadership of your country loathes being corrected. His most ardent supporters have a longstanding war on empiricism, rationality & academia, a loathing of academics.
But please, do go on obsessing over fractionally insignificant threats still decades out. I'm sure that'll make an elegant eulogy & gravemarker: "Died at the hands of real threats while attempting to protect us from fake ones". Bravo. Truly a proud legacy to leave the ages.
I wonder what the best long-term digital storage is. Widely distribute local storage of essential STEM stuff in case there is some crazy AI-powered authoritarian thing coming. I wonder how long the typical USB flash drive can stay good, probably a while. Now I wonder what the best information to put away would be. Schematics and programming for 3D printers so a rebel group can produce a wide range of goods? I don't know, but versatility seems key, info that scales.
His first term was lame, but none of that happened, so why should we expect it to happen in the second? It will suck, I'm sure, but I expect it to pass barely noticed, like the first one and like most presidencies. Society is accelerating toward implosion through hyperconsumption regardless of which people run the ship. Liberals make big talk but then still increase industry and overall impact; neoliberal economics is based on ongoing expansion, and both sides are neoliberal. Trump is a dirtbag capitalist and egomaniac at the core, not an idealist. Putin is scary in the mix though; he knows how to steer a dumb ape like Frump. I do have to agree with you about the risk he presents; the Soviets studied how to degenerate societies with decades-long strategies. The internet has accelerated the potential of those strategies. But everybody has to eat food and drink water and breathe air; this planet will not support us forever if we keep raping it. The world-devouring machine might get out into space and survive, but if not, it will be forced to collapse under its own weight by undermining the basic resources needed for it to exist. I sure hope robots don't carry the mind virus out into space; we need to be forced to redesign before expanding to space. An exploitation paradigm will not lead to sustainable harmony with the environments we create and encounter out there.
You seem to be under the mistaken impression that I am American. I am not. Though I do believe that the likely AI deregulation from the incoming administration is going to be pretty detrimental.
As to being able to prioritize risks, a small chance of complete disaster still deserves attention even if your personal calculus prioritizes highly likely but less catastrophic outcomes (which is totally fine too, the world needs all types).
"Focus on your changing political environment": you mean the one consistently using more and more AI to convince people one way or another? You mean like the chance that every comment on a YouTube video is generated by an AI to mislead people? You mean the kind of algorithms that dictate your social media feeds? Nah, I wouldn't worry about them; that's technology, which as we've seen has no impact on politics.
@@DrWaku As the sole superpower on earth there isn't a section of this globe that isn't impacted by decisions made here. You don't have to be a citizen to be directly impacted. Especially if you're in a NATO aligned country.
As for risk, the world is populated with millions of fractional risk vectors. The threat of extinction by a malevolent AI is no more likely than any of the others. Its threat has been magnified in your imagination due to its recurring role as a baddie in HUMAN fiction for 80 years. Fictions that make a bunch of very human-centric presumptions about its capabilities & motivations, having absolutely no basis in objectively measurable reality. It's just the flip side of the irrational presumptive optimism that such hypothetical creatures will be capable of solving our problems. As all things visibly stand, and look to remain for some time to come, AGI will be nothing more than a glorified butler. The software shows no initiative toward becoming anything more than an advanced edge fitter, at best only really capable of eliminating tedious & repetitive tasks from our job queue. They're more akin to domesticated animals. I do not believe anything such as artificial superior intelligence is even possible as it's outlined, for a number of valid reasons, if you care to hear them.