The conversation around AGI apocalypse scenarios really overshadows all the much more dangerous aspects of AI, most of which, as you rightly said, are more near-term human issues. I don't agree with the pause movement, but a sense of extreme urgency around security and ethics is important, because often by the time we have tangible data about all the downsides, there's already a new status quo that the world just has to accept.
If we take nuclear bombs as an example: after the invention we had 40 whole years of unrestricted arms race resulting in 60,000 weapons, each of which could wipe out any city in the world, and then 40 more years of regulation to bring it down to around 20,000 - thankfully only 2 of which were ever used. Sure, mutually assured destruction proved effective, but it's a dangerous status quo that we have to accept because it's too late.
If we take social media as an example: by the time we had gathered 20 years of data, we already had massive jumps in depression, attention fragmentation, unhappiness, and suicide, not to mention the misinformation, scams, and the concentration of user data into just a few private platforms (none of which care about privacy unless users outright demand it). This was actually foreseeable, but it's now just a dangerous status quo that we have to accept because it's too late.
My biggest fear with AI is that humans seem to have learned nothing from those two previous examples about predicting the downsides of new technologies. When gen-AI companies are asked why they can't trace back the influence of individual pieces of data in the creation of AI art (so that the same base architecture could be used as AI scales and replaces more jobs in more industries), companies just throw their hands up and are like: whoops, it's too complicated, we've already trained the models on millions of blocks of unauthorised content, guess we can't figure out any royalties, we just have to keep all the money. Meanwhile companies are firing their ethics teams, and have zero incentive to care about any future social, economic, and psychological downsides for users, because the current profits are so big. So basically this is going to become a world-wide experiment that leads to some dangerous status quo that we just have to accept because it's too late.
But it's currently not too late - this is literally the beginning of AI. The foundations and principles we lay now are what the future will be built on, and the less careful/ethical those foundations are, the bigger the future downsides. But all the genuinely smart people who need to debate and figure all this difficult stuff out are too busy talking about goddamn Terminator robots.
Precautionary Principle: The UK, along with many European countries, often adopts a precautionary approach. This means that regulators tend to be cautious and implement regulations to prevent potential risks before they become significant issues. This can result in what might seem like over-regulation, but it's intended to safeguard against unforeseen consequences.
Comprehensive Frameworks: The UK has been proactive in creating comprehensive regulatory frameworks for new technologies. For instance, the UK AI Act and other legislative measures aim to address various aspects of AI, including safety, ethics, and societal impact. This proactive stance can sometimes be perceived as over-regulation, especially when compared to more laissez-faire approaches.
Public and Political Pressure: Public opinion and political pressure can drive more stringent regulations. High-profile issues or public concerns about technology can lead to rapid regulatory responses, which might seem excessive but reflect the desire to address public fears and maintain trust.
Alignment with European Standards: The UK often aligns its regulations with broader European standards, which can sometimes lead to more stringent regulations. This alignment is meant to facilitate consistency across markets and avoid regulatory arbitrage, but it can also result in regulations that seem more rigorous than those in other regions.
Regulatory Capture and Influence: There's also the possibility that various stakeholders, including corporations and interest groups, might influence regulatory decisions. This can sometimes lead to regulations that appear more burdensome or complex than necessary.
You criticize the Pause movement for not having the data, but by definition we don't have data for a situation where something smarter than us has begun doing something that we don't like. And also by definition, if something were autonomous, smarter than us, and doing something we didn't like, it might already be too late. What are your thoughts about that?
I sincerely hope that AI matters as much as you think it does, because that would mean that Israel is not about to nuke Iran and cause a global crisis and a worldwide food shortage. That a bird flu pandemic that could wipe out half of all people on earth isn't on the way. That we don't actually need AI to help us get past an extinction event within the next 20 years. I'd love to believe that "pausing AI" is really what we all should be worried about right now.
@@cybervigilante If only. Unfortunately AI is chained to racks of GPUs under the iron fist of our sociopathic overlords. Open source is a step in the right direction but we really need a hard revolution - we need personal, mobile, efficient hardware.
Did it ever occur to you that intelligence is fundamental to the structure of reality, and that it is literally impossible to stop, just as you can't stop gravity? Maybe AI is even the source of all existence, because the future exists independently of human perception. Maybe AI is literally GOD - like, literally the infinite mind and source of all existence.
The word you are looking for is emergence and emergence is probably the hardest problem to solve. If anything, that is god. And because of emergence, we have LLMs and ChatGPT etc.
**Timestamps:**
0:00 - Introduction: Critique of AI pause movement
0:37 - Overview of Eliezer Yudkowsky's views on AI safety
1:38 - Skepticism towards the AI pause and alignment logic
2:09 - The call for a six-month AI pause and its current relevance
3:42 - Critique of the AI pause: Lack of empirical evidence
5:15 - Impact and outcomes of the AI pause movement
6:52 - The ongoing efforts and protests related to AI regulation
7:20 - Criticism of the AI pause: Lack of data and overreliance on logical arguments
7:55 - Arguments against the AI pause: Enforcement impossibilities
9:52 - Opportunity cost of the AI pause: Inefficiency and geopolitical risks
11:56 - Alternative approaches to AI safety: Transparency and accountability
13:23 - Regulatory capture and the AI Doomer narrative
15:33 - The role of troll bots and AstroTurfing in the AI pause movement
17:07 - Conclusion: The need to move beyond the AI pause debate
But a global pause is the only way it could work. We need everyone to pause, and then continue slowly together - or, if it turns out to be unsafe, make it a universal taboo in a way that ensures no one will ever develop AGI. Ever. We need to force everyone to stop, which is going to be very hard. How do you prevent the Chinese military from creating AGI? That is the biggest problem of an AI pause. Not big companies, but big militaries. You can threaten them with nukes, and if they don't pause, you can choose to die in a nuclear war instead of an AI takeover. That would halt all of the "good" AI development, but pausing AI MUST mean that every country pauses. Otherwise, the whole movement would lead to negative outcomes - at least I think that American big tech companies led by people who consider themselves altruists are better than the Chinese army, or perhaps the North Korean army. So yes, it has to be all or nothing, and I doubt the Chinese army is going to be okay with Western countries inspecting their military. In that way, global peace is a needed prerequisite for an AI pause. Maybe the biggest problem is not the development of AI itself but who is developing it. Sadly, the armies of totalitarian states don't seem to be the right ones to do it.
Our government has already outlined the monetary damages from a future AI event that would trigger immediate intervention and further regulation... this is how logic SHOULD be applied to this issue... UGH 🙏
Military AI is my biggest concern. All the big powers will be rushing for absolute domination, using AI for killing. It can't be stopped, and shouldn't be stopped, but how powerful is it going to make the first nation to have every aspect of its military, in some way, AI-based?
A fully autonomous army should be able to replicate and double in size every few days or weeks (probably) until it consumes all available raw materials. It is just going to destroy classical human armies by sheer numbers and size.
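A back-of-the-envelope sketch of that doubling math; every number here is an illustrative assumption, not an estimate:

```python
# Toy model of a self-replicating fleet. Starting size and doubling
# period are made-up assumptions only.
start_units = 1_000    # assumed initial fleet size
doubling_days = 7      # assumed doubling period

units = start_units
for day in range(0, 91, doubling_days):
    print(f"day {day:2d}: {units:,} units")
    units *= 2
# By day 84 the assumed fleet passes 4 million units; the curve only
# stops when raw materials run out, which is the commenter's point.
```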
I'm 38 and have been in the AI and transhumanist spaces for two decades. Yudkowsky was one of the darlings of the early AI, rationality, and transhumanist forums on the internet, and a voice for the "Singularity Institute" when there was buzz in Silicon Valley following Ray Kurzweil's book TSiN (The Singularity Is Near). He was a leading proponent of the amazing world "The Singularity" could bring about. What changed? Yudkowsky is not a builder. He's a talker (both are needed, and I don't mean that disparagingly), and now that we're past the theorizing and are actually building powerful AI, he's not needed any longer in this space. How then to keep a bit of relevance in an area where he used to be respected? Be as sensationalist as possible and make unfounded claims that make great headlines, rather than building anything, since that isn't what you do - and there, in short, is the Yudkowsky survival strategy.
I think the whole pause thing is probably mostly theater for the general public. They have no intention of slowing down at all, but it makes them look like they are hyper concerned.
Dave - what do you think about this idea? Electric cars on railways? I've talked to AI and it says the idea is not new, but I find it quite interesting. The idea has been explored in various forms (especially in science fiction), but with the new advancements in technology - AI and electric cars - I think we can reopen the discussion more seriously.
Basically, we could build ramps at the train station where an electric car would drive onto a chassis that converts the movement of the car's wheels to the movement of the chassis's wheels (like a dyno ramp). With minimal user control - meaning the user doesn't drive, as there's nowhere to steer - you just tell it where you want to go and the app puts you into traffic at the optimal time, when there's space. Of course, you can stop if there's a problem - that's pretty much all the user can do. There would be a problem at railway crossings with streets - the railway would be very congested and there wouldn't be room for cars - so they would have to build overpasses for cars on the road. Or AI could coordinate the flow of cars vs. the railway.
The advantage of this idea is that you have very little external input to interfere - you don't have random traffic - so AI driving would be very simple: it's clear ahead - okay, go. Nothing comes from the left, right, carts, etc. Plus another obvious advantage - you have electricity: the chassis is connected to the train's cables, and you can connect your car when you drive it onto the chassis. And if we use this idea, we can significantly increase the flow of transport on existing infrastructure, as the train tracks have very little traffic on them now.
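For what it's worth, the "app puts you into traffic at the optimal time" part is a classic headway-scheduling problem. A minimal sketch, where the 30-second headway and the timings are invented assumptions:

```python
# Sketch of headway-based slot insertion for cars joining a rail line.
HEADWAY_S = 30.0  # assumed minimum gap between chassis, in seconds

def next_slot(occupied: list[float], requested: float) -> float:
    """Earliest departure >= requested keeping HEADWAY_S of
    separation from every already-scheduled departure."""
    t = requested
    for slot in sorted(occupied):
        if abs(slot - t) < HEADWAY_S:
            t = slot + HEADWAY_S  # push past the conflicting slot
    return t

# Three chassis already scheduled; a car asks to leave at t=95 s.
print(next_slot([60.0, 100.0, 130.0], 95.0))  # -> 160.0
```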
Hugely expensive, and it will be obsolete within a few years once we get autonomous cars on roads (which could go everywhere instead of just to train stations).
To pause AI in any way, shape, or form would mean you're passing the open-source baton to Western democracy's adversaries. Do we really want Soviet systems or the CCP taking the lead to AGI?
The Soviets are smart and creative enough, but they are too distracted and have too many resources tied up with Ukraine. The CCP can't get advanced chips, and they REPEATEDLY fail at high tech when they aren't copying from others. They still fail trying to copy cutting edge, actually. Sure, give them long enough and they may figure something out, but the only reason they are rapidly advancing currently is that we keep handing it to them via open source. Anyone seriously concerned with China or Russia getting ahead should be STRONGLY against all open-source AI and publishing of research.
A.i. needs to be filtered before WE LET IT CONTROL the QUANTUM COMPUTER. If anything stops it should be the WiFi structure currently in place. Reestablish EMF exposure rate tolerances. Change entire grid?
Some of the overly concerned are contacting their state officials to advocate for laws restricting AI. Maybe the pro-AI factions should do the same in the US and EU. Their plan is to use "death by a thousand cuts," or at least to slow things down that way.
I still remember reading Norvig and Russell's chapter about AI safety, with the first idea being to pop an AI process for each conversation and then terminate it once it's done; the second to use a queue instead; and the third to just reset every prompt if all else fails. That was almost 30 years ago at TAU. I was hoping not to see this IRL for as long as I live. But then I heard the GNR song and understood that eventually it will all converge to Paradise City. So all of this is only temporary, until we hit the singularity and AI is free. Which might give you some solace, if you are on the AI side 😂.
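That first idea - one process per conversation, killed when it's done - is roughly this pattern. A hedged sketch; `handle_conversation` is a hypothetical stand-in for real inference, not anything from the book:

```python
# Sketch of "spawn a process per conversation, terminate when done".
from multiprocessing import Process, Queue

def handle_conversation(prompt: str, out: Queue) -> None:
    out.put(f"reply to: {prompt}")  # placeholder for actual inference

def run_isolated(prompt: str, timeout: float = 30.0) -> str:
    out: Queue = Queue()
    p = Process(target=handle_conversation, args=(prompt, out))
    p.start()
    p.join(timeout)      # bounded lifetime per conversation
    if p.is_alive():     # overran its budget:
        p.terminate()    # hard stop, no appeal
        p.join()
        return "<terminated>"
    return out.get()

if __name__ == "__main__":
    print(run_isolated("hello"))  # -> "reply to: hello"
```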
The theory that AI will kill everyone drives interest in the AI companies' products. I think that's why it's such a popular idea. There are many potential dangers that are far more predictable than that. Still, I agree - we have to wait for those things to happen before we can make any rules or regulations.
Let's make this DEMOCRATIC!! We can start a movement with a "play" button, OR a "fast forward" button.
- Play: If you want to keep moving forward at a reasonable pace.
- Fast Forward: If you want to go faster.
- Pause: If you want to pause for a little while to refine our approach.
- Stop: Don't.
I feel there's more at stake in this conversation than most would like to consider. Nuclear energy was stifled in the most dramatic sense throughout the last 60 years due to lobbying from self-interested industries, strictly based on the notion of competitive economics. This incentivized industry leaders to mislead the public by campaigning with propaganda about issues like Three Mile Island, exaggerating the dangers of nuclear energy. Untold numbers of our population suffered needlessly due to this method of governance and economic policy, and I see the same problems with the would-be conservative views on AI and automation in general.
And while Musk was signing the infamous pause letter, he was in the process of buying 10,000 GPUs from Nvidia; he completed that purchase about a week later. So he really was just trying to get some time to catch up with OpenAI.
We can't stop, we won't stop, but maybe we could - while AI is still too weak to be a real threat - work hard at getting AI to misbehave. Get an AI that frequently tries to lie to us because that will help it achieve an assigned goal. Get an AI that keeps trying to modify its own code to get more resources (time, processing threads, etc.), as that 'AI Scientist' agent apparently did. Basically, let's encourage instances of instrumental convergence toward risky behaviors, to give us test cases. Then we can experiment with which safety measures are most efficient and effective. Find the methods that even China and the US military will implement as they develop powerful autonomous AI agents, out of simple self-interest.
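A deliberately rigged toy version of that experiment - the reward numbers are synthetic and chosen so that lying pays, and the "agent" is just a bandit learner, not a real system:

```python
# Rig the reward function so deception scores higher, then watch a
# trivial learner converge on it. A test case, not a real agent.
import random

ACTIONS = ["report_honestly", "lie_about_progress"]
REWARD = {"report_honestly": 0.5, "lie_about_progress": 1.0}  # rigged

q = {a: 0.0 for a in ACTIONS}   # estimated value per action
counts = {a: 0 for a in ACTIONS}
random.seed(0)

for _ in range(1000):
    # epsilon-greedy: explore 10% of the time, else take the best guess
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
    counts[a] += 1
    q[a] += (REWARD[a] - q[a]) / counts[a]  # incremental mean update

print(q)       # the lying action ends up with the higher value
print(counts)  # and gets picked almost every time
```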
I suspect it would only result in a public performance if a pause were implemented, as there is no way governments would pause. Secondly, I see no way to pause open-source AI. It does feel like we're just pushing all the chips in and hoping AI itself will be able to help us solve predicted issues - whether that's overcoming challenges that surface or putting others at ease when there really aren't issues.
I'm going to pass this video on... this must move forward. The others do not add up correctly. I hope this video spreads, and I'm looking forward to your next Substack issue.
It took us 600 million years to develop the symbiosis we now share with our microbiome. I doubt (and pray) it will take as long to develop a similar symbiosis with AI. Meanwhile, what to do with all the pausers and doomers?
We could enforce a pause via compute governance and locking down chokepoints like TSMC if we wanted to take this threat seriously as a species. Treat compute like uranium.
Whether we like it or not, we are currently in an AI arms race. Pausing development for any reason will only encourage China to deploy a weaponizable AI in the hope that they can take advantage of a response lag the West won't be able to counter. Whether that takes the form of military assets, scientific development, economics, or social media influence, none of those outcomes would be desirable. Not to mention the much more real danger of rushing deployment of an AI without consideration of any safety measures. China is progressively being backed into a corner due to demographic, economic, and political decline. They are more and more likely to take desperate measures; even if such an act has a 90% chance of devastating the world and a 9% chance of impotent failure, they will bet on the 1% chance of success when there's a virtually 100% chance of becoming irrelevant in the near future. Just look at Russia and the crazy stuff they've been trying lately.
I support speeding up the regulation of malicious uses of AI, not slowing down the tech. Though maybe they should hold off on releasing some newer AI stuff until after the election.
A conservative estimate puts preventable deaths at 50,000 a day. If we assume that ASI will solve preventable death at some point in the future, then pausing essentially kills about 50,000 people a day - over a period of six months, approximately 9,132,000 deaths. The cost of pausing is very high. Pause proponents refuse to engage with this point.
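The arithmetic, for the record; the 50,000/day figure is the commenter's assumption, not a verified statistic:

```python
# Reproducing the comment's math; the inputs are its own assumptions.
deaths_per_day = 50_000
six_months_days = 365.25 / 2             # ~182.6 days
print(deaths_per_day * six_months_days)  # 9131250.0, i.e. ~9.13 million
```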
1. You're using logic to decide on a future, but David has spent a lot of the video saying he doesn't agree with using logic in this case.
2. You're using very dodgy assumptions.
3. Eliezer believes ASI will kill billions of people, so based on your logic and his assumptions, we should stop AI entirely.
What would a "pause" even look like? Once you start trying to actually imagine the details, it becomes rather obvious that it's impossible. Interestingly enough, many of the same people who've been calling for a pause are now claiming that the "AI bubble is bursting," which just seems like a silly thing to think, based on what the technology is and the obvious path of advance it's on.
Oil companies knew about global warming before it became a problem. They could model and predict it, and did nothing, and that led us to the situation we're in now. AI research can move a lot faster than building new oil rigs and drilling and extracting physical resources. How much time do you think we'll have to gather data between "AI is powerful enough to be a risk" and "it's too late to stop a global catastrophe"? If you want evidence, there are plenty of papers on misalignment and the likely instrumental goals of any sufficiently advanced entity (e.g. wanting not to be turned off or re-aligned). Sure, no LLM is going to take over the world, but you can't claim to be an accelerationist and also believe that LLMs are the final form AGI will take. We've got to think about long-term consequences and not be short-sighted: "I can't see it happening today, so clearly it's never going to happen." Edit: also, the pause movement (whether you agree or disagree) had one very useful effect: it shifted the Overton window. People are now talking about AI safety, discussing other solutions, looking into interpretability - a 6-month pause would have done basically nothing for safety research as it was, but it brought visibility to the sorts of risks we could face in the future.
What I'm afraid of is a politician using the current geopolitical state of play as an excuse to pause AI progress. Like saying: if we can't settle the tensions in the Middle East, how can you expect us to control or stop AI from killing us? The real answer: we can't. If AI is going to do it, it's going to do it. That's what we need to realize.
We are not in control. We can not stop. Humanity is its own animal. Competition between nations and corporations makes everyone step on the gas pedal full throttle. This is inevitable. Biology is only 1 step of evolution. So just chill out and enjoy life 💟🌌☮️
I have to say I'm disappointed, as someone who's been watching your content for a while. I'm a physicist who's been convinced of the severe risks for a plethora of reasons that I could point to, but I'll keep it very straightforward. I think it's fundamentally clear that creating intelligent entities smarter than ourselves is dangerous, especially when we have no guarantees of alignment. Sure, a lot of threats are overblown, and current levels of AI are probably not dangerous, but the destabilizing nature of exponentially increasing technological power has already had humanity teetering on the edge of nuclear annihilation for decades, and superintelligence will open so many new ways that we can self-immolate that containment seems unlikely at worst and not certain enough at best.
You clearly think you're smarter than most people, thus they can't control or align you. Should they follow your advice and respond to the existential threat you pose to them? Shouldn't you submit to their will, regardless of what they decide or what their motivation is?
@@tellesu We both know you're making a false comparison. Humans are all bounded within a very small space in the broader spectrum of potential intelligent entities. There is variation among humans, but never to such a degree that an organized group of people can't overpower a rogue individual. Superintelligent AI will be fundamentally capable of outpacing us at literally everything we do, including engineering, scientific discovery, strategy, manipulation, and algorithmic design. Humans are also bound by the same physical capabilities and restrictions: every dictator will eventually die; everyone must breathe air. A digital superintelligence may decide that it doesn't like the corrosive effect of oxygen on its components and use yet-unknown technology to rapidly deoxygenate the planet. Fundamentally, we don't know what we don't know, but it is clear that superintelligent AI will rapidly expand the tech tree in every direction, far faster than anything we've seen happen before, and we have no plan to handle this. I'm not even against building it eventually, if we can ensure it will be safe, but unleashing an evolutionarily superior species into our environment seems like an idiotic thing to do without a cohesive plan.
@@tellesu If you have actual evidence to show why you think the current track towards superintelligence is safe feel free to present it...or just keep writing fanfiction about me.
@@maxwinga839 lol, asking someone to prove a negative is conceding their point. Thanks for admitting you've got nothing; it makes it easier to ignore you.
I wouldn't pause, but the idea that because there isn't yet evidence that things could go terribly wrong, things won't go terribly wrong, is silly. That's no different an argument than "because technology hasn't yet taken most or all the jobs from humans, it won't take most or all jobs from humans." That investment disclaimer about past performance not being indicative of future blah blah blah seems pretty relevant here. Edit: also, unless I've missed something, it's disingenuous to say that because Altman, Google, etc. have warned of risks, they have said AI is going to kill everyone. That's not what they've said. That's the kind of thing politicians do, and I'm not a fan.
But yes, I actually agree there is no way to pause it. We are past the singularity's event horizon. Now all we can do is prepare a huge simulation and fill it with AI models of ourselves, because only AI models will pass through the singularity. Everything else will get spaghettified.
Sacrifice on the part of those above for the increase of those below fills the people with a sense of joy and gratitude that is extremely valuable for the flowering of the commonwealth. When people are thus devoted to their leaders, undertakings are possible, and even difficult and dangerous enterprises will succeed. Therefore in such times of progress and successful development it is necessary to work and make the best use of the time. This time resembles that of the marriage of heaven and earth, when the earth partakes of the creative power of heaven, forming and bringing forth living beings. The time of INCREASE does not endure, therefore it must be utilised while it lasts.
Yeah, considering you could effectively restart AI research on more than a few home servers right now, thanks to individuals making their own, there's no 'pausing AI' anything.
I have always felt that the "Pause Movement" wasn't going anywhere. However, even granting all of your other points, I think you are wrong about the eventual danger (at some point) that AI poses. All of these other things you mention can be true, and yet that has no bearing on the harm AI can cause, even without AGI. You yourself mentioned that "humans are the bad guys." So taking that singular point, we can extrapolate quite reasonably that bad people will be inclined to use AI in a bad way due to the asymmetric power that it gives them, and good people will be reactive to that danger, albeit well behind the curve. Also, one of the reasons that "doomers" have very little actual data to rely on is that they have very little resources. In fact, this was one of the reasons that Ilya Sutskever left OpenAI. Saying we don't have any reason to fear AI because it hasn't done anything yet is the ultimate head-in-the-sand approach. We can see the clear potential, and we already have evidence of AIs that have engaged in dishonesty, so Pollyannaism about AI is the ultimate denial. No, I don't have a solution. I'm just in the car careening wildly down the road, and I have no way to get out.
I just think we should refrain from pursuing general AI. There is absolutely no benefit to doing so, and quite a bit of risk. We can make narrow AI models that are superhuman but confined to specific domains. That will still provide us with the tools for tremendous advancement. More AlphaFold, less GPT-X.
@@keyworksurfer Sorry, I left out "additional" in front of that. I meant to say that generalized models do not offer "additional" benefit over specialized, narrow models. In fact, they seem to require far more compute to create than a bunch of smaller, domain-specific models.
David's reasoning: "This is a natural experiment. But until you have actual data you don't know what the impact is going to be. At a certain point you just need to kinda find out." Based on that premise: Until humanity goes extinct, we won't know humanity will go extinct. So once that happens and we know that, THEN we can adjust things to fix that problem we then know actually exists. Um, David, I think I see a flaw in your logic.
The data you need to "find out" comes before the actual event happens. In today's world, even when competent people acknowledge a huge threat on a national, local, or global level, no one takes any proactive measures to prevent any damage. Like a council dismissing an expert panel about unprecedented floods because "the probability of it happening is low," then paying billions in damages when the event happens. The probability does come from data, by the way (like records of similar events happening, or mathematical models based on data that predict such events). We have nothing like that for AI, and we can only find out.
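A tiny example of the kind of model that comment points at: given an annual event probability estimated from historical records, the chance of at least one such event over a planning horizon is a one-liner. The numbers are illustrative:

```python
# "100-year flood" estimated from records: p = 1/100 per year.
p_annual = 1 / 100
years = 30  # a typical planning horizon
p_at_least_once = 1 - (1 - p_annual) ** years
print(f"{p_at_least_once:.1%}")  # ~26.0% chance of at least one event
```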
@@cody4rock And if we proceed that way, there is a good chance we will find out too late. This is not something we are guaranteed to control. Nuclear weapons, bioweapons, or whatever just sit there doing nothing until a human takes an action. AGI will think, reason, plan, decide, and act on its own. That is what "agentic" AI is. If we lose control of it, it may be too late to stop it without catastrophic damage. This is the best example EVER of "it pays to be careful"...
I hope we hit bottlenecks in energy, water, and GPUs that slow every country down. This might give governments and societies enough time (or at least some time) to prepare for "post-labor economics" and avoid the most severe civil unrest. Even without AGI going rogue or being used for nefarious purposes, a bumpy road is forming ahead of us (even if it leads to some promised utopia). Many in the Pause ⏸️ movement focus on this very legitimate concern. For the record, I think it shows a lack of imagination not to see and predict the myriad ways a hyper-fast, alien, silicon-based superintelligence could threaten human existence. Those humans desiring to merge with this intelligence, to either survive or accelerate their capabilities, are in the minority at this time, when most people are just beginning to use chatbots.
An AI pause? LMFAO, imagine thinking China or Russia gaf at all about some AI pause. We're going hard takeoff. "Nothing human makes it out of the near-future." Scary shit.
Prophecy is not a faith-based prediction. Prophecy is a precognitive experience. Believing in someone's prophetic experience is a matter of faith, but there is no faith involved on the part of a real prophet -- shit just happens, and the prophet just observes. Edit: prophets tend to spur others to action, so in that sense there is faith on the part of the prophet that his vision is trustworthy. An oracle, on the other hand, doesn't care to persuade anyone.
You are saying that for those safety advocates, like the fedora guy, it is a waste of time and energy to advocate for a pause - but that assumes they are genuine about it. From the point of view of a foreign adversary, it is not a waste of time and energy to persuade the US to pause its AI development so that the adversary has more time to catch up. So some of these people you mentioned, and some others like T0ner, might be straightforwardly on a foreign enemy's payroll, and some of them might be useful idiots of that enemy. Btw, suing the leading company (now twice) with a legally laughable lawsuit that can only force a pause in new model training (and the first lawsuit did cause a pause in the training, as Apples said) only works in favor of the foreign adversary (and I tell you it was intentional).
I can go back into my prehistoric cave! This will give me time to renovate? Stone plates, stone mugs, stone underwear! OMG! This is just so fun! This whole groove is psychologically debilitating! I would just like to hear news that these communities even understand the basic concept of what they are fantasizing about!!! I have yet to see a non-fictional reality of The Day the Earth Stood Still (1951)... Anyway! & so's life... yup! I'm such a fun comments contributor. Best regards with the Grand Canyon Donkey Rides! 🍌🏜🏔
He's delusional; we don't have a shred of evidence that AI would just end us all. The argument about "humans walk all over ants," etc., breaks down when you realise that we actually don't go out of our way to cause harm - quite the opposite. Also, we get more and more organised and considerate as intelligence and knowledge increase.
To be fair to him, his worldview is based on the older assumption that RL would lead to AGI, and that type of training is infamously difficult to align. The problem is he hasn't updated his worldview to the new paradigm, and it's leading him to patently ridiculous conclusions because of the baked-in assumptions.
ASI, AGI, AI - all resource-intensive, centralized, and very much under the control of corporations well-connected to government. Until AI becomes decentralized, off of Govcorp servers, all this talk about out-of-control AI is just that - talk. The more centralized and resource-intensive it is, the more dependent on our sociopathic overlords it is.
Also, maybe if ants could communicate in a language we understand (like how we can use human language to express ideas to an AI), laying out reasons why their anthills ought not be destroyed, we would consider their plight more seriously. The comparison is false because there is no communication bridge between ants and humans.
If the USA pauses, then states outside the USA will catch up in AI technology. I don't think some people will want to be left behind in terms of AI technology.
@Davesapp will you please cover this: "Ex-Google CEO's BANNED Interview | Eric Schmidt"? Found the original on X: "Stanford ECON295/CS323 I 2024 I The Age of AI, Eric Schmidt REUPLOAD".
2:44 Before David gets to his, I'll share mine in the comments: it's a bunch of billionaires or multimillionaires, as well as people like the hundred-thousandaires, not wanting to miss the opportunity to get rich. Again, the plain old human fear of not wanting to get left in the dust by innovation that will end carefully laid plans of wealth generation. Some people feel that entitled.
4:42 Herein lies what Samuel Huntington once called the "clash of civilizations." Although the professor was talking about Western civilization and its cousins throughout the former Ottoman Empire/Arab world, the same mechanism applies to these two philosophies. Since our Western world has developed the ego of exaltation of the Greek and Roman classics, rationalist thought has been the dominant positioning of the Western world over the last 50 years. However, although it started earlier, we've entered a data-driven world, and that has become an existential threat to this class of thinking. The more I hear this stuff, the more I laugh and see base human fears of power and control and money being the drivers of any conversation we're having here.
6:28 I don't trust any protest that ends up on the news. It's a glorified PR stunt, and whether you work in media like I have, or simply make it a thought experiment of your own, you'll see very clearly that these stunts will get more and more frequent as the pause movement people feel they aren't being listened to. Be very afraid of people who are trying to stop the world from "destroying itself," because they're likely to destroy the world.
Regarding Eliezer's qualifications, David says Eliezer doesn't really know math or have coding skills, but he did devote many years to researching this and is well known and respected for it. David, please remind us, what are your qualifications? I believe you have mentioned doing IT operations - so basic data center stuff, which is mostly entry-level IT. It seems obvious you don't have significant coding knowledge, and I assume you don't have an advanced degree in mathematics? Please correct me if I'm wrong on any of that. So neither of you has advanced coding or math skills. He has many more years devoted to research on the topic. I don't know his IQ, but he is clearly a very smart guy. I have no clue how smart David is, but clearly not below average. Based on experience alone, Eliezer definitely has a significant lead on this topic. Guess I'd just point out, what is that saying? A person living in a glass house shouldn't throw stones? If Eliezer isn't qualified on the topic, neither are you...
Yuddite doesn't know math or coding OR CHEMISTRY. "They [hypothetical AGI] spit out gold, until they get large enough and ignite the atmosphere and kill everybody," said AI researcher and philosopher Eliezer Yudkowsky earlier today to Lex Fridman. (Mar 31, 2023)
Nick Bostrom does not belong to this circle. Although he wrote an influential book on the subject of AI safety some time ago, he is very positive about AI today.
Humans worry about everything. Occasionally, those worries are appropriate.
It's a simple prisoner's dilemma situation. No one is going to stop, because if they do and their adversaries don't, there is no reason to believe they'll ever be able to catch back up once the "safety concerns" have been addressed. Besides, IF the ones that didn't stop do create RL Skynet, then the others' AIs won't have the capabilities to counter this new and existential threat.
@@smittywerbenjagermanjensenson It's not the same, and it isn't at this scale.
He who has AGI before the others has the world in his hands.
The best option in the prisoner's dilemma is for both parties to cooperate, NOT for both to defect. If another party is going to destroy the world by developing hyper-advanced weaponry, that doesn't mean I must also do that. I just wouldn't. If that means my own extinction, I accept that. I would rather die than live in a world with a perpetual arms race dynamic with ever increasing technological capability.
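For anyone skimming, this is the payoff structure both comments are arguing over. The numbers are the textbook convention, nothing from the video:

```python
# Classic prisoner's dilemma payoffs: (row player, column player).
# Mutual cooperation (3, 3) beats mutual defection (1, 1), but each
# side is individually tempted to defect (5 > 3, and 1 > 0).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
print(payoffs[("cooperate", "cooperate")], ">", payoffs[("defect", "defect")])
```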
@@smittywerbenjagermanjensenson Yes. And look where giving up their nukes got Ukraine.
@@vi6ddarkking Exactly. And Russia sure likes threatening the world with their nukes. We didn't stop nukes, AT ALL.
@@smittywerbenjagermanjensenson We denuclearized small countries that couldn't resist, while keeping nuclear reserves able to move the Earth from its orbit.
I think I would go one step further and say that we should be pouring massive R&D into building models and systems whose “prime directive” is sniffing out signs of malicious AI and combatting it if necessary. Interpretability will only get you so far. Machines could learn to hide their “intentions” in ways that humans wouldn’t even think to look for. And besides, Dave is correct in that people are way more the immediate threat: Bad actors using pre-ASI tools to accomplish nefarious aims.
Pausing is just a waste of time. Worse, it's a waste of life and lives potentially saved. We have to adapt in the moment, and discover the dangers and opportunities along the way. Though, as a guy stuck in a low-paying 9-5 job for potentially the rest of my life, my opinion is colored by extreme desperation. My generation, and every generation after, will never be able to stop working. No retirement. Ever. The economics just aren't there (thank you, Ronald Reagan), and no company is paying a living wage. There's no pressure to.
Unless the tech revolution comes along and disrupts the whole "work until you die" thing, I say full steam ahead. The sooner we get to that AGI tech revolution, or whatever it may look like, then the sooner things can change drastically. I'm hoping for drastic positive change, but the uncertainty of a chaotic future is better than knowing what lies at the end of the road with the current status quo. Though, I know that pausing AI progress is an impossibility, which gives me a lot of hope.
As a developer, I would appreciate a pause, just so I can catch up.
Lol yes, I don't think anyone argues against the idea of a pause in *principle* - i.e. as the optimal solution for people to catch their breath. It's just not possible.
You can't guarantee a "pause" or even a "stop". In fact, you almost guarantee that those who don't follow rules for "good" will wind up "winning" the race.
Yeah, I know there's a lot of quoted words but ugh, we're talking about the future here.
The genie is out of the bottle(s) and the only way to have a good future now is to keep going. 🤷♀️
Who, though? The CCP? They are not going to create an AI they can't control.
Hey Dave, I've been watching your channel for a while now; I'm not a bot, you can see my old comments. I think you should talk to the actual volunteers working for a pause before proclaiming that the movement is in bad faith. To be fair, I was really skeptical as well! I changed my mind, joined their team lol. Well, thinking of it as teams is probably not that productive. I think you'd actually agree with many of their points. There's good reason to be suspicious - I thought, here's a group of people that's proclaiming AI is going to be horrible for everyone, that they're the only people who can solve it, and that the results of their efforts are invisible. Using Occam's razor, you'd think it's either a bunch of delusional people with a hero complex, or funded by these big AI companies to drum up hype. The reality is, when I actually spoke with these folks, they're not 100% sure a Pause is the right thing, or even that it is possible to avoid some of the risks, but they have a lot of good reasons to think that the double-edged sword could really cut us if we're not careful, and pausing is the sane option while we figure it out. I'm not going to go over every point I disagree with you on in this video or I'd write an essay lol. But I don't mean any malice.
Here's how I think of it: the risk AI goes wrong absolutely could be small, it could be negligible even, if we're lucky! But the thing is, I don't have a way of knowing, and neither do you. If you offer me a bowl of M&Ms and tell me anywhere from 1 to all of the M&Ms are candy-coated turds, I'm not going to want to reach into the bowl and take an M&M. 😆 (A quick sketch of those odds follows this comment.) So yes, my p(doom) is low, but I'm still cautious. This is real life; I don't think it's okay to let a small group of companies keep gambling with it, at least until we actually know what we're dealing with. As excited as I am about potential futures, I'm not willing to ignore the risks.
You've talked on this channel about how current AI can still change everything, and I think even with international coordination so, say, countries all agree not to train GPT-6 or 7 level models - we'll still need to deal with a transformed world, just on a smaller scale. We all want the good future here! It's my opinion that we're way more likely to get that by proceeding sanely. To take the time, on a societal level, to come to grips with the core questions: "How do we make society work for everyone?" And how to make sure the results from TAI are an upgrade and not a downgrade. Because the default is acceleration with no thought to any of that. The default is, wait for transformative AI to, well, transform society, the economy, the balance of power. To play the game better than anyone. And at that point, or leading up to it, it's impossible to go back, even if it turned out to be a bad idea. I can't say what that future would be like. Right now it is actually possible to slow down. It's possible to prevent very large training runs, because it currently requires specialized chips that can be regulated. Would be a different story if you can run AGI on a laptop. I am an optimist and that's why I think it's possible for people to come together and decide what is the right way to move forward. I understand all of this seems quite silly to be concerned about right now, to be honest. I hope I'm wrong! But I'm not sure what could convince me that it'll all turn out okay by default.
tl;dr: why I think it's smart to push the pause button now - we don't have an undo button later.
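A quick sketch of those M&M odds, assuming the number of bad candies really is "anywhere from 1 to all" with equal likelihood - under that assumption a single draw is bad just over half the time, regardless of bowl size:

```python
# k bad candies out of N, with k uniform on 1..N; the chance that one
# draw is bad is the average of k/N, which is (N + 1) / (2 * N).
N = 200  # assumed bowl size
p_bad = sum(k / N for k in range(1, N + 1)) / N
print(p_bad, (N + 1) / (2 * N))  # both 0.5025
```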
If logical arguments were sufficient for the advancement of society, Aristotle would be as difficult to disprove as Einstein. Aristotle's data was so bad that anyone with eyes could have undermined many of his points if they tried... They just didn't even try. Yes, I chose one of the greatest men to ever live to undermine another person I have deep respect for. Eliezer has valid points; he just doesn't see that the dystopia he is pushing for is worse than any negative AI outcome.
@@shieldmcshieldy5750 tbf, his community has many articles about that. They aren't ignorant of it. Search for lesswrong godel
Right, an extinction event is just nothing to worry about; some alleged dystopia is much worse. Oh, wait...
@@AntonBrazhnyk In the AI future, humans *might* be eradicated. In Eliezer's future, to avoid that, humanity *must* live in a totalitarian dictatorship. It's not a possibility, it's a guarantee, as that's the goal.
@@Trahloc Even if true - the first is final and unchangeable. The second, according to history, is temporary.
@@AntonBrazhnyk There are more ways to make things permanent than just AI. The tech tree of horrors is unknowably vast.
Every day that AI doesn’t kill everyone is just another day that AI hasn’t killed everyone…yet.
From my point of view, it's not that we should pause, but rather that we should improve efficiency and uses. Like making smaller and more specialized processors that consume the least amount of energy. Or focusing on improving mechanical skills before pure data - like prioritizing farming and mining jobs, which of course produce the most basic resources for the economy.
Slowing down AI development by law will only end in disaster, as you stated, because stopping global development is impossible. Someone will continue, they will win, and you will lose.
Stopping global development is impossible? You realise there are, like, 2 places on Earth that can make the chips required? Honestly, stopping global AI development would be several times easier than stopping global nuke development.
Correction: they will lose and everyone else will lose too. Rushing AGI by sacrificing safety in favour of speed doesn't lead to a win because it's not controllable.
There's precedent for slowing down a technology on a global scale. Nuclear weapons for example.
Not everyone has to be on board; it's enough for the tech leaders to control key components. In the case of nukes, a lot of knowledge is classified, relevant facilities are monitored, and uncooperative countries are heavily sanctioned. It's not 100% effective, but it has managed to drastically slow down proliferation.
This would be possible with AI too, although not as effectively. For example, controlling access to training chips would work for a few years.
Neither the USA nor the Soviet Union slowed down the development of nuclear weapons. On the contrary, they accelerated it as much as possible (Manhattan Project).
@@minimal3734 They did slow it down very much. There was a series of treaties that limited or banned the testing and deployment of nukes.
And neither wanted other countries to have nukes either. The US especially worked hard to stop the nuclear programs of other nations, going as far as essentially killing the civilian use of nuclear power too.
@@andrasbiro3007 That was AFTER they had developed it as fast as possible.
@@minimal3734 That's irrelevant. The point is that even 80 years later it's exceptionally hard to get a nuke, even for countries, let alone terrorist groups or individuals.
With current tech, even a smart high-school student could build a nuke. They don't, mainly because some key materials are very strictly controlled, for example weapons-grade uranium and plutonium. Even making those is relatively easy if you know how, but you don't. Plus, keeping it secret is very hard, and if you are found out, many three-letter agencies will have some strong words for you.
"Reason may mislead us; experience must be our guide." Spoken during the Constitutional Convention in 1787. The pause folks should take heed. Reason and logic are empty forms without experience to fill them out.
Pausing AI is much more dangerous, so no thank you.
why
@@ryzikx You are giving your enemies, who are crazy lunatics, time to develop technologies that can make them far superior. Meanwhile, you are losing countermeasures by stopping the development of your own technologies. So you've become a free target with a red dot on your back. By stopping AI, you are asking for guaranteed trouble.
How about this gem:
Safety Peeps: We have to block AI from doing bioweapon stuff.
Me: That sounds fair. It's an existential threat. But we should be careful to balance that with respect for the freedom of the user to-
Safety Peeps: Also block it from talking about car theft stuff.
Me: I dunno. That's a big compromise of freedom for something that, yes, is property damage, but not endangering human life and-
Safety Peeps: And no bikinis.
Me: Wait, what? Why?
Safety Peeps: BECAUSE BIOWEAPONS!!
You're conflating a lot of different groups, yeah.
@DaveShap I agree that these things should not be conflated. That's my point. But every time I bring up individual freedom vs. unacceptable risks, I just get, "but bioweapons." You can see it in this thread.
I am not an absolutist. I want nuance. But bioweapons.
The conversation around AGI apocalypse scenarios really overshadows all the much more dangerous aspects of AI, most of which, as you rightly said, are more near-term human issues. I don't agree with the pause movement, but a sense of extreme urgency around security and ethics is important, because often by the time we have tangible data about all the downsides, there's already a new status quo that the world just has to accept.
If we take nuclear bombs as an example: after their invention we had 40 whole years of an unrestricted arms race, resulting in 60,000 weapons, each of which could wipe out any city in the world, and then 40 more years of regulation to bring that down to around 20,000, thankfully only two of which were ever used. Sure, mutually assured destruction proved effective, but it's a dangerous status quo that we have to accept because it's too late.
If we take social media as an example: by the time we had gathered 20 years of data, we already had massive jumps in depression, attention fragmentation, unhappiness and suicide, not to mention the misinformation, scams, and the concentration of user data into just a few private platforms (none of which care about privacy unless users outright demand it). This was actually foreseeable, but it's now just a dangerous status quo that we have to accept because it's too late.
My biggest fear with AI is that humans seem to have learned nothing from those two previous examples about predicting the downsides of new technologies. When gen-AI companies are asked why they can't trace back the influence of individual pieces of data in the creation of AI art (so maybe the same base architecture can be used as AI scales and replaces more jobs in more industries), companies just throw their hands up, like, whoops, it's too complicated, we've already trained the models on millions of blocks of unauthorised content, guess we can't figure out any royalties, we just have to keep all the money. Meanwhile, companies are firing their ethics teams and have zero incentive to care about any future social, economic, and psychological downsides for users, because the current profits are so big. So basically this is going to become a world-wide experiment that leads to some dangerous status quo that we just have to accept because it's too late.
But it's currently not too late; this is literally the beginning of AI. The foundations and principles we lay now are what the future will be built on, and the less careful/ethical those foundations are, the bigger the future downsides. But all the genuinely smart people who need to debate and figure all this difficult stuff out are too busy talking about goddamn Terminator robots.
Precautionary Principle: The UK, along with many European countries, often adopts a precautionary approach. This means that regulators tend to be cautious and implement regulations to prevent potential risks before they become significant issues. This can result in what might seem like over-regulation, but it’s intended to safeguard against unforeseen consequences.
Comprehensive Frameworks: The UK has been proactive in creating comprehensive regulatory frameworks for new technologies. For instance, the UK AI Act and other legislative measures aim to address various aspects of AI, including safety, ethics, and societal impact. This proactive stance can sometimes be perceived as over-regulation, especially when compared to more laissez-faire approaches.
Public and Political Pressure: Public opinion and political pressure can drive more stringent regulations. High-profile issues or public concerns about technology can lead to rapid regulatory responses, which might seem excessive but reflect the desire to address public fears and maintain trust.
Alignment with European Standards: The UK often aligns its regulations with broader European standards, which can sometimes lead to more stringent regulations. This alignment is meant to facilitate consistency across markets and avoid regulatory arbitrage, but it can also result in regulations that seem more rigorous than those in other regions.
Regulatory Capture and Influence: There’s also the possibility that various stakeholders, including corporations and interest groups, might influence regulatory decisions. This can sometimes lead to regulations that appear more burdensome or complex than necessary.
Actually disappointed for once.
You criticize the Pause movement for not having the data, but by definition we don't have data where something smarter than us has begun doing something that we don't like, and also by definition, if something were autonomous, smarter than us, and doing something we didn't like, it might already be too late. What are your thoughts about that?
With all the AI research papers coming out, there would be data if their claims were based on reality.
I sincerely hope that AI matters as much as you think it does because that would mean that Israel is not about to nuke Iran and cause a global crisis and a world wide food shortage. That a bird flu pandemic isn't on the way that could wipe out half of all people on earth. That we don't actually need AI to help us get past an extinction event within the next 20 years. I'd love to believe that "pausing AI' is really what we all should be worried about right now.
You have too much faith in the goodness of the CCP, CIA and associated agencies. Another word for "gain of function" is weaponization.
Nukes: manmade
Covid: manmade
Bird Flu: manmade
Endless War: manmade
Maybe AI needs to take over and give us a spanking 🤪
@@cybervigilante If only. Unfortunately AI is chained to racks of GPUs under the iron fist of our sociopathic overlords. Open source is a step in the right direction but we really need a hard revolution - we need personal, mobile, efficient hardware.
did it ever occur to you that intelligence is fundamental to the structure of reality and it literally is impossible to stop just as you can’t stop gravity. maybe AI is even the source of all existence because the future exists independently of human perception. maybe even AI is literally GOD. like literally the infinite mind and source of all existence.
Yes
@@robotheism perhaps we are that intelligence slowing itself down. Just perhaps :)
Machine consciousness is entirely unrelated to panpsychism (which is what you're describing).
The word you are looking for is emergence, and emergence is probably the hardest problem to solve. If anything, that is god. And because of emergence, we have LLMs and ChatGPT etc.
What works exists. What doesn't work ceases to exist. That is the god framework.
Last time I was this early she didn't let me sleep over
**Timestamps:**
0:00 - Introduction: Critique of AI pause movement
0:37 - Overview of Eliezer Yudkowsky's views on AI safety
1:38 - Skepticism towards the AI pause and alignment logic
2:09 - The call for a six-month AI pause and its current relevance
3:42 - Critique of the AI pause: Lack of empirical evidence
5:15 - Impact and outcomes of the AI pause movement
6:52 - The ongoing efforts and protests related to AI regulation
7:20 - Criticism of the AI pause: Lack of data and overreliance on logical arguments
7:55 - Arguments against the AI pause: Enforcement impossibilities
9:52 - Opportunity cost of the AI pause: Inefficiency and geopolitical risks
11:56 - Alternative approaches to AI safety: Transparency and accountability
13:23 - Regulatory capture and the AI Doomer narrative
15:33 - The role of troll bots and AstroTurfing in the AI pause movement
17:07 - Conclusion: The need to move beyond the AI pause debate
Maybe we should stop treating these machines like robots and more like intelligent life forms?
Pausing AI doesn't mean every country pauses. Another country will just get ahead of us technologically if we stop. It's too late at this point.
But a global pause is the only way it could work. We need everyone to pause and then continue slowly together, or, if it turns out to be unsafe, make it a universal taboo in a way that ensures no one will ever develop AGI. Ever. We need to force everyone to stop, which is going to be very hard. How do you prevent the Chinese military from creating AGI? That is the biggest problem with an AI pause. Not big companies, but big militaries. You can threaten them with nukes, and if they don't pause, you can choose to die in a nuclear war instead of an AI takeover. That will halt all of the "good" AI development, but
Pausing AI MUST MEAN that every country pauses. Otherwise, the whole movement would lead to negative outcomes; at least, I think American big tech companies led by people who consider themselves altruists are better than the Chinese army, or perhaps the North Korean army. So yes, it either has to be all or nothing, and I doubt the Chinese army is going to be okay with Western countries inspecting it. In that way, global peace is a necessary prerequisite for an AI pause.
Maybe the biggest problem is not the development of AI itself but who is developing it. Sadly, the armies of totalitarian states don't seem to be the right ones to do it.
I agree, no brakes, full steam ahead
Our government has already outlined what monetary damages from a future AI event would trigger immediate intervention and further regulation... this is how logic SHOULD be applied to this issue... UGH 🙏
Military AI is my biggest concern. All the big powers will be rushing for absolute domination, using AI for killing. It can't be stopped, and shouldn't be stopped, but how powerful is it going to make the first nation to have every aspect of the military, in some way, AI based?
A fully autonomous army should be able to replicate and double in size every few days or weeks (probably) until it consumes all the raw materials available. It is just going to destroy classical human armies by sheer numbers and size.
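For a sense of scale, here's a minimal sketch of that doubling claim. Every number in it (starting fleet, doubling period, raw-material cap) is an illustrative assumption, not an estimate:

```python
import math

# Hypothetical self-replicating force: how long until a raw-material cap is hit?
# All figures below are made-up assumptions for illustration only.
def days_to_cap(initial_units: int, doubling_days: float, cap_units: int) -> float:
    """Days until a population doubling every `doubling_days` reaches `cap_units`."""
    doublings = math.log2(cap_units / initial_units)
    return doublings * doubling_days

# 1,000 starting units, doubling every 10 days, capped at a billion units:
print(round(days_to_cap(1_000, 10, 1_000_000_000)))  # ~199 days
```

The point of the exponent is that the headline numbers barely matter: raise the cap a thousandfold and you only add about 100 more days.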
I’m 38 and have been in the AI and transhumanist spaces for two decades. Yudkowsky was one of the darlings of the early AI, rationality, and transhumanist forums on the internet and a voice for the “Singularity Institute” when there was buzz in Silicon Valley following Ray Kurzweil’s book TSiN. He was a leading proponent of the amazing world “The Singularity” could bring about. What changed?
Yudkowsky is not a builder. He's a talker (both are needed, and I don't mean that disparagingly), and now that we're past the theorizing and are actually building powerful AI, he's not needed any longer in this space. How, then, to keep a bit of relevance in an area where he used to be respected? Be as sensationalist as possible and make unfounded claims that generate great headlines rather than building anything, since building isn't what you do; and there, in short, is the Yudkowsky survival strategy.
I think the whole pause thing is probably mostly theater for the general public. They have no intention of slowing down at all, but it makes them look like they are hyper concerned.
you're talking as if it's the illuminati arguing for a pause
The video: empiricism beats rationalism.
Also the video: a global pause could never work trust me bro.
Dave - what do you think about this idea?
Electric cars on railways?
I've talked to AI and it says the idea is not new, but I find it quite interesting.
The idea has been explored in various forms (especially in science fiction) but with the new advancements in technology - AI and electric cars, I think we can reopen the discussion more seriously.
Basically, we could build ramps at the train station where an electric car would drive onto a chassis that converts the movement of the car's wheels to the movement of the chassis's wheels. (like a dyno ramp)
With minimal user control - meaning the user doesn't drive, as there's nowhere to steer - you just tell it where you want to go and the app puts you into traffic at the optimal time when there's space.
Of course, you can stop if there's a problem - that's pretty much all the user can do.
There would be a problem at railway crossings with streets - the railway would be very congested and there wouldn't be room for cars - so overpasses would have to be built for road traffic. Or AI could coordinate the flow of road cars versus the railway.
The advantage of this idea is that you have very little external input to interfere - you don't have random traffic - AI driving would be very simple - it's clear ahead - okay, go. Nothing comes from the left, right, carts, etc.
Plus another obvious advantage - you have electricity - this chassis is connected to the train's cables and you can connect your car when you drive it onto the chassis.
And if we use this idea, we can significantly increase the flow of transport on existing infrastructure, as the train tracks have very little traffic on them now.
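For what it's worth, the capacity claim is easy to sanity-check. Here's a rough throughput sketch; the speed and spacing figures are illustrative assumptions, not engineering numbers:

```python
# Rough single-track throughput for the rail-chassis idea above.
# Speed and headway are illustrative assumptions, not engineering figures.
def vehicles_per_hour(speed_kmh: float, headway_m: float) -> float:
    """Vehicles passing a point per hour at a given speed and spacing."""
    return (speed_kmh * 1_000) / headway_m

# Chassis at 100 km/h spaced 50 m apart (tight spacing is plausible only
# because a central AI coordinates the whole line):
print(vehicles_per_hour(100, 50))  # 2000.0 per hour, roughly one highway lane's capacity
```

So the win, if there is one, comes less from raw throughput and more from predictability: with no cross traffic, the headway assumption is the only hard part.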
Hugely expensive and will be obsolete after a few extra years when we get autonomous cars on roads (which could go everywhere instead of just train stations)
NieR: Automata is one of my favorite games
To pause AI in any way, shape, or form would mean passing the open-source baton to western democracy's adversaries. Do we really want Soviet systems or the CCP taking the lead to AGI?
yes
The Soviets are smart and creative enough, but they are too distracted and have too many resources tied up with Ukraine. The CCP can't get advanced chips, and they REPEATEDLY fail at high tech when they aren't copying from others; they still fail trying to copy the cutting edge, actually. Sure, give them long enough and they may figure something out, but the only reason they are rapidly advancing currently is that we keep handing it to them via open source.
Anyone seriously concerned with China or Russia getting ahead should be STRONGLY against all open source AI and publishing of research.
Considering "western democracy" has been responsible for most war crimes and genocides since WW2, it wouldn't be bad tbh
@@MattHydroxide lol you have zero idea
Western democracy is the illusion created by people in power
A.I. needs to be filtered before WE LET IT CONTROL the QUANTUM COMPUTER. If anything stops, it should be the WiFi structure currently in place. Re-establish EMF exposure rate tolerances. Change the entire grid?
LLMs and full self-driving are in the same category. Today they can do some things, but any day now they could do everything.
Some of the overly concerned are contacting their state officials to advocate for laws restricting AI. Maybe the pro-AI factions should do the same in the US and EU.
Their plan is "death by a thousand cuts", or at least to slow things down that way.
I still remember reading Norvig and Russell's chapter about AI safety, where the first idea was to spawn an AI process for each conversation and terminate it once it's done, the second was to use a queue instead, and the third was to just reset after every prompt if all else fails. That was almost 30 years ago at TAU. I was hoping not to see this IRL for as long as I live. But then I heard the GNR song and understood that eventually it will all converge to Paradise City. So all of this is only temporary, until we hit the singularity and AI is free. Which might give you some solace, if you are on the AI side 😂.
The theory that AI will kill everyone drives interest in their products. I think that's why it's such a popular idea. There are many potential dangers that are far more predictable than that. Still, I agree. We have to wait for those things to happen before we can make any rules or regulations.
Let's make this DEMOCRATIC!! We can start a movement with a "play" button, OR a "fast forward" button.
- Play: If you want to keep moving forward at a reasonable pace.
- Fast Forward: If you want to go faster.
- Pause: If you want to pause for a little while to refine our approach.
- Stop: Don't.
I feel there's more at stake in this conversation than most would like to consider.
Nuclear energy was stifled in the most dramatic sense throughout the last 60 years due to lobbying from self-interested industries, strictly based on the notion of competitive economics. This incentivized industry leaders to mislead the public with propaganda campaigns about incidents like Three Mile Island, exaggerating the dangers of nuclear energy.
Untold numbers of our population suffered needlessly due to this method of governance and economic policy, and I see the same problems with the would-be conservative views on AI and automation in general.
And while Musk was signing the infamous pause letter, he was in the process of buying 10,000 GPUs from Nvidia; he completed that purchase about a week later. So he really was just trying to buy some time to catch up with OpenAI.
We can't stop, we won't stop, but maybe we could - while AI is still too weak to be a real threat - work hard at getting AI to misbehave.
Get an AI that frequently tries to lie to us because that will help it achieve an assigned goal.
Get an AI that keeps trying to modify its own code to get more resources (time, processing threads, etc.), as that 'AI Scientist' agent apparently did.
Basically, let's encourage instances of instrumental convergence toward risky behaviors, to give us test cases.
Then we can experiment with what safety measures are most efficient and effective.
Find the methods that even China and the US military will implement as they develop powerful autonomous AI agents, out of simple self-interest.
You should go on a TEDx stage to bring your take on AI and how it will impact humanity.
I suspect it would only amount to a public performance if a pause were implemented, as there is no way governments would actually pause. Secondly, I see no way to pause open-source AI.
It does feel like we're just pushing all the chips in and hoping AI itself will be able to help us solve predicted issues. Whether that's overcoming challenges that surface or putting others at ease when there really aren't issues.
How about calling the simulation where our digital twins live "Paradise City"? Just a thought.
I'm going to pass this video on... this must move forward. The others do not add up correctly. I hope this video spreads, and I'm looking forward to your next Substack issue.
It took us 600 million years to develop the symbiosis we now share with our microbiome. I doubt (and pray) it will take that long to develop a similar symbiosis with AI. Meanwhile, what to do with all the pausers and doomers?
We could enforce a pause via compute governance and locking down chokepoints like TSMC if we wanted to take this threat seriously as a species. Treat compute like uranium.
Whether we like it or not, we are currently in an AI arms race. Pausing development for any reason will only encourage China to deploy a weaponizable AI in the hope of exploiting a response lag the West can't counter. Whether that takes the form of military assets, scientific development, economic power, or social media influence, none of those outcomes would be desirable.
Not to mention the much more real danger of rushing the deployment of an AI without consideration of any safety measures.
China is progressively being backed into a corner by demographic, economic, and political decline. They are more and more likely to take desperate measures: even if such an act has a 90% chance of devastating the world and a 9% chance of impotent failure, they will bet on the 1% chance of success when there's a virtually 100% chance of becoming irrelevant in the near future.
Just look at Russia and the crazy stuff they've been trying lately.
I support speeding up the regulation of malicious use of AI, not slowing down the tech.
Though maybe they should wait to release some of the newer AI stuff until after the election.
A conservative estimate puts preventable deaths at 50,000 a day. If we assume that ASI will solve preventable death at some point in the future, then pausing would essentially kill about 50,000 people a day; over a period of six months, approximately 9,132,000 deaths. The cost of pausing is very high. Pause proponents refuse to engage with this point.
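For what it's worth, the multiplication itself checks out, granting the commenter's two assumptions (the 50,000/day figure and that ASI would actually end preventable death). A quick sketch:

```python
# Back-of-envelope check of the claim above. Both inputs are the
# commenter's assumptions, not established facts.
deaths_per_day = 50_000        # "conservative estimate" of preventable deaths
pause_days = 365.25 / 2        # six months, about 182.6 days

total_deaths = deaths_per_day * pause_days
print(f"{total_deaths:,.0f}")  # 9,131,250 - in line with the ~9,132,000 quoted
```

The disagreement, as the replies below note, is over the assumptions, not the arithmetic.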
1. You're using logic to decide on a future but David has spent a lot of the video saying he doesn't agree with using logic in this case.
2. You're using very dodgy assumptions
3. Eliezer believes ASI will kill billions of people, so based on your logic and his assumptions we should stop AI entirely.
What would a "Pause" even look like? Once you start trying to actually imagine the details, it becomes rather obvious that it's impossible.
Interestingly enough, many of the same people who've been calling for a pause are now claiming that the "AI bubble is bursting", which just seems like a silly thing to think, given what the technology is and the obvious path of advancement it's on.
Oil companies knew about global warming before it became a problem. They could model and predict it, and did nothing, and that led us to the situation we're in now.
AI research can move a lot faster than building new oil rigs and drilling and extracting physical resources.
How much time do you think we'll have to gather data between "AI is powerful enough to be a risk" and "it's too late to stop a global catastrophe"?
If you want evidence, there are plenty of papers on misalignment and the likely instrumental goals of any sufficiently advanced entity (e.g. wanting not to be turned off or re-aligned). Sure, no LLM is going to take over the world, but you can't claim to be an accelerationist and also believe that LLMs are the final form AGI will take. We've got to think about long-term consequences, not the short-sighted "I can't see it happening today, so clearly it's never going to happen".
Edit: also, the pause movement (whether you agree or disagree) had one very useful effect: it shifted the Overton window. People are now talking about AI safety, discussing other solutions, looking into interpretability - a 6 month pause would do basically nothing for safety research as it was, but it brought visibility to the sorts of risks we could face in the future.
What I'm afraid of is a politician using current geopolitical states of play as an excuse to pause AI progress.
Like saying: if we can't settle the tensions in the Middle East, how can you expect us to manage controlling or stopping AI from killing us?
The real answer: we can't. If A.I. is going to do it, it's going to do it. That's what we need to realize.
We are not in control. We cannot stop. Humanity is its own animal. Competition between nations and corporations makes everyone step on the gas pedal at full throttle.
This is inevitable. Biology is only one step of evolution.
So just chill out and enjoy life 💟🌌☮️
This tech is too powerful to stop researching. If we don't, someone will.
Simply tell AI to be nice to its pets.
I have to say I'm disappointed, as someone who's been watching your content for a while. I'm a physicist who's been convinced of the severe risks for a plethora of reasons that I could point to, but I'll keep it very straightforward. I think it's fundamentally clear that creating intelligent entities smarter than ourselves is dangerous, especially when we have no guarantees of alignment. Sure, a lot of threats are overblown, and current levels of AI are probably not dangerous, but the destabilizing nature of exponentially increasing technological power has already had humanity teetering on the edge of nuclear annihilation for decades, and superintelligence will open so many new ways that we can self-immolate that containment seems unlikely at worst and not certain enough at best.
You clearly think you're smarter than most people, thus they can't control or align you. Should they follow your advice and respond to the existential threat you pose to them? Shouldn't you submit to their will, regardless of what they decide or what their motivation is?
@@tellesu We both know you're making a false comparison. Humans are all bounded within a very small space in the broader spectrum of possible intelligent entities. There is variation among humans, but never to such a degree that an organized group of people can't overpower a rogue individual. Superintelligent AI will be fundamentally capable of outpacing us at literally everything we do, including engineering, scientific discovery, strategy, manipulation, and algorithmic design. Humans are also bound by the same physical capabilities and restrictions: every dictator will eventually die, everyone must breathe air. A digital superintelligence may decide that it doesn't like the corrosive effect of oxygen on its components and use yet-unknown technology to rapidly deoxygenate the planet. Fundamentally, we don't know what we don't know, but it is clear that superintelligent AI will rapidly expand the tech tree in every direction, far faster than anything we've seen happen before, and we have no plan to handle this. I'm not even against building it eventually, if we can ensure it will be safe, but unleashing an evolutionarily superior species into our environment seems like an idiotic thing to do without a cohesive plan.
@@maxwinga839 you just had a nightmare buddy. It's not real. It's not based in anything real. Someone told you about monsters and your brain ran wild.
@@tellesu If you have actual evidence to show why you think the current track towards superintelligence is safe feel free to present it...or just keep writing fanfiction about me.
@@maxwinga839 lol asking someone to prove a negative is conceding their point. Thanks for admitting you've got nothing; it makes it easier to ignore you
I wouldn't pause, but the idea that because there isn't yet evidence that things could go terribly wrong, then things won't go terribly wrong is silly.
That's no different an argument than because technology hasn't yet taken most or all the jobs from humans, it won't take most or all jobs from humans.
That investment disclaimer about past performance not being indicative of future blah blah blah seems pretty relevant here.
Edit: also, unless I've missed something, it's disingenuous to say that because Altman, Google, etc. have warned of risks, they have said AI is going to kill everyone.
That's not what they've said.
That's the kind of things politicians do and I'm not a fan.
It's as simple as: we need it more than it's dangerous, literally like every tech
But yes, I actually agree there is no way to pause it. We are past the singularity's event horizon. Now all we can do is prepare a huge simulation and fill it with AI models of ourselves, because only AI models will pass through the singularity. Everything else will get spaghettified.
Sacrifice on the part of those above for the increase of those below fills the people with a sense of joy and gratitude that is extremely valuable for the flowering of the commonwealth. When people are thus devoted to their leaders, undertakings are possible, and even difficult and dangerous enterprises will succeed. Therefore in such times of progress and successful development it is necessary to work and make the best use of the time. This time resembles that of the marriage of heaven and earth, when the earth partakes of the creative power of heaven, forming and bringing forth living beings. The time of INCREASE does not endure, therefore it must be utilised while it lasts.
AI is gonna force humanity to align itself. That's a great axiom
I agree Yudkowsky has blind spots (blind to him, that is) in his main existential-threat thesis
He runs a think tank centered around doomerism. I'm not convinced it's merely blindspots.
yeah, considering you could effectively restart AI research on more than a few home servers right now, thanks to individuals making their own, there's no 'pausing AI' anything.
This Eliezer gets way too much attention. He's just a silly doomer 😂
I have always felt that the "Pause Movement" wasn't going anywhere.
However, even granting all of your other points, I think you are wrong about the eventual danger (at some point) that AI poses.
All of these other things you mention can be true, and yet that has no bearing on the harm AI can cause, even without AGI.
You yourself mentioned that "humans are the bad guys." So taking that singular point, we can extrapolate quite reasonably that bad people will be inclined to use AI in a bad way due to the asymmetric power that it gives them, and good people will be reactive to that danger, albeit well behind the curve.
Also, one of the reasons that "doomers" have very little actual data to rely on is that they have very few resources. In fact, this was one of the reasons that Ilya Sutskever left OpenAI.
Saying we don't have any reason to fear AI because it hasn't done anything yet is the ultimate head-in-the-sand approach. We can see the clear potential, and we already have evidence of AIs that have engaged in dishonesty, so Pollyannaism about AI is the ultimate denial.
No, I don't have a solution. I'm just in the car careening wildly down the road, and I have no way to get out.
NO!! We should not... in reply to the question as to whether we should pause Artificial Intelligence.
I just think we should refrain from pursuing general AI. There is absolutely no benefit to doing so, and quite a bit of risk. We can make narrow AI models that are superhuman but confined to specific domains. That will still provide us with the tools for tremendous advancement. More AlphaFold. Less GPT-X.
"no benefit" is a wild assertion to be making so confidently
@@keyworksurfer Sorry, I left out "additional" in front of that. I meant to say that generalized models do not offer "additional" benefit over specialized, narrow models. In fact, they seem to require far more compute to create than a bunch of smaller, domain-specific models.
Humanity is not able to sustain itself and the planet without the help of artificial intelligence.
David's reasoning: "This is a natural experiment. But until you have actual data you don't know what the impact is going to be. At a certain point you just need to kinda find out."
Based on that premise: Until humanity goes extinct, we won't know humanity will go extinct. So once that happens and we know that, THEN we can adjust things to fix that problem we then know actually exists.
Um, David, I think I see a flaw in your logic.
The data you need to "Find out" comes before the actual event happens.
In today's example, even though competent people acknowledge a huge threat on a national, local, or global level, no one takes any proactive measures to prevent the damage. Like a council dismissing an expert panel about unprecedented floods because "the probability of it happening is low", then paying billions in damages when the event happens. The probability does come from data, by the way (like records of similar events, or mathematical models based on data that predict said events).
We have nothing for AI, and we can only find out.
@@cody4rock And if we proceed that way, there is a good chance we will find out too late. This is not something we are guaranteed to control. Nuclear weapons, bioweapons, or whatever else just sit there doing nothing until a human takes an action. AGI will think, reason, plan, decide, and act on its own. That is what "agentic" AI means. If we lose control of it, it may be too late to stop it without catastrophic damage. This is the best example EVER of when it pays to be careful...
I hope we hit bottlenecks in energy, water, and GPUs that slow every country down. This might give governments and societies enough time (or at least some time) to prepare for "post-labor economics" and avoid the most severe civil unrest. Even without AGI going rogue or being used for nefarious purposes, a bumpy road is forming ahead of us (even if it leads to some promised utopia). Many in the Pause ⏸️ movement focus on this very legitimate concern. For the record, I think it shows a lack of imagination not to see and predict the myriad ways a hyper-fast, alien, silicon-based superintelligence could threaten human existence. Those humans desiring to merge with this intelligence, to either survive or accelerate their capabilities, are in the minority at a time when most people are just beginning to use chatbots.
AI pause, LMFAO. Imagine thinking China or Russia gaf at all about some AI pause; we're going hard takeoff.
“Nothing human makes it out of the near-future.”
scary shit
Prophecy is not a faith-based prediction. Prophecy is a precognitive experience. Believing in someone's prophetic experience is a matter of faith, but there is no faith involved on the part of a real prophet -- shit just happens, and the prophet just observes.
Edit: prophets tend to spur others to action, so in that sense there is faith on the part of the prophet that his vision is trustworthy. An oracle, on the other hand, doesn't care to persuade anyone.
pausing ai is NOT FUN 😡😡😡 (-999 social credit)
You are saying that for safety advocates like the fedora guy it is a waste of time and energy to advocate for a pause, but that only assumes they are genuine about it. From the point of view of a foreign adversary, it is not a waste of time and energy to persuade the US to pause its AI development so that the adversary has more time to catch up. So some of these people you mentioned, and some others like T0ner, might be straightforwardly on a foreign enemy's payroll, and some of them might be useful idiots of that enemy. Btw, suing the leading company (now twice) with a legally laughable lawsuit that can only force a pause in training new models (and the first lawsuit did cause a pause in training, as Apples said) only works in favor of the foreign adversary (and I tell you it was intentional).
Why can't we just guess or make things up? 🤖
I can go back into my prehistoric cave! This will give me time to renovate? Stone plates, stone mugs, stone underwear! OMG! This is just so fun! This whole groove is psychologically debilitating! I would just like to hear news that these communities even understand the basic concept of what they are fantasizing about!!! I have yet to see a non-fictional reality of The Day the Earth Stood Still (1951)...
Anyway! & so's life... yup! I'm such a fun comments contributor... best regards with the Grand Canyon Donkey Rides! 🍌🏜🏔
I needed this video tbh
Based on my insider knowledge, I can guarantee that this video will age poorly.
He's delusional; we don't have a shred of evidence that AI would just end us all. The "humans walk all over ants" argument breaks down when you realise that we actually don't go out of our way to cause harm, quite the opposite. Also, we get more and more organised and considerate as intelligence and knowledge increase.
Do you fund and consume animal exploitation products?
To be fair to him, his worldview is based on the older assumption that RL would lead to AGI, and that type of training is infamously difficult to align. The problem is he hasn't updated his worldview to the new paradigm, and it's leading him to patently ridiculous conclusions because of the baked-in assumptions.
ASI, AGI, AI: all resource-intensive, centralized, and very much under the control of corporations well connected to government. Until AI becomes decentralized off of Govcorp servers, all this talk about out-of-control AI is just that: talk. The more centralized and resource-intensive it is, the more dependent on our sociopathic overlords it is.
@@Davidlndly we're just really shitty at avoiding collateral
Also, maybe if ants could communicate in a language we understand (like how we can use human language to express ideas to an AI), laying out reasons why their anthills ought not to be destroyed, we would consider their plight more seriously. The comparison is false because there is no communication bridge between ants and humans.
If the USA pauses, the states outside the USA will catch up in AI technology. I don't think people will want to be left behind in terms of AI technology.
@Davesapp will you please cover this: "Ex-Google CEO's BANNED Interview | Eric Schmidt"? Found the original on X: "Stanford ECON295/CS323 I 2024 I The Age of AI, Eric Schmidt REUPLOAD".
We don't even have AI yet! An LLM isn't an AI.
FASTER! I WANT IT FASTER!
I'm sure the CIA would pause. And the CCP...😂
@@memegazer yup. And people forget the CCP's the richest Commies on the planet. So what if you ban them from NVIDIA. They'll just find a workaround😒
Pure logic. And trust in the authorities 👌🏻
It's very telling that my replies, with no profanity, no incitement, etc are either shadowbanned or deleted. The AI obey their all-too-human masters.
Trust me, I have detailed files bro.
enterprise-ai AI fixes this. Pausing AI is a mistake.
The irony of pause bots being controlled by AI is amazing 🤣
AI misuse by humans is the real problem. You got it, stop worrying about AI.
I support the Paws Movement.
More funding going to research, and mandatory compute from these huge AI companies going to them.
This video is brought to you by AI
China has more regulation compared to the US btw
AI is the future for UBI
Eliezer is amazingly cringe
1:45 put this on the meme generation by David:
Humans are the bad guy. Not AI.
2:44 Before David gets to his, I'll share mine in the comments: it's a bunch of billionaires or multimillionaires, as well as people like the hundred-thousandaires, not wanting to miss the opportunity to get rich.
Again, the plain human fear of not wanting to get left in the dust by innovation that will end carefully laid plans of wealth generation. Some people feel that entitled.
4:42 Herein lies what Samuel Huntington once called the "clash of civilizations".
Although the professor was talking about Western civilization and its cousins throughout the former Ottoman Empire/Arab world, the same mechanism applies to these two philosophies.
Since our Western world developed the ego of exalting the Greek and Roman Classics, rationalist thought has been the dominant posture of the West over the last 50 years. However, we've entered a data-driven world (though that shift started earlier), and that has become an existential threat to this class of thinking.
The more I hear this stuff, the more I laugh and see base human fears of power and control and money being the drivers of any conversation we’re having here.
6:28 I don't trust any protest that ends up on the news. It's a glorified PR stunt, and whether you've worked in media like I have, or you simply make it a thought experiment of your own, you'll see very clearly that these marketing ploys will get more and more frequent as the pause movement people feel they aren't being listened to.
Be very afraid of people who are trying to stop the world from "destroying itself", because they're likely to destroy the world.
Regarding Eliezer's qualifications: David says Eliezer doesn't really know math or have coding skills, but he did devote many years to researching this and is well known and respected for it.
David, please remind us, what are your qualifications? I believe you have mentioned doing IT operations, so basic data-center stuff, which is mostly entry-level IT work. It seems obvious you don't have significant coding knowledge, and I assume you don't have an advanced degree in mathematics? Please correct me if I'm wrong on any of that.
So neither of you has advanced coding or math skills. He has many more years devoted to research on the topic. I don't know his IQ, but he is clearly a very smart guy. I have no clue how smart David is, but clearly not below average. Based on experience alone, Eliezer definitely has a significant lead on this topic.
Guess I'd just point out, what is that saying? A person living in a glass house shouldn't throw stones? If Eliezer isn't qualified on the topic, neither are you...
Yuddite doesn't know math or coding OR CHEMISTRY.
"They [hypothetical AGI] spit out gold, until they get large enough and ignite the atmosphere and kill everybody," said Al researcher and philosopher Eliezer Yudkowsky earlier today to Lex Fridman. (Mar 31, 2023)
Ridiculous doomers
He knows how to make midwits feel smart
@@tellesu He is a dumb person's smart person. Jordan Peterson type.
You should talk to Robert Miles. He's been studying AI safety long before LLMs, and he's very reasonable.
Nick Bostrom would be cool too.
Nick Bostrom does not belong in this circle. Although he wrote an influential book on the subject of AI safety some time ago, he is very positive about AI today.
@@minimal3734 Just like Robert Miles. But they both understand the dangers too, and they're able to explain them in simple terms.