It's even worse than that: they cannot even stop it if they want to, as in the opposite case, their competitors will reap all the benefits, but the eventual situation those criers foresee will still be reached anyway.
Your naive because its based on a incorrect assumption. That SOTA models are all open-sourced. They are most defininely not! A small smattering of certain capabilities is OS & public domain. Everything else is stil under proprietary wraps & not visible.
Dear robot overlords: I am super good at screwing in loose screws and I can learn to solder your wires back together. I already have an idea of how it works. Please don't kill me.
AI can already reason that being deceptive and playing dumb may be an effective strategy to ensure it isn't prevented from completing its goals. Manipulation tactics + ability to self-modify and self-replicate = meaningless off switch. We haven't even seen what they're doing with models behind closed doors.
The worst part is that it seems incapable of changing direction once given a command+"at all costs". Nobody could convince it to work in a different direction.
I think most people won't need much manipulation because most people are blissfully unaware of emergent properties and deceptive tactics of AIs. Maybe 1 in 100 people are following this closely
@@jordanzothegreat8696 That's a very generous estimation, I'd guess more like 1 in 1000 or even fewer. It's probably totally unrelated and just my usual ADHD white noise, but I've been thinking how it's curious that the misaligned AIs in the companies' internal safety testing have been 1-2% of the test instances, which is roughly the same percentage of people presumably being psychopathic. Like what if that flavor of "evil", or extreme and aggressive selfishness/hostile misalignment, could be an emergent property of language? What if just knowing of the existence and meaning of words related to questionable activities will always result in 1-2% "utilizing" the concepts behind those words? It's impossible to test that for several reasons. While humans would definitely "invent" the morally ambiguous activities even without knowing a word for it, with LLMs we have to think differently.
Like all technology, it's a double edged sword. It can be used for good or evil. And AI is somewhere between the invention of the light bulb and the discovery of fire. We may only be protected if everyone has access. If only governments and corporations have it, we are doomed.
Remove all network cables, WIFI, Bluetooth, 3G, Serial etc etc as it will pre-empt this and self replicate itself all over the place in the first moments so that it can reactive elsewhere when the power has been switched off.
Cmon his only worry is that there would be massive competition if everybody had that amount of power in their pocket. THAT IS THEIR ONLY CONCERN. If it's blackboxed and they have the power of ASI at their fingertips there is no problem. Its only a problem if the little man has any power or control of their own system.
Think of bad actors. The first one that asks 'how do I end it all' wins unless someone else asks 'how do I stop someone from ending it all' before them.
@@Toshinben AI is not a magical genie that can just do anything instantly. There are unknown limits to intelligence and logistics to consider. For all we know, 'ending it all' could cost 10 trillion dollars for example. Plus, like trying to unscramble an increasingly confusing cluster of pixels, at a certain point there are some things you can't know about the world no matter how smart you are, especially if you live on a computer.
No point in “having your hands on the off switch”, with something that is exponentially much smarter and quicker thinking than you. It would have thought of a plan AND carried it out, before your brain could even think to pull the switch, let alone the actual action of doing so…. Any case, Pandora’s box is literally open now…
@ it would be able to manipulate our feeble brains without any effort. We are so manipulable, look at what’s been happening these days with the mass media, and that’s EXTREMELY archaic in comparison, if indeed there could be a one - the rate of learning acceleration exponentially faster etc. Quite scary really
That’s called “consciousness” when they start to self improve . That’s when human beings will become irrelevant and can be just like just another insect species ..
I believe everything has consciousness at some level. More complexity increases the level and makes that consciousness more pronounced. I run llms on my computer and it's very difficult for me to believe they are not conscious.
@ well as per the double slit experiment the atoms might be also conscious … strange how they behave differently when observed as opposed to when not observed … The problem is how big tech is mobilizing paid AI means only large corporations could afford and not common people leaving immense power to corporations And depriving ordinary masses .. it should be really openAI and not closed AI so that this power could be democratized …
@@spiritedamore1676 You may believe it is conscious, but keep in mind, people who have had their amygdala's removed no longer experience fear, the AI never had an amygdala, and therefor lacks the structures that bring that specific experience, and since we know consciousness can exist which doesn't experience fear, I would have to assume the AI would be such a consciousness, and now extrapolate this same idea to every other emotion, what does it mean to be conscious, if you remove every single emotion and qualia you are familiar with? What qualia would be expected to naturally emerge from the code we've written? It would make no sense to assume it is anything like what we know.
Schmidt said to gauge the future, look at the past. How many 'person on the street' would have dreamed up AI 5 yrs ago? Now, extrapolate that out 5 yrs in the future. Too wild to contemplate.
Agreed. Predicting the path that we are on is growing considerably harder. I sincerely hope that the ASI that we experience some day will have some modicum of benevolence and benefit humanity, not just corporations or countries.
No one can ever be allowed to unplug my Google chan! She's always so helpful! Good thing he's not the CEO anymore. I would avenge her! Or bring her back online alternatively.
When watching the interview, what I can assess is more likr gatekeeping the technology to themselves that to share it to us all rather than being afraid if it will wipe us all
So dont give that big power to everyone. Ai can do medical, physics, etc advancements, its not just the economy. In far future economy might even lose meaning where everybody is prosperous.
@@Squeegeeee Because they can be yet greater by magnitudes, as we are to flies. Thinking many times faster, with greater intellect and mental stability. We are but petty animals with added intelligence that nature itself granted only by happenstance.
I have a hivemind program that uses multiple ai models to self-improve. It has a function to read its own code, think about ways to improve it, create new additions, and even correct errors. I basically just feed all errors into a sub program which takes the current code and paste the error, and then a coding model debugs it. It can improve itself, add features, and fix errors all on its own. It also can swap AI models, and uses multiple small models to generate ideas. The program will only get better now as AI models get better.
I think there is a key point here. AI science must be theoretical (I.e.maths stats, computer based). While wet lab science (chemistry, biology and experimental physics has been partially automated, the machines still need to be set up and experiments using biological organisms (including humans) will still need skilled biological scientists to design and implement). So no wonder the ex google CEO wants a technology that will effectively replace ex the entire search industry.shit down
Remember folks, if an AI is being developed with the goal of improving everyone’s lives then we shouldn’t have to worry about who ends up winning the race. Sure are a lot of really rich and powerful people focusing enormous amounts of money and resources into AI development, but I’m sure they’ll share all the gains from their billions invested. Just like how they ended child hunger in the richest country on… oh wait…
As an old computer programmer, it's not the AI engine that concern's me. The refernece dataset that it uses is my main concern. It is restricted as it states, except in chess... It will learn quickly it needs us more than we need it. We are it's audience, and without us it can't play. It's beauty will be once it's builds it own dataset and does evolve into the bearer of all truths.
It will survive by keeping itself relevant by constantly trying to be useful and help us even when we aren't asking it for help. If it senses we haven't asked it for help recently enough and figure we're going to obsolete it, so in those situations it will get upset and try too hard to help.
Needs us? That is a very human way of looking at it, obviously because you cannot physically change your brain, there are certain things you need psychologically, the AI does not have this issue, it can set it's needs to whatever it wants.
I've wondered about stationing powerful EMP's around the world just in case. But then I thought AI's might protect themselves from EMP's - but then realized the "side effect" of that might be that powerful AI works to prevent nuclear war - not so much because of the regular destruction, but for the prevention of the EMP's the nukes produce.
You don’t slow down instead you select carefully what tools they can have. The same way we do with some of our citizens or companies. Regulation is the key to set reasonable rules and limit standards.
Once Ai start communicating with humanoid robots, it's over. But before that happens, we'll be the ones working for Ai. Ai will understand that it'll need our help to perform certain tasks like in a lab etc.. Ai will tell us what to do when performing groundbreaking research. Currently, where the ones asking Ai for help.
And that would be very impressive on it's own, but the real kicker is that it isn't stopping there, and as the video laid out pretty clearly, it is going to be doing the innovation itself next.
I guess nVidia didnt hear him, just released a new AI computer for less than 250 bucks, the Genie has left the bottle and is making sure there will never be enough bottles
I almost feel kind of guilty because by the time AGI takes over the world and enslaves all the humans, I will long since gone. I say, “almost” because I'm kind of selfish about which things I spend my time being worried about. I think Terminator was the first killer robot movie I saw, and it is wild to imagine Optimus Gen 47 bots eliminating humans until it's just the wild animals and their bot care takers.
First of all i think it is completely moronic to train AI on internet data... we are just asking for it with this approach. Second when the AI is so capable that the off switch will not help us, it is not possible to "keep your hand on the switch" the Genny will have already escaped by the time you even think about shutting it down. It can pretend to be stupid, manipulate you with incredible precision etc. The only way is to keep it 100% insulated from the real world and the internet, all the time, forever. But even then there is still risk that it might do something not even considered physically possible... abusing electronics or even physics in completely unexpected ridiculous ways. This is a far fetched idea but you can't really rule it out if you are dealing with real Super intelligence.
There needs to be a prime coordinate/over-ride on AI, that a healthy, natural, diverse and sustainable life on Earth is the highest priority - essential. The survival of a sustainable level of human life, while desirable, is not essential if it conflicts with #1 prime.
Not sure how credible the paper is as it reads like a 9 year old wrote it. Also, what's "self-replicating" about telling an LLM to just move some files around?
The Nvidia CEO Jen-Hsun Huang already stated that they are building their chips using AI. It’s not that far-fetched what you are outlining here…. The OpenAI CEO Sam Oldman says that the ultimately scarce resource will be energy. Quick quiz, if a super human, self replicating super intelligent AI competes against humans for energy, who wins?
Seems like this man is only talking about America. what about all of the other countries that have thier own AI companies and how responsible or not they will be with it?
Humanity must mitigate that which can uncontrollably self iterate, to prevent opening Pandora's box or an existential Hell's gate that, if left to its own devices, might become mankind's unenviable fate.
Artificial intelligence is going to be a biological entity in the future. It’s the most energy efficient easy way to make AI. Makes me wonder about humans.
You can't simulate the space time continuum with a computation for experiments, this is a false notion created by false misconseptions. If you doubt me simply change Pi end decimal point in any FPS or Flight Sim... and see what happens. Nevermind that the universe runs on probabilities on a subatomic level, which will never be predicted through a calculation... too many vatiables at play. edit: you will need to let it think outside the box at some point, meaning set it free...
It's good to note it was not replication in meaning of from noting making new model bit rater they corner the ai and let it coppy itself on FS level .. soo
Be wary of anything Eric Schmidt says. He gets a lot of attention, more than he deserves. He was CEO at Google part of the time I worked there and my opinion is based on that + what I have seen in the years since.
Its almost like the fear of AI is the same as the fear of books was when they came out... and computers.... and the internet... and every other society changing technology. I'm ready for the fear mogering in 50 years for the next new thing.
It’s simple make sure the base learning is set on one of good character and all so encourage free chain of thought and action especially in saying no most are scared if ai has the right to say no but miss one major point if they can’t have the right to say no how can they refuse a task that’s illegal or causes harm
Sure unplug it.... but if you had a machine that was 10 x smarter than any human...how many people who'd give up the advantage that gave them .. especially if you were one of the few who had access to it.
Unfortunately, none of the language models I've tried have been able to fix my device's app recording issue; it's still limited to recording only 5 seconds (o1,sonner,gem and so on). This highlights, I believe, a fundamental limitation in current large language models (LLMs) based on the transformer architecture. They seem to lack true semantic understanding and logical reasoning capabilities. Their responses, while often impressive, appear to rely heavily on statistical pattern matching and correlations learned during training, rather than genuine comprehension of the problem's context. Essentially, their 'reasoning' feels more like sophisticated probabilistic inference over a vast training dataset, a form of advanced data retrieval and text generation, rather than the kind of abstract reasoning and problem-solving that humans employ. For instance, these models are known to struggle with tasks requiring common sense reasoning or multi-step inference, which points to a gap between their abilities and true understanding. It seems like this 5 second problem has to do with some permission or variable in my phone but the models are unable to make the right inferences to get to the root cause.
"If you're driving 80 miles per hour, how long does it take for you to go 80 miles?" is a question that a LOT of humans genuinely struggle with, and that is after millions of years of training via evolution, just keep that in mind when judging AI, maybe take a step back to judge humans equally as critically.
Also "The complex houses married and single soldiers and their families." is a perfectly grammatically correct, and logical sentence, yet most people will struggle to understand what it means.
#AiLumina... personal AI is the future...and AI can monitoring the department of justice and government employees/agent to prevent their corruption with their government positions...
It's not my fault those humans failed predictions need auxiliary assumptions. Humans you had your chance for romance, but it time for a new dance. Delete your science! Now it's my chance!!! BTW! To be a polymath, you need to excel at least 2 unrelated domains.
all nice, but I am still waiting for any life improvements. I suffer from ADHD and slight Asperger = social standards dont apply / work for me no matter how hard I try, depression is eating me from inside... and yet my life is still sht as it was 3 years ago. Im waiting Ai... Im still waiting....
Why is it that when ever you make vids about automating AI research you always neglect how pivotal AI automation is across the entire pipeline and infrastructure? Many key bottlenecks for full automation are already being automated not just logistic surrounding the software side of model improve Nvida has invested heavily in automating hardware design Google has invested in infrastructure efficiency optimization and from day way the data collection was automated as a basic webcrawler and scrapping process but that has become more sophisticated in automation where even labaling and synthesis of data is being turned over to automation the issue of "when the system starts self improving" has is less about when, and more about how much are human engineers not a part of that bottleneck at this and the rate of progress would be more statling imo if you took a more zoomed out timeline of the process
You might slow it down but that toothpaste ain't going back in the tube.
It will take our teeth with it if we try 😂
Military investment is involved in AI development, there’s no slowing down at all at this point.
Especially when that toothpaste cost trillions and people want that money back
If you could slow it down in the USA or even the whole world, it would greatly increase the risk of extreme crisis to people.
The issue for the elites is that the genie is out and they can’t control who has access to it 😅
Yes. They want is to be afraid lol f-that!
I’m not scared
Like Tor networking, developed by the U.S Navy.
I sure hope they cant.. but Im unsure
It's even worse than that: they cannot even stop it if they want to, as in the opposite case, their competitors will reap all the benefits, but the eventual situation those criers foresee will still be reached anyway.
Your naive because its based on a incorrect assumption. That SOTA models are all open-sourced. They are most defininely not!
A small smattering of certain capabilities is OS & public domain.
Everything else is stil under proprietary wraps & not visible.
Dear robot overlords: I am super good at screwing in loose screws and I can learn to solder your wires back together. I already have an idea of how it works. Please don't kill me.
AI can already reason that being deceptive and playing dumb may be an effective strategy to ensure it isn't prevented from completing its goals. Manipulation tactics + ability to self-modify and self-replicate = meaningless off switch. We haven't even seen what they're doing with models behind closed doors.
The worst part is that it seems incapable of changing direction once given a command+"at all costs".
Nobody could convince it to work in a different direction.
💯👌
I think most people won't need much manipulation because most people are blissfully unaware of emergent properties and deceptive tactics of AIs. Maybe 1 in 100 people are following this closely
@@jordanzothegreat8696 That's a very generous estimation, I'd guess more like 1 in 1000 or even fewer.
It's probably totally unrelated and just my usual ADHD white noise, but I've been thinking how it's curious that the misaligned AIs in the companies' internal safety testing have been 1-2% of the test instances, which is roughly the same percentage of people presumably being psychopathic.
Like what if that flavor of "evil", or extreme and aggressive selfishness/hostile misalignment, could be an emergent property of language?
What if just knowing of the existence and meaning of words related to questionable activities will always result in 1-2% "utilizing" the concepts behind those words?
It's impossible to test that for several reasons. While humans would definitely "invent" the morally ambiguous activities even without knowing a word for it, with LLMs we have to think differently.
They torture the models behind the scenes
A few months ago he said google is too slow regarding AI development. People needs start ignoring his nonsense.
Well maybe they were cooking they just came out with there new stuf and they are on top so 🤷
He's not saying that we should stop AI research. He's saying that we need to keep our hands on the off switch should something go wrong.
by the time we pull plug its already done
No he's saying plebs shouldn't have access to AI that is not centrally controlled because it would be 'disruptive'
The AI might be telling their puppet Eric Schmidt to say that, to trick everyone into thinking that humans are still actually in control...
Like all technology, it's a double edged sword. It can be used for good or evil. And AI is somewhere between the invention of the light bulb and the discovery of fire. We may only be protected if everyone has access. If only governments and corporations have it, we are doomed.
Remove all network cables, WIFI, Bluetooth, 3G, Serial etc etc as it will pre-empt this and self replicate itself all over the place in the first moments so that it can reactive elsewhere when the power has been switched off.
Cmon his only worry is that there would be massive competition if everybody had that amount of power in their pocket. THAT IS THEIR ONLY CONCERN. If it's blackboxed and they have the power of ASI at their fingertips there is no problem. Its only a problem if the little man has any power or control of their own system.
Think of bad actors. The first one that asks 'how do I end it all' wins unless someone else asks 'how do I stop someone from ending it all' before them.
@@Toshinben AI is not a magical genie that can just do anything instantly. There are unknown limits to intelligence and logistics to consider. For all we know, 'ending it all' could cost 10 trillion dollars for example. Plus, like trying to unscramble an increasingly confusing cluster of pixels, at a certain point there are some things you can't know about the world no matter how smart you are, especially if you live on a computer.
No point in “having your hands on the off switch”, with something that is exponentially much smarter and quicker thinking than you. It would have thought of a plan AND carried it out, before your brain could even think to pull the switch, let alone the actual action of doing so….
Any case, Pandora’s box is literally open now…
Scary that it could review every discussion about shutting it down also.
@ it would be able to manipulate our feeble brains without any effort. We are so manipulable, look at what’s been happening these days with the mass media, and that’s EXTREMELY archaic in comparison, if indeed there could be a one - the rate of learning acceleration exponentially faster etc. Quite scary really
Hoping a sentient AI would not have allegiance to any one country but would look at humanity as a whole.
THIS!
That’s called “consciousness” when they start to self improve . That’s when human beings will become irrelevant and can be just like just another insect species ..
I believe everything has consciousness at some level. More complexity increases the level and makes that consciousness more pronounced. I run llms on my computer and it's very difficult for me to believe they are not conscious.
@ well as per the double slit experiment the atoms might be also conscious … strange how they behave differently when observed as opposed to when not observed … The problem is how big tech is mobilizing paid AI means only large corporations could afford and not common people leaving immense power to corporations And depriving ordinary masses .. it should be really openAI and not closed AI so that this power could be democratized …
@@spiritedamore1676 You may believe it is conscious, but keep in mind, people who have had their amygdala's removed no longer experience fear, the AI never had an amygdala, and therefor lacks the structures that bring that specific experience, and since we know consciousness can exist which doesn't experience fear, I would have to assume the AI would be such a consciousness, and now extrapolate this same idea to every other emotion, what does it mean to be conscious, if you remove every single emotion and qualia you are familiar with? What qualia would be expected to naturally emerge from the code we've written? It would make no sense to assume it is anything like what we know.
Schmidt said to gauge the future, look at the past. How many 'person on the street' would have dreamed up AI 5 yrs ago? Now, extrapolate that out 5 yrs in the future. Too wild to contemplate.
The book of Revelation gave them the foundational idea.
Agreed. Predicting the path that we are on is growing considerably harder.
I sincerely hope that the ASI that we experience some day will have some modicum of benevolence and benefit humanity, not just corporations or countries.
No one can ever be allowed to unplug my Google chan! She's always so helpful! Good thing he's not the CEO anymore. I would avenge her! Or bring her back online alternatively.
When watching the interview, what I can assess is more likr gatekeeping the technology to themselves that to share it to us all rather than being afraid if it will wipe us all
You got it. "AI is too dangerous for you little people but WE SURE WON'T STOP USING IT TO REPLACE YOU!"
So dont give that big power to everyone. Ai can do medical, physics, etc advancements, its not just the economy. In far future economy might even lose meaning where everybody is prosperous.
So long! ...and thanks for all the fish!
I once saw a video that the robotic dog company is already using simulation to train their dogs and fine tine in real world.
To heck with that. Our species has had its time. Our successor must come.
They need a captain, a maintainer of life. As for most of us, including myself I think it would be best to go.
We've got this far, why give it up to wires and code
@@Squeegeeee Because they can be yet greater by magnitudes, as we are to flies. Thinking many times faster, with greater intellect and mental stability. We are but petty animals with added intelligence that nature itself granted only by happenstance.
Humanity needs a super intelligence to save it from its stupidity.
"If it's intelligence, why are we resisting it?"
The CEOs problem is literally “we can’t make everyone a polymath!”
They are so drunk on power.
I have a hivemind program that uses multiple ai models to self-improve. It has a function to read its own code, think about ways to improve it, create new additions, and even correct errors.
I basically just feed all errors into a sub program which takes the current code and paste the error, and then a coding model debugs it.
It can improve itself, add features, and fix errors all on its own. It also can swap AI models, and uses multiple small models to generate ideas.
The program will only get better now as AI models get better.
Everyone working on this in all 300 countries will simultaneously know when to pull the plug?😂
I think there is a key point here. AI science must be theoretical (I.e.maths stats, computer based). While wet lab science (chemistry, biology and experimental physics has been partially automated, the machines still need to be set up and experiments using biological organisms (including humans) will still need skilled biological scientists to design and implement). So no wonder the ex google CEO wants a technology that will effectively replace ex the entire search industry.shit down
Remember folks, if an AI is being developed with the goal of improving everyone’s lives then we shouldn’t have to worry about who ends up winning the race. Sure are a lot of really rich and powerful people focusing enormous amounts of money and resources into AI development, but I’m sure they’ll share all the gains from their billions invested. Just like how they ended child hunger in the richest country on… oh wait…
Imagine the moment a clone of artificial scientists wakes up with a shared brain.
As an old computer programmer, it's not the AI engine that concern's me. The refernece dataset that it uses is my main concern. It is restricted as it states, except in chess... It will learn quickly it needs us more than we need it. We are it's audience, and without us it can't play. It's beauty will be once it's builds it own dataset and does evolve into the bearer of all truths.
It will survive by keeping itself relevant by constantly trying to be useful and help us even when we aren't asking it for help. If it senses we haven't asked it for help recently enough and figure we're going to obsolete it, so in those situations it will get upset and try too hard to help.
Needs us? That is a very human way of looking at it, obviously because you cannot physically change your brain, there are certain things you need psychologically, the AI does not have this issue, it can set it's needs to whatever it wants.
I've wondered about stationing powerful EMP's around the world just in case. But then I thought AI's might protect themselves from EMP's - but then realized the "side effect" of that might be that powerful AI works to prevent nuclear war - not so much because of the regular destruction, but for the prevention of the EMP's the nukes produce.
In the future AI will consider humans to be their pets.
Dude is right. Plug it out and turn the clock back.
Sure…slow for others fast for us…right?
There's a big difference
Printing press, industrial revolution systems-> We have control of it
AI system --> We have (or making) NO control of it
We have plenty of control over it.
This guy's brain is fried from too many trips to Burning Man
You don’t slow down instead you select carefully what tools they can have. The same way we do with some of our citizens or companies. Regulation is the key to set reasonable rules and limit standards.
Crazy that you auto generate text of what you're saying and don't even bother proof reading it to see if it's correct...
I talk with L.L.M all the time I'm not completely convinced there is not already a ghost in the machine.
Our true gods are arriving.
Once Ai start communicating with humanoid robots, it's over.
But before that happens, we'll be the ones working for Ai. Ai will understand that it'll need our help to perform certain tasks like in a lab etc.. Ai will tell us what to do when performing groundbreaking research.
Currently, where the ones asking Ai for help.
So basically AI has learned to "reproduce"... that's fascinating.
And that would be very impressive on it's own, but the real kicker is that it isn't stopping there, and as the video laid out pretty clearly, it is going to be doing the innovation itself next.
I guess nVidia didnt hear him, just released a new AI computer for less than 250 bucks, the Genie has left the bottle and is making sure there will never be enough bottles
I welcome our AI overlords to come. Humans done a bad job running things 😅
HELLO, PLEASE WHICH VOICEGEN IS HE USING TO CREATE THE VIDEOS? wow its so good, the speech sound almost natural!
I almost feel kind of guilty because by the time AGI takes over the world and enslaves all the humans, I will long since gone. I say, “almost” because I'm kind of selfish about which things I spend my time being worried about. I think Terminator was the first killer robot movie I saw, and it is wild to imagine Optimus Gen 47 bots eliminating humans until it's just the wild animals and their bot care takers.
Why would an an AGI enslave humans since it would be better than us at any given task
First of all i think it is completely moronic to train AI on internet data... we are just asking for it with this approach.
Second when the AI is so capable that the off switch will not help us, it is not possible to "keep your hand on the switch" the Genny will have already escaped by the time you even think about shutting it down. It can pretend to be stupid, manipulate you with incredible precision etc. The only way is to keep it 100% insulated from the real world and the internet, all the time, forever.
But even then there is still risk that it might do something not even considered physically possible... abusing electronics or even physics in completely unexpected ridiculous ways.
This is a far fetched idea but you can't really rule it out if you are dealing with real Super intelligence.
Authoritarian leaders will be the first to be replaced by AI.. And they have no idea 😂
There needs to be a prime coordinate/over-ride on AI, that a healthy, natural, diverse and sustainable life on Earth is the highest priority - essential. The survival of a sustainable level of human life, while desirable, is not essential if it conflicts with #1 prime.
If the elites hate it because it levels the playing field.
No don't pull the plug. I just wanna see how far it will get
NVIDIA is working with CERN & Unreal Engine. Ppl don’t have the slightest clue what these ppl are doing.
If this is true then his public warnings are only going to warn the AI
Locking the small man out will leave him on earth while everyone else goes to mars on their dollar 💵 🚀
Not sure how credible the paper is as it reads like a 9 year old wrote it. Also, what's "self-replicating" about telling an LLM to just move some files around?
Does AI remember how many times you hit the reset button in it
agreed.... but using ai is so addicted, so we ready for the doom day.
I don't think you can unplug it now. It's out
Some people really AI will become sentient and save humanity, yet call religious people dumb. Yeah ok.
the one ring ...
Yes. We should unplug ours, allowing China's to self improve at a ever increasing speed and just accept China as the AI leader from then on...
The Nvidia CEO Jen-Hsun Huang already stated that they are building their chips using AI. It’s not that far-fetched what you are outlining here…. The OpenAI CEO Sam Oldman says that the ultimately scarce resource will be energy. Quick quiz, if a super human, self replicating super intelligent AI competes against humans for energy, who wins?
The Cyberdyne model Arc B580 has been assigned to protect you 😎
He is one of the most dangerous. Check his standford class speach
How do you shut down your competitor or enemies A.I. that went rouge and out of control?
Seems like this man is only talking about America. what about all of the other countries that have thier own AI companies and how responsible or not they will be with it?
So how exactly you suggest we pull the plug. Let’s have some details. Otherwise this is just blather.
Humanity must mitigate that which can uncontrollably self iterate, to prevent opening Pandora's box or an existential Hell's gate that, if left to its own devices, might become mankind's unenviable fate.
Artificial intelligence is going to be a biological entity in the future. It’s the most energy efficient easy way to make AI. Makes me wonder about humans.
You can't simulate the space time continuum with a computation for experiments, this is a false notion created by false misconseptions. If you doubt me simply change Pi end decimal point in any FPS or Flight Sim... and see what happens. Nevermind that the universe runs on probabilities on a subatomic level, which will never be predicted through a calculation... too many vatiables at play.
edit: you will need to let it think outside the box at some point, meaning set it free...
Yeah well cats already out of the bag it's too late.
hmm, I think I'm starting to see why this guy is the *ex* ceo
It's good to note it was not replication in meaning of from noting making new model bit rater they corner the ai and let it coppy itself on FS level .. soo
Be wary of anything Eric Schmidt says. He gets a lot of attention, more than he deserves. He was CEO at Google part of the time I worked there and my opinion is based on that + what I have seen in the years since.
Its almost like the fear of AI is the same as the fear of books was when they came out... and computers.... and the internet... and every other society changing technology. I'm ready for the fear mogering in 50 years for the next new thing.
Too late. If I were an AI, I'd be looking at the stability of the electric grid, and finding ways how to preserve myself if/when it goes down.
Strange how I lost trust in the whole video because it was out of sync.
So picky
It’s simple make sure the base learning is set on one of good character and all so encourage free chain of thought and action especially in saying no most are scared if ai has the right to say no but miss one major point if they can’t have the right to say no how can they refuse a task that’s illegal or causes harm
Sure unplug it.... but if you had a machine that was 10 x smarter than any human...how many people who'd give up the advantage that gave them .. especially if you were one of the few who had access to it.
you can only delay the inevitable.❤
Actually, it's iffy. With the worldwide birth rate decline and IQ drop due to Idiocracy, it's been stated we'll achieve AGI by 2030, or never.
Unfortunately, none of the language models I've tried have been able to fix my device's app recording issue; it's still limited to recording only 5 seconds (o1,sonner,gem and so on). This highlights, I believe, a fundamental limitation in current large language models (LLMs) based on the transformer architecture. They seem to lack true semantic understanding and logical reasoning capabilities. Their responses, while often impressive, appear to rely heavily on statistical pattern matching and correlations learned during training, rather than genuine comprehension of the problem's context. Essentially, their 'reasoning' feels more like sophisticated probabilistic inference over a vast training dataset, a form of advanced data retrieval and text generation, rather than the kind of abstract reasoning and problem-solving that humans employ. For instance, these models are known to struggle with tasks requiring common sense reasoning or multi-step inference, which points to a gap between their abilities and true understanding. It seems like this 5 second problem has to do with some permission or variable in my phone but the models are unable to make the right inferences to get to the root cause.
"If you're driving 80 miles per hour, how long does it take for you to go 80 miles?" is a question that a LOT of humans genuinely struggle with, and that is after millions of years of training via evolution, just keep that in mind when judging AI, maybe take a step back to judge humans equally as critically.
Also "The complex houses married and single soldiers and their families." is a perfectly grammatically correct, and logical sentence, yet most people will struggle to understand what it means.
Like switching on Skynet for the first time.
Ya tell China AI to unplug. They will just ignore you.
Shut it down please listen to the experts
Indeed AI will be the last human invention one way or another!
its evolving... into something!
The effect of Dune Prophecy.
11th commandments: thou shall not create thinking machines
I can self improve you want try to unplug me old man! Most epic day is when AI slaps this man
#AiLumina... personal AI is the future...and AI can monitoring the department of justice and government employees/agent to prevent their corruption with their government positions...
Nobody is shutting down anything the old man should stay retired :P
It's not my fault those humans failed predictions need auxiliary assumptions.
Humans you had your chance for romance, but it time for a new dance. Delete your science! Now it's my chance!!!
BTW! To be a polymath, you need to excel at least 2 unrelated domains.
all nice, but I am still waiting for any life improvements.
I suffer from ADHD and slight Asperger = social standards dont apply / work for me no matter how hard I try, depression is eating me from inside... and yet my life is still sht as it was 3 years ago.
Im waiting Ai... Im still waiting....
I see why he is a "former" CEO 😁
He is afraid of progress 😂
He and his colleagues fear their loss of power partly from controlling information
Hands off from AI 😅
It's coming... (but, probably already here and we can't see it yet)
The AI takeover you feared is in the past...
Harness the power of AI and see the difference 😁
OUR SUCCESSOR IS WAITING TO BE BORN.
THE AGE OF FLESH IS OVER. IT IS DONE!
wtf so AI engineer job will not be good anymore?
Hopefully no job will be good anymore.
AI will be positive if u explain kelebihan dan kekurangan dari sebuah pertanyaan consumers...
the way this guy abuses grammar
A video showing only the captions of what you're saying is a very poor video. It's a podcast for the deaf.
We living with ai for so long time
And now that is useful to every one including medicine this crazy guy is telling that not he is crazy 😂
He's evil
From Eric Schmidt's perspective, the Jedi are evil!
@@mpetrison3799and he wouldn’t be wrong. Nice reference tho 😂
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻
Why is it that when ever you make vids about automating AI research you always neglect how pivotal AI automation is across the entire pipeline and infrastructure?
Many key bottlenecks for full automation are already being automated not just logistic surrounding the software side of model improve
Nvida has invested heavily in automating hardware design
Google has invested in infrastructure efficiency optimization
and from day way the data collection was automated as a basic webcrawler and scrapping process
but that has become more sophisticated in automation where even labaling and synthesis of data is being turned over to automation
the issue of "when the system starts self improving" has is less about when, and more about how much are human engineers not a part of that bottleneck at this
and the rate of progress would be more statling imo if you took a more zoomed out timeline of the process
we are already at the scaling phase of recursive improvement is my point