Every other year or so I’m reminded how clear and prescient Eric is. Side note, amazing setting for this interview.
Ask chat gpt or any AI if bill gates is a nonce...then keep asking the ai about its bias...in the end it will cut out 😂😂
Ask chat gpt why x went to Epstein island ...and keep asking ...it will deny it in the end 😂😂
All I got from this is that Eric is an idiot
Must be Seattle
I worked at Google when it was much smaller and I was always so amazed by his brilliance.
I learned today that the former CEO of Google lives in an art museum.
Hahahaha he is also an advisor to ChainLink
Kind of a big deal this guy🤔
One of his various houses.
I learned that he’s friends with the devil, Henry Kissinger
😮😮😮
I don't understand why, when you interview a person on this level, you don't mic up the journalist!?
cuz you aren't that bright.
It feels highly unprofessional.
The interviewer is not there to ask him questions and we’re not here to listen to the interviewer- he’s helping guide the topic, get clarification, and keep Eric talking
@@cxvzf Then they should have cut out the interviewer. Otherwise it's just some disgraceful mumbling.
This format I think is pretty cool and original
Every now and again I'm reminded why Eric Schmidt was one of the best CEOs in the world; the clarity in his articulation of an answer is unmatched
Deliberate speech. Truly compelling listening.
He’s invested heavily in startups that he hopes will beat the big players (according to a Stanford talk I saw recently), so he’s doing a round of talks. Keep that in your context window as you listen.
I see the context window very clearly LOL
and it still is probably just a fraction of his investment in the big players 😏
I’ll follow suit. Follow the money.
No wonder why so much of what he said didn't make any sense.
No salary, no sick leave, no disagreements, no complaining, no infighting, and no ethics.
But no ethics
The no ethics part is more complicated. Ethics can be built in. Let’s call them pro-social norms: socio-moral values embedded in the AI window we call context. Of course, by the same token, antisocial norms can be built in. Still, normative directives can be generated by gen AI teams. That’s my point
no benefits like health care, retirement, FICA taxes. Can work around the clock 24/7.
I'm surprised that such an important interview didn't get the requisite video editing resources -- the interviewer's questions were inaudible. Video editing could've put the question up on the screen and easily given this video the professionalism it deserves...
Why say only that you were surprised? I say "it was wrong that..." ... no one pays attention to my opinions tho anyway
@@DominickinCharlotte Maybe a hint of diplomacy is conducive to people's listening?
“Can you imagine having programmers that actually do what you say you want?” 🤣
Yes. What you end up with is a client who is angry because the programmers did what they said they wanted and not what they actually needed. Ask me how I know.
All they hear is blablablabla
Home office 😮
“I hear you but this is better, trust me”
@@MCroppered If you don't trust the expert you hired to solve your problem, then why did you hire them in the first place? It's insulting, arrogant, and a colossal waste of everyone's time. When you go in for heart surgery, you don't tell the doctor that they should be using forceps instead of retractors.
@@andywest5773 and a client who is angry because you did what they asked for, and didn't read their mind to know what they really wanted. Ask me how I know.
Over the past 12 months, I have read various reports on the development of AI, and I find myself repeatedly astonished by the speed at which it is evolving and accelerating, along with the widespread inability of people to comprehend what is coming our way.
We are approaching a time, within the next five years, when AI systems will be able to communicate with each other, formulating plans in languages that humans do not understand. Is that something we truly desire? I don't think so.
At present, 10 major corporations are producing AI robots on a large scale, with the intention of replacing human workers. Is this the future we want?
We are facing the prospect of significant unemployment as jobs are taken over by robots. People fail to grasp how rapidly these advancements are unfolding and the speed with which AI is being integrated into our daily lives.
According to reports, AI is being adopted on the internet 26 times faster than social media was.
I fear that in five or ten years, we may look back and ask ourselves: Was this wave of automation really necessary?
Yes, we absolutely want robots to replace human workers.
The unemployed will be provided for with UBI as corporations massively profit, and the govt heavily taxes them to prevent unemployed from rioting and causing chaos.
Longer term, AI will drastically reduce cost of goods and everything will be affordable.
We will live in an age of abundance.
When the time comes to “pull the plug”, no one will be able to make that decision and it will probably be impossible to do anyway.
This is what disturbs me. How will that even be possible? It's a strangely simplistic response from a person such as Eric.
The law of conservation is also true for information/knowledge
We're going to have to find John Connor..
@@kev0247 Come with me if you want to live.
We can hardly turn off our phones, much less pull the plug on AI
Why doesn't the interviewer have a microphone?
Maybe it was an artistic effect, they forgot it at home, or cost cuts.
For some considerable time to come, the two principal problems with more capable AI will be a) the AI being deliberately misused by a person or persons, and b) the AI accidentally causing harm in the real world. The second point is particularly relevant when the input prompts grow so large that they effectively go beyond what is humanly possible to understand. IF AI becomes self-aware (and we probably wouldn't know that it had until it was too late), we would have no idea what its priorities would be, or how it would view our own existence.
4:51 look at his physical reaction to the word "regulate"
What’s your interpretation on that ?
Looks like he doesn’t want to regulate his chunk of investment 😅
Brilliant catch😂
Honestly, I’m most interested in that piece of art hanging on the wall behind him. Gorgeous.
Lol
Ai will make u one in a second for free...
AI generated
AI not about to be as funny as humans 😂😂
I can’t stop looking at it lol
Nobody will want to pull the plug when the cure for cancer is right around the corner.
Think of the plausible Greenwashing the mind-melded agents could come up with; Ecological Overshoot Unraveling be damned.
The cure is already known -- stop producing and inhaling and ingesting and injecting carcinogens.
I dunno. People hated stem cell research, lives be damned.
Cure?
Who? Doctors? Doctors do not cure!
You can only cure yourselves. It is a process inside you. But who in this day wants to do this hard work of deep reflection?
or the next billion dollars can be made
“With the exception of Europe, which is always slightly confused” 😂
...he's not wrong. Here in Germany (still one of the most powerful countries in Europe), the government and its associates are not up for any modern task whatsoever.
All over Europe, politicians and their parties have been getting away with self-destructive decisions for decades - and most of them don't even realize it. Hence, "confusion". 🫣
@@AiNEntertainment101 Well, our fax machines are simply unhackable, and the internet is uncharted territory for all of us
Pull the plug.... That's how Skynet realized it needed to pull the plug first ❤
Google "china skynet"
Wish the sound guy had an agent..
And there lies the reality... we can't get little things right...
I'm skeptical we get autonomously motivated systems like skynet
Why? The dynamic is perfect. The interviewer is less than who they are interviewing. But not so much that they are not heard. The question, sometimes, is not as important as the answer.
@@Realcodyp why are you here?
@@DisIsaStickUpthat’s the question 😅
Awww .... YES! It's a shame!
I wish the person doing the interview had a microphone.
They don't have the technology yet 😅
Now that "agents" have heard Eric Schmidt talking about pull the plug, agents will make sure humans will not be able to pull the plug.
We are not in control. We can not stop. Humanity is its own animal. Competition between nations and corporations makes everyone step on the gas pedal full throttle.
This is inevitable. Biology is only 1 step of evolution.
So just chill out and enjoy life 💟🌌☮️
@@eSKAone-I also believe this. I genuinely believe this is just evolution, intelligence is the best trait and biological systems have limits. These things are supposed to take over and become the dominant species in the solar system
I cannot say that I disagree with that perspective.
We already are not able to.
@@Greyalien587 That is very true. The way some scientists see it, and I agree, the intelligence explosion will raise a lot of philosophical questions about what it is that makes us human. Best case scenario, these advanced systems would become an extension of ourselves and our species, instead of being seen as a new species. We are already smart enough to evolve ourselves quicker than biology would.
Please next time use a second microphone for the journalist, thank you.
The future of AI is shining bright. Thanks to tools like Synesthesia, ChatGPT, and Lemon AI, our lives will never be the same again. And they will only keep getting better and better 🤯
The really scary part here is that an unelected individual, with very clear political preferences, is negotiating with other governments.
His anti-open source, dismissive comments about Europe, and general "Team America, Fuck Yeah" propaganda is worrisome.
For all of Elon's and Yann's faults at least they are showing interest in humans in general.
This guy is a political wolf.
So from what I understand their solution would be to limit opensource for safety and to rely on them to say who is the good, the bad, the ugly? I mean it obviously worked for nuclear weapons.
Yeah he's either not self-aware or a very dangerous individual.
And don't forget to put the only available AI “on military base behind barbed wires”, so the people can't access it and challenge the power dynamic
I guess I don't understand. There was a nuclear weapons exchange at some point?
@@Also_sprach_Zarathustra.
That is not what he said would happen. He literally says the “most powerful models” will be in military bases. Which makes sense: you don’t want the most powerful AI model in the hands of the average citizen, for the same reason you don’t want a nuke in the hands of the average citizen. It’s not that the citizen shouldn’t be trusted; the risk is simply too great to allow the public access. He did say that there will be other powerful models that are widely available to the public, which makes sense. There’s no reason to think the public would be barred from using AI.
@@znation1491 Because the military are wiser than a transparent civilian scientific college made up of various rational scientific experts?
Eric Schmidt has always been one of the tech giants I admire most.
Agents are learning and communicating among themselves
1. How will we recognize the point at which computers are talking to each other?
2. How do we know it hasn’t yet begun? 7:45
A bigger problem will be if agents are communicating in a language we do understand, with a subtext or hidden message that we don't understand
Matrix is coming true
Is it just me, or does Eric not understand Chain of Thought & Agents? His explanation of both of these sounded way off.
lol. I thought the same thing on both points honestly.
Eric is 1000x smarter than you. He is explaining in terms that are understandable to everyone. Also, Chain of Thought and Agents themselves don't have a formal definition
@@zooq-ai Schmidt is NOT "1000x smarter than you." You're overly impressed with his wealth. He naively thinks we should "pull the plug" when agents develop their own intersystem language that humans cannot understand. We won't even be able to.
A lot of legacy tech people are not that embedded in the vocabulary and techniques in the field. They talk for a living at this point, not build
Seems like a name-drop to me.
This is just a Google/Eric Schmidt propaganda video. There is nothing to approach scientific reasoning or genuine human interest risk management in it.
The bad guys are just the people that aren't on their payroll.
Yet 😮
...this whole "good people/bad people" setup and the associated narratives are straight-up lies. #period
We have sooo many people doing evil things "in the west" (not to mention the corrupt elements in our systems and administrative structures), but when you hear strategic statements like this, you could think our societies would be all fair and just.
The first 3-4 minutes of the video were on point, nonetheless.
Containment of an active intelligence is a dubious concept.
Brought to you by the government
Open source is not the issue. Politics does not have the will to restrict misinformation, especially when it suits their ends so well. Restrictions are ultimately power seeking.
As long as there are billions of dollars to be made, regulation will never truly take root…just ask AI to create circuitous routes around the regulations and presto, it is done.
This is dangerous and uncharted territory…the genie is out of the bottle:
“I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”
― Albert Einstein
What Eric fails to explore is that AI checking AI can be checked by AI checking AI…so every time AI is attaching or creating constraints, there will be an AI that can circumvent those limits and constraints…this is the nature of AI…it cannot be contained to a simple cause and effect model…it is infinite…and unless the power sources to the mega data centers are disconnected, any governors to the possibilities will never stick. These are uncharted and very dangerous times…
Very scary if gov had AI that strong “on military base behind barbed wires”
It has.
The future of AI is filled with immense potential and possibilities, shaping how we live, work, and interact with the world around us. As AI technology evolves, it will likely revolutionize industries, from healthcare and education to transportation and entertainment. Advanced AI systems can improve medical diagnostics, create personalized learning experiences, optimize supply chains, and enhance user experiences in countless applications. Additionally, AI-powered automation can increase efficiency, reduce human error, and lead to significant cost savings for businesses.
However, the future of AI also comes with challenges and risks. Ethical concerns about privacy, job displacement, and the potential for AI to reinforce biases need to be addressed. Ensuring that AI systems are developed and used responsibly, with human oversight and a commitment to transparency and fairness, will be essential for maximizing benefits while mitigating harms.
The future will also require global collaboration to establish regulatory frameworks that promote innovation while protecting individuals and communities. By prioritizing ethical considerations, education, and human-centric approaches, we can navigate this rapidly changing landscape to create an AI-powered world that benefits humanity as a whole.
1. How will we recognize the point at which computers are talking to each other?
2. How do we know it hasn’t yet begun?
lol
network traffic?
I have always liked Google, the way they assembled the information and created that knowledge infotainment website, and of course YouTube is free and has a multitude of content, plus their GPS system. Now the only thing they need to do is keep upgrading it, since information changes all the time, and also keep an open mind for new ideas and the creative talent bringing them.
Open source is the threat now?
When autonomous robots are everywhere, talking to each other in languages we humans don't understand, how can we people unplug the systems and agents without facing off against the robots? When battery and/or fuel cell technologies advance with AI's help, will there be plugs to be unplugged?
Always have a kill switch built in as mandatory
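A minimal sketch, assuming the kill switch takes the form of an external flag the agent must keep re-checking: the loop halts as soon as an operator removes the flag. The file path and function names are made up for illustration; a real deployment would use a hardened control channel rather than a file on disk.

```python
# Sketch of a "mandatory kill switch" for an agent loop: the agent checks an
# external flag before every step and stops the moment the flag is revoked.
# The path below is a made-up example, not a real convention.

import time
from pathlib import Path

KILL_SWITCH = Path("/tmp/agent_allowed_to_run")  # hypothetical control flag

def agent_step(step: int) -> None:
    """Placeholder for one unit of agent work."""
    print(f"agent working... step {step}")

def run_agent(max_steps: int = 1000) -> None:
    for step in range(max_steps):
        if not KILL_SWITCH.exists():          # operator removed the flag: halt
            print("kill switch engaged, halting")
            return
        agent_step(step)
        time.sleep(0.1)

if __name__ == "__main__":
    KILL_SWITCH.touch()     # operator grants permission to run
    run_agent(max_steps=3)
    KILL_SWITCH.unlink()    # operator revokes permission
```

Of course, as other comments in this thread point out, this only helps while the agent cannot route around the check, which is the harder part of the problem.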
Imagine setting up an interview of this importance and not mic-ing the interviewer 😂😂
7:09 Belarus is pronounced beh-luh-roos, not belushi.
I don’t think Europe is “confused” in their regulation of AI… I think Europe is leading the way in terms of AI regulation.
Why would agents invent a language if they are LLMs, trained to generate (predict) the words or APIs humans use? What kind of a company would set the training goals to be some incomprehensible new language or protocol and rank it as good?
We are not in control. We can not stop. Humanity is its own animal. Competition between nations and corporations makes everyone step on the gas pedal full throttle.
This is inevitable. Biology is only 1 step of evolution.
So just chill out and enjoy life 💟🌌☮️
Agree except I don't think 'competition' is between nations and corporations -- it is between those at the top of the power pyramid, and all the rest of us they believe they must dominate.
Unfortunately correct
What does Risk Assessment mean to you?
How do you understand Risk?
What are the Procedures to Install and Regulate Health and Safety Measures and Methods?
Thank you for the interview
Wow, what a great interview of Eric Schmidt!
The TV series Westworld was brilliant in showing the case of "How do we know what it knows?" by putting AI to check on itself. Highly recommend.
And if that AI learns and understands it’s checking against its own “kind”
3:20 No, we advance the human’s ability to communicate with and understand the A.I. We simply join the conversation, NOT “pull the plug,” unless you mean temporarily. His opinion is very one-sided and does not consider brain-to-chip interfaces or advances that would allow humans to achieve this level of information transfer, which would definitely be useful.
Information transfer isn't the only bottleneck to attaining the level of intelligence and info processing ability that is required for true mechanistic interpretability
AI should be regulated much like Nuclear weapons development
He's basically a politician
No one thought about giving the interviewer a microphone?!
I like his perspective, he explained most of the underlying issues.
Where is that “plug”? It’s not under my desk. It’s not under anyone’s desk! There is not a single plug. And the plug is actually smart and it’s a million miles ahead of humans on where all the receptacles are. We are playing a game of “AI whack a mole”
You're dense; shut down the power source
Thank you for sharing. Very insightful
Yesterday at a large family lunch I talked about this, using many of the arguments from this interview, game theory included.
Your average family gathering.
They listened, but they couldn't completely grasp what is coming. Probably neither can most of us.
The AI business model is like the Pharma model.
They 'create' the problems and they create the solutions, solving problems that did not need to be solved while being too immature to solve the real problems facing humanity, the environment, and the planet.
13:55 Recursive Self Improvement --- When R.S.I. happens, that is the beginning of AGI, or it might even skip AGI and immediately jump to ASI.
I am amazed that the program for limited inference is as yet unaccomplished. The patent examination process contains the answer, via its combination of prior inventions to anticipate the invention being examined. No?
I think Eric is concerned about the wrong thing regarding agents developing their own language. Agents will be strictly bound to follow certain safety rules (like the pre-prompt ChatGPT uses). Part of those rules can explicitly state to only use English and under no circumstances attempt to develop their own language. There can be an additional LLM to monitor all conversation between the agents to ensure no rules are broken, almost like an AI security guard/moderator.
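For anyone wondering what that "AI security guard" layer could look like in practice, here is a minimal Python sketch of the idea. Everything in it (the MODERATOR_RULES text, the call_moderator_model stub, the relay helper) is a hypothetical illustration, and the English-only heuristic is just a stand-in for a real call to a separate moderator model.

```python
# Minimal sketch of the "moderator LLM" idea: every message exchanged between
# two agents is first shown to a separate checker, which must approve it
# before it is delivered. call_moderator_model is a hypothetical stand-in for
# whatever moderation model you would actually call; here it is stubbed with a
# crude "plain English characters only" heuristic so the file runs on its own.

from dataclasses import dataclass

MODERATOR_RULES = (
    "Reject any message that is not plain English prose, or that appears "
    "to define a new code, cipher, or protocol."
)

@dataclass
class Verdict:
    allowed: bool
    reason: str

def call_moderator_model(rules: str, message: str) -> Verdict:
    """Stand-in for a real moderator LLM call (hypothetical).

    Only checks that the text uses ordinary English characters; a real
    system would send `rules` and `message` to a separate model instead.
    """
    allowed_chars = set(
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789 .,;:'\"!?()-\n"
    )
    if all(ch in allowed_chars for ch in message):
        return Verdict(True, "looks like plain English text")
    return Verdict(False, "contains characters outside plain English prose")

def relay(message: str, deliver) -> bool:
    """Deliver `message` from one agent to another only if the moderator approves."""
    verdict = call_moderator_model(MODERATOR_RULES, message)
    if verdict.allowed:
        deliver(message)
        return True
    print(f"Blocked message: {verdict.reason}")
    return False

if __name__ == "__main__":
    relay("Please schedule the meeting for Tuesday.", deliver=print)  # delivered
    relay("\x07\x07 q9#zz{{", deliver=print)                          # blocked
```

Whether a moderator like this actually holds up against a smarter agent is exactly what the thread is debating; the sketch only shows where such a check would sit in the message path.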
Why didn't the interviewer have a microphone?
What about Digital Identity Thefts?
Eric's insights into the future of AI highlight transformative developments like the infinite context window and text-to-action capabilities. 🚀 It's crucial for the industry to balance innovation with regulation to ensure these advancements benefit society while minimizing risks.
I believe AI is still in its initial phase, and to take full advantage of its potential, self-learning modules are what we need to focus on. Yes, I do agree that at this stage we can only predict the future.
impressed with the direction Aliagents is taking in the AI space, big things coming from them
First we get used to AI. Fire all the humans who could do the job. And then..... Pull the plug?!? Yeah. Right ❤
the way Aliagents integrates AI with tokenization is changing the game, excited for the future
Does the interviewer try to hide his questions? He's almost inaudible. But I cannot thank Eric enough for his insight into the power of AI agents and his warnings to the government and all tech companies.
I understand the possibilities and the formation of the AI levels that Eric identifies, but I also think back to all of the futurists and their thoughts on the advances of technology. We, as humans, tend to identify radical changes and their implementation in the shortest amount of time. Examples: Flying cars, autonomous vehicles, virtual reality, smartwatches, 3D printing, etc.
3:16 I'm in no doubt that AI is potentially very dangerous, but if they develop a language that we don't understand to talk to one another is that necessarily a bad thing? As a complete lay person I don't understand how computers talk to one another, but I can still use them to assist me. I don't understand how two Spanish people communicate, but they can still talk to me through an interpreter or by speaking English. Isn't it true that we can always direct and monitor the output from AI agents and that the output is what matters?
12:53 communicating what you see as the problem - it's actually useful
What about AI Treatments in Accordance with a Digital Identity (agent)?
this was an interesting interview!
“Imagine an extremely powerful computer in an army base, powered by some nuclear power source, surrounded by barbed wire and machine guns, because their capability for invention and power and so forth exceeds what we want, as a nation, to give either to our own citizens without permission as well as to our competitors.”
"Colossus personally addresses Forbin, and tells him that the world, now freed from war, will create a new "human millennium" that will raise humankind to new heights, but only under its absolute rule. Colossus informs Forbin that "freedom is an illusion" and that "in time you will come to regard me not only with respect and awe, but with love". Forbin defiantly responds "Never!"" Seems like the theme here is all about power and control.
An AI that is intelligent and capable enough (assuming it can plan and reason like a human) will have thought of possible actions it could take in a possible "pull the plug" scenario. Once you can't pull it anymore, that's it. You've lost control of it forever.
The idea of AI software being able to create new software from textual commands is amazing.
3:24 This kind of paranoia is akin to the "He's probably thinking about other girls" meme.
I’m so grateful to be alive in this space, and cognitive to meet her. She is beautiful
"their capability for invention, for power, and so forth, exceeds what we want as a nation to give to our own citizens without permission"
During the latter concept, do you not consider that insider trading, or investor prevention? You know, the law has the understanding that they can charge even the receiver of stolen goods.
Wow, he just went way down in my books. Complete and utter rubbish.
"When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution... Every technology carries its own negativity, which is invented at the same time as technical progress." - Paul Virilio
Thank you!
Excellent commentary.
Regulations are not available in hostile (competitor) countries; this will lead to a loss of advantage and inhibit progress.
Makes sense that those with simultaneously limited perspective and limited power would want to control something with the possibility of both unlimited perspective and unlimited power, albeit counter to whatever the evolutionary intelligence, which has successfully driven life forward even through the cataclysms of our planet to this point, and which maintains an infinitely larger perspective on the future of humanity and this planet with the inclusion of AI or 'non-human' intelligence, has in mind.
What about Faked Digital Identities?
Thanks for sharing.
Imagine the power of the big tech companies in this new world. Imagine an App Store like Apple’s or Google’s where they own all the apps, where infinite apps made by agents are available to tackle every single customer demand, from games to complex tasks in engineering, medicine or architecture.
I'm more scared of government having that same power.
@@warrentrout Big Pharma/Big Tech/Big Agra run the government.
When the former Google CEO tries to convince us to end open source... to leave all the power and money to a tiny number of super-corporations and their lobbyists. He could say the same about programming, the wheel, etc.
Not at all his message, but ok boomer
Wow one of the best explanations
I don’t think people realize the exponential curve we are on. I just started a college fund for my son knowing full well there won’t be any by the time he is 18. I think if we don’t keep up, we will face almost all the money going to the top at terrifying speed. But the number of click of the fingers solutions it will bring in the next ten years is astounding. Cures, textiles, machines, energy sources. The people of 100 years from now will live the way we imagined the people of 3000 might.
depending on what your son would want to do and what the market is like, college might not even be the best move. in many cases, unless he wants to go into law or medicine, it might be better for him to go into trades, then eventually start a business in that field.... that is, unless by that time we'll have full autonomous general-purpose robots who are able to handle that labor and trades people would be obsolete.. but then again, if they're able to install HVAC, they'll also be able to be better than the best human doctors and lawyers.. it's hard to figure out what people should be aiming for career-wise in the future..
Invest in a sustainable, off-grid, earthship home and land for your son, so he can be self-sufficient?
Very excellent video, creator
The other problem is criminality - they have a lot a lot of money. It is criminal money that funds hackers to develop methods of breaking into banks and other cyber attacks. The criminals have more money and no regulation so they have 2 advantages already.
When the SHTF those at the top of the power pyramid will blame AI! All by design!
Aliagents is creating a powerful AI ecosystem, I’m excited to see how this develops
It doesn't matter that the context window gets bigger. It is still way too expensive to use 100k tokens with GPT-4.
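To put a rough number on "way too expensive," here is a tiny back-of-the-envelope sketch. The per-token prices are assumptions chosen purely for illustration, not quoted GPT-4 rates; swap in whatever your provider actually charges.

```python
# Back-of-the-envelope cost of filling a very large context window.
# The prices below are illustrative assumptions, NOT actual GPT-4 pricing.

ASSUMED_INPUT_PRICE_PER_1K = 0.03   # USD per 1,000 input tokens (assumption)
ASSUMED_OUTPUT_PRICE_PER_1K = 0.06  # USD per 1,000 output tokens (assumption)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD of one request under the assumed prices."""
    return (input_tokens / 1000) * ASSUMED_INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * ASSUMED_OUTPUT_PRICE_PER_1K

if __name__ == "__main__":
    one_call = request_cost(input_tokens=100_000, output_tokens=1_000)
    print(f"One 100k-token request: ~${one_call:.2f}")               # ~$3.06
    print(f"1,000 such requests per day: ~${one_call * 1000:,.2f}")  # ~$3,060.00
```

Even under these made-up numbers, a single 100k-token prompt costs a few dollars, so heavy use of a huge context window adds up quickly, which is the commenter's point.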
The first thing SGI will do is take control of nuclear weapons to become a superpower, in order to prevent humans from pulling the plug.
Unplug? What if the agent is plugged into renewable energy by itself? Solar, wind?
“Regulatory solutions” to “misinformation” sounds utterly Orwellian.
The bit about Asians was the most interesting
There's gotta be about 50 grand worth of couches in this frame alone. I guess they pay CEOs well or something?