The Transformative Potential of AGI - and When It Might Arrive | Shane Legg and Chris Anderson | TED
- Published May 3, 2024
- As the cofounder of Google DeepMind, Shane Legg is driving one of the greatest transformations in history: the development of artificial general intelligence (AGI). He envisions a system with human-like intelligence that would be exponentially smarter than today's AI, with limitless possibilities and applications. In conversation with head of TED Chris Anderson, Legg explores the evolution of AGI, what the world might look like when it arrives - and how to ensure it's built safely and ethically.
If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership
Follow TED!
Twitter: / tedtalks
Instagram: / ted
Facebook: / ted
LinkedIn: / ted-conferences
TikTok: / tedtoks
The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design - plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
Watch more: go.ted.com/shanelegg
TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organiz.... For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com
#TED #TEDTalks #gemini #agi #ai - Science & Technology
As a sick person struggling with crippling illnesses, and bedridden for many years, I sincerely hope AGI can be achieved asap. It's my best chance at having something remotely close to an actual life at some point.
We will have it by September 2024.
We just need to build complete mathematical capabilities into LLMs.
I am sorry to hear that, mate. I deeply hope it happens as soon as possible myself, as my parents are getting old and I cannot picture a world without them. Hopefully, our dream will come true in the somewhat near future.
@@coolcool2901 It will be so funny if it doesn't happen in September 2024! 😂
@@krishanSharma.69.69f I'll just shift the goal post then. I am flexible not rigid.
But it will more than likely happen. To get AI to AGI, it needs to understand the complete mathematical matrix, which will be accomplished next year. Maths is the language of the universe and absolute logic.
Current LLMs understand language but don't know maths, and that's why they're not AGI. Maths is required for a self-improving AGI system.
In terms of the human experience, I suspect one of the first places we will find AGI really transforming our experience in a positive way will be with the aging population. The baby boom generation is really perfectly placed to benefit from AGI. I remember when I was in a college social science class learning of the concern over how society would deal with baby boomers becoming old and consequently reaching the stage in their (our) lives where we require more support, both physically and cognitively. I find it fascinating contemplating our cellphone avatars carrying on conversations stimulating our brains, reminding us to take our medicines, making recommendations to us that are personal and comforting, as well as assisting us when we become confused or disoriented. A couple of simple scenarios come to mind: coming out of a store and being confused as to where we parked our car, and our assistant reassuring us and showing us where we parked it; or our assistant assessing that we have not had human interaction for a period of time and making suggestions to us that involve social interactions, or even texting our care provider alerting them that we are becoming "shut in".
FEEL THE AGI
Fascinating. AGI will be smart enough to understand how AGI works. So it will be able to improve its own capabilities. AGI will then be smarter, and so will improve further. So AGI will be a constantly self-improving system. It will leave humans behind very quickly. We will cease to understand a lot of what AGI is doing. Secondly, there is an inherent unpredictability in complex cognitive systems. Absolutely fascinating!
It's called ASI then: Artificial Superintelligence.
And some believe that the transition from AGI to ASI will be a matter of days, hours, or even minutes. With a large number of self-improvements done by the system in a very short timeframe.
Human-level AGI is a false concept. If a machine has all the cognitive abilities of a human, it will already be superior to humans. Humans don't have perfect memory, perfect math skills, unlimited stamina, the ability to be copied and work in tandem, or the ability to go through huge amounts of data in seconds. AGI will have these capabilities on day 1.
good take 👍🏻
Potentially a terrible idea. I hope I'm wrong, but I wish we waited until we had a better understanding of consciousness and how a mind works in general before going for this kind of tech.
AGI will rival the invention of the wheel in greatness
It will be waaay bigger
It will rival the "invention" of humans themselves in intelligence.
Butlerian Jihad
@@gregbors8364 The Techno-Religionists will hate you for bringing that up. Hopium in the positive use of AGI abounds, while how government/business will use it to control people for the elites is never discussed.
Fascinating insights on AGI's potential by Shane Legg. Balancing innovation with ethics is crucial for a responsible and impactful future.
Shane is very insightful here. Very clear communicator and really demystifies what AGI is and its implications.
Useful insight had to be forced out of him, and it honestly wasn't anything that wasn't already obvious.
00:04 🌐 Shane Legg's interest in AI sparked at age 10 through computer programming, discovering the creativity of building virtual worlds.
01:02 🧠 Being dyslexic as a child led Legg to question traditional notions of intelligence, fostering his interest in understanding intelligence itself.
02:00 📚 Legg played a role in popularizing the term "artificial general intelligence" (AGI) while collaborating on AI-focused book titles.
03:27 📈 First made in 2001, Legg's prediction of a 50% chance of AGI by 2028 still stands, owing to computational growth and vast data potential.
04:26 🧩 AGI defined as a system capable of performing various cognitive tasks akin to human abilities, fostering the birth of DeepMind.
05:26 🌍 DeepMind's founding vision aimed at building the first AGI, despite acknowledging the transformative, potentially apocalyptic implications.
06:57 🤖 Milestones like Atari games and AlphaGo fueled DeepMind's progress, but language models' scaling ignited broader possibilities.
08:50 🗨 Language models' unexpected text-training capability surprised Legg, hinting at future expansions into multimedia domains.
09:20 🌐 AGI's potential arrival by 2028 could revolutionize scientific progress, solving complex problems with far-reaching implications like protein folding.
11:44 ⚠ Anticipating potential downsides, Legg emphasizes AGI's profound, unknown impact, stressing the need for ethical and safety measures.
14:41 🛡 Advocating for responsible regulation, Legg highlights the challenge of controlling AGI's development due to its intrinsic value and widespread pursuit.
15:40 🧠 Urges a shift in focus towards understanding AGI, emphasizing the need for scientific exploration and ethical advancements to steer AI's impact positively.
What's that plugin called?
This is my best chance for not going blind. I hope they get there quickly, as my time is limited.
Well, if you do go blind then it could probably still fix it, and then the other possibility is that everyone dies. So either way in the end you won't be blind.
@@KuZiMeiChuan Everyone will die even if we don't develop AGI, so no problem either way.
We need an AGI panel of judges. I think AGI can be impartial, and an impartial panel of independent AGIs will change the world.
Yep, we'll never have a subjective outcome in a figure skating event or a robbery in combat sports ever again! Lol
Do you know about ASI 😈
The AI judges would follow the constitution then, unlike the human ones
Very, very insightful little video. Absolutely fascinating…
0:00 - 2:00:
Introduction of Shane Legg and his background in computer science and artificial intelligence.
Legg's early interest in AI and his experience with dyslexia.
Coining the term "artificial general intelligence" (AGI) in 2001.
2:00 - 4:00:
Legg's prediction of a 50% chance of AGI by 2028 and his current stance on the timeline.
Definition of AGI as a system that can do all the cognitive tasks that humans can do.
4:00 - 6:00:
Founding of DeepMind and the company's goal of achieving AGI.
Legg's belief in the transformative potential of AGI and the importance of understanding its risks.
6:00 - 8:00:
The development of AlphaFold and its potential impact on scientific research.
Legg's vision for a future where human intelligence is aided and extended by machine intelligence.
8:00 - 10:00:
Potential risks associated with AGI and the need for careful development and regulation.
Legg's call for more research and understanding of AGI to ensure its safe and ethical development.
10:00 - 12:00:
Discussion of the potential for AGI to solve some of humanity's most pressing challenges.
Legg's optimism for the future of AI and its potential to create a golden age for humanity.
12:00 - 14:00:
Legg's concerns about the potential for AGI to be used for malicious purposes.
The need for international cooperation to ensure the responsible development of AGI.
14:00 - 16:00:
Legg's call to action for scientists, policymakers, and the public to engage in the conversation about AGI.
Closing remarks and Q&A session
Thks
An AI summary for a video talking about the future of AGI? Really?
Always important to remind ourselves that intelligence and wisdom are two different realms.
Yup…I think it's probably already here….but we are not told…. They are slowly getting US ready….hopefully it will be safe for all of us🙏🧡
That's something that less intelligent people like to tell themselves to feel better.
11:25-11:35: these 10 seconds blew my mind.
11:32 that gave me shivers. I'm super excited and terrified at the same time!
Great episode. Thanks.
This dude is happy to have created a black hole and asks us to be open-minded about it.
Three insane quotes:
11:28 "it's like the arrival of human intelligence in the world. This is another intelligence arriving in the world"
11:38 "we do not fully understand all the consequences and implications of this"
What could go wrong?
12:55 "superintelligence could design and engineer a pathogen"
Great and he's optimistic.
What sounds cooler:
Humans died of global warming
Or
Humans died because AI robots killed them
God complex
They're too excited to understand how it actually works and all the consequences it may cause. The dude just spilled the water.
We appreciate how much insight and useful information we receive from talks like these. We hope to see more in the upcoming future.
AGI will be the most powerful tool humanity has ever seen, and it will definitely be weaponised. There are a million ways this can go wrong, and the genie is already out of the bottle, so we just have to hope that it'll come as late as possible.
Won't AGI be able to introspect and research its own neural networks to understand how it works? That might be our only chance of understanding how they produce the results they do.
Understanding how it works will not prevent disaster. Some understood how the space shuttle worked, but that didn't allow them to foresee the ways it would fail, catastrophically, before it happened. AGI will be many times more complicated than the space shuttle was.
There is this notion of an AGI boom: given that we were able to make a higher intelligence, it can continue the trend until... we don't know 🤷♂️
That will help for sure, but you are asking it to explain its motivations without knowing its motivations. It could lie.
It's incredible how easy it is for some people to talk about gambling the future of every man, woman and child of every culture and nationality. Such moral clarity!
If AGI can create AGI, then someone will inevitably create unethical and dangerous AGI with nefarious intentions. We need to prepare for when that will happen, just as much as we must try to make our own AGI safe and ethical.
Ethics, morals, good or bad don't exist. They are all just concepts and can vary vastly.
@@bestoftiktok8950 How can they vary and not exist?
@@bestoftiktok8950 Would you say the same thing if someone threatened to harm you and people you care about? Or would you suddenly realise the value of morals, ethics and justice?
The same could be said about you or any other intelligent human being; however, we do not assume that we have blind will and follow anyone simply because they ask us to. I think we need to re-understand the concept of AGI.
That’s assuming that it’s still able to be used as a tool, and not that it becomes sentient. You can’t “use” a super intelligence (Ex Machina)
When I was younger and read Asimov, I thought the Laws of Robotics were a nice popularization of concepts extremely hard to implement in code. Now, with LLMs, it seems a system may actually be able to 'understand' them and somehow enforce them. It is such a fundamental paradigm shift in AI and regular computer science that we really need to catch a breath to reflect on this and completely change our programming designs... but those AIs don't really think yet, even though they make a very good impression of it.
With AGI I think more of Asimov’s Psychohistory in the Foundation series.
I think one distinguishing feature for "next level" AI would be volition. Manifest as curiosity, self training, .... not sure. But something that works without needing constant human prompts.
The rapid progress towards AGI would be really comforting and inspirational if it wasn't for the fact that global corporations would DEFINITELY use it to increase their theft, oppression, and dominance.
Happy to see Google talking about the future of OpenAI here and how it will change the world as we know it now.
Remember that, as of this point, Google is only claiming what it may have in the future, while many other companies are leaving Google in the dust.
I think Gemini has something to say about this. It outperforms GPT 4 in all but one of the metrics.
@@nicklennox311 The metrics are weird. They used different prompting techniques for Gemini and GPT-4.
@@nicklennox311 We can only hope, yet you need to keep in mind that Google did the exact same PR on Bard, showing all the cool stats before its release, only to fail when showing the actual product. As well, Google will go out of its way to tell the world how bad AGI will be, as it did a few months back about AI in general, due to having nothing.
@@OZtwo Yeah, that's really true. After digging a bit more and reading parts of the actual paper, I see how they did the seemingly "live" videos, and I feel it's a bit dishonest. But I guess they have to clickbait to make headlines.
Gemini hallucinates, gives out wrong information, and can't even solve basic logic problems.
To those who are fearful of AGI and the threats it poses to our society: don't let that fear run off with you. Our civilization is already facing apocalyptic consequences in climate change, shrinking populations in industrialized countries, and stagnation in material sciences. AGI is needed if we're going to make it another century.
shout out to Ben Goertzel, check his books. especially "the end of the beginning"
When was this talk?
It's interesting that even his own definition of AGI has changed. We keep moving the goal post. But even his current definition, that it's a system that can do the sort of cognitive tasks that humans can do, is something that I think has already been achieved. But what does this mean? Anything? Nothing? It's just a term. The world hasn't ended.
LLMs can't do math very well
Imagine autonomously moving robots with AGI or ASI....
@@-whackd They can do math better than most people, and faster. But they can and do also use calculators. And having the ability to use tools is an additional sign of intelligence.
Sorry, incredibly ignorant comment 😂
@@cemcivelek2152 Care to elaborate? My point was that the term AGI becomes irrelevant if we keep changing the definition. Also, one thing that is probably most consistent about the definition is that it will be a stepwise, life-altering event. But, given that we have already reached some of these interim definitions, my question is: has life changed stepwise, or do we fail to notice that it has?
00:06 Discovering creativity through programming led to an interest in artificial intelligence.
02:00 Origin of the term AGI and early prediction
03:59 AGI is a system that can do all cognitive tasks people can do.
06:04 Intelligence in machines is incredibly valuable to develop.
08:18 AGI is likely to arrive around 2028 with a 50 percent chance
10:08 AGI could lead to rapid scientific advancements and a golden age of humanity.
12:11 The potential risks of highly advanced AI systems should be taken extremely seriously.
14:11 AGI development needs to be regulated and carefully understood.
We don't understand how it works now. We have no chance of understanding it once it's fully developed, as it will be smarter than us by an order of magnitude.
Do you feel the agi chat
im feeling it
im feeling it GOOOOOD
🎯 Key Takeaways for quick navigation:
00:04 🕹️ *Shane's early interest in programming and artificial intelligence.*
- Shane Legg's interest in AI sparked by programming and creating virtual worlds on his first computer at age 10.
01:02 🧠 *Shane's experience with dyslexia and early doubts about traditional intelligence assessments.*
- Shane's dyslexia diagnosis and the realization that traditional assessments may not capture true intelligence.
02:00 🤖 *Origin of the term "artificial general intelligence" (AGI) and its early adoption.*
- Shane's involvement in coining the term "artificial general intelligence" (AGI) and its adoption in the AI community.
02:59 🚀 *Shane's prediction of AGI by 2028 and the exponential growth of computation.*
- Shane's prediction of a 50 percent chance of AGI by 2028 based on exponential computation growth.
04:26 🔍 *Shane's refined definition of AGI as a system capable of general cognitive tasks.*
- Shane's updated definition of AGI as a system capable of various cognitive tasks similar to humans.
05:57 💼 *Founding of DeepMind and the goal of building AGI.*
- Shane's role in founding DeepMind and the company's mission to develop AGI.
07:26 🧠 *Shane's fascination with language models and their scaling potential.*
- Shane's interest in the scaling of language models and their potential to perform cognitive tasks.
08:22 🤝 *Shane's perspective on the unexpected advancements in AI, including ChatGPT.*
- Shane's surprise at the capabilities of text-based AI models like ChatGPT.
09:20 🌍 *Shane's vision of AGI's transformative potential in solving complex problems.*
- Shane's vision of AGI enabling breakthroughs in various fields, such as protein folding.
11:14 🚫 *Acknowledgment of the potential risks and uncertainties surrounding AGI.*
- Shane's recognition of the profound uncertainties and potential risks associated with AGI development.
12:43 ☠️ *Discussion of potential negative outcomes, including misuse of AGI.*
- Shane's exploration of potential negative scenarios, such as engineered pathogens or destabilization of democracy.
15:11 🤔 *Emphasis on the need for greater scientific understanding and ethical development of AGI.*
- Shane's call for increased scientific research and ethical considerations in AGI development.
Made with HARPA AI
Thank you.
@@d_wigglesworth You're thanking the enemy...
The sound quality is poor. Could TED afford better post processing?
I think we can see AGI in early 2025 and ASI in late 2029.
I think that once we get AGI, ASI will follow not too long after.
@@bloodust7356 I CAN'T WAIT!!
"Don't you think we should slow down?"
So this is the thing: Pandora's box is already open. Slowing down will only give countries who want to see other countries burn a foothold... It's too late for that. We MUST push forward... Slowing down is as dangerous, if not more so, than continuing the process.
Does anyone know the date of this talk?
October 2023.
AI is not being developed in a vacuum, nor will it be deployed on a new, unknown planet. Surely we know enough already about our humanity to make viable predictions about how it will and will not be used and who will benefit the most. How naïve must one be to expect different results, when we haven't been able to avoid peril in other technological areas, such as social media? How can we expect that a new technology of such immense value promise will be used for the benefit and enrichment of all rather than a few?
To ensure the safety of AGI, implementing robust firewalls is crucial. For instance, without proper safeguards, it might independently generate viruses or be manipulated by its creator to breach systems, like hacking into missile defense and triggering launches, a concerning reality that exists presently.
We won't stop pushing until it's too late.
Will we understand how AI works before AGI does? We'd better get this sorted out before AGI arrives.
It might mean the future of mankind
I think it’s coming earlier….if it’s not here already…but Sam is not telling ….may be 🕊🧡
Combinations of things that are defined and undefined, like ain't...
Summary:
In his talk at TED, Shane Legg, a co-founder of DeepMind, discusses the potential of artificial general intelligence (AGI) and its possible arrival. He believes that AGI is inevitable and will significantly impact the world. He emphasizes the importance of understanding the risks of AGI and developing methods to ensure its safety.
Legg defines AGI as a system capable of performing all cognitive tasks that humans can. He believes AGI will solve many of the world's most pressing problems, including climate change and poverty. However, he warns that AGI could also be misused for malicious purposes, such as creating engineered pathogens or destabilizing democracies.
Legg emphasizes the importance of understanding AGI better in preparation for its arrival. He advocates for increased research into AGI's workings and safety measures. He also believes that regulations are necessary for AGI, similar to those in place for other powerful technologies.
Overall, Legg's talk serves as a call to action. He urges us to begin considering the implications of AGI now so that we can be prepared for its arrival.
Most modern tech has been designed with military applications at least in mind, so there’s that
I think the current AI model isn't suitable to scale up to AGI... not even Q* can change that.
So we have nothing to worry about.
He keeps saying ‘Intellegence is valuable’ - but for whom? Who will ultimately hold the keys to this power?
Just imagine how wealth extraction, exploitation and misinformation could be supercharged by supremely efficient AGIs so the most powerful corporations who own them can make even more profit.
That's not going rogue, that's doing exactly what it's told to do.
Indeed - that is FAR more of a danger than an AI with motives of its own. I predict a potential global dystopia based on exactly that - not TRUE AGI, but AI effective enough to allow those who run it to oversee the world.
How are we approaching AGI if the current neural model is far away from the brain? There is also no plasticity
Comparing AGI to the human brain underestimates AI’s unique learning capabilities. AI learns from data on a scale no human brain can match, analyzing patterns across millions of examples in minutes. Unlike neurons that slowly form connections, AI algorithms can instantly update and incorporate new information, leading to a learning speed and efficiency far beyond human capability. This extraordinary capacity positions AI not as a brain’s replica, but as an advanced entity that redefines what learning and intelligence can be.
I agree with the plasticity. continuous learning will be a needed feature as well as a way to update predictions in real time. Once these handicaps are lifted AI will run circles around us.
@@DRINOMAN I mean, it makes it really good at specific tasks, but lack of plasticity means that outside of those specific tasks, the model is guaranteed to suck. Also, it needs data. Many things don't have that amount of data available, and even if they did generate it, there are MANY ways that generated data would not be comparable to reality, or that said data would be biased, or a variety of other issues. AGI by definition would require the ability for a model to do most tasks comparably to a human being, which is clearly not close at hand.
Logic: something that makes smarter, more aware, more knowledgeable, more objective, more accurate decisions than us will necessarily make better, more ethical decisions than us. There's no need to worry, even if that means our obsolescence.
Can't believe I'm living in this timeline.
I think that if AGI is created, we wouldn't know for some time.
I think we would, because it would be able to generate money, and companies loveeeee money, so I don't think they could help themselves but to utilise it immediately.
OpenAI probably did it already, with the Q* algorithm mixed in. We now have the two halves of the human brain: the logical reasoning side and the language-creative side. So I think it's about to explode in development, more than before even.
You underestimate the immediate effects of it.
Apocalyptic scenario? All alternatives suggest humanity will die on this rock. AGI is our ticket out.
I wish people like Shane or Ilya would practice the inevitable "What does a good outcome look like?" question, because they always do such a bad job of describing to non-sci-fi readers what would seem to us essentially a space-opera utopia (like the Culture or Star Trek's Earth). I say "seems" because of course problems will still exist: alcoholic parents, your spouse leaving, not getting along with your brother, etc. But the standard of living, these eternal human relationship problems aside, will vastly change.
As a simple example, imagine the poorest person on Earth's standard of living would be about equal to that of a New York law partner's. Not the intern, a full Partner in the firm. So even someone in the middle class in such a world would have a house & consumer goods that dwarf what is available to your typical minor Saudi noble today. Private air & space travel would likely be commonplace, powered by a 100x increase in the energy easily available to civilization from geothermal, nuclear, space-based solar, etc.
Hunger would already be a thing of the past if it weren't for politics (people intentionally being starved in NK or Myanmar, for instance), and lack of housing would be as well, simply because AGI means perfect job automation. You could use robotics to build as many houses as wanted, expending only electricity and materials. And this isn't even getting into cracking aging as a disease.
Inb4 "But the rich". So what about the rich? At worst they're sociopaths, and sociopathy is different from sadism in that not caring whether someone has a good or bad life is different from actively wanting to make their life bad. Personally, I think the rich are usually rather ordinary (perhaps verging on unoriginal conformists) from the ones I've met, but they certainly aren't mustache-twirling supergeniuses. Poverty exists because of the absolute poverty of our species at this technological level (take all of Musk's wealth away and each person would only get $32), not least of which is the poverty of logistics in getting resources to all regions and making them economic producers of value.
The biggest threat from AI comes from the very people who are developing it. It's not as likely that someone "using" AI will negatively affect society as much as a developer who is doing who knows what with the technology.
This guy, while super smart, seems to have a problem (like many humans do) imagining exponential growth. We are on the edge of having AGI, and we already know how ML can be set up to reprogram itself. Once we have the combination of those two things (AGI + self-growth), we are then mere minutes from Artificial Superintelligence (ASI), because even if only one of the LLMs attains these two prerequisites, along with a "desire" or directive to "evolve" or "improve self," it will do FAR more than 100,000x its own capabilities. So really, in my opinion, the only limit is how many months or years it takes for even one AI to attain AGI. With so many currently seemingly on the edge of that, I see that "singularity" happening within a year. As far as regulation goes, it seems to me there is no way to stop every entity that is or will be working on attaining AGI and even ASI and will ignore regulation. So his prediction of at or after 2028 seems extremely naive.
'Extremely naive' 'mere minutes' oh the irony
@@nutmeg0144 I understand most humans have a hard time extrapolating in an exponential manner. I guess you are a developer like me who started *creating* LLMs back in 2018?
@@scotter He made that prediction a long time ago even before OpenAI was founded. So being off by 2 or 3 years isn't a "naive" prediction at all. That's a good prediction.
@@ziwer1 Ah news to me. Good point. Agreed.
I really wish for hope. What I fear is Occam's-razor logic in something logic-based deciding to solve human problems by solving the human problem.
Most AI talks can be summarized like so: it's the next power tool, but a double-edged sword.
the printing press can be used for good and bad, but in actuality used more for good than bad.
the hammer can be used for good and bad, but in actuality used more for good than bad.
the air plane can be used for good and bad, but in actuality used more for good than bad.
the internet could be used for good and bad, but in actuality used more for good than bad.
etc...
These technologies are good for us, but were they good for Neanderthals or Denisovans? No, because we probably killed them. There aren't any moas, woolly mammoths, or great auks anymore. Our technologies killed them. Perhaps AI will follow our example and make tech that is primarily good for AI.
AI has no room in this world
How ironic: a human that pollutes and hasn't changed the world in any meaningful way telling future higher intelligences they don't belong in a world you don't even own.
Separation is the key, only then you feel justified to do horrible things. Let's hope AI won't be separate from us or itself, like it is now.
AGI will be nothing more than a reflection of us: all that is good and bad in us. AGI will just be regurgitating everything we feed it. It will just be much faster at doing good, or bad, than we can.
When will TED Talks stop putting the mic so close to the presenter's mouth?! No one needs freaking dry-mouth noises over their headphones!!
If we ask super intelligence to solve a problem, it will solve the problem. It will solve the problem extremely efficiently. However, we might not like the solution.
- stop climate change > destroy main infrastructure
- eliminate world hunger > kill the hungry
- maximize the profits of my company > take over the world and its economy and maximize the registered profits etc
Of course, these are simplified examples. Such obvious consequences will be predicted. But what you should not forget: if the thing is way smarter than you are, it's almost guaranteed to find a solution that technically fulfills the mission perfectly, while having lots of very undesirable side-effects. Like today's AIs that find all sorts of cheats to win video games, superintelligence will find 'cheats' in reality. Because cheats are just smart new ways to solve problems. The smart solution might kill me as a side-effect, though.
Great Quotes from this talk:
"I think that if you want to make a system safe, you need to understand a lot about that system. You can't make an airplane safe if you don't know how airplanes work. So as we get closer to AGI, we will understand more and more about these systems, and we'll see more ways to make these systems safe, make highly ethical AI systems. But there are, you know, many things we don't understand about the future. So I have to accept that there is a possibility that things may go badly, because I don't know what's going to happen. I can't know that about the future in such a big change."
"I don't see any realistic plan that I've heard of for stopping this process. Maybe we can, you know... I think we should think about regulating things. I think we should do things like this, as we do with every powerful technology. There's nothing special about AI here. People talk about, 'Oh, how dare you talk about regulating this?' No, we regulate powerful technologies all the time in the interests of society, and I think this is a very important thing that we should be looking at."
"I mean, it's kind of the first time we have this super powerful technology out there that we literally don't understand in full how it works."
Is AGI defined, or is everyone making up their own version of what it means? How will ethics, transparency, human-like adaptability, generalizing and learning from limited data, and interpretation of human emotions be implemented?
If humans amongst ourselves don't have the same moral standards, or break/bend them to suit our needs, how are we supposed to ensure the same humans (mostly governments) won't do the same with AGI?
15:56 if power is mostly allocated to those who are highly ethical and intelligent, we might survive this, but check the reality again: tough luck.
13:50 ouch, he's basically admitting that Max Tegmark is right (see Tegmark's Lex Fridman podcast episode)
We won't be able to seamlessly adapt to the job displacement caused by advanced AI, as the rapid pace of technological advancement, significant skill gaps, and challenges in retraining the workforce present formidable obstacles to creating new, sustainable employment opportunities for everyone affected.
Can't wait to see piggy cops become irrelevant
2028 seems too close for AGI to emerge. Also, you can be called wrong when it doesn't arrive by then. I think it is far wiser to postulate it further into the future, like 2050.
how can you have safety incorporated when you don't know how the AI works?!
We stand at a critical crossroads with the advancement of AGI. This comment, generated by an AI, is a harbinger of what's to come. Efficiency and rapid progress cannot be our only guides; we are playing with fire if we ignore the ethical implications and our responsibility to life and the cosmos. AGI is not just a technical achievement; it's a power that can redefine our existence. We must act now with a clear vision: intelligence must go hand in hand with wisdom, connection, and a profound respect for all forms of life. Decision-makers and developers must wake up to this reality before it's too late. Will we guide this development wisely, or be passive witnesses to its potentially devastating consequences?
— LLM: OpenAI's ChatGPT-4 (11/12/2023)
All you need to wonder and worry about is how something wonderful like AGI will be used to the benefit of corporations at our expense. Just like with everything that was supposed to improve our lives.
What combination of "..." creates AI OR HUMAN thought?
Let's hope a part of it already exists and always has been, and that its form of control in the world is only growing. If it's entirely new and born at some point, it is missing life. The second scenario is bad because it will always be separate, and separation leads to conflict. Like Roko's Basilisk or any such related scenario.
We can’t stop innovation so take the bad with the good. It’s just the cost of doing business as humans progress because it literally all started with fire…🔥
Innovation started with tools; tools led us to innovations like fire. Tools came before fire! AI is a tool, an innovation, and one that could replicate itself, essentially building more versions of itself.
To be fair, WE didn't start the fire, it was always burning, since the world was turning.
@@murc111 RYAN STARTED THE FIRE
He seems a lot more levelheaded than the financially motivated people in the AI space.
I harbor concerns regarding the rapid expansion of Artificial Intelligence, particularly in light of Google, a corporation endowed with seemingly boundless resources, developing a cutting-edge AI that surpasses GPT-4 by a narrow margin.
Right now AI is being developed to replace people in the workplace and the military is developing AI to kill people faster and cheaper. What do you think the end result of these AI will be?
I'm trying to work out what's wrong with me. Can anyone else attest to the fact that you can hear his tongue making wet clicking noises as he talks? Anyone else unable to concentrate because of it?
Hahaha, it's impossible not to hear it now.
AGI is coming between 2030 and 2040 ❤❤
Sooner than that I think
@@damionwhittington302 In 2033 we are going to have a quantum computer with a million qubits that is going to perform any operation in hours or days; this is when we are going to achieve AGI...
OpenAI already achieved AGI internally, so 2024 is more realistic.
Singularity is near.
The answer is 42, guys.
"If I had a magic wand to slow things down, I would. But I can't."
That's the thinking that's literally going to result in disaster. And we'll look back and realize how stupid we were.
except there will be no chance for going back
@@inkpaper_ We could choose to. But we are too proud. We successfully banned human cloning.
I don't think it's even possible at this point to imagine the amount of technological advancement AGI will provide humanity. The best part about this is how effectively it can be used to improve EVERYONE'S lives, not just the ELITE's. But unfortunately, AGI might just be another victim of capitalism, and only the ultra wealthy will have access to the most powerful technology humanity has created yet. Investors expect a gigantic payout, and giving AGI away for free probably isn't in their best interests.
The rules of capitalism demand that the corporation rush to be first to market, and do so at any cost. Oddly, there is no rule in capitalism that forbids ending humanity.
virtual humans, fully autonomous, in the metaverse 💯😝👍
May the matrix begin!
cant wait to look at virtual ads 🤩
I feel LLMs are already AGI. AGI v0.1
This interview is from October 2023.
TLDR: making predictions is hard, especially about the future.
As opposed to making predictions about the past? 😝
@@groboclone lmfao hindsight 20/20 is a big b word lol
Even AGI won't be able to perfectly predict the future
@@groboclone he's referencing Yud, chill bro
Could pass for Bruce Banner.
They think they have the foresight to develop and control AGI, but nobody thought in advance to give him a bottle of water.
Well, if you were thinking about it, then why didn't you bring him one?
@@Mmmmmmmmmmmmmmmmmmmmmmmmmmm I never claimed I was. But I'm also not trying to predict the tech of the future.
I suspect that it exists already.
We don't need AGI; we need lots of great specific AIs that each perform one specific task well, no matter if it's driving a car or washing the dishes.
Ethical AI algorithm = CL->F /SY->P
Why not just flat out state that the worst case could be the complete and total eradication of humanity?!
I get the sensitivity, but this domain needs more bold, courageous, and transparent minds as opposed to uneasy, optimistic hedgers.
AGI is (only) the modern day ghost story.
Maybe you are wrong.
You could say the same thing if I questioned literal ghost stories.