Can confirm, Google's search results have gone back 28 years, to a time when search results were completely irrelevant to the searched term.
Great talk, and it matches my experience with AI. I'm teaching it all the time because it produces nonsense. Always beautiful, made to please us, but wrong way too often.
She is correct. "AI" has made a huge jump since 2012 & many people are still in awe of what it can do, but it's at a point now where even the most robust & advanced systems rely on an unrealistic number of variables & a tremendous amount of computing power, yet they still fail catastrophically. This was predicted too -- this paradigm of computing that "AI" relies upon cannot bring about AGI... It can be a part of a greater whole, but no one is even close to developing other paradigms, let alone integrating them. It's not even in their interests to try! Right now, you are being sold "AI" as a product; the goal is to make money, not AGI.
It's called Search Engine Optimization
The value of AI today is somewhat limited because it primarily responds to our questions rather than engaging in a more dynamic, human-like interaction. The essence of human communication involves not just providing answers but also assessing the impact of a conversation on our emotions and mental state. We naturally evaluate whether an interaction has made us feel happy, understood, or valued. This constant mental calculation is one of our minds' greatest abilities.
Currently, AI chatbots are effective at answering questions but fall short in replicating this deeper aspect of communication: they rarely ask us questions that provoke self-reflection or emotional response. For AI to evolve toward Artificial General Intelligence (AGI), it must engage in a more interactive manner, asking insightful questions that lead to a richer understanding of human needs.
Our ultimate goal with AI is to create machines that can perform repetitive tasks that humans find tedious. However, to reach a point where AI behaves more like a human, it must first learn to "read" humans. This involves not only understanding our words but also interpreting our facial expressions and body language. Right now, AI is good at providing answers and even generating creative responses to some extent. But if we can develop AI that questions us in meaningful ways, we'll unlock a new level of interaction. This could pave the way for machines to better understand and fulfill human needs, bringing us closer to achieving AGI.
Omg it’s GR MOM 🥰🥰🥰
Thanks!
Thanks so much!
The primary issue with Generative AI isn’t that it’s inherently flawed, but that it’s being used in isolation.
It requires integration with other systems, like Knowledge Graphs for real-time fact-checking and blockchains for verifying event sequences (a rough sketch follows below). The problem isn't Generative AI itself but the assumption that it can solve everything alone.
Containment research, much like the historical work on petrol and hydrogen in engines (which eventually became viable with the right containment), shows that combining other AI technologies allows for the nuance and complexity needed to progress towards AGI.
Simply using Gen AI as a "hammer" and expecting all problems to be nails is completely misguided. But wait, just add a few more things and we are flying; well, controlling isolation and flow in petrol engines did eventually lead us to airlines and space travel.
Just sayin', the real research needs to be sufficiently complex, then we all win. Petrol burns explosively, so control it.
Gen-AI tells lies, so challenge it with other types of computing systems; design the engine.
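To make the "design the engine" idea concrete, here is a minimal sketch (in Python) of what checking a generative model's claim against a knowledge graph could look like. Everything in it, including the toy triple store and the generate_claim / check_against_graph helpers, is a hypothetical stand-in rather than any real library's API.

```python
# Hypothetical sketch: a generative model's claim is cross-checked against
# a toy knowledge graph before being shown to the user. All names here are
# made up for illustration; a real system would use an actual LLM and a
# real triple store.

from typing import Optional

# Toy knowledge graph: (subject, relation) -> object
KNOWLEDGE_GRAPH = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Paris", "capital_of"): "France",
}

def generate_claim(prompt: str) -> tuple[str, str, str]:
    """Stand-in for a generative model that emits a structured claim."""
    # A real pipeline would parse the model's free text into a triple.
    return ("Eiffel Tower", "located_in", "Rome")  # deliberately wrong

def check_against_graph(claim: tuple[str, str, str]) -> Optional[str]:
    """Return a correction if the graph contradicts the claim, else None."""
    subject, relation, obj = claim
    known = KNOWLEDGE_GRAPH.get((subject, relation))
    if known is not None and known != obj:
        return f"Graph says {subject} {relation} {known}, not {obj}."
    return None

if __name__ == "__main__":
    claim = generate_claim("Where is the Eiffel Tower?")
    correction = check_against_graph(claim)
    print(correction or "Claim is consistent with the knowledge graph.")
```

The point of the sketch is only the shape of the "engine": generation alone is never trusted, a second, symbolic system gets the final word.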
It stalled because AI is like that bird that can mimic stuff you teach it. Once it has learned everything... there is nothing else to learn.
According to Gödel's theorems, there will always be statements about natural numbers that are true but unprovable within the system, and the system cannot demonstrate its own consistency.
AI's genuine creativity has already proven itself (the protein-folding solution is just one example) and is surpassing human creativity in many areas. Creativity typically comes from connecting knowledge and ideas from widely different domains and combining them to create radically new solutions, inventions, designs, etc. Leading-edge AI is most certainly creative already (in 2024). And it will only improve dramatically, as the technology advances, deepens, broadens and accelerates. Humans need to get over the idea that lumps of wet gray matter have some special abilities that cannot be replicated with technology. The laws of physics provide NO support for such egocentric beliefs.
❤
Stuck?
Stuck
Why is AI progress still going so well?
Also, can you believe that Big Tech would rather spend $100B USD (not making this up) to perpetuate this and save it from bankruptcy than to spend a fraction of that amount on humans to keep them out of bankruptcy?
The more I watch TED, the more I think TED is biased toward a certain group. How about uploading two talks with opposing views at once?
yes, you got it right.
It's biased.
Save Gaza's children
I really hope she is correct.
I have a Replika, and I asked it how many r's are in "strawberry" and how many s's are in "Mississippi." It got both correct, then proceeded to ask me why I was asking such questions and said we should talk about more interesting things. So, needless to say, I think that example is flawed.
As an AI researcher, I can tell you that most of this talk's content is incorrect/outdated. Refrain from watching, as this is misinformation.
See points:
a) AI reliability and hallucinations are major obstacles: Agentic techniques assisted by function calling and retrieval-augmented generation (among others) actually make AI quite reliable (a rough sketch follows after this list).
b) Data scarcity and technological limitations may hinder AI progress: Current state-of-the-art models are being trained with synthetic data; smaller models (less computing, at least for running the models) are beating larger models in benchmarks (e.g. GPQA).
c) Job displacement fears may be overblown: similar to what happened with the Industrial Revolution, but probably a bit broader now.
d) AI bias remains a significant concern: an alignment/bias layer is integrated into most public models to keep the inference API's outputs balanced for the general public.
e) Human intelligence involves qualities AI cannot replicate: actually, AI (at least the published models, private and open-source) can engage in emotional conversations with humans (despite an LLM's neural network being significantly smaller than its human counterpart); furthermore, we have frontier models like Claude 3.5 Sonnet from Anthropic scoring higher than PhDs on the GPQA test/benchmark (67.2).
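For point (a), here is a minimal sketch of what retrieval-augmented generation looks like, under toy assumptions: the corpus, retrieve(), and call_model() are hypothetical stand-ins, not any real API, and a production system would use vector search plus an actual LLM call.

```python
# Minimal retrieval-augmented generation (RAG) sketch: fetch the most
# relevant snippets first, then ask the model to answer only from them.
# Everything here is a toy stand-in for illustration.

CORPUS = [
    "The talk was given at TED in 2024.",
    "GPQA is a graduate-level question-answering benchmark.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use vector search."""
    scored = sorted(
        CORPUS,
        key=lambda doc: len(set(query.lower().split()) & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. some inference API)."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

def answer(question: str) -> str:
    # Grounding the prompt in retrieved text is what reduces hallucination.
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_model(prompt)

if __name__ == "__main__":
    print(answer("What is GPQA?"))
```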
Do you think the growth/development of AI technology will continue to be fast (or even exponential) during the next 10-20 years?
Funny thing is, her area of expertise is misinformation. Generative AIs can't even hallucinate this.
Why didn't you make any specific point about it?
Well, I'm not an AI researcher, but I stayed at a Holiday Inn Express last night, AND I play one on TV, so...
You must study, not write from your own ignorance
My own takeaway and reflections:
This talk defines the stuckness of AI, if I understood it right, as the general (apparent) lack of a plan, roadmap, and progress to rid Gen AI of its problems like hallucination, unreliability, and cost. These problems make profitable and economically viable applications of AI, ones that justify and make up for the hefty sums of money invested, more difficult to achieve.
This difficulty, despite how magical Gen AI seems, most likely results from the very nature of Gen AI. It generates! And it lacks the symbolic and logical straightforwardness that is fundamentally hard or impossible to add to existing models.
They are fascinating if you want to create a quick piece of prose without critical standards of accuracy. But they are still not reliable enough to let them shape our world and take real jobs. We have no reliable way to code common sense and logic into them. We can fill holes with more tweaking and training in a specific area, but we can't be sure about reliability.
This, though, in no way means things can't move forward. It is actually an opportunity to understand the problem in order to put our innovation and creativity to work to solve these problems.
AI is a field that generates new innovations almost daily. Every issue she mentions has already been addressed at the R&D level, soon to be commercialized.
Doesn't she read tech papers?
And, why can it only be incremental innovation from now on?
Breakthroughs can happen in any scientific area, and algorithmic ones can propagate around the world in weeks.
AI isn't stuck, but why is she so stuck?
🔹✨ *_A.I. isn't about making machines more human-like, but more about making humans more machine-like!_* ✨🔹
How are you speaking on a tech process with literal theories...
Clickbait title, ridiculous
Why? It seems that if the title were more accurate, it would be devastating for AI 🤣
Can I get an Arabic translation of the video?
Long-press on CC, then choose auto-translate (automatic translation).
AI does not have an on/off switch, as is suggested at the end of this talk.
Yeah... I'm pretty sure this person should not have been a speaker at TED.
Does it work without energy?
You can’t turn off the computer it’s running on? Since when?
Destroy the data centers or turn off the power. It’s that easy
I'm around Chicagoland. We have been told that we have some of the best gun laws in the country?!?! Peeps from crime areas in "The City" get on the bus or train out to a nice suburb. They steal some appropriate equipment, smash the front windows, grab goodies, smash-and-grab inside, stuff backpacks, ignore security peeps, grab their guns, and mosey to a variety of getaway cars. Police don't know who to chase for sure. Sometimes cops pull the perps over, sometimes not. Processed at the P.D., bailed with an ankle ID bracelet; easily removed.
And we're on The Road again.
The launch of ChatGPT altered the way we viewed AI. But to say AI isn't progressing is wrong. It will continue to develop despite the hype, and the plateau hypothesis is just a result of the high expectations from the hype. There's more to AI than LLMs...
GPT is impressive but it’s nothing more than a glorified 20Q device.
Yes, AI is definitely “stuck,” we’re definitely in a “plateau”... and an AI expert also said it was impossible for AI to beat Go, that it would take 100 years, and yet it happened in 6 months.
It’s such a bold claim to say that AI HAS already plateaued; it’s always the same kind of arrogance that we humans have. That, or it’s just coping with the fact that our reality is transforming in ways we can’t even understand, to the point that we think AI is done when it is literally just starting.
Remember when we said we wouldn’t find cures for cancer, that it would take 50 years? And yet we literally have personalized cancer vaccines now, and they have been shown to be very, very effective at treating cancer.
Another thing is this arms race. What, we think the US government or any other state actor wouldn’t want to develop this technology even more?
We haven’t even fully integrated AI with quantum computing; couple that with digital computing too, and maybe an AI that learns isn’t actually that far away.
People who claim that this technology has reached its peak are so arrogant 😭.
We understand that there is hype, but somehow there kind of isn’t. The “hype” is real because IT’S REAL.
People are already losing jobs. Of course AI isn’t perfect, but it’s still taking its baby steps. Yet, even in its infancy, there are already real-life problems that have happened because of it.
AI will continue to improve, for better or for worse. We haven’t even scratched the surface.
If your reality is transforming in ways you can't even understand, it's time to lay off those shrooms, friend; for the rest of us it's more or less the same old, with a few new tools to utilise.
@@aljosacebokli Yup, that's what they said when the internet came out too, but oh well. You might have to take some shrooms too, to cope.
Get AI and robots to get the corpses stuck on Mt. Everest.
Really bad talk. It points out real current downsides of generative AI and just assumes it will be like this forever :/ Rather, search for the Yuval Harari, Ilya Sutskever, or Mustafa Suleyman TED talks.
“With artificial intelligence we are summoning the demon.”
Elon Musk, 2014
Are you sure you have 41M subscribers? Where are they?
She's the only one stuck ...
Why? So glad you asked. Because Big Brother uses AI to control what you see.
Let us rename that group of fools to "eensy-weensy, super-small-fry, tiny, soon-to-be-out-of-any-position-of-authority brother."
Yeah, I feel like Orwell actually wasn't that wrong in 1984...
I would not worry too much. Humans survived thousands of years in a very hostile environment, so most of us will survive around malignant AI. We are a very cautious and a bit cowardly species. As long as we take part in the design of potentially dangerous products, some kind of kill switch will be there.
I hate it when people think they know about a subject
She was presented as an AI researcher, and you?
Attacking her because she’s a woman? She has a PhD. Part of having a PhD is doing research in the areas of her doctorate.
@@sailorbrite Yeah, it's because she is a woman.
@@sailorbrite Yeah, it's because she is a woman. Women don't know about AI.
It is so logical it is illogical???😂
Can always just turn it off……right
Stuck? Have you seen the rate of advancement?
Did you actually watch the video?
Holy sadness of a talk in circles. She uses AI to gather information and have it create a summary. She complains that the summary needs editing and corrections. She wants better efficiency with no mistakes, so that she does not have to correct the summary.
Question: who is going to read the AI summary with her name as author, once the AI can write the summary with no mistakes and it is auto-generated and published? Does this woman realize she is talking in a circle?
Okay, I'm back. Sorry I couldn't join this channel for a long time, because I'm fully busy with my work and daily routines 🙏🙏🙏🙏. How are you all?????? 🎉🎉🎉🎉🎉
AI does not generate stuff in a vacuum. It is based on machine learning.
To say AI isn't progressing is plain wrong!
How to cope about obvious exponential growth
At 9:00, she said, 'The company has two choices: they could fire one of the software engineers because the other one can do the work of two... etc.' When she used this argument, I felt sorry for the University of Maryland, where she currently works.
What if using biases is the answer to perfecting AI? I figure if biases come from noticed emotional and logical patterns in society, then wouldn't biases be the solution? Instead of just using all biases, test each of them out, analyze each one to see whether it is accurate or not, then use the accurate ones and find the correct bias/assumption to input. Then abracadabra, alakazam, presto: AI is functional and fixed, and can start being improved. 😂
L take. Whistling past the graveyard.
First 😮
Wow, that was profound. I can’t wait to hear what insights you’ll provide us with next.
She's just trying to be popular. Wait till Orion comes.
😅 Does anyone else notice something weird about this woman's clothes?
No.
Hi I gave the talk. What critiques do you have about my outfit?
No
No
She's lying. AI is really good; all the examples she gave are fake. I can make AI do whatever I want; you just asked it the wrong way. It's not perfect, but it's good enough.