LeCun is spot on saying that pixel-based generative models don't learn deep world models. They generate images that 'look' good, since they are trained on appearances, but appearance is all they learn. A great example I saw recently was a 'space babe' in a spacesuit that had no join between the helmet and the suit - the AI generated something more like a motorcycle helmet, because it had no idea that the suit needed to hold air. Another example was a video showing a first-person viewpoint entering a library. Each frame was consistent, but it was plain that the inside was larger than the outside view allowed - the AI had no mental map of the library.
@17:53 Those are overlapping Fuzzy membership values, as used in Fuzzy Logic. Richard Hamming, who worked on the Manhattan Project, talks about Fuzzy Logic in his "Learning to Learn" lectures. @25:38 The Joint-Embedding Predictive Architecture seems very similar to this as well. The optic nerve from each eye splits and routes the signal to both hemispheres; that is a biological observation of the same concept.
@@SapienSpace No fuzzy logic is mentioned in the entire video. On the other hand, all truths represented in fuzzy logic lie on a continuum, and their complexity and approximate nature make them impossible to work with in critical conditions.
@@nullvoid12 Yes, Fuzzy Logic is not mentioned in this video, but it is mentioned by Hamming in his 1995 lecture series "Learning to Learn" on YouTube. I suspect Fuzzy Logic got thrown under the bus, so to speak, as terminology because it is a sort of self-incriminating term, i.e. it admits the opposite of high accuracy, and few like to admit low accuracy. But nothing is "perfect", everything has a tolerance, and Fuzzy accepts the tolerance. Looking at nature, the optic nerve from each eye splits between the two hemispheres, and the brain optically merges the two signals (like fuzzy membership functions from each eye). Admitting fuzziness is like being on the intelligent side of the Dunning-Kruger effect. The generative "AI" process is much like a layered fuzzification and de-fuzzification process, roughly as in the sketch below.
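For what it's worth, here is a minimal sketch of what I mean by overlapping membership values and a fuzzify/de-fuzzify round trip. This is pure illustration: the sets, labels, and numbers are made up and nothing here comes from the talk.

```python
# A minimal sketch of overlapping fuzzy membership functions and a
# fuzzify/defuzzify round trip (illustrative only; not from the talk).

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Three overlapping sets over a made-up 0-100 "brightness" scale.
sets = {
    "dark":   lambda x: tri(x, -1, 0, 50),
    "medium": lambda x: tri(x, 0, 50, 100),
    "bright": lambda x: tri(x, 50, 100, 101),
}

def fuzzify(x):
    """Crisp value -> membership degrees (they overlap and need not sum to 1)."""
    return {name: mu(x) for name, mu in sets.items()}

def defuzzify(memberships, centers={"dark": 0, "medium": 50, "bright": 100}):
    """Weighted-centroid defuzzification back to a crisp value."""
    total = sum(memberships.values())
    if total == 0:
        return None
    return sum(memberships[k] * centers[k] for k in memberships) / total

m = fuzzify(70)          # {'dark': 0.0, 'medium': 0.6, 'bright': 0.4}
print(m, defuzzify(m))   # round-trips back to ~70
```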
@@SapienSpace It's useful in certain cases with the help of fuzzy decision trees, I'll give you that. But there's no notion of proof in fuzzy logic, hence no essence of truth; it's subjective all the way down. With no proper logical foundation, it can't take us anywhere. Cheers!
He made the Kool-Aid, but seems nervous about drinking it.
judgmentcallpodcast covers this. Keynote on AI's essential characteristics.
Thank For Sharing, You Speak Of Truths, My Brother French Bread.. I Loves Bread
What does that mean?
Interesting, but nearly the same talk as in previous years. However, redundancy is essential for learning. One novelty was the Guardrail objective on the slide at 13:50.
When was this?
Sept 10, 2024
With regard to the amount of data a human has been exposed to versus the amount of data an AI model has been trained on: it would be really interesting to normalize the amount of data against the number of neurons available at the time that data is incorporated into the model. If a lot of data is incorporated when there are fewer than, say, one billion neurons in a human, I think the information which can be extracted from it is different from the information that could be extracted from the same data being processed by, e.g., a 100B-parameter AI model.
And likewise, I also think that what is absorbed when a human has, say, closer to 100 billion neurons (an adult brain) is very different from what can be learned by an 8B-parameter model (assuming, of course, that neurons and parameters are broadly comparable).
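A back-of-the-envelope version of that normalization, as a sketch in Python. Every number below is an illustrative placeholder (apart from the roughly 86 billion neurons of an adult brain), not a figure from the talk.

```python
# Bits of training data per neuron/parameter, for a few hypothetical cases.
# All data volumes and the token-to-bit conversion are assumed placeholders.

cases = {
    # name: (data seen in bits, neurons or parameters)
    "early-development human (commenter's hypothetical)": (1e15, 1e9),
    "adult human (rough guess)":                          (1e16, 8.6e10),  # ~86B neurons
    "8B-parameter LLM":                                   (1.2e14, 8e9),   # ~15T tokens * ~8 bits/token, assumed
    "100B-parameter LLM":                                 (1.2e14, 1e11),
}

for name, (data_bits, units) in cases.items():
    print(f"{name:52s} ~{data_bits / units:,.1f} bits per unit")
```

The point of the ratio is simply that the same data volume spread over very different capacities should yield very different kinds of extracted information.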
Enlightening presentation!
I like hierarchical RL, but I think the "right" way to do it would require learning an almost algebraic structure that describes how big tasks should decompose into little ones (a toy version is sketched below). We'd also need to guarantee that the side effects of the different subtasks play nicely with one another, which has a similar flavor to the guardrail idea (I don't really like the guardrail idea, but I do think that computationally bounded agents should satisfice their values).
The no-side-effects thing is also important for out-of-distribution generalization - ancillary features in the new domain must not break what was already learned. I think better incorporation of constraints into high-dimensional problem solving may be one of the keys to AGI.
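To make the decomposition idea concrete, here is the toy sketch mentioned above. It is hypothetical Python, not an existing framework: tasks compose sequentially or in parallel, each declares which parts of the state it may write, and a checker flags parallel subtasks whose side effects collide.

```python
# A toy "task algebra": sequential/parallel composition plus a side-effect check.
# Task names and state labels are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    writes: frozenset = frozenset()   # parts of the state this task may modify
    subtasks: list = field(default_factory=list)
    parallel: bool = False            # subtasks run concurrently if True

    def then(self, other):            # sequential composition
        return Task(f"({self.name} ; {other.name})", self.writes | other.writes,
                    [self, other], parallel=False)

    def alongside(self, other):       # parallel composition
        return Task(f"({self.name} || {other.name})", self.writes | other.writes,
                    [self, other], parallel=True)

def conflicts(task):
    """Flag pairs of parallel subtasks whose declared side effects overlap."""
    found = []
    if task.parallel:
        for i, a in enumerate(task.subtasks):
            for b in task.subtasks[i + 1:]:
                overlap = a.writes & b.writes
                if overlap:
                    found.append((a.name, b.name, overlap))
    for sub in task.subtasks:
        found.extend(conflicts(sub))
    return found

grasp = Task("grasp_cup", frozenset({"gripper"}))
pour  = Task("pour_water", frozenset({"gripper", "cup_level"}))
wipe  = Task("wipe_table", frozenset({"table"}))

plan = grasp.then(pour.alongside(wipe))   # fine: pour and wipe touch different state
bad  = grasp.then(pour.alongside(pour))   # conflict: both write gripper/cup_level
print(conflicts(plan))                    # []
print(conflicts(bad))                     # one conflict reported for the two pour tasks
```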
interesting insights.
Good to see he is able to talk about AI and not only about Elon or politics…
hahahaha
He's thinking about how we think
The OODA loop (Observe, Orient, Decide, Act) is more sophisticated than what is presented here.
Fully agree. It appears most people are not aware of the OODA loop
@@jooberly2611 source please
These machines may surpass our intelligence, but we can still control them. How? They are objective-driven. We give them goals.
So Yann considers giving them goals to achieve to be a way of controlling AI that is super-intelligent. I don't see how that's going to work. Yann says that in the last minute of the talk. He needs another keynote mainly to discuss that part.
Given that "control" as humans commonly use it is a provably nonsensical concept (reality doesn't work that way), it is literally impossible for us to "control" anything. Humans can't even be said to "control" their own behavior in any meaningful sense. Brains follow the laws of physics. Self-awareness is not a control center.
He doesn't have anything to say about it; I've looked repeatedly. Read Paul Christiano if you're interested in viable routes to safety.
Not everybody espouses the same good morals and civic virtues. We need regulation in this space so that we don't suffer from runaway corporate greed and a corrupt security apparatus. The Founding Fathers never imagined the rise of technocrats.
what?
"muh just regulate"
The Founding Fathers were also slave owners.
I've never heard of a CEO more evil than the government.
32:30 “… Repository of all human knowledge… more of an infrastructure than a product…” Key takeaways and an early glimpse of what it will be like to coexist alongside more super-genius life-forms than you could possibly imagine. 👏🏾👏🏾👏🏾
Identity is an illusion, and this illusion will disappear as soon as we are able to outsource most of our experience and decision-making outside of our biology.
Not convinced
For a long time I have considered model-based thinking the next required step in AI development. But I'm not sure that training models to predict world-state changes is a good way to do that. I'd rather use analogue microchips to actually model the world-state transitions caused by actions. We could start by generating as complex a representation as the available hardware allows (for example, one chip), let it run, collect state data at certain time intervals, and track maximums and minimums continuously. This would be the first step, and it could already be delivered to production use cases.
The next step would be to implement hierarchy. In this case the first representation should be as simple as meaningfully possible; then take the intervals with an unacceptable level of uncertainty and go deeper into the details, until the uncertainty is acceptable (see the rough sketch at the end of this comment).
Of course we'd need models to encode and decode representations. But is it that hard?
I think that a 10-year time scale for this research is prohibitively long. Clumsy, energy-hungry, but working systems based on existing architectures will appear much earlier. Text-based systems are already capable of generating representations, even if not super accurate or great. Video-generation models can already be used to predict physical changes caused by applied actions, to some extent. It would only take generating high-quality, purpose-optimized, specialized datasets to achieve pretty decent results. So I think that a traditional "pure" scientific process with decades-long planning would not be very productive for this task.
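For the hierarchy step, here is the rough sketch I had in mind: simulate an interval at low resolution and only re-simulate sub-intervals whose uncertainty estimate is still too high. The dynamics, the uncertainty measure, and the thresholds are all made-up placeholders.

```python
# Coarse-to-fine refinement driven by an uncertainty estimate (toy example).

import random

def simulate(t0, t1, steps):
    """Stand-in 'world model' rollout: returns (min, max, uncertainty) over [t0, t1]."""
    xs = [random.gauss(0.0, t1 - t0) for _ in range(steps)]
    lo, hi = min(xs), max(xs)
    uncertainty = (hi - lo) / steps          # crude stand-in for a real estimate
    return lo, hi, uncertainty

def refine(t0, t1, tol, steps=8, depth=0, max_depth=6):
    """Recursively split intervals until the uncertainty estimate is acceptable."""
    lo, hi, unc = simulate(t0, t1, steps)
    if unc <= tol or depth >= max_depth:
        return [(t0, t1, lo, hi, unc)]
    mid = (t0 + t1) / 2
    return (refine(t0, mid, tol, steps, depth + 1, max_depth)
            + refine(mid, t1, tol, steps, depth + 1, max_depth))

for t0, t1, lo, hi, unc in refine(0.0, 10.0, tol=0.05):
    print(f"[{t0:5.2f}, {t1:5.2f}]  min={lo:6.2f}  max={hi:6.2f}  unc={unc:.3f}")
```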
Why is his confidence level so low?
what makes you think he's not confident
His latest bets on world representations based on JEPA haven't really taken off.
I believe he said we would get news about their progress next year? Tbf he always said JEPA is just the beginning and it would take time.
Why am I not surprised by a progressive's failed estimate...
happy, good, like it
Fascinating, but I don't think this will lead to AGI. Understanding is much more than just predicting.
Can you give an example of where understanding cannot be broken down into the ability to predict?
@@Paul-rs4gd Think of Searle's Chinese room thought experiment. One can predict everything, and yet understand nothing.
AGI? 😂
Yann has good ideas on how to build "human-level AI", but his ideas about the consequences are extremely unrealistic - I mean the part about how it will still be humanity's world and we'll all just have AI assistants. Human-level AI means nonhuman beings that are at least as smart as humans, making their own choices, and it almost certainly means nonhuman beings much smarter than any human.
Why is a world where human-level AI exists under humans unrealistic? As long as there is no concrete proof of consciousness in these systems, his arguments are pretty valid.
The core error is this move towards agentic systems. So long as we use AIs as calculators we can avoid the worst harms by filtering the ideas of AI through human judgement.
@@drxyd Then we avoid the best potential outcomes too.
Yann says that having independent agency isn't necessary for human-level intelligence; that's necessary for creatures that came from _evolution_ and evolved to create their own goals.
You don't seem to be able to imagine intelligence without animal instincts. No machine is interested in survival, reproduction, self-actualization, or anything you care about (unless some human programs it to be).
FAFO
If it isn't Twitter psycho Yann LeKook
Yer so kool
He's your dad
Still calling it Twitter I see...
He couldn't say "X psycho"; it sounds too cool to be used as an insult.
Yann doesn't know what he is talking about.
He's your grandpa
agree
Lone chad💯👏. Get them with their end-of-the-world theories🦾🤖💙