I believe that to say a child learns about gravity, liquids and so on without supervision is to take a narrow view of what constitutes supervision. The child's body, and the environment in which it lives, provide extremely rich "supervision" in the sense that, for example, a child who cannot manage basic operations involving gravity cannot move around much. A child who does not have some grasp of how liquids behave will not get a lot of juice from a cup -- most of it will be spilled. Thus questions along the lines of "Am I doing this right?" or "What can I do with this object?" are actually answered in the strongest possible terms by the child's body and his/her environment -- which incidentally are not separate at that age and arguably, in some sense, can never be disentangled from the point of view of the brain.
GAI needs a generalized environment. That is deep reinforcement learning.
Thank you for your reply, and thank you for an excellent video, easily the best I've seen on the subject of machine learning.
I suggest that -- outside the realm of circumscribed tasks such as photo identification or text classification -- "supervised" vs "unsupervised" constitutes a shallow dichotomy. Human and animal intelligence are still very different from the (impressive and useful) tasks we can contemplate assigning to computing apparatus now and in the near future.
I raise the point not to nitpick but rather to wave a cautionary flag on the term "generalized intelligence" in open discourse. I think it is too ambiguous to be used without qualification in the context of introducing machine learning to the public.
I agree with your statement: pain, pleasure, gravity, etc. also frame the way a baby learns about the world around it... I do believe in semi-unsupervised learning with a foundation layer that we could feed. (For instance, maybe working through too many examples without reaching a proper solution is "exhausting" -- mimicking the human way of learning -- and some form of penalties and rewards is certainly a good way to tackle semi-unsupervised learning.) Such a layer, like a human being or any base system, could push the learning process in certain directions.
Right now we still need millions of examples. A foundation layer like that (which I really have no idea how to implement...) could save time in the learning process and make the learning genuinely semi-unsupervised, just like us...
Of course the environment provides supervision, but the learner is still unaware of the consequences of each action.
Bengio is so cool
You are too!
Watching it at 2x speed cause I'm used to your videos.
I'm watching it at 2x speed as well xD
Great interview, thank you to you both.
gained a lot from this
Interesting, thank you!
AN IDEA! ---> What if a child playing with LEGO is doing unsupervised learning: the child learns features, but not just image features -- deeper layers learn features representing abstractions. They look like images, but they are simple abstractions/ideas. Now, when we learn something, we get a dopamine kick. Also, there is not an infinite number of abstractions to learn in low-level LEGO construction. There are just a few ideas the child will find, like putting gears together and learning how to exchange speed for power. So just the process itself of finding the few abstractions that exist in low-level LEGO building may finally generate a dopamine kick, and that is motivation enough for the child to do unsupervised reinforcement learning.
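The "dopamine kick for finding a new abstraction" idea resembles what the reinforcement learning literature calls count-based intrinsic motivation: a novelty bonus that is largest the first time something is discovered and decays with repetition. A minimal sketch, assuming we can represent each discovered abstraction as a hashable label (the labels and the 1/sqrt(count) decay are illustrative choices, not from the video):

```python
from collections import Counter
import math

class NoveltyBonus:
    """Count-based intrinsic reward: the first time an 'abstraction'
    (here just a hashable label) is discovered, the bonus is largest;
    it decays as the same discovery is repeated."""

    def __init__(self, scale=1.0):
        self.counts = Counter()  # how many times each abstraction was seen
        self.scale = scale

    def reward(self, abstraction):
        self.counts[abstraction] += 1
        # 1/sqrt(n) decay: novel discoveries pay most, repeats pay less.
        return self.scale / math.sqrt(self.counts[abstraction])

bonus = NoveltyBonus()
r1 = bonus.reward("gears-mesh")  # first discovery: full bonus
r2 = bonus.reward("gears-mesh")  # repeat: smaller bonus
```

An agent maximizing this bonus alone would be driven to keep finding new abstractions, which is one concrete reading of the "motivation enough" claim above.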
Learned a lot
Not simplifying or rounding fractions opens up infinitely many representations -- e.g. 1/3.
Why do you keep touching Bengio's hand?
Could you help me with my question? How do I extract higher-level features from a stacked autoencoder? I need a simple explanation.
I need to understand how the input can be 30 and the output 28 in a stacked autoencoder.
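On the "input 30, output 28" question: in one layer of a stacked autoencoder, 30 is the input dimension and 28 is the size of the hidden code the encoder produces; the decoder maps that 28-dim code back to 30 for reconstruction during training. Stacking means the 28-dim code becomes the input to the next layer, whose hidden units are the "higher-level features". A minimal NumPy sketch, assuming the 30/28 sizes from the question (the second layer's size of 26 is just an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class AutoencoderLayer:
    """One layer of a stacked autoencoder.
    encode: n_in -> n_hidden (the smaller 'output'),
    decode: n_hidden -> n_in (reconstruction, used only for training)."""

    def __init__(self, n_in, n_hidden):
        self.W_enc = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b_enc = np.zeros(n_hidden)
        self.W_dec = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b_dec = np.zeros(n_in)

    def encode(self, x):
        return sigmoid(self.W_enc @ x + self.b_enc)

    def decode(self, h):
        return sigmoid(self.W_dec @ h + self.b_dec)

# Stacking: the 28-dim code of layer 1 is the input of layer 2.
layer1 = AutoencoderLayer(n_in=30, n_hidden=28)
layer2 = AutoencoderLayer(n_in=28, n_hidden=26)

x = rng.random(30)       # one 30-dim input vector
h1 = layer1.encode(x)    # 28-dim features (the "output 28")
h2 = layer2.encode(h1)   # 26-dim higher-level features
x_rec = layer1.decode(h1)  # 30-dim reconstruction of the input
```

In practice each layer is trained to make its reconstruction match its input (weights here are just random for the shape demo); after training, you discard the decoders and read the top layer's code as the extracted higher-level features.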
that moment when kids try to control their parents...
Just hotwire it to a couple braincells and we've got gold. Or you can wirelessly beam it to mine, and we've got fools gold 😎
New York not accent
About as much scientific rigor as alchemy
Om shanti k good day please
I downloaded this shit.