Wittgenstein went a few steps beyond saying that the meaning of a word is the contexts in which it is used. Yes, the passage from Augustine about learning words in childhood -- which Wittgenstein debunks (Chomsky seems to have missed the irony) -- is our entry into the Philosophical Investigations. But Wittgenstein goes on -- having begun in the Blue and Brown Books (chronology?) -- to discuss language games, a topic that would not have been necessary if Wittgenstein's only big idea was that meaning equals context. (Here I skip W's stonemason game because it has never seemed very clear to me.) Instead I want to look at W's allusions to counting. I don't have the Investigations at hand, so I'm not sure exactly when and how this comes up, but it is something in the nature of this: a person counting, say, reeds first picks one up in, say, the left hand and, while continuing to hold the reed, uses the right hand to make a mark with a small stone in a soft clay tablet. The person puts the first reed down onto a different pile, picks up another one, turns to the clay tablet, and makes another mark, maybe making one little chanting sound for picking up the reed and perhaps a different sound for pressing the stone into the clay. This continues in alternation until the reeds have all been tallied in clay. The hardened clay tablet is later sent along to someone who perhaps cleans and cuts the reeds.
Now imagine that the person counting must impart this skill to a young apprentice. If the youth watches the journeyman do this back-and-forth hand-to-eye-to-hand and sound-to-sound routine, then (and here is the most important point) every awake human animal apprentice will master that dead-simple routine immediately and forever. And there, I claim, is a plausible first step in language -- to see an act, hear a sound, imitate both, and thus join the world of contextual meaning with the world of action -- not perception, but action.
"Monkey see, monkey do."
If the primate in question has dexterity, the right vocal apparatus and perhaps a certain kind of concentration, the worlds of language and action (or "meaning") touch.
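The tallying routine described above is, at bottom, a one-to-one correspondence: one physical act (move a reed) paired with one symbolic act (press a mark into clay). A toy sketch of that pairing, purely illustrative (the names here are my own invention, not from Wittgenstein):

```python
def tally_reeds(reeds):
    """Move each reed to a counted pile, making one clay mark per reed."""
    counted_pile = []
    clay_marks = 0
    for reed in reeds:
        counted_pile.append(reed)  # the physical act: "monkey do"
        clay_marks += 1            # the symbolic act: one mark per reed
    return counted_pile, clay_marks

pile, marks = tally_reeds(["reed"] * 7)
print(marks)  # 7 -- one mark on the tablet per reed moved
```

The point of the sketch is only that the count lives in the correspondence between the two actions, not in either one alone.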
And that's why true AGI will be embodied (in the sense of being able to simultaneously perceive and take actions in its environment). A chatbot stuck in a box falls shy of this; an AI agent, however, will do exactly this, whether "embodiment" means a physical robot body, a more amorphous or variable physical apparatus, or simply API-based computational tools.
The Yann LeCun clip is fantastic! He was as wrong then as he is today. Yes, he provides an alternative perspective, which is valuable, but he's still quite wrong. You can ask that question of any SoTA LLM today, and they provide fantastic, logical, correct answers, suggesting correct real-world mental models.
I don't think Yann is wrong. I think he likes to say things in provocative ways. We don't have a model that can do what a cat does, but we also don't have cats that have passed medical exams. Yann is someone we need to listen to with precision.
@@dr.mikeybee Precision, or the ability to use your own knowledge to "read between the lines" and discern more than Yann even intended? He has made statements of incredible doubt regarding the Transformer attention mechanism, which he has not walked back but seems to have softened to "well, we need other paradigms too". Of course we do. But that doesn't mean "Attention Is All You Need" didn't revolutionize the industry and spark a wave of AI innovation, without which we would not have made the same forward strides, hand in hand with economic backing.
So good! Thank you!
The Yann Lecun clip is hilarious
29:40 So we began with calculators, made PCs, and then LLMs came around -- and we lost the function calculators are famous for: calculating.
I feel bad for Noam. There's no Nobel for him.