@Numenta Is this a theory or a hypothesis? What evidence supports it, besides the analogy?
This is a theory. The analogy was used in this short video to help explain it, but the full details and supporting evidence are outlined in the peer-reviewed paper the video supports: www.frontiersin.org/articles/10.3389/fncir.2017.00081/full
For a human, there's more than simply locational touch; there's also temperature, surface texture, relative flexibility, and potentially "sharpness," since there might be prickly things out of sight. Temperature is interesting because humans can read not just the absolute temperature but also the loss or gain through thermal conduction. That might combine with surface texture to make a stronger signal. Relative flexibility would let you immediately tell the aluminum can from the ceramic cup, which is rigid. If you were allowed to pick up the object, you'd also get a sense of its mass.
So you may have multiple neural types contributing to different columns to speed the analysis.
Hi Stephen,
Yes, you're right that humans take input through multiple senses. The purpose of this demo was to illustrate the concept of a location signal using a two-layer model, and to show that we're processing not only sensory input but location input as well. When sensory input comes in, whether it's sharpness, texture, or flexibility, we know where on the object that sensory information is. We encourage you to post this comment on the HTM Forum for further discussion: discourse.numenta.org
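A minimal sketch of that two-layer idea (not Numenta's actual HTM implementation; the objects, locations, and feature names below are made up for illustration): the input layer receives (location, feature) pairs, and the output layer keeps only the learned objects consistent with everything sensed so far.

```python
# Toy illustration of pairing a location signal with sensory features.
# Hypothetical objects, each defined as a mapping from location to feature.
objects = {
    "coffee_cup": {(0, 0): "curved", (0, 5): "rim",  (3, 0): "handle"},
    "soda_can":   {(0, 0): "curved", (0, 5): "rim",  (3, 0): "curved"},
    "cereal_box": {(0, 0): "flat",   (0, 5): "edge", (3, 0): "flat"},
}

def infer(touches):
    """Return the set of objects consistent with a sequence of
    (location, feature) observations from a single finger."""
    candidates = set(objects)               # output layer: every object is possible at first
    for location, feature in touches:       # input layer: one touch at a time
        candidates = {
            name for name in candidates
            if objects[name].get(location) == feature
        }
    return candidates

# One touch is ambiguous; a second touch at a known location resolves it.
print(infer([((0, 0), "curved")]))                      # {'coffee_cup', 'soda_can'}
print(infer([((0, 0), "curved"), ((3, 0), "handle")]))  # {'coffee_cup'}
```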
@Numenta lovely answer
But how is the location estimated?
It seems the basic assumption is that there is some mechanism that can track the position of the finger. What would that be? Perhaps the neurons that control the movement itself?
Need more elaborate videos, but this is also very good.
Hi Arsalan,
Our co-founder Jeff Hawkins recently gave a talk at MIT, where he goes deeper into sensorimotor inference with multiple columns at 30:22 of this video: th-cam.com/video/yVT7dO_Tf4E/w-d-xo.html
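One hedged reading of the answer in that talk is path integration: the location signal isn't sensed directly, but is updated internally from copies of the motor commands that move the finger. A toy sketch of that idea (the coordinates and movements are invented for illustration, not taken from the paper):

```python
# Minimal path-integration sketch: track the finger's location on the object
# by accumulating the movements commanded by the motor system.
def path_integrate(start, movements):
    """Return the estimated location after applying each (dx, dy) movement."""
    x, y = start
    for dx, dy in movements:      # each motor command says how the finger moved
        x, y = x + dx, y + dy     # the location estimate is updated internally
    return (x, y)

# Starting at the rim of the cup and sliding down, then around to the handle:
location = path_integrate(start=(0, 5), movements=[(0, -5), (3, 0)])
print(location)  # (3, 0) -- the predicted location of the next touch
```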
@Numenta your comments deserve gold medals
Brilliant!
Things really are shifting fast on the AI front. It feels like we just left the elbow of the exponential learning curve, which makes me wonder how far behind the AI itself is.
The research by Jeff Hawkins, "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World," goes hand in hand with what Doris Y. Tsao presents in her work on the neural mechanisms of vision.