RIP Matt. I have watched all the videos in this series; your intro to Numenta's work inspired me to study neuroscience.
Although I have never met you in person or ever communicated with you, it was sad and shocking to hear the bad news.
He was a great guy. Matt was my uncle.
I just discovered this video series today, and now you tell me he is dead. How sad. He really seems like a wonderful person, and I learned so much.
What a wonderful teacher! 'Thank you' cannot express my appreciation. Matt will be forever missed and remembered. Rest in peace.
I recently finished the Thousand Brains book and just finished all of Matt's videos. The videos had great graphics and explained the book in much more detail. It's been a total mind-blow! Finally, a theoretical framework that seems to fit much of the neuroscience I learned back in grad school. I have been re-inspired. Thank you Matt and Jeff!
Miss you Matt. At least in a few hundred of my Thousand Brains, you continue living on and your legacy of education continues.
"The hierarchies in our brains are not deep, They are wide..." Love it. Seems to me like deep architectures work well in highly supervised environments, and not much else.
Definitely. Actually, I kind of feel like they don't work well at all; they're just the best working model we have at the moment, haha. I think these guys are onto something, though. Listening to Jeff Hawkins' talk... they're starting to see the bigger picture... The recent sparse network results are very impressive too.
Deep respect! And thank you very much for your hard work on this video series!
Thank you so much for sharing this! Please continue!
Thank you for this fantastic 16-episode trip. I am very enthusiastic about Numenta's theories. I heard about sensorimotor theories long ago (1997), and your theory just seems to make it all come together into a coherent whole.
You are also a great YouTuber; you have incredibly good pedagogy (interactive figures, tone of voice, examples).
Very nice of you to say, thank you!
Wonderful stuff. Very easy to understand.
Omg, Omg, Omg a new HTM School video. Squeeeeeeeeel!
Thanks for everything, Matt. I'm very grateful to you for all the knowledge that you shared with us. Wherever you are, my sincere thanks.
OMG, you made it so easy to understand, I can't believe it! Looking forward to more videos like this, thank you!
I feel that the HTM School videos were cut short. I think more videos were planned but something happened.
Does anyone know about this?
Great idea and cool video!
Beautiful! Love the information and perspective. Thanks!
A great teacher. Thank you for your work. RIP.
It seems like our vision is able to automatically scale things? Like, we can recognize the same shape at different sizes within our field of view. Would this be achieved by recognizing the presence of a collection of angles?
Wonderful video!!!
What I don't understand is how different neurons communicate in this fashion if they are on opposite ends of the neocortex. Or will they always be in a similar location?
Great presentation, gets my brain working.
Thank you very much! It is so interesting! Why might this be the last video, though?
I’m not sure what to do next.
1:45 Wide and shallow within the brain? I am going to say wide and deep, inasmuch as the real extent includes your entire nervous system.
A cortical column is wide and shallow, sure.
I.e., the inputs do matter a great deal and are themselves layered.
And I live by a firehall and a highway. I can relate.
Will nupic.geospatial be available in Python 3?
Unfortunately, no. But it could be ported to htm.core. Join our forum to investigate. That's where people work on this stuff. discourse.numenta.org/
RIP Matt... That last part, where you were not sure when you would have another video, was just so bizarrely predictive. That's how life happens; you just don't know.
This theory reminds me of a ResNet, where you keep the information from the lower level and concatenate it with the deeper level to get outputs at different 'resolutions'.
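To make the analogy concrete, here is a minimal NumPy sketch of a skip connection that carries a lower-level representation forward and concatenates it with a deeper one. The layer sizes, weights, and function names are all illustrative; this is not Numenta or ResNet code:

```python
import numpy as np

def layer(x, w):
    """One dense layer with ReLU; purely illustrative."""
    return np.maximum(0, x @ w)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)          # low-level input features
w1 = rng.standard_normal((16, 16))
w2 = rng.standard_normal((16, 16))

low = layer(x, w1)                   # lower-level representation
deep = layer(low, w2)                # deeper representation

# Skip connection: keep the low-level information alongside the
# deep output, so the next stage sees both 'resolutions' at once.
combined = np.concatenate([low, deep])
print(combined.shape)                # (32,)
```

Strictly speaking, standard ResNets add the skip path rather than concatenating it (concatenation is closer to DenseNet), but either way the lower-level information survives to the output.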
I love this guy! The best man for this challenging work!
I love you too David :)
Excellent, where's my diploma? Matt, you have a gift.
Love you guys!
I am very curious as to how pareidolia factors into this.
In Matt's interview with Florian Fiebig, they talk about attractors. Some sensory input patterns might accidentally trigger attractors. Usually you just focus attention until you resolve the ambiguity, and it goes away (that's not a face, it's my coat). th-cam.com/video/_CBjb_kQElA/w-d-xo.html
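For a concrete picture of what "triggering an attractor" means, here is a toy Hopfield-style network in NumPy: a stored pattern acts as a basin that an ambiguous input can accidentally fall into. All values are illustrative; this is not Numenta code:

```python
import numpy as np

# One stored pattern acts as an attractor (a basin the state falls into).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern)       # Hebbian weights
np.fill_diagonal(W, 0)

state = pattern.copy()
state[[1, 4]] *= -1                  # corrupt two bits: the "ambiguous" input

for _ in range(5):                   # update until the state settles
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: the input snapped to the attractor
```

Resolving the ambiguity with attention would roughly correspond to extra input pushing the state back out of the wrong basin.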
Thank you, more than these words can say
So essentially it's an update of Selfridge's pandemonium (and later Dennett's multiple drafts, which paid no homage to Selfridge, who was working in the same institution at the time) with some hierarchy.
But cortical columns are NOT doing feature detection; they are performing complete object recognition.
@@NumentaTheory If I recall the original articles on pandemonium, yes, there were feature demons, but there were also whole-letter demons (it was targeted at character recognition). And I'm referring to it in particular, not for specifics. That's why I'm OK associating it with the multiple drafts model, which takes that right up to consciousness, with many areas of the brain generating drafts of it at the same time. It's really just a generalization of pandemonium, though. In your case you're focusing on cortical columns instead of demons or drafters.
Hmmmm. So do I understand this?... Any input firing patterns or sequences that seem similar generate a new 'hierarchical layer' representation, which is just another input to another layer or column, so this recursive "bundling of similarity" becomes a hierarchy of consensus; this generates columns that respond to and vote on abstractions to "model" the world in more predictable ways (because predictions preserve connections). The utility of this to a biological species can go many ways, because the general mechanism/algorithm of a sparse hierarchical feedback loop (HTM) can be applied with different inputs and connections to do different things. How broad or deep the recursive bundling goes might be structural/innate, or very different for different individuals or species. Recursive bundling of similarity becomes a hierarchy of consensus, which leads to the ability to predict, which is a key function of learning responses, motor skills, and memory of abstractions.
I will respond when I can on HTM Forum at discourse.numenta.org/t/thousand-brains-theory-hierarchy-episode-16/6908/3?u=rhyolight
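As a rough sketch of the "hierarchy of consensus" idea in the comment above: if each column keeps a set of candidate objects consistent with its own input, voting can be modeled as an intersection across columns. All names here are made up for illustration; this is not how Numenta implements it:

```python
# Toy "thousand brains" vote: each column proposes candidate objects
# consistent with its own sensory patch; consensus is their intersection.
columns = [
    {"mug", "bowl", "can"},    # column A's candidates
    {"mug", "can"},            # column B's candidates
    {"mug", "bottle"},         # column C's candidates
]

consensus = set.intersection(*columns)
print(consensus)               # {'mug'}: the columns agree by "voting"
```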
Love it, thank you :)
The scattered minicolumns in the video are triggering my trypophobia 🥶🥶🥶🥶🥶
Blooper reel? A la Dr. Becky
I don't know Dr. Becky, but blooper reels are as old as television!
Seems like a crypto network?!