Fantastic seeing a more grounded explanation of the system. And I agree with Jeff that the way it was presented made it much easier to understand your work, which is super complex. Well done ;)
Thanks Miguel! :)
Super presentation
Thank you
Hopefully we can expect a sequel, to see how far we have moved along the cloudy path :)
Haha yes, we do have Chapter II of the Legend of Monty :)
Lovely presentation!
Glad you liked it!
This was great, a very clear explanation. One thing I'm trying to understand is, how is the system learning? Is there something like gradient descent or other optimization being used? Also, is the common communication protocol something like a joint embedding space shared by all the modules?
Good question! This system is different from deep learning approaches, and because it is a completely new form of AI there aren't many parallels to gradient descent or the other optimizations used in that field. Monty moves its sensors to explore an object, and as it explores, it builds a representation in the sensor's attached learning module. The common communication protocol (now called the Cortical Messaging Protocol, CMP) shares one goal with a joint embedding space, namely allowing multimodal transfer, but it also lets learning modules vote together to form a consensus about what is being sensed (along with a host of other functionality). If you like, you can also ask more questions about the CMP over on our Discourse forum - thousandbrains.discourse.group
As I understand it, it associates patterns of displacements with objects based on evidence. As she mentioned in the video, I think they are still doing that supervised, but at some point it will be unsupervised. Right now (or back in 2022) they were focusing on the initial object-recognition part. There is no gradient descent. The communication protocol is the voting protocol between mini-columns (each of which covers a little window on the object). They each have their own perception of the object based on evidence, and they vote; the highest vote wins, and that's perceived to be the object. At least that's what I understood; I may be wrong.
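The voting step described above can be sketched roughly like this. This is a minimal illustration only, not the actual Monty implementation: the pooling-by-summation rule, the object names, and the function name are all my assumptions.

```python
# Illustrative sketch only -- NOT the actual Monty code. Assumes each
# "learning module" keeps an evidence score per candidate object, and that
# votes are pooled by simple summation across modules.
from collections import Counter

def vote(per_module_evidence):
    """Pool evidence from several modules and return the winning object.

    per_module_evidence: list of dicts mapping object name -> evidence score.
    """
    totals = Counter()
    for evidence in per_module_evidence:
        totals.update(evidence)           # sum evidence across modules
    winner, _ = totals.most_common(1)[0]  # highest combined evidence wins
    return winner

# Three modules, each with its own partial view of the object:
modules = [
    {"mug": 0.9, "bowl": 0.4},
    {"mug": 0.7, "bowl": 0.6},
    {"bowl": 0.5, "mug": 0.8},
]
print(vote(modules))  # -> mug
```

In the real system the voting is reportedly richer than a single argmax (modules exchange hypotheses over multiple steps), but the core idea of combining per-module evidence into a consensus is the same.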
Sorry, I wrote a response but didn't post it!
Good question! This system is very different from deep learning approaches, and because it is a completely new form of AI there aren't many parallels to gradient descent or the other optimizations used in that field. Monty moves its sensors to explore an object, and as it explores, it builds a representation in the sensor's attached learning module. The common communication protocol (now called the Cortical Messaging Protocol, CMP) shares one goal with a joint embedding space, namely allowing multimodal transfer, but it also lets learning modules vote together to form a consensus about what is being sensed (along with a host of other functionality). You can also ask more questions about the CMP over on our Discourse forum - thousandbrains.discourse.group
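To make the multimodal-transfer point concrete, here is a rough sketch of the kind of message such a protocol implies. The field names and types are my assumptions, not the actual CMP specification; the point is only that every module emits the same message shape (a feature observed at a pose), so any module can consume any other module's output regardless of the sensory modality it came from.

```python
# Illustrative sketch only -- field names are assumptions, not the real CMP.
# Key idea: touch and vision modules produce the same message type, which is
# what makes cross-modal voting and transfer possible.
from dataclasses import dataclass

@dataclass
class CMPMessage:
    location: tuple      # where on the object the feature was sensed (x, y, z)
    orientation: tuple   # local orientation of the sensed surface
    features: dict       # modality-specific feature values in a common wrapper

# A touch-based module and a vision-based module emit the same message type:
touch_msg = CMPMessage((0.1, 0.0, 0.3), (0.0, 0.0, 1.0), {"curvature": 0.2})
vision_msg = CMPMessage((0.1, 0.0, 0.3), (0.0, 0.0, 1.0), {"hue": 0.6})
```

Because both messages share one schema, a downstream learning module can treat them uniformly when accumulating evidence and voting.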