Truly interesting! I believe my brain is creating new synapses with all these data 🧠
How can these networks be learned? Because if I make the connections beforehand, then it would not be much different from symbolic AI and its knowledge graphs, right? So the power of such a Hebbian system would be that if the agent sees something red and I tell it that this color is red, it needs to form these connections or adjust the synapse weights accordingly, all by itself, to create a relationship between the object and the color. But how would that be implemented so that it has these capabilities? I would imagine it like this: all neurons are very loosely connected (the agent knows nothing), and when exposed to the world, synapse weights start to adapt to form relationships and models of the objects the agent senses. But what are the underlying rules that dictate which synapses are strengthened, which neurons are used in certain tasks, and when to adapt something in the network?
Excellent questions! Consider that a baby's neurons each have many thousands of synapses, and new synapses can be grown between neurons which fire simultaneously. Over time, the connections which represent true relationships will strengthen while others will atrophy.
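A minimal sketch of that idea in Python; the learning rate, decay constant, and network size below are illustrative assumptions, not values from the video:

```python
# Toy network: weights[i][j] is the synapse strength from neuron i to
# neuron j. All constants are illustrative assumptions.
N = 8                # number of neurons
LEARN_RATE = 0.1     # strengthening for co-firing pairs
DECAY = 0.01         # atrophy applied to unused synapses each step

weights = [[0.0] * N for _ in range(N)]

def hebbian_step(fired):
    """fired: set of neuron indices that fired this time step."""
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if i in fired and j in fired:
                # Neurons that fire together wire together.
                weights[i][j] = min(1.0, weights[i][j] + LEARN_RATE)
            else:
                # Connections that don't represent true relationships atrophy.
                weights[i][j] = max(0.0, weights[i][j] - DECAY)

# Example: neuron 0 ("red patch") and neuron 3 (the word "red") co-fire.
for _ in range(20):
    hebbian_step({0, 3})
print(round(weights[0][3], 2))  # strong link: 1.0
print(round(weights[0][5], 2))  # never co-fired, stays at 0.0
```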
And, yes, I believe that a knowledge graph is a better representation of intelligence than an ANN, and I have shown that a graph can be implemented in neurons. It may be that the observed "cortical columns" represent the repeating pattern of multiple neurons representing single graph nodes.
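For illustration only, here is one hypothetical way a graph edge could be laid onto neuron clusters; this is a generic sketch, not the author's actual implementation:

```python
# Hypothetical sketch: a graph node as a small neuron cluster (a
# "cortical column"), and a graph edge as synapses between clusters.
NEURONS_PER_NODE = 4  # assumed cluster size

class GraphNode:
    def __init__(self, label, base_index):
        self.label = label
        # Indices of the neurons that jointly represent this node.
        self.neurons = list(range(base_index, base_index + NEURONS_PER_NODE))

def link(weights, src, dst, strength=1.0):
    """Create the edge src -> dst as synapses between the two clusters."""
    for i in src.neurons:
        for j in dst.neurons:
            weights[(i, j)] = strength

weights = {}  # sparse synapse map: (pre, post) -> weight
ball = GraphNode("ball", 0)
red = GraphNode("red", NEURONS_PER_NODE)
link(weights, ball, red)   # "ball --is--> red"
print(weights[(0, 4)])     # 1.0: the clusters are now wired together
```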
@FutureAISociety May I ask whether you have ever tried to create such a network in a "random" manner instead of having it pre-designed? That is, the neurons and edges are randomly created with random properties that are then adjusted based on the "response". Maybe I am wrong, but otherwise you would have to create tons of neurons and edges for almost every possible combination of the information it tries to handle.
I would also like to know whether, instead of defining an "order of brightness" in advance, you could have a "function" that decides which color is brighter based on its properties. In this way, solving such problems would need fewer resources (neurons, edges, firings, and so computation) and would generalize well. I think these functions could be built up from atomic functions like add/multiply/etc. May I ask you to share your thoughts on this topic?
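A sketch of what such a "brighter" function might look like, built from atomic arithmetic; the Rec. 601 luminance weighting is an assumption chosen for illustration:

```python
# One reusable function replaces a pre-enumerated "order of brightness":
# brightness is computed from the color's properties on demand.

def luminance(rgb):
    """Approximate perceived brightness (Rec. 601 weighting, assumed here)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def brighter(color_a, color_b):
    """Return the brighter of two (r, g, b) colors."""
    return color_a if luminance(color_a) >= luminance(color_b) else color_b

print(brighter((255, 0, 0), (0, 0, 255)))  # red beats blue: (255, 0, 0)
```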
@rekasil You get further by making defined clusters of neurons with explicit internal connections and more random connections between them.
One significant difference between the brain and an artificial counterpart is that the artificial system can "create" new connections (or neurons) at only slightly more cost than adjusting the weight of an existing connection. Therefore, pre-allocating oodles of random connections is less valuable.
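A sketch of why that is: with a sparse synapse map, creating a connection on first use costs about the same as adjusting an existing one (names and constants below are illustrative):

```python
synapses = {}  # sparse synapse map: (pre_neuron, post_neuron) -> weight

def strengthen(pre, post, amount=0.1):
    # Creates the connection if it doesn't exist; otherwise adjusts it.
    # Either way this is one hash lookup and one write, so nothing is
    # gained by pre-allocating huge numbers of random connections.
    synapses[(pre, post)] = min(1.0, synapses.get((pre, post), 0.0) + amount)

strengthen(0, 3)   # connection created on first use
strengthen(0, 3)   # same cost to adjust it afterward
print(synapses)    # {(0, 3): 0.2}
```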
@FutureAISociety I see. Thank you! But in order to generalize and improve, the clusters should not be pre-created; rather, they should be built as a result of simulations. Otherwise the system cannot evolve. But maybe I am wrong. I think evolution is crucial, and in a simulated environment it can be much faster than in the real world. (I know this kind of AI is more dangerous than Lego-ing building blocks together and keeping them under control.)
Great idea. I just had a question: could these kinds of relationships be implemented for other senses like hearing? Then they would have different relationship neurons, for something like pitch or frequency instead of color.
Exactly... I didn't address the idea that, because various areas of the brain handle different types of data, the pitch and volume relationships are local to the auditory cortex while the color and shape relationships are local to the visual cortex. From a programming-theory point of view this makes no difference, but from a practical point of view it makes things easier by limiting the range of connections needed by any individual relationship cluster.
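A toy sketch of that locality constraint, with assumed modality labels:

```python
# Each relationship cluster only wires within its own modality, which
# bounds the connections any cluster needs. Labels are assumptions.
MODALITY = {
    "pitch": "auditory", "volume": "auditory",
    "color": "visual",   "shape": "visual",
}

def can_connect(feature_a, feature_b):
    """Relationship clusters only form within a single modality."""
    return MODALITY[feature_a] == MODALITY[feature_b]

print(can_connect("pitch", "volume"))  # True: both auditory
print(can_connect("pitch", "color"))   # False: crosses modalities
```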
I also have another question: how would logical decision making and, more generally, planning be implemented with Hebbian synapses?
The two capabilities you mention are several layers more complex than the process presented in the video. But the nut is that with the simple capabilities, you can implement a Universal Knowledge Store, and with a UKS, you can implement planning and decision making. BTW, *logical* decision making is just an extension of general reinforcement learning, where one selects the best outcome based on previous experience. Logic is a set of learned "correct" outcomes.
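A sketch of that last point: choosing the action whose remembered outcome was best. The situation/action encoding and learning rate here are assumptions, not the UKS design:

```python
from collections import defaultdict

# Remembered outcome values: (situation, action) -> learned value.
experience = defaultdict(float)

def record(situation, action, outcome_value, rate=0.5):
    # Blend the new outcome into what was learned before (rate assumed).
    key = (situation, action)
    experience[key] += rate * (outcome_value - experience[key])

def decide(situation, actions):
    """Select the action with the best outcome in previous experience."""
    return max(actions, key=lambda a: experience[(situation, a)])

record("hungry", "eat", +1.0)
record("hungry", "sleep", -0.5)
print(decide("hungry", ["eat", "sleep"]))  # -> "eat"
```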
🥳