To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov. You’ll also get 20% off an annual premium subscription.
Artem, first of all, what a great video! I love the animations and how you are able to make this topic so intuitive!
I would like to chat with you about an idea I'm developing about creating a new physics-derived mathematical model of a neuron's physiology in 3 dimensions.
Your help would be greatly appreciated!
Can I contact you in any way?
I am not a neurologist, but as a physicist I really enjoyed this and your previous video. It is always a great feeling to gather new knowledge.
That meta-revelation when you realize that aggregations of billions of neurons like these move through those multidimensional manifolds to understand themselves, to write that book about themselves, to create this amazing video about themselves, etc. ❤
I've always found it strange that phase space diagrams often resemble neurons themselves.
Don’t underestimate these allies within us:
m.th-cam.com/video/SbvAaDN1bpE/w-d-xo.html
@@erawanpencil Well because it is all signals. Encoding information into anything that oscillates is really easy.
The animations are so gorgeous. This is what I always dreamed of as a kid - I wanted to *see* into things and see their hidden structures and dynamics.
Thank you!!
Absolutely 😊❤
What software is being used? The designs are very beautiful!
@@ArtemKirsanov Is this video made with Manim?
@@jm3279z3 I'm not 100% sure, but I think it's made with Manim, a python library made by youtuber @3blue1brown for making videos visualizing math.
15:05 I was just thinking it'd be really funny if the two points just outright exploded upon merging, but thought nah, this video seems really serious to do something silly like that. Had to laugh out loud when the explosion really came in.
Keep up the good work, really really nice visualizations!
Thank you!!
I thought the same thing when editing the video, so I added the explosion :D
The timing of this one is impeccable.
Some questions:
1. If the history of past inputs is crucial, how far back (in seconds) does it still matter?
Can the input let's say 10 seconds ago still matter for the neuron's output?
2. Will anything interesting happen if the external current is periodic?
3. After a neuron's state enters a limit cycle, how can it escape? Surely that repetitive firing can't be sustained forever, especially if nutritional requirements are considered
4. What kinds of new feature would arise if this "memory effect" is incorporated into artificial neural networks?
Those are some good questions 🍿
📌🧐
I don't know much about neuroscience, but as a physicist I feel I'm qualified to at least give these a stab.
1. For a neuron with a 'resting state' stable attractor, this will depend on the decay time. It's probably different for different neurons. I, too, would love to know the answer for the types of neurons discussed in the video. For a bistable neuron, it could have started off in its resting state months or years ago, and then suddenly pushed into its firing state, and stayed there forever - so in theory, the history could matter as far back as you want if you pick a neuron that's been "neglected" like this.
2. Probably. Based on the animations in the video it looks like the neurons' dynamics have a range of possible frequencies based on the input, which would make it difficult to apply the theory of resonance directly. I suspect you can get a whole range of interesting behaviour by tuning the external current's frequency and amplitude. Personally I've done some work with phase-locked loops, an electronic dynamical system that has some cool emergent properties when you make the input periodic.
3. If it is a bistable neuron, it can escape back to the rest state given a well-timed input. Some neurons can't escape the limit cycle unless you change the external current, as you said. In your brain and around your body there are lots of neurons and plenty of them are firing repeatedly - just think about how many neurons have to fire when you see a still image (at least a few per cone cell in your retina). Each firing uses only a small amount of nutrients, because individual neurons are tiny. But the external current to a single neuron will depend on other neurons, so it might fire repetitively only while a certain computation is being made, and then suppressed. Idek if entire computations in the brain have been mapped to that level of detail, though.
4. Great question. The ML research community and I would also love to know.
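A minimal sketch of points 1 and 3 above, using a FitzHugh-Nagumo-style toy neuron with textbook parameters rather than the exact model from the video: a brief sub-threshold current pulse should relax back toward rest, while a stronger, well-timed pulse should cross the threshold and produce a full spike excursion.

```python
import numpy as np

# FitzHugh-Nagumo toy neuron (textbook parameters; not the model from the video).
# dv/dt = v - v^3/3 - w + I,   dw/dt = eps * (v + a - b*w)
a, b, eps = 0.7, 0.8, 0.08
dt, T = 0.01, 200.0
steps = int(T / dt)

def run(pulse_amp, pulse_start=20.0, pulse_len=1.0):
    v, w = -1.20, -0.62              # start near the resting equilibrium
    v_trace = np.empty(steps)
    for i in range(steps):
        t = i * dt
        I = pulse_amp if pulse_start <= t < pulse_start + pulse_len else 0.0
        dv = v - v**3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        v_trace[i] = v
    return v_trace

weak = run(pulse_amp=0.2)    # small kick: should relax back toward rest
strong = run(pulse_amp=1.0)  # larger, well-timed kick: should produce a full spike
print(f"peak v after weak pulse:   {weak.max():.2f}")
print(f"peak v after strong pulse: {strong.max():.2f}")
```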
@@impal0r945 If I may ask, what kind of physics is your area of specialty?
Marvellous, the landscape of neuronal dynamics opens up and shows off its fundamental secrets. Thank you, and thanks to Mr. Izhikevich for his studies.
I've recently found your channel and it is amazing, super inspiring in terms of the quality of the animations and presentation style - very cool!
This is the best applied dynamics presentation I've seen in video, text, books.. anywhere.
Kudos!
Thanks a lot!!
Wow I'm impressed at how fast you are making videos with this level of animation and editing. Great work, hope you don't get burnt out though like I've seen some other youtubers do from the pressure of wanting to satisfy their audience. Maybe you've gotten efficient in your workflow so it doesn't take that much time. Anyway, cool video.
Thank you!!
had the same thought! the quality and frequency is amazing!
Maybe artificial neural computations are used to aid his neural computations here and there ;-)
the video is really helpful for understanding the big picture behind these diff equations
Yay! This is great!
Have you ever considered doing a video topic on Active Inference? Seems like a cool topic to add visualisation to how statistical physics combines with theoretical neuroscience.
Yes!! In fact, the idea about Active Inference started the mini-series on Hopfield nets, Boltzmann machines and Cross-Entropy, as stepping stones for that.
I’ll for sure make the active inference video at some point in future, currently I’m still doing background literature mining to understand it myself :D
@@ArtemKirsanov Nice! I'd also be interested in active inference and related topics like Friston's free energy principle!
I have recently finished my MSc in Computer Science, with a focus on data science and a particular interest in deep learning. While I do find those interesting too, I have, as an extracurricular activity, spent time learning about spiking neural networks and neuromorphic computing, as I find these more biologically plausible models much more fascinating than the "run-of-the-mill" Hebbian/rate-coding-based models.
While there are many educational videos on conventional deep learning, intended to help you visualize its behavior, from the likes of 3b1b and many others, there is a severe lack of videos on spiking networks and neuromorphic computing, and all the learning I've done has had to rely only on textbooks and papers.
I'd like to thank you for spending so much time on righting that terrible wrong with videos of such high quality, and allowing me and others to gain a much better intuition into these topics.
I did my degree in neuroscience and this concept was always super difficult for me to visualize. Especially because it was just undergrad and I wasn't very practiced with differential equations or with understanding what the math was actually describing. This video just blew my mind and literally made this concept so much clearer to me. Keep up the great work man 👍
That’s really great to hear! I’m happy you found it helpful!!
These video drops make my week
Wow, this was an amazing production with its own uniqueness of presentation. I wish at times you were a little bit slower when presenting key concepts or gave more variations to understand them, just to be able to savor them better.
Love these videos, I'm currently taking a class called Complex Adaptive Systems where we simulate these dynamical systems and visualise them in the phase plane! Super cool
Thank you so much Artem, I can't explain how much value your videos have added to my life.
The book mentioned in this video at 24:57 is "Dynamical Systems in Neuroscience" by Eugene M. Izhikevich.
Amazing demonstration! Hadn't seen such a clear demonstration of neuronal activity before!
The analogy of phase space being water and differential equations being the fluid dynamics of that water is brilliant.
I just read a book about system dynamics (on the theoretical math side of things). It's always stunning to me how beautiful dynamical systems described only by ODEs are, second only to the amazing results of PDEs, mainly the NSE.
I respect your clarity and professionalism in these topics.
Phenomenal series Artem! Genuinely impressive work. Please do RNNs next! 🤞
Phenomenal video, as always! You do such a great job of directing the subjects, and the animations are amazing!! Keep up the great work! It's really inspiring.
Fascinating stuff. It gives a rough idea of how some kinds of learning emerge in neurons and the body, when those patterns change as you exercise, tuning them better.
Super interesting and amazingly produced! Congrats!
Incredible presentation of some incredible information!
Thank you!
Wonderful. Huge thanks for this excellent explanation!
I hope that advances in neuroscience will allow us to build more efficient artificial neural networks and neuromorphic computers. Thanks for sharing all this cool knowledge with the world!
@13:03 : "under the same current": Is the current visible anywhere in the state diagram? Does it correspond to a magnitude of (horizontal) perturbation? If so, that would make me understand your comment: small perturbations cause the state to return to the stable equilibrium point; larger perturbations cause it to go into cycling behavior.
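A hedged note on the question above: in the reduced two-variable models typically behind these phase-plane pictures (the video's model may include additional currents), the injected current is not an axis of the diagram. It enters the voltage equation additively, so changing it shifts the V-nullcline, whereas a perturbation is a displacement of the state itself.

```latex
% Generic reduced (V, n) model; the conductances/currents in the video may differ.
C\,\frac{dV}{dt} = I_{\mathrm{ext}} - g_L (V - E_L) - g_K\, n\,(V - E_K), \qquad
\frac{dn}{dt} = \frac{n_\infty(V) - n}{\tau_n(V)}
% Setting dV/dt = 0 gives the V-nullcline; raising I_ext moves this curve,
% which is how "the same perturbation under a different current" can end up
% on either side of the threshold/separatrix:
n_{\text{nullcline}}(V) \;=\; \frac{I_{\mathrm{ext}} - g_L (V - E_L)}{g_K\,(V - E_K)}
```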
You are brilliant at explaining and animating these things
Hello Artem, your videos are great! If one does not already exist, could you make one on neuromodulators and their role in synaptic plasticity, and how the components of the limbic system use neuromodulators to do non-Hebbian learning (i.e. reinforcement learning) of long-term goals? Thank you for your exceptional content!
18:55 idk why but this made me think of it as the Redstone clock of neurons
The analysis of phase planes is fascinating. It seems to rekindle an exploratory feeling inside me, similar to data exploration via regular graphs. Wondering if this is something I can apply in my day-to-day.
As a kid in junior high (year 8), our overly qualified and very elderly math teacher explained how parameters in engineering were shown on coordinate graphs, and thus how, in the old days, solutions for aircraft dynamics sat within the area where multiple parameter constraints intersected. I've used that thought process a lot in my life to very good effect.
There is a class of artificial neural networks that have internal memory, known as recurrent networks. There is a sizable body of theory about these structures. An important subset of this theory is that of Content Addressable Memory.
What's good about this method is that it marks off regions in the state space where the behavior is either stable or tends toward a limit cycle. One would think that these could be specified by formulas taken from the differential equations.
This work is similar to that which was done in aerospace vehicle flight mechanics during the 1980s. But there are larger questions that beg for answers: What are the feedback mechanisms in a neuron that are associated with learning? How are individual neurons assigned to a given function? And one more immediate to Artem's work: How stable (repeatable) is the operation of a neuron once it's learned how to support a given function? How does learning affect this picture? Also, how do the H-H equations tie into learning and memory?
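Since content-addressable memory came up, here is a minimal sketch of the classic Hopfield version, with Hebbian outer-product storage and asynchronous sign-threshold recall; the pattern size and count are illustrative, not taken from any particular source above.

```python
import numpy as np

# Content-addressable (Hopfield) memory sketch: store a few random patterns
# with the Hebbian outer-product rule, then recover one from a corrupted probe.
rng = np.random.default_rng(0)
N, P = 64, 3
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns).astype(float) / N   # Hebbian weights
np.fill_diagonal(W, 0.0)                        # no self-connections

def recall(probe, sweeps=20):
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):            # asynchronous updates settle into an attractor
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0].copy()
noisy[rng.choice(N, size=10, replace=False)] *= -1   # flip 10 of the 64 bits
recovered = recall(noisy)
print("bits matching the stored pattern:", int((recovered == patterns[0]).sum()), "/", N)
```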
It would have been nice to visualise the gradients as the slope of a 3D surface. I didn't know anything about neuro dynamics before. Very interesting and well explained. One can only start grasping the complex behaviour of linking different types of neurons together and how that changes their activation thresholds and patterns 🤯
Very nicely done, it makes me want to study neuronal phase spaces for days.
I barely understand anything going on in this video but it's interesting as hell
Another epic video - well done mate. It is absolutely worthy of your tattoo.
Could you do another video on the added complexities of biological neurons? E.g. I'm really curious as to what the serotonin/cannabinoid (especially this one as I heard it is used to transmit data backwards)/opioid/etc systems do. Especially from a computational point of view.
So cool to see my non linear dynamics class have a real world example
Absolutely mind-blowing! These videos are public service educational MASTERPIECES. Never stop teaching you absolute legend
Very nice! I learned a lot :)
I wonder, if you took the whole 4D dynamics, if you simulated the behavior while having a variety of different external current sources, would the points in the 4D space mostly stay around a particular 2D surface in that 4D space?
Well, I suppose if the system can be well approximated with only one of those 3 channels, then the answer sort of has to be yes, but, I guess I mean…
something like “If you wanted to be a little more precise than the version with voltage and one ion channel, but still wanted to stick to a 2D space, could you do better by picking a surface which is slightly deformed compared to the one being used here?” .
Though, I guess with how the m channel fraction is set to m_\infty , that is already picking a somewhat complicated surface in the 4D space, rather than just setting 2 of the variables to constants.
I guess what I was thinking was like,
“if you set up an auto-encoder which tried to encode (V,n,m,h) as (V,x), and decode back to (V,n,m,h) with minimal error, where the V in the encoding is forced to be the same as the V being encoded, would the best encoding for this be much different than x=n?”
This is a very cool video, thank you
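The auto-encoder idea above is concrete enough to sketch. Below is a minimal PyTorch version of the constrained bottleneck described in the comment; hh_trajectory() is a hypothetical placeholder for whatever Hodgkin-Huxley simulation one would sample (V, n, m, h) from, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class ConstrainedBottleneck(nn.Module):
    """Encode (V, n, m, h) -> (V, x) with V passed through unchanged; decode back to 4D."""
    def __init__(self, hidden=32):
        super().__init__()
        self.encode_x = nn.Sequential(nn.Linear(4, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.decode = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 4))

    def forward(self, s):              # s: (batch, 4) rows of (V, n, m, h)
        V = s[:, :1]                   # first latent coordinate is forced to be V itself
        x = self.encode_x(s)           # learned second coordinate
        return self.decode(torch.cat([V, x], dim=1))

# states = hh_trajectory()             # hypothetical: (T, 4) tensor of simulated HH samples
# model = ConstrainedBottleneck()
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for _ in range(2000):
#     opt.zero_grad()
#     loss = nn.functional.mse_loss(model(states), states)
#     loss.backward()
#     opt.step()
# # Afterwards, compare the learned x against the simulated n to see how far x is from x=n.
```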
Thank you so much. It is very useful for me; the dynamical analysis presentation felt clearer to me than the graphs in papers, and it helped me understand the HH equations further. I would like to ask whether you have any plans for a future video analyzing other neuronal models, like the FHN model? Or a discussion of more bifurcations or fast-slow dynamics?
Reminds me of the tipping points of climate science.
Anyway, loving your neuroscience videos. They're helping me get back into the space. Students have such great resources today.
Fantastic video!
Wonderful. Some things that weren't clear: @11:10 Saddle point trajectories: they all seem to be getting pushed AWAY (not some towards, others away). @11:30 Why is there a gap in the separatrix (white line) at the bottom? @11:59 The limit cycle and separatrix both seem to be trajectories? I don't see what the difference between them is: is the separatrix a geometric trajectory? In that case, how is the limit cycle something distinct from the separatrix?
Cognitive neuroscientist here: Absolutely brilliant explanation. Thanks. Will recommend your vids.
We could probably use a state space with fractional derivatives, because these have the property of requiring the entire past to calculate a new state. This is unlike a classical state space, where if two state trajectories reach the same value and at that moment become zero, the next instant the response will be the same; in a fractional space, if two state trajectories reach the same value and again become zero, the trajectories at the next instant will be different, due to the history effect of the Riemann-Liouville integral. A pseudo-fractional space is actually a state space of integer order with an uncountably infinite number of states, that is, it has a distributed state variable.
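For readers who haven't met it, this is the Riemann-Liouville fractional integral the comment refers to; the power-law kernel is what weights the entire past and produces the "history effect".

```latex
% Riemann-Liouville fractional integral of order \alpha > 0:
I^{\alpha} x(t) \;=\; \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1}\, x(\tau)\,\mathrm{d}\tau
% The kernel (t-\tau)^{\alpha-1} never vanishes, so the value at time t depends on
% the whole history x(\tau), 0 \le \tau \le t, not just on the instantaneous state.
```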
Wow 💚 I LOVE this channel!! It's so fucking fascinating, and validating in the way it explains my behavior (told in such an eloquent way compared to me trying to explain how I only intuitively think/feel shit works). I'm learning so much! And great job at making it simple and easy to follow with good visuals. I'm hooked, really just loving every video I keep watching. My mom has a PhD in neuroscience; I think it's really intriguing how much I'm drawn to it as well. I just watched the free energy principle one.
Beautiful animations!
Grateful for every video, it must take quite a bit of time to put these together! 🙏
In reality, are these dynamics influenced by genetic development (corresponding to neuron types etc) or can neurons adapt to an extent based on the type of activity around them & demand?
Would love a video on neural "codes" and/or group representations and a dig into what we know and the contentions.
I'm not sure if it's true or not, but my understanding is that V1 input from the LGN can kind of change detail based on focus/demand/feedback. But I wonder what we know and whether this qualifies as a type of compression (and how it might work) or if it's just a type of prioritisation/optimisation.
This video reminded me of a couple of those things anyway. I'm not sure the answers are well known so no stress either way. Though I can imagine thalamic function must be interesting from a computational perspective.
This is the best channel on the internet
Great video yet again!
There is just one thing that I didn't understand.
The effect of the 4 types made sense to me, in how they interact and respond to a current. But I'm not sure I understand how they relate to each other: is there some variable that affects which one of the 4 types a neuron will be?
Thanks!
Which particular kind is realized is determined by the biophysical parameters - how fast the channels are opening, what their voltage-dependence looks like, the value of reversal potential for different ions, etc
great intro to dynamical systems tbh
I can't help but think about heart rhythm neurons more than about brain neurons. About fibrillation, arrhythmia, cardiac arrest, defibrillation etc
I am studying a double BA in mathematics and computer science. I studied dynamical systems of ODEs. Is that a good enough requirement for that book? Or will I need some kind of understanding in biology (which is zero haha)?
Absolutely! The book is really self-contained explaining all necessary background (including electrophysiology in chapter 2)
Very interesting videos. Thanks!
Let’s go 🔥🙌🏻
DUDE I SWEAR TO GOD YOUR VIDEOS ARE TOOO F***ING INTRESTING!!
So does n_infinity depend on V? It must because otherwise n would just approach a constant and there would be no dynamics... But none of your equations suggest that, so I'm confused.
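A hedged clarification of the standard Hodgkin-Huxley convention (the video may write it more compactly): both the steady-state value and the time constant of the gating variable are functions of voltage, which is often left implicit in the notation.

```latex
% Standard HH gating dynamics; the V-dependence is usually left implicit:
\frac{dn}{dt} = \frac{n_\infty(V) - n}{\tau_n(V)},
\qquad
n_\infty(V) = \frac{\alpha_n(V)}{\alpha_n(V) + \beta_n(V)},
\qquad
\tau_n(V) = \frac{1}{\alpha_n(V) + \beta_n(V)}
```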
Oh and not exactly about the video topic, but still neuroscience: Is human memory encoded just in the physical geometry of neurons connected with each other, or does the firing pattern of neurons also matter in determining what we remember?
In other words, should a network of neurons keep firing to be able to store memory? and should the neurons fire in a consistent, specific way for the encoded memory to be unchanged?
Both, IIRC. Since "information" can be effectively stored in any kind of "state" of sufficiently stable configurable properties (at any level), including continuous firing patterns, various biochemical balances and electronegativity. But the distinction might be that of short term and long term memory (however "biophysical" changes like the opening and closing of ion channels kind of bridge the gap, in stability and in being chemical and physical).
Love your videos!
and calcium levels in cells can tweak voltage response levels - and may be thought to be part of long term memory retention.
This is why I went to school wanting to study ML but ended up studying math and geometry.
Very interesting!!
I wonder what endocannabinoid based retrograde transmission during LTP has to do with all this. Regulatory properties of ECS, set phase space of different systems?
Thank you for this direction.
Great video! Peace out!
what happened to n's 4th power?
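A hedged note: in the full Hodgkin-Huxley model the delayed-rectifier potassium current carries n to the fourth power, while reduced two-variable models (such as Izhikevich's persistent-sodium-plus-potassium model) often use a first-power activation, which may be what the reduced equations shown here use.

```latex
% Full HH potassium current vs. a typical reduced model:
I_K^{\text{HH}} = \bar g_K\, n^{4}\,(V - E_K)
\qquad\text{vs.}\qquad
I_K^{\text{reduced}} = \bar g_K\, n\,(V - E_K)
```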
These videos are an autodidacts dream
Amazing!!!
Can you please do a video on neuromorphic computing
I am writing my thesis on fluid simulation, and it's incredible how similar these topics are. It feels like I'm watching a video on fluid mechanics but with another skin lol
What is your opinion on the Bienenstock-Cooper Munro theory?
Another Nobel grade research video ❤
All the best 💐
Yes. Just Yes. This video is awesome
i just had my mind blown
I wonder what it would sound like if someone turned these into synth modules with the time scales adjusted to the hearing range, and played around with the parameters and combinations....
Hi Artem,
Animations are sick! What tools do you use for these animations?
Manim, Illustrator and Python?
Thank you!
Yep, all three! Mostly creating individual animation video files in matplotlib, and then composing them in after effects
Do you happen to publish the animations as open-source? I would like to play with the dynamical system on my own, without having to code the entire thing.
Just uploaded the code to Github :)
github.com/ArtemKirsanov/TH-cam-Videos/tree/main/2024/Elegant%20Geometry%20of%20Neural%20Computations
Ditto here! :)
Can you define the phase space using only the nullclines?
That's awesome.
So is there evidence that these resonators are involved with what you've previously described as phase-sensitive neurons in the hippocampus? I remember I was having a difficult time imagining a configuration of more conventional integrator neurons that could lead to such a behavior in a robust way.
How well does this model scale? Is there an upper limit to the number of neurons before accuracy loss is statistically significant? I know nothing about modelling in neurology
Is synaptic plasticity involved in changing neuronal dynamics?
Now I also understand just how different in-silico neural nets are from the OG biological neural nets - they completely miss all this time-dynamic stuff. And who knows what implications that has for their reasoning/thinking/existing-as-a-consciousness abilities....
There are people working on “spiking neural nets” which try to imitate the spiking behavior to some extent, but I don’t know how closely they match the behavior here, which I certainly didn’t know about.
(I found this video to be quite informative)
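For anyone curious what the simplest spiking building block looks like in code, here is a minimal leaky integrate-and-fire unit with generic textbook constants; it is far simpler than the conductance-based dynamics in the video.

```python
import numpy as np

# Leaky integrate-and-fire neuron: integrate the input, spike and reset at threshold.
dt       = 1e-3     # 1 ms time step
tau      = 20e-3    # membrane time constant (s)
R        = 1e8      # membrane resistance (ohms)
v_rest   = -65e-3   # resting potential (V)
v_thresh = -50e-3   # spike threshold (V)
v_reset  = -65e-3   # reset potential (V)

def lif(input_current):
    v, spike_times = v_rest, []
    for step, I in enumerate(input_current):
        v += (dt / tau) * (v_rest - v + R * I)   # leak toward rest plus input drive
        if v >= v_thresh:                        # threshold crossing: spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

I = np.full(1000, 2.0e-10)                       # 200 pA of constant drive for 1 s
print("first spike times (s):", [round(t, 3) for t in lif(I)[:5]])
```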
As someone who loves the idea of being a cyborg, the idea that we understand neurons on this level is so cool... On the other hand, it's terrifying to see how much computational power each individual neuron has when we have almost 90 billion to deal with!!!
As someone who loves being a cyborg (via basic means) we aren't even close to pushing the limits of our brains I think. We are hungry for structure, meaning, and use any of it we can to abstract, predict, and navigate reality and ourselves.
Are neurons often on the boundaries of different "phase space types" (like you described in the video you made on Ising models)?
Yes!! Neurons typically compute on the verge of bifurcations
@@ArtemKirsanov Are there regularisation forces that push them to the bifurcation boundaries?
As a technicality, yes, there _is_ a biophysical and biochemical difference between two neurons with different histories, so the first statement of the video is sort of wrong. Two _identical_ neurons, down to the atom, given identical inputs, _will_ give you the same outputs, because the history of the neuron is stored in its biochemical state, environment, connections, and other cellular variables. I get that you're trying to introduce a computational topic, but it's important to remember that a computational neuron is much, much simpler than a real one because real neurons have an extremely rich, high-dimensional internal state. Not trying to be negative, I'm a big fan of your work.
Didn’t he say *visually* identical, or something like that? I thought I remembered something that specifically implied that he didn’t mean atom-for-atom identical, but just like, “the same kind of neuron, without any visual differences in like, the length of the axons or dendrites etc.”
@@drdca8263 That would mostly be the biophysical state, I'd say. In contrast with the biochemical state, including environmental biochemical factors, which are not fully observable (as they exist at the limit of what is directly observable). This is also where quantum effects can start to get more involved, so things get a lot more difficult to model and observe.
I don’t understand the point of your pedantry. Yes, if two neurons just so happened to be identical down to the atom, then technically they will produce the same output given the same input. But when in the real world will we ever expect such a thing? You are being critical of the video for introducing an abstract simplification, but you introduce one of your own.
If you watch the video, it actually isn't the differences in their structures or connections that determine the different behavior so much as the current state. And also, he's explicitly describing biological neurons in this video, not computational neurons, unless those have potassium channels and bind neuromodulators these days.
Why are you recommended way too late on a Friday? This is perfect morning coffee material.
As the great Baba Brinkman would say "once you bust them up into pieces / It’s tough to go back, ‘cause... hysteresis"
Bro, can you make a batchnorm explanation?
Great ❤❤❤
Such stunning visualizations of 2D to 1D projections. Humans are so blind, we have a tiny viewing port into the world, no wonder we are so ignorant.
Hopefully AI can do the more complex thinking for us.
So are SupH and SNIC mathematical models for seizures?
phreecken mindblowing
can networks of neurons have these properties too?