Elegant Geometry of Neural Computations

  • Published Jan 27, 2025

Comments • 178

  • @ArtemKirsanov
    @ArtemKirsanov  several months ago +10

    To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/ArtemKirsanov. You'll also get 20% off an annual premium subscription.

    • @joaoGabriel_3
      @joaoGabriel_3 several months ago

      Artem, first of all, what a great video! I love the animations and how you are able to make this topic so intuitive!
      I would like to chat with you about an idea I'm developing about creating a new physics-derived mathematical model of a neuron's physiology in 3 dimensions.
      Your help would be greatly appreciated!
      Can I contact you in any way?

  • @JackDespero
    @JackDespero several months ago +79

    I am not a neurologist, but as a physicist I really enjoyed this and your previous video. It is always a great feeling to gather new knowledge.

  • @anton9690
    @anton9690 several months ago +161

    That meta-revelation when realizing that an aggregation of billions of neurons like these enter in those multidimensional manifolds to understand themselves, to write that book about themselves, to create this amazing video about themselves, etc. ❤

    • @erawanpencil
      @erawanpencil several months ago +2

      I've always found it strange that phase space diagrams often resemble neurons themselves.

    • @ruudh.g.vantol4306
      @ruudh.g.vantol4306 several months ago +1

      Don’t underestimate these allies within us:
      m.th-cam.com/video/SbvAaDN1bpE/w-d-xo.html

    • @jakublizon6375
      @jakublizon6375 several months ago

      @@erawanpencil Well, because it is all signals. Encoding information into anything that oscillates is really easy.

  • @tau9632
    @tau9632 several months ago +136

    The animations are so gorgeous. This is what I always dreamed of as a kid - I wanted to *see* into things and see their hidden structures and dynamics.

    • @ArtemKirsanov
      @ArtemKirsanov  several months ago +6

      Thank you!!

    • @spiralsun1
      @spiralsun1 several months ago +1

      Absolutely 😊❤

    • @jm3279z3
      @jm3279z3 several months ago +3

      What software is being used? The designs are very beautiful!

    • @keylime6
      @keylime6 several months ago +3

      @@ArtemKirsanov Is this video made with Manim?

    • @keylime6
      @keylime6 several months ago +1

      @@jm3279z3 I'm not 100% sure, but I think it's made with Manim, a Python library made by YouTuber @3blue1brown for making videos visualizing math.

  • @AgusVallejoV
    @AgusVallejoV several months ago +21

    15:05 I was just thinking it'd be really funny if the two points just outright exploded upon merging, but thought nah, this video seems too serious to do something silly like that. Had to laugh out loud when the explosion really came in.
    Keep up the good work, really, really nice visualizations!

    • @ArtemKirsanov
      @ArtemKirsanov  several months ago +3

      Thank you!!
      I thought the same thing when editing the video, so I added the explosion :D

  • @I_am_who_I_am_who_I_am
    @I_am_who_I_am_who_I_am several months ago +39

    The timing of this one is impeccable.

  • @GeoffryGifari
    @GeoffryGifari several months ago +25

    Some questions:
    1. If the history of past inputs is crucial, how far back (in seconds) does it still matter?
    Can an input from, say, 10 seconds ago still matter for the neuron's output?
    2. Will anything interesting happen if the external current is periodic?
    3. After a neuron's state enters a limit cycle, how can it escape? Surely that repetitive firing can't be sustained forever, especially if nutritional requirements are considered.
    4. What kinds of new features would arise if this "memory effect" were incorporated into artificial neural networks?

    • @dharveyftw7349
      @dharveyftw7349 several months ago +1

      Those are some good questions 🍿

    • @TheYahmez
      @TheYahmez several months ago

      📌🧐

    • @impal0r945
      @impal0r945 several months ago +4

      I don't know much about neuroscience, but as a physicist I feel I'm qualified to at least give these a stab.
      1. For a neuron with a 'resting state' stable attractor, this will depend on the decay time. It's probably different for different neurons. I, too, would love to know the answer for the types of neurons discussed in the video. For a bistable neuron, it could have started off in its resting state months or years ago, then suddenly been pushed into its firing state, and stayed there forever - so in theory, the history could matter as far back as you want if you pick a neuron that's been "neglected" like this.
      2. Probably. Based on the animations in the video, it looks like the neurons' dynamics have a range of possible frequencies depending on the input, which would make it difficult to apply the theory of resonance directly. I suspect you can get a whole range of interesting behaviour by tuning the external current's frequency and amplitude. Personally I've done some work with phase-locked loops, an electronic dynamical system that has some cool emergent properties when you make the input periodic.
      3. If it is a bistable neuron, it can escape back to the rest state given a well-timed input. Some neurons can't escape the limit cycle unless you change the external current, as you said. In your brain and around your body there are lots of neurons, and plenty of them are firing repeatedly - just think about how many neurons have to fire when you see a still image (at least a few per cone cell in your retina). Each firing uses only a small amount of nutrients, because individual neurons are tiny. But the external current to a single neuron will depend on other neurons, so it might fire repetitively only while a certain computation is being made, and then be suppressed. I don't even know if entire computations in the brain have been mapped to that level of detail, though.
      4. Great question. The ML research community and I would also love to know.
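The quiescent-vs-repetitive-firing behaviour discussed in points 1-3 can be sketched in a few lines. The sketch below uses the FitzHugh-Nagumo equations as a stand-in (my choice of model and parameters, not necessarily what the video uses): the same model sits at a stable rest point for zero external current, but settles into a limit cycle and fires repeatedly once the current is large enough.

```python
# Minimal FitzHugh-Nagumo sketch (a standard 2D reduction of
# Hodgkin-Huxley-style dynamics; illustrative parameters).
# Low external current -> stable rest state; higher current -> limit cycle.

def simulate_fhn(I_ext, t_max=500.0, dt=0.05, a=0.7, b=0.8, eps=0.08):
    """Forward-Euler integration of dv/dt = v - v^3/3 - w + I,
    dw/dt = eps*(v + a - b*w). Returns (spike count, final state),
    counting spikes as upward crossings of v = 1.0."""
    v, w = -1.2, -0.6          # start near the resting state
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        dv = v - v**3 / 3.0 - w + I_ext
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        if v > 1.0 and not above:
            spikes += 1
            above = True
        elif v < 0.0:
            above = False
    return spikes, (v, w)

quiet, _ = simulate_fhn(I_ext=0.0)   # stays at rest: 0 spikes
firing, _ = simulate_fhn(I_ext=0.5)  # limit cycle: sustained firing
print(quiet, firing)
```

Making `I_ext` a periodic function of time in this loop is one cheap way to experiment with question 2.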

    • @GeoffryGifari
      @GeoffryGifari several months ago

      @@impal0r945 If I may ask, what kind of physics is your area of specialty?

  • @MultiNeurons
    @MultiNeurons several months ago +18

    Marvellous, the landscape of neuronal dynamics opens up and shows off its fundamental secrets. Thank you, and thanks to Mr. Izhikevich for his studies.

  • @DrTrefor
    @DrTrefor several months ago +3

    I've recently found your channel and it is amazing, super inspiring in terms of the quality of the animations and presentation style - very cool!

  • @PabloMayrgundter
    @PabloMayrgundter several months ago +3

    This is the best applied dynamics presentation I've seen in video, text, books.. anywhere.
    Kudos!

  • @emiel2712
    @emiel2712 several months ago +12

    Wow, I'm impressed at how fast you are making videos with this level of animation and editing. Great work; I hope you don't get burnt out, though, like I've seen some other youtubers do from the pressure of wanting to satisfy their audience. Maybe you've gotten efficient in your workflow so it doesn't take that much time. Anyway, cool video.

    • @ArtemKirsanov
      @ArtemKirsanov  several months ago +1

      Thank you!!

    • @ehfik
      @ehfik several months ago

      had the same thought! the quality and frequency is amazing!

    • @joonasmakinen4807
      @joonasmakinen4807 20 days ago

      Maybe artificial neural computations are used to aid his neural computations here and there ;-)

  • @anastassiya8526
    @anastassiya8526 several months ago +3

    The video is really helpful for understanding the big picture behind these differential equations.

  • @guzzagrizzly372
    @guzzagrizzly372 several months ago +9

    Yay! This is great!
    Have you ever considered doing a video topic on Active Inference? Seems like a cool topic to add visualisation to how statistical physics combines with theoretical neuroscience.

    • @ArtemKirsanov
      @ArtemKirsanov  several months ago +10

      Yes!! In fact, the idea of Active Inference is what started the mini-series on Hopfield nets, Boltzmann machines and Cross-Entropy, as stepping stones toward it.
      I'll for sure make the active inference video at some point in the future; currently I'm still doing background literature mining to understand it myself :D

    • @gabberwhacky
      @gabberwhacky several months ago +3

      @@ArtemKirsanov Nice! I'd also be interested in active inference and related topics like Friston's free energy principle!

  • @Faptimus420
    @Faptimus420 several months ago +2

    I have recently finished my MSc in Computer Science, with a focus on data science and a particular interest in deep learning. While I do find those interesting too, I have, as an extracurricular activity, spent time learning about spiking neural networks and neuromorphic computing, as I find these more biologically plausible models much more fascinating than the "run-of-the-mill" Hebbian/rate-coding-based models.
    While there are many educational videos on the former, intended to help you visualize their behaviors, from the likes of 3b1b and many others, there is a severe lack of videos on the latter, and all the learning I've done had to rely only on textbooks and papers.
    I'd like to thank you for spending so much time on righting that terrible wrong with videos of such high quality, and allowing me and others to gain a much better intuition into these topics.

  • @Jm-wt1fs
    @Jm-wt1fs 23 days ago

    I did my degree in neuroscience and this concept was always super difficult for me to visualize. Especially because it was just undergrad and I wasn't very practiced with differential equations and understanding what the math was actually describing. This video just blew my mind and literally made this concept so much clearer to me. Keep up the great work man 👍

    • @ArtemKirsanov
      @ArtemKirsanov  23 days ago

      That’s really great to hear! I’m happy you found it helpful!!

  • @PackMowin
    @PackMowin several months ago +5

    These video drops make my week

  • @MrDNWave
    @MrDNWave several months ago +3

    Wow, this was an amazing production with its own uniqueness of presentation. I wish at times you were a little bit slower when presenting key concepts or gave more variations to understand them, just to be able to savor them better.

  • @aschroed
    @aschroed several months ago +5

    Love these videos, I'm currently taking a class called Complex Adaptive Systems where we simulate these dynamical systems and visualise them in the phase plane! Super cool

  • @tarunkumar1091
    @tarunkumar1091 several months ago +2

    Thank you so much Artem, I can't explain how much value your videos have added to my life.

  • @mlab3051
    @mlab3051 23 days ago +1

    The book mentioned in this video at 24:57 is "Dynamical Systems in Neuroscience" by Eugene M. Izhikevich.

  • @sahandkhoshdel6278
    @sahandkhoshdel6278 16 days ago

    Amazing demonstration! I hadn't seen such a clear demonstration of neuronal activity before!

  • @kellymoses8566
    @kellymoses8566 2 days ago

    The analogy of phase space being water and differential equations being the fluid dynamics of that water is brilliant.

  • @sepro5135
    @sepro5135 several months ago +1

    I just read a book about system dynamics (on the theoretical math side of things). It's always stunning to me how beautiful dynamical systems described only by ODEs are, second only to the amazing results of PDEs, mainly the NSE.

  • @PeacefulAnxiety
    @PeacefulAnxiety several months ago

    I respect your clarity and professionalism in these topics.

  • @666shemhamforash93
    @666shemhamforash93 several months ago +3

    Phenomenal series Artem! Genuinely impressive work. Please do RNNs next! 🤞

  • @Froany
    @Froany several months ago

    Phenomenal video, as always! You do such a great job of directing the subjects, and the animations are amazing!! Keep up the great work! It's really inspiring.

  • @Xanoxis
    @Xanoxis several months ago

    Fascinating stuff. It gives a rough idea of how some kinds of learning emerge in neurons and the body, as those patterns change and get tuned better when you exercise.

  • @Patapom3
    @Patapom3 several months ago

    Super interesting and amazingly produced! Congrats!

  • @lunafoxfire
    @lunafoxfire several months ago

    Incredible presentation of some incredible information!

  • @renerekers9158
    @renerekers9158 several months ago +1

    Wonderful. Huge thanks for this excellent explanation!

  • @vastabyss6496
    @vastabyss6496 several months ago +1

    I hope that advances in neuroscience will allow us to build more efficient artificial neural networks and neuromorphic computers. Thanks for sharing all this cool knowledge with the world!

  • @atomicgeneral
    @atomicgeneral several months ago

    @13:03 : "under the same current": Is the current visible anywhere in the state diagram? Does it correspond to a magnitude of (horizontal) perturbation? If so, that would make me understand your comment: small perturbations cause the state to return to the stable equilibrium point; larger perturbations cause it to go into cycling behavior.

  • @Apodeipnon
    @Apodeipnon several months ago

    You are brilliant at explaining and animating these things

  • @DaJourChristophe
    @DaJourChristophe several months ago

    Hello Artem, your videos are great! If one does not already exist, could you make one on neuromodulators and their role in synaptic plasticity, and how the components of the limbic system use neuromodulators to do non-Hebbian learning (i.e. reinforcement learning) of long-term goals? Thank you for your exceptional content!

  • @HemiHalfCentury
    @HemiHalfCentury 3 hours ago

    18:55 idk why but this made me think of it as the Redstone clock of neurons

  • @justblank0
    @justblank0 several months ago +1

    The analysis of phase planes is fascinating. It seems to rekindle an exploratory feeling inside me, similar to data exploration via regular graphs. Wondering if this is something I can apply in my day-to-day.

    • @John-c4r1o
      @John-c4r1o several months ago +2

      As a kid in junior high (year 8), our overly qualified and very elderly math teacher explained how parameters in engineering were shown on coordinate graphs, and thus in the old days solutions for aircraft dynamics sat within the intersectional area of multiple parameters. I've used that thought process a lot in my life to very good effect.

  • @kgblankinship
    @kgblankinship several months ago +1

    There is a class of artificial neural networks that have internal memory, known as recurrent networks. There is a sizable body of theory on these structures. An important subset of this theory is that of Content Addressable Memory.
    What's good about this method is that it marks off regions in the state space where the behavior is either stable or tends toward a limit cycle. One would think that these could be specified by formulas taken from the differential equations.
    This work is similar to that done in aerospace vehicle flight mechanics during the 1980s. But there are larger questions that beg for answers: What are the feedback mechanisms in a neuron that are associated with learning? How are individual neurons assigned to a given function? And one more immediate to Artem's work: How stable (repeatable) is the operation of a neuron once it has learned to support a given function? How does learning affect this picture? Also, how do the H-H equations tie into learning and memory?

  • @pirminborer625
    @pirminborer625 several months ago

    It would have been nice to visualise gradients as the slope of a 3D surface. I didn't know anything about neural dynamics before; very interesting and well explained. One can only start grasping the complex behaviour of linking different types of neurons together and how that changes their activation thresholds and patterns 🤯

  • @WCKEDGOOD
    @WCKEDGOOD several months ago

    Very nicely done, it makes me want to study neuronal phase spaces for days.

  • @keylime6
    @keylime6 several months ago +2

    I barely understand anything going on in this video but it's interesting as hell

  • @tau9632
    @tau9632 several months ago +1

    Another epic video - well done mate. It is absolutely worthy of your tattoo.

  • @lost4468yt
    @lost4468yt several months ago +1

    Could you do another video on the added complexities of biological neurons? E.g. I'm really curious as to what the serotonin/cannabinoid (especially this one as I heard it is used to transmit data backwards)/opioid/etc systems do. Especially from a computational point of view.

  • @pulseworks1663
    @pulseworks1663 22 days ago

    So cool to see my nonlinear dynamics class have a real-world example.

  • @potatoonastick2239
    @potatoonastick2239 several months ago

    Absolutely mind-blowing! These videos are public service educational MASTERPIECES. Never stop teaching you absolute legend

  • @drdca8263
    @drdca8263 several months ago +2

    Very nice! I learned a lot :)
    I wonder, if you took the whole 4D dynamics, if you simulated the behavior while having a variety of different external current sources, would the points in the 4D space mostly stay around a particular 2D surface in that 4D space?
    Well, I suppose if the system can be well approximated with only one of those 3 channels, then the answer sort of has to be yes, but, I guess I mean…
    something like “If you wanted to be a little more precise than the version with voltage and one ion channel, but still wanted to stick to a 2D space, could you do better by picking a surface which is slightly deformed compared to the one being used here?” .
    Though, I guess with how the m channel fraction is set to m_\infty , that is already picking a somewhat complicated surface in the 4D space, rather than just setting 2 of the variables to constants.
    I guess what I was thinking was like,
    “if you set up an auto-encoder which tried to encode (V,n,m,h) as (V,x), and decode back to (V,n,m,h) with minimal error, and where V in the encoding is forced to be the same as the V that is encoded, would the best encoding for this be much different than x=n?”
    This is a very cool video, thank you

  • @夏和半山
    @夏和半山 several months ago

    Thank you so much. It is very useful for me; the dynamical analysis presentation felt clearer than the graphs in papers, and let me further understand the HH equations. May I ask whether you have any plans to analyze other neuronal models, like the FHN model, in a future video? Or to discuss more bifurcations or fast-slow dynamics?

  • @JasonCummer
    @JasonCummer several months ago

    Reminds me of the tipping points of climate science.
    And loving your neuroscience videos. Helping me get back into the space. Students have such great resources today.

  • @thiagoborduqui
    @thiagoborduqui several months ago +1

    Fantastic video!

  • @atomicgeneral
    @atomicgeneral several months ago

    Wonderful. Some things that weren't clear: @11:10 : Saddle point trajectories: they all seem to be getting pushed AWAY (not some towards, others away). @11:30 : Why is there a gap in the separatrix (white line) at the bottom? @11:59 : The limit cycle and separatrix both seem to be trajectories? I don't see the difference between them: is the separatrix a geometric trajectory? In that case, how is the limit cycle something distinct from the separatrix?

  • @seneketh
    @seneketh 20 days ago

    Cognitive neuroscientist here: Absolutely brilliant explanation. Thanks. Will recommend your vids.

  • @jafetriosduran
    @jafetriosduran several months ago

    We could probably use a state space with fractional derivatives, because these have the property of requiring the entire past to calculate a new state. This is unlike a classical state space, where if two state trajectories reach the same value and at that moment the input becomes zero, the next instant the response will be the same. In a fractional space, however, if two state trajectories reach the same value and the input becomes zero again, the trajectories at the next instant will be different, due to the history effect of the Riemann-Liouville integral. A pseudo-fractional space is actually a state space of integer order with an uncountably infinite number of states, that is, it has a distributed state variable.
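The history effect described in this comment can be demonstrated numerically with the Grünwald-Letnikov approximation, a discrete counterpart of the Riemann-Liouville operator. The sketch below (helper names `gl_weights` and `gl_derivative` are mine, purely illustrative): two sampled histories that end with identical values and identical ordinary derivatives still have different fractional derivatives, because the fractional operator weighs the entire past.

```python
# Grünwald-Letnikov sketch of the "memory" of a fractional derivative.
# An ordinary backward difference only sees the last two samples;
# the fractional derivative of order alpha sees the whole history.

def gl_weights(alpha, n):
    # w_k = (-1)^k * binom(alpha, k), via the standard recursion
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(samples, alpha, h):
    # Fractional derivative evaluated at the last sample point
    w = gl_weights(alpha, len(samples))
    return sum(wk * xk for wk, xk in zip(w, reversed(samples))) / h**alpha

h = 0.01
# Two histories that END identically (same last 11 samples, hence the
# same final value and the same ordinary backward difference):
ramp = [k * h for k in range(101)]                       # grows 0 -> 1
flat_then_ramp = [0.9 if k < 90 else k * h for k in range(101)]

d_ord_a = (ramp[-1] - ramp[-2]) / h                      # ordinary: equal
d_ord_b = (flat_then_ramp[-1] - flat_then_ramp[-2]) / h
d_frac_a = gl_derivative(ramp, 0.5, h)                   # fractional: differ
d_frac_b = gl_derivative(flat_then_ramp, 0.5, h)
print(d_ord_a == d_ord_b, abs(d_frac_a - d_frac_b))
```

The slowly decaying weights `w_k` are exactly why a fractional state, like a neuron with slow internal variables, "remembers" how it got to its current value.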

  • @evewrubel3000
    @evewrubel3000 3 days ago

    Wow 💚 I LOVE this channel!! It's so fucking fascinating, and validating in the way it explains my behavior (told in such an eloquent way compared to me trying to explain how I only intuitively think/feel things work). I'm learning so much! And a great job at making it simple and easy to follow with good visuals. I'm hooked; really just loving every video I keep watching. My mom has a PhD in neuroscience; I think it's really intriguing how much I'm drawn to it as well. I just watched the free energy principle one.

  • @marcgehring9530
    @marcgehring9530 several months ago

    Beautiful animations!

  • @alexharvey9721
    @alexharvey9721 24 days ago

    Grateful for every video, it must take quite a bit of time to put these together! 🙏
    In reality, are these dynamics influenced by genetic development (corresponding to neuron types etc) or can neurons adapt to an extent based on the type of activity around them & demand?
    Would love a video on neural "codes" and/or group representations and a dig into what we know and the contentions.
    I'm not sure if it's true or not, but my understanding is that V1 input from the LGN can kind of change detail based on focus/demand/feedback. But I wonder what we know and whether this qualifies as a type of compression (and how it might work) or if it's just a type of prioritisation/optimisation.
    This video reminded me of a couple of those things anyway. I'm not sure the answers are well known so no stress either way. Though I can imagine thalamic function must be interesting from a computational perspective.

  • @justinlloyd3
    @justinlloyd3 several months ago

    This is the best channel on the internet

  • @sebstr8382
    @sebstr8382 several months ago +1

    Great video yet again!
    There is just one thing that I didn't understand.
    The effect of the 4 types made sense to me, in how they interact and respond to a current. But I'm not sure I understand how they relate to each other; is there some variable that determines which of the 4 types a neuron will be?

    • @ArtemKirsanov
      @ArtemKirsanov  several months ago +1

      Thanks!
      Which particular kind is realized is determined by the biophysical parameters - how fast the channels open, what their voltage dependence looks like, the value of the reversal potential for different ions, etc.

  • @coffeeicecubes2419
    @coffeeicecubes2419 several months ago

    great intro to dynamical systems tbh

  • @phdnk
    @phdnk several months ago +1

    I can't help but think about heart rhythm neurons more than about brain neurons - about fibrillation, arrhythmia, cardiac arrest, defibrillation, etc.

  • @Filup
    @Filup several months ago +4

    I am studying a double BA in mathematics and computer science, and I have studied dynamical systems of ODEs. Is that a good enough prerequisite for that book? Or will I need some kind of understanding of biology (of which I have zero haha)?

    • @ArtemKirsanov
      @ArtemKirsanov  several months ago +5

      Absolutely! The book is really self-contained, explaining all the necessary background (including electrophysiology in chapter 2).

  • @BlueBirdgg
    @BlueBirdgg several months ago

    Very interesting videos. Thanks!

  • @icandreamstream
    @icandreamstream several months ago +4

    Let’s go 🔥🙌🏻

  • @ScienceNerdAditya
    @ScienceNerdAditya 2 days ago +1

    DUDE I SWEAR TO GOD YOUR VIDEOS ARE TOOO F***ING INTRESTING!!

  • @SafetySkull
    @SafetySkull several months ago +1

    So does n_infinity depend on V? It must because otherwise n would just approach a constant and there would be no dynamics... But none of your equations suggest that, so I'm confused.
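For context (this is the standard conductance-based formalism, stated here as background rather than a transcript of the video): yes, the steady-state value is voltage-dependent, usually written n_∞(V), and that dependence is what creates the coupled 2D dynamics in the phase plane.

```latex
% Gating-variable dynamics in the Hodgkin-Huxley-style formalism:
% n relaxes toward a voltage-dependent steady state n_inf(V),
% with a voltage-dependent time constant tau_n(V).
\frac{dn}{dt} = \frac{n_\infty(V) - n}{\tau_n(V)},
\qquad
n_\infty(V) = \frac{1}{1 + \exp\!\left(\frac{V_{1/2} - V}{k}\right)}
```

So n does not approach a fixed constant: as V moves, the target n_∞(V) moves with it.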

  • @GeoffryGifari
    @GeoffryGifari several months ago +1

    Oh and not exactly about the video topic, but still neuroscience: Is human memory encoded just in the physical geometry of neurons connected with each other, or does the firing pattern of neurons also matter in determining what we remember?

    • @GeoffryGifari
      @GeoffryGifari several months ago +1

      In other words, should a network of neurons keep firing to be able to store memory? and should the neurons fire in a consistent, specific way for the encoded memory to be unchanged?

    • @PeppoMusic
      @PeppoMusic several months ago +1

      Both, IIRC. "Information" can be effectively stored in any kind of "state" of sufficiently stable configurable properties (at any level), including continuous firing patterns, various biochemical balances and electronegativity. But the distinction might be that of short-term and long-term memory (however, "biophysical" changes like the opening and closing of ion channels kind of bridge the gap, in stability and in being both chemical and physical).

  • @andrepenteado
    @andrepenteado several months ago

    Love your videos!

  • @andytroo
    @andytroo 21 days ago

    And calcium levels in cells can tweak voltage response levels - and may be thought to be part of long-term memory retention.

  • @randomchannel-px6ho
    @randomchannel-px6ho several months ago +1

    This is why I went to school wanting to study ML but ended up studying math and geometry.

  • @marcomonti5758
    @marcomonti5758 several months ago

    Very interesting!!

  • @F5S7N9
    @F5S7N9 24 days ago +1

    I wonder what endocannabinoid-based retrograde transmission during LTP has to do with all this. Do the regulatory properties of the ECS set the phase space of different systems?

  • @JohnbelMahautiere
    @JohnbelMahautiere several months ago

    Thank you for this guidance.

  • @Pedritox0953
    @Pedritox0953 several months ago

    Great video! Peace out!

  • @SafetySkull
    @SafetySkull several months ago +1

    what happened to n's 4th power?
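For context (original Hodgkin-Huxley model, 1952, stated as background rather than as the video's answer): the potassium current does carry a fourth power, modeling four identical, independent activation gates, each open with probability n.

```latex
% Hodgkin-Huxley potassium current:
I_K = \bar{g}_K \, n^4 \,(V - E_K)
```

Reduced two-dimensional models often drop the exponent (e.g. Morris-Lecar-style persistent currents use the gating variable to the first power), which may be what prompted the question.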

  • @potatoonastick2239
    @potatoonastick2239 several months ago +1

    These videos are an autodidact's dream.

  • @raiso9759
    @raiso9759 several months ago

    Amazing!!!

  • @nananou1687
    @nananou1687 several months ago

    Can you please do a video on neuromorphic computing

  • @CaarabaloneDZN
    @CaarabaloneDZN several months ago

    I am writing my thesis on fluid simulation, and it's incredible how similar these topics are. It feels like I'm watching a video on fluid mechanics but with another skin lol

  • @franh9833
    @franh9833 several months ago

    What is your opinion on the Bienenstock-Cooper Munro theory?

  • @prasadkandra
    @prasadkandra several months ago

    Another Nobel grade research video ❤
    All the best 💐

  • @markener4316
    @markener4316 8 days ago

    Yes. Just Yes. This video is awesome

  • @EkShunya
    @EkShunya several months ago +1

    i just had my mind blown

  • @tiagotiagot
    @tiagotiagot several months ago +1

    I wonder what it would sound like if someone turned these into synth modules with the time scales adjusted to the hearing range, and played around with the parameters and combinations....

  • @gayan9121
    @gayan9121 several months ago

    Hi Artem,
    Animations are sick! What tools do you use for these animations?
    Manim, Illustrator and Python?

    • @ArtemKirsanov
      @ArtemKirsanov  several months ago +1

      Thank you!
      Yep, all three! Mostly creating individual animation video files in matplotlib, and then composing them in After Effects.

  • @Johnnius
    @Johnnius several months ago +2

    Do you happen to publish the animations as open-source? I would like to play with the dynamical system on my own, without having to code the entire thing.

    • @ArtemKirsanov
      @ArtemKirsanov  several months ago +2

      Just uploaded the code to Github :)
      github.com/ArtemKirsanov/TH-cam-Videos/tree/main/2024/Elegant%20Geometry%20of%20Neural%20Computations

    • @Julian-tf8nj
      @Julian-tf8nj several months ago

      Ditto here! :)

  • @maxe624
    @maxe624 several months ago

    Can you define the phase space using only the nullclines?

  • @rafa_br34
    @rafa_br34 several months ago

    That's awesome.

  • @davidhand9721
    @davidhand9721 several months ago +1

    So is there evidence that these resonators are involved with what you've previously described as phase-sensitive neurons in the hippocampus? I remember I was having a difficult time imagining a configuration of more conventional integrator neurons that could lead to such a behavior in a robust way.

  • @banzaipiegaming
    @banzaipiegaming several months ago

    How well does this model scale? Is there an upper limit to the number of neurons before accuracy loss is statistically significant? I know nothing about modelling in neurology

  • @lucaferlisi2486
    @lucaferlisi2486 several months ago

    Is synaptic plasticity involved in changing neuronal dynamics?

  • @tau9632
    @tau9632 several months ago +2

    Now I also understand just how different in-silico neural nets are from the OG biological neural nets - they completely miss all this time-dynamic stuff. And who knows what implications that has for their reasoning/thinking/existing-as-a-consciousness abilities....

    • @drdca8263
      @drdca8263 several months ago +2

      There are people working on “spiking neural nets” which try to imitate the spiking behavior to some extent, but I don’t know how closely they match the behavior here, which I certainly didn’t know about.
      (I found this video to be quite informative)

  • @TheFinalFrontiersman
    @TheFinalFrontiersman several months ago +1

    As someone who loves the idea of being a cyborg, the idea that we understand neurons on this level is so cool... On the other hand, it's terrifying to see how much computational power each individual neuron has when we have almost 90 billion to deal with!!!

    • @Curiosiate_
      @Curiosiate_ several months ago +1

      As someone who loves being a cyborg (via basic means) we aren't even close to pushing the limits of our brains I think. We are hungry for structure, meaning, and use any of it we can to abstract, predict, and navigate reality and ourselves.

  • @guidosalescalvano9862
    @guidosalescalvano9862 several months ago

    Are neurons often on the boundaries of different "phase space types" (like you described in the video you made on Ising models)?

    • @ArtemKirsanov
      @ArtemKirsanov  several months ago

      Yes!! Neurons typically compute on the verge of bifurcations

    • @guidosalescalvano9862
      @guidosalescalvano9862 several months ago

      @@ArtemKirsanov Are there regularisation forces that push them to the bifurcation boundaries?

  • @davidhand9721
    @davidhand9721 several months ago +18

    As a technicality, yes, there _is_ a biophysical and biochemical difference between two neurons with different histories, so the first statement of the video is sort of wrong. Two _identical_ neurons, down to the atom, given identical inputs, _will_ give you the same outputs, because the history of the neuron is stored in its biochemical state, environment, connections, and other cellular variables. I get that you're trying to introduce a computational topic, but it's important to remember that a computational neuron is much, much simpler than a real one because real neurons have an extremely rich, high-dimensional internal state. Not trying to be negative, I'm a big fan of your work.

    • @drdca8263
      @drdca8263 several months ago +1

      Didn’t he say *visually* identical, or something like that? I thought I remembered something that specifically implied that he didn’t mean atom-for-atom identical, but just like, “the same kind of neuron, without any visual differences in like, the length of the axons or dendrites etc.”

    • @PeppoMusic
      @PeppoMusic several months ago

      @@drdca8263 That would mostly be the biophysical state, I'd say, in contrast with the biochemical state, including environmental biochemical factors, which are not fully observable (as they exist at the limit of what is directly observable). This is also where quantum effects can start to get more involved, so things get a lot more difficult to model and observe.

    • @kurkenfruit
      @kurkenfruit several months ago

      I don’t understand the point of your pedantry. Yes, if two neurons just so happened to be identical down to the atom, then technically they will produce the same output given the same input. But when in the real world will we ever expect such a thing? You are being critical of the video for introducing an abstract simplification, but you introduce one of your own.

    • @Jm-wt1fs
      @Jm-wt1fs 23 days ago

      If you watch the video, it actually isn't the differences in their structures or connections that determine the different behavior so much as the current state. And also he's explicitly describing biological neurons in this video, not computational neurons, unless those have potassium channels and bind neuromodulators these days.

  • @ravenecho2410
    @ravenecho2410 several months ago

    Why am I recommended this way too late on a Friday? This is perfect morning coffee material!

  • @charliesteiner2334
    @charliesteiner2334 several months ago

    As the great Baba Brinkman would say "once you bust them up into pieces / It’s tough to go back, ‘cause... hysteresis"

  • @surajsamal4161
    @surajsamal4161 several months ago

    Bro, can you make a batchnorm explanation?

  • @azharalibhutto1209
    @azharalibhutto1209 several months ago

    Great ❤❤❤

  • @ShpanMan
    @ShpanMan several months ago

    Such stunning visualizations of 2D to 1D projections. Humans are so blind, we have a tiny viewing port into the world, no wonder we are so ignorant.
    Hopefully AI can do the more complex thinking for us.

  • @luisisaurio
    @luisisaurio several months ago

    So SupH and SNIC are a mathematical model for seizures?

  • @AlexanderGolovatiy
    @AlexanderGolovatiy 19 days ago

    phreecken mindblowing

  • @rabia1180
    @rabia1180 several months ago

    Can networks of neurons have these properties too?