Logarithmic nature of the brain 💡

  • Published on Jan 27, 2025

Comments • 317

  • @ArtemKirsanov
    @ArtemKirsanov  2 years ago +25

    Join Shortform for awesome book guides and get 5 days of unlimited access! shortform.com/artem

    • @adityakulkarni4549
      @adityakulkarni4549 1 year ago +1

      @Artem Kirsanov the text at 15:03 doesn't seem to correspond to the bioRxiv paper you have linked in the description 😅

  • @defenestrated23
    @defenestrated23 2 years ago +477

    Log-normal distributions are closely related to pink noise (power is 1/freq), since d(log x) = dx/x. This is said to be the hallmark of self-organization. It shows up everywhere you have fractal symmetry: brains, turbulence, finance, weather, even migration patterns. (See the sketch at the end of this thread.)

    • @w花b
      @w花b 2 years ago +3

      The everything

    • @Maouww
      @Maouww 2 years ago +6

      yep I was totally thinking of Quantitative Linguistics the moment log-normal distribution cropped up

    • @Simonadas04
      @Simonadas04 2 years ago +4

      d(ln x)/dx = 1/x

    • @luker.6967
      @luker.6967 2 years ago +4

      @@Simonadas04 some people prefer log to denote ln, since log base e is more common in pure mathematics.

    • @Simonadas04
      @Simonadas04 2 years ago

      @@luker.6967 i see
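
    A minimal sketch of the pink-noise link mentioned at the top of this thread, assuming only numpy: synthesize a signal whose spectral power falls off as 1/f, then confirm that the slope of log(power) against log(frequency) is -1.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2**16
    freqs = np.fft.rfftfreq(n, d=1.0)            # 0 .. 0.5 cycles/sample
    amplitude = np.zeros_like(freqs)
    amplitude[1:] = freqs[1:] ** -0.5            # power ~ amplitude^2 ~ 1/f
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.shape)
    signal = np.fft.irfft(amplitude * np.exp(1j * phases), n)

    # Fit the slope of log(power) vs log(frequency); pink noise gives -1
    power = np.abs(np.fft.rfft(signal)) ** 2
    slope, _ = np.polyfit(np.log(freqs[1:-1]), np.log(power[1:-1]), 1)
    print(f"fitted log-log spectral slope: {slope:.2f} (pink-noise target: -1)")
    ```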

  • @aitor9185
    @aitor9185 2 years ago +73

    Great video!
    Super happy to see my paper about neuron densities made it into this video at 15:12 :)

    • @isaac10231
      @isaac10231 2 years ago +6

      wow, the YouTube algorithm is crazy

    • @DylanKeelingOfficial
      @DylanKeelingOfficial a month ago +1

      Congratulations :) A phenomenal paper :)

  • @elismith4040
    @elismith4040 2 years ago +28

    As an electrical engineer who has also always been extremely interested in neuroscience, stumbling across this channel is pure gold.

    • @crimsonghoul8983
      @crimsonghoul8983 6 months ago +1

      Same here, buddy. Are you planning to become a neuroengineer?

  • @BiancaAguglia
    @BiancaAguglia 2 years ago +135

    Thank you for all the effort you put into your videos, Artem. You're doing a great job taking complex topics and making them easy to visualize and to understand.
    In case you're looking for topic suggestions for future videos, I have a few:
    1. curriculum you would follow if you had to start from scratch and wanted to teach yourself neuroscience (computational or, if you prefer, a different concentration)
    2. sources of information neuroscientists should follow in order to stay current with the research in the field (e.g. journals, labs, companies, people, etc)
    3. list of open problems in neuroscience
    Thank you again for your videos. Keep up the great work. 😊

    • @ArtemKirsanov
      @ArtemKirsanov  2 years ago +26

      Thank you for the wonderful suggestions!
      Right now, I'm actually preparing the script for a video about getting started with computational neuroscience! So stay tuned ;)

    • @BiancaAguglia
      @BiancaAguglia 2 years ago +3

      @@ArtemKirsanov Thank you. I look forward to it. 🙂

    • @leif1075
      @leif1075 2 years ago +1

      @@ArtemKirsanov Can you clarify how exactly normal distributions arise eventually, even when you have wildly extreme and different values? Is it basically just evening out?

    • @iwanttwoscoops
      @iwanttwoscoops 2 years ago +1

      @@leif1075 pretty much! look at height; there's a wide variance, and in any town you can find a tiny person and a giant. But overall, most people are average height, and these outliers are rare. Hence normal
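
    A minimal sketch of that additive intuition, assuming numpy: summing 30 small independent uniform contributions (stand-ins for the many factors behind height) already yields a bell-shaped total.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # 100,000 "people", each the sum of 30 small independent contributions
    totals = rng.uniform(0.0, 1.0, size=(100_000, 30)).sum(axis=1)
    print(f"mean ~ {totals.mean():.2f} (theory: 15.00)")
    print(f"std  ~ {totals.std():.2f} (theory: {np.sqrt(30 / 12):.2f})")
    # A histogram of `totals` is visually indistinguishable from a bell curve.
    ```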

  • @ImBalance
    @ImBalance 1 year ago +3

    The best explanation of logarithms I've ever seen. How surprising that a neuroscience YouTube video managed to describe the concept and its application so much more completely than any of the math classes I've ever taken. Well done!

  • @lbsl7778
    @lbsl7778 2 years ago +4

    This channel is the most beautiful thing that has happened in my life this week, maybe even this month. Thank you for your effort, greetings from Mexico!

  • @95VideoMan
    @95VideoMan 2 years ago +4

    Thanks! This is fascinating and useful information. You presented it so clearly, and the visuals were top notch. Really appreciate this work.

  • @umerghaffar4686
    @umerghaffar4686 1 year ago +1

    I can’t believe this valuable information is available on YT for free!! I just finished my A-level studies and am keen on biology and neuroscience, so I loved getting a computational perspective on the brain. Makes me wonder where else log-normal distributions can be seen in the body, or what other mathematical models can be deduced in biological systems.
    Keep up!

  • @lakshminarayananraghavendr319
    @lakshminarayananraghavendr319 1 year ago

    Thanks for the informative video

  • @threethrushes
    @threethrushes 2 years ago +3

    I studied statistics for biologists at university some 25 years ago.
    Your explanations are logical and intuitive. Good job Artem.

  • @trentfellbootman932
    @trentfellbootman932 3 months ago

    Great job Artem, you made this advanced material readily understandable to people from all backgrounds!

  • @fabiopakk
    @fabiopakk 2 years ago +32

    Excellent video, Artem! I really enjoy watching your videos; they are incredibly well done and explained. I particularly liked the ones involving topology.

  • @omaryahia
    @omaryahia 2 years ago +2

    I am happy I didn't skip this video, and now I know another great channel for math and science
    thank you Artem
    great quality, and topics I am interested in

  • @sytelus
    @sytelus 2 years ago

    Thanks!

  • @Boringpenguin
    @Boringpenguin 2 years ago +9

    On a completely unrelated note, the log-normal distribution also pops up in the field of mathematical finance! In particular, it is used to model stock prices in the Black-Scholes model. (See the sketch at the end of this thread.)

    • @ArtemKirsanov
      @ArtemKirsanov  2 years ago +3

      Wow, cool info! Thanks for sharing

    • @BiancaAguglia
      @BiancaAguglia 2 years ago +9

      The wikipedia page on log-normal distribution has some examples too:
      - city sizes
      - number of citations of journal articles and patents
      - surgery durations
      - length of hair, nails, or teeth
      - length of chess games
      - length of comments in forums, etc.
      It's an interesting read.
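
    A toy sketch of the Black-Scholes point raised at the top of this thread, assuming numpy and illustrative (not calibrated) parameters: the terminal price S_T = S0 * exp((mu - sigma^2/2)*T + sigma*sqrt(T)*Z) is log-normal by construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    s0, mu, sigma, horizon = 100.0, 0.05, 0.2, 1.0    # illustrative parameters
    z = rng.standard_normal(100_000)
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
    # log(S_T) is exactly normal, so S_T itself is log-normal
    print(f"mean of log(S_T): {np.log(s_t).mean():.3f} "
          f"(theory: {np.log(s0) + (mu - 0.5 * sigma**2) * horizon:.3f})")
    ```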

  • @exdiegesis
    @exdiegesis 1 year ago

    This is definitely one of my favourite channels now. Up there with 3B1B. You explain things really well, and the topics you cover are just my cup of tea.

  • @emmaodom7201
    @emmaodom7201 2 years ago +4

    Wow you are such an effective communicator!!! Your insights were very clear and easy to understand

  • @giacomogalli2448
    @giacomogalli2448 2 years ago +3

    Your videos are fantastic for anyone interested in neuroscience!
    I never studied it in depth but it's fascinating and I'm discovering it

  • @artscience9981
    @artscience9981 6 months ago

    Fantastic video! Two of my interests, probability and brain operation, in one video. Very well-done explanation. Thank you Artem!

  • @uquantum
    @uquantum 9 months ago

    Terrific video, Artem. Mind-blowing: not only the production values, but in particular the highly engaging content. Thank you for sharing it with us. Fantastic ❤

  • @horizn9982
    @horizn9982 1 year ago

    Wow man amazing videos, I wanna do research as a computational neuroscientist and your content is really what I was looking for!

  • @4shotpastas
    @4shotpastas 2 years ago +4

    Me, knowing nothing about how the brain works: "Pfft, of COURSE it's logarithmic. Why wouldn't it be?"

  • @mukul98s
    @mukul98s 2 years ago +19

    I studied advanced mathematics in my last semester but never understood the concepts of random variables and distributions with this much clarity.
    Amazing video with great explanation.

  • @SudhirPratapYadav
    @SudhirPratapYadav 2 years ago

    one word, EXCELLENT!!! So happy to watch this.

  • @lucascsrs2581
    @lucascsrs2581 2 years ago

    This channel is a hidden gem. +1 subscriber

  • @Tabbywabby777
    @Tabbywabby777 2 years ago +1

    Great animations and explanations. However, as a fellow scientist and learner I wish that you had presented the central limit theorem and the derivation of the log-normal distribution in its full mathematical glory. I feel that half the power of Manim is in its ability to concisely represent both the graphical and textual aspects of mathematics; to avoid one of them is to kneecap the platform. As a learner it is essential that I build associations between the graphical and textual representations. I think you did this better in your video on wavelets!
    Anyway, thank you so much for taking the time to create these videos. I am sure that they will make a lasting contribution to the field of computational neuroscience and inspire students for years to come.

  • @valor36az
    @valor36az 2 years ago +2

    Your videos are such high quality thanks for the efforts.

  • @suhanovsergey
    @suhanovsergey 2 years ago

    Thanks for high quality content! I love the use of palmistry as an example of a random process at 3:42 :)

  • @Treviisolion
    @Treviisolion 1 year ago

    The shape certainly makes some intuitive sense. Extremely low firing rates are more likely to be mistaken for random noise, so a neuron wants to be above that limit. However, it doesn't want to be too far above it, because firing is energy-intensive and the brain is already a calorie-hungry organ. At the same time, if information is encoded partially in the firing rate, then utilizing only a small subsection of possible firing rates is not information-efficient, so neurons that need to be heard more often would be incentivized to use less-utilized firing rates, as there is less noise in those channels. I don't know whether that explanation would necessarily result in a log-normal distribution as opposed to a low-median normal distribution, but it is interesting to see roughly the shape I was thinking of emerge at the end.

  • @quentinmerritt
    @quentinmerritt 2 years ago +1

    Dude that’s so cool! I’m a first year grad student at OSU looking to research Nuclear Theory! And I’ve been watching your videos since late high school, I’d love to see a series on QFT!

  • @someone5781
    @someone5781 2 years ago

    This was one of the most mindblowing videos I've seen in a while. Such amazing content Artem!

  • @Luper1billion
    @Luper1billion 2 years ago +1

    It's interesting, because I thought the video would be about how the brain perceives information logarithmically, but it actually shows it's physically built logarithmically as well.

  • @peterbenoit5886
    @peterbenoit5886 1 year ago

    Wonderful content on a most interesting topic.

  • @danin2013
    @danin2013 2 years ago +3

    i love your channel and the way you explain everything with such detail!

  • @nedfurlong8675
    @nedfurlong8675 2 years ago +1

    Your videos are fantastic. What an excellent communicator!

  • @bofloa
    @bofloa 2 years ago

    This lecture is wow...thanks

  • @isaacharton7851
    @isaacharton7851 2 years ago

    Very productive vid. It inspires me to be productive as well.

  • @lambdo
    @lambdo 2 years ago +1

    Wonderful explanation of the Gaussian distribution

  • @666shemhamforash93
    @666shemhamforash93 2 years ago +50

    Great video!
    I would love to see a follow-up video on neuronal avalanches and the critical brain hypothesis. A nice review on the topic that you might find useful is "Being critical of criticality in the brain" by Beggs and Timme (2012).

    • @ArtemKirsanov
      @ArtemKirsanov  2 years ago +6

      Thank you! I will definitely look into it!

    • @leif1075
      @leif1075 2 years ago

      @@ArtemKirsanov Thank you for sharing Artem. I hope you can respond to my message about how to deal with scientific papers and dealing with math when you can. Thanks very much.

    • @a__f
      @a__f 2 years ago +3

      Interestingly, I used to work in solar physics where avalanches are also a commonly used model for how solar flares occur

  • @ArgumentumAdHominem
    @ArgumentumAdHominem 2 years ago +7

    Very nicely made! To me, there is one important part missing: the CLT applies to i.i.d. variables, whereas the firing rates of neurons in the brain are clearly correlated, since the neurons are connected to each other. It would be great to hear more about how the interdependence of neuronal firing affects the discussed distribution.

    • @asdf8asdf8asdf8asdf
      @asdf8asdf8asdf8asdf 2 years ago

      Just as interesting, or more: is there a generalization of the CLT where the variables are neither independent nor identically distributed?
      A monstrously difficult problem - is there any theory in this space?

    • @neptunion
      @neptunion 2 years ago

      @@asdf8asdf8asdf8asdf
      > In fact, there are multiple types of CLT that apply in a variety of different contexts - cases including Bernoulli random variables (de Moivre - Laplace), where random variables are independent but do not need to be identically distributed (Lyapunov), and where random variables are vectors in R^k space (multivariate CLT).
      Found in the book below by googling around a bit.
      10 Fundamental Theorems for Econometrics, Chapter 4 Weak Law of Large Numbers and Central Limit Theorem.

    • @asdf8asdf8asdf8asdf
      @asdf8asdf8asdf8asdf 2 years ago

      @@neptunion Will take a look, thank you

    • @conduit242
      @conduit242 5 months ago

      @@asdf8asdf8asdf8asdf If there is dependence but only weak dependence, particularly if it weakens as the variables are separated in time or space, there is a version of the CLT called the “CLT for Weakly Dependent (Mixing) Sequences”. The variables need to satisfy certain “mixing” conditions that measure how much they depend on each other as their indices move apart.
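
    A rough sketch of the mixing idea in the reply above, assuming numpy: scaled partial sums of a weakly dependent AR(1) sequence (dependence decays geometrically with lag) still come out Gaussian.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    phi, n, reps = 0.5, 1_000, 5_000
    x = np.zeros((reps, n))
    for t in range(1, n):                        # AR(1): x_t = phi*x_{t-1} + noise
        x[:, t] = phi * x[:, t - 1] + rng.standard_normal(reps)
    z = x.sum(axis=1) / np.sqrt(n)               # scaled partial sums
    skew = ((z - z.mean()) ** 3).mean() / z.std() ** 3
    print(f"skewness of the scaled sums: {skew:.3f} (Gaussian target: 0)")
    ```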

  • @alkeryn1700
    @alkeryn1700 2 years ago +1

    I once wrote a spiking neural net with around a million neurons.
    Some neurons would fire almost every iteration, some every 10 iterations, and some would average once every thousand.
    I didn't bother to plot the distribution, but that could have been fun.

  • @abhishek101sim
    @abhishek101sim 2 years ago

    Helpful content, with a good lowering of the entry barrier for someone uninitiated. I learned a lot. A small but important point: the sum of independent random variables is not normally distributed, but the mean of independent random variables is normally distributed.

    • @stipendi2511
      @stipendi2511 2 years ago

      Technically you're right, since the limit of the sum of the random variables diverges. However, I don't think stressing that point helps with conceptual understanding, since in practice all sums are finite, and then the sum approximately resembles the SHAPE of a normal distribution. Once you normalize it, which is what taking the mean does, you obtain a probability distribution.

    • @Abhishek-zb3dp
      @Abhishek-zb3dp 2 years ago

      Technically it's not the mean but the mean times sqrt(n), where n is the number of samples used to compute the mean, in the limit of large n. Otherwise the mean would just collapse to a point as n becomes very large.
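
    A small sketch of the standardization this thread is circling, assuming numpy: the raw sum keeps spreading and the raw mean collapses to a point, but the centered, sqrt(n)-scaled sum settles on a standard normal.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1_000
    mu, sigma = 0.5, np.sqrt(1.0 / 12.0)         # Uniform(0,1) mean and std
    s_n = rng.uniform(0.0, 1.0, size=(10_000, n)).sum(axis=1)
    z = (s_n - n * mu) / (sigma * np.sqrt(n))    # classical CLT standardization
    print(f"mean ~ {z.mean():.3f}, std ~ {z.std():.3f} (targets: 0, 1)")
    ```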

  • @SuperEbbandflow
    @SuperEbbandflow 2 years ago +4

    Excellent video, keep the great content coming!

  • @gz6963
    @gz6963 2 years ago

    Thanks for the clear explanation, great video

  • @khwlasemaan8135
    @khwlasemaan8135 2 years ago

    Impressive ... neuroscience is a powerful topic

  • @Marquesremix
    @Marquesremix 2 years ago

    Your videos have good dynamics and didactics, and the editing is very harmonious. It's really impressive - why don't you have 1 million subscribers yet? One more subscriber from Brazil 🇧🇷

  • @TheBrokenFrog
    @TheBrokenFrog 2 years ago +2

    This is exactly what I was looking for today!! How strange that I found this exact topic here. Thank you :)

  • @buckrogers5331
    @buckrogers5331 2 years ago +5

    I've been interested in brain science since I was a kid. This is definitely understandable to a 10 year old kid. Well done! More content please!! And you should have more subscribers!!

    • @soymilkman
      @soymilkman 2 years ago +4

      damn you must be hella smart for a 10 yr old

  • @CYellowan
    @CYellowan 2 years ago

    OK, I will add ONE HUGE detail. I read a few years ago that the brain can work in up to "10 dimensions". This means that if data is sent 1 or 2 ways at all times, the Hz should be multiplied by 2x or 1.5x. But if a crosswalk synapse can send multiple signals at once, let's say up to 5 or 10, then conditionally it can perform 5x or 10x as fast in effect as the Hz suggests. So adding more specialists greatly boosts the efficiency factor via that alone.
    To me, this explains how some days things just "click" and I learn a thing right away 🤔🧐 - in how it relates to what I know, or how maybe some strong synapses formed quickly 👏

  • @editvega803
    @editvega803 2 years ago

    Wow! An amazing video! Thank you very much Artem. You have a new subscriber from Argentina 🇦🇷

  • @luwi8125
    @luwi8125 2 years ago +1

    Thank you for a great video! Very interesting topic and very nice of you to show the article to make people more likely to actually look it up for themselves. 😀👍

  • @NoNTr1v1aL
    @NoNTr1v1aL 2 years ago +2

    Absolutely amazing video! Subscribed.

  • @Andres186000
    @Andres186000 2 years ago +2

    3:56 The Cauchy distribution does not obey the central limit theorem and actually shows up relatively frequently, so that is a caveat to the central limit theorem worth noting (see the sketch at the end of this thread). This is a very big deal when talking about epidemiology and market behavior.

    • @marcelo55869
      @marcelo55869 2 years ago +2

      The central limit theorem appears when many different independent variables each give a small contribution to the output of the bell curve. That's why, in the limit of N going to infinity, it always holds true whichever distribution you choose for the variables.
      However, if one variable outweighs the others, or the variables are somehow correlated, or if the number of variables N is not large enough, the theorem does not hold.

    • @gideonk123
      @gideonk123 2 years ago +2

      The ordinary Central Limit Theorem is valid only for random variables which have a finite variance. Therefore it is NOT relevant for sums of variables where each is distributed as Cauchy, because Cauchy variables do not have finite variance (incidentally, they do not have a mean either…).
      But there is a different kind of limit theorem valid for ALL distributions: alpha-Stable distributions!
      Cauchy variables are a special case with alpha = 1.

    • @Evan490BC
      @Evan490BC 2 years ago

      @@gideonk123 Yes, exactly, this is the reason.
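
    A quick sketch of the caveat discussed in this thread, assuming numpy: sample means of Cauchy draws never settle down, no matter how many draws are averaged.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    for n in (100, 10_000, 1_000_000):
        means = np.array([rng.standard_cauchy(n).mean() for _ in range(100)])
        q1, q3 = np.percentile(means, [25, 75])
        print(f"n = {n:>9,}: IQR of 100 sample means = {q3 - q1:.2f}")
    # The spread never shrinks: the mean of n standard Cauchy draws is again
    # standard Cauchy, an alpha-stable law with alpha = 1.
    ```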

  • @ruperterskin2117
    @ruperterskin2117 2 years ago

    Cool. Thanks for sharing.

  • @anywallsocket
    @anywallsocket 2 years ago

    What it means is that when we measure things to be log-normal, we are assuming additive linearity, when there likely exists a more natural measure of the thing in its multiplicative non-linearity - e.g. we are ignoring the fact that the thing is self-interacting, or grows from itself.

    • @conduit242
      @conduit242 5 months ago

      That’s not what it means.

  • @chistovmaxim
    @chistovmaxim 2 years ago

    really interesting video for someone researching NNs, thanks!

  • @crimfan
    @crimfan 2 years ago +1

    Lognormal is the central limit theorem for RVs that combine in a multiplicative fashion (as long as the tails aren't too heavy).
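
    A minimal sketch of that multiplicative CLT, assuming numpy: the product of 50 positive i.i.d. factors comes out log-normal, because taking logs turns the product into a sum of i.i.d. terms.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    factors = rng.uniform(0.5, 1.5, size=(100_000, 50))   # positive i.i.d. factors
    logs = np.log(factors.prod(axis=1))                   # a sum of 50 i.i.d. logs
    skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
    print(f"skewness of log(product): {skew:.3f} (log-normal target: 0)")
    ```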

  • @geodesic616
    @geodesic616 2 years ago

    Why are guys like this so under-subscribed? Wish you success.

  • @KaliFissure
    @KaliFissure 2 years ago

    Most stimulating content in ages! 👍🖖🤘

  • @ASMM1981EGY
    @ASMM1981EGY 1 year ago +1

    Awesome episode

  • @Jeffben24
    @Jeffben24 2 years ago

    Thank you Artem ❤

  • @YonatanLoewenstein
    @YonatanLoewenstein 2 years ago

    Very nice!
    An explanation of why the distribution of firing rates in the cortex is log-normal can be found in Roxin, Alex, et al. "On the distribution of firing rates in networks of cortical neurons." Journal of Neuroscience 31.45 (2011): 16217-16226.

  • @stevenschilizzi4104
    @stevenschilizzi4104 2 years ago

    Brilliant, Artem! And fascinating.

  • @jpcf
    @jpcf 2 years ago

    High quality content here!

  • @quantumfineartsandfossils2152
    @quantumfineartsandfossils2152 12 days ago

    I like your Mah jong music. This is how you arrive at algorithmic diagnostics for molecular & particle robotics using topological quantum refrigeration

  • @thomassoliton1482
    @thomassoliton1482 2 years ago

    On a more “global” neural scale, it is well known that there is a strong (inverse) relationship between EEG power and frequency: the fast Fourier transform of EEG activity is high at low frequencies (< 5 Hz) and low at high frequencies (up to 100 Hz) (Buzsáki, Steriade, and others), so that a plot of log(power) vs log(frequency) is fairly linear. Not surprising, since it could be considered an “emergent” property of the neural spiking distribution you show. The slope of that relationship can be used to deduce the state of consciousness - sleep (steep negative slope) versus attentive waking (shallow negative slope). So high-frequency power shows a relative increase during waking, because (generally) there is less synchronization of neurons by thalamo-cortical inputs than during sleep.
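
    A toy sketch of the slope measure described above, assuming numpy and scipy (the EEG-specific numbers are not reproduced here): white noise gives a slope near 0, while 1/f-like signals give negative slopes.

    ```python
    import numpy as np
    from scipy.signal import welch

    def spectral_slope(x, fs, fmin=1.0, fmax=100.0):
        """Slope of log(power) vs log(frequency) over a band, via a Welch PSD."""
        freqs, power = welch(x, fs=fs, nperseg=4096)
        band = (freqs >= fmin) & (freqs <= fmax)
        slope, _ = np.polyfit(np.log(freqs[band]), np.log(power[band]), 1)
        return slope

    rng = np.random.default_rng(7)
    white = rng.standard_normal(200_000)
    print(f"white-noise slope: {spectral_slope(white, fs=1000.0):.2f} (target: 0)")
    ```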

  • @middlegrounds-was-taken
    @middlegrounds-was-taken 4 months ago

    What a great video! Thanks for your effort in making this

  • @kapteinskruf
    @kapteinskruf 2 years ago

    Outstanding!

  • @justmewendy6461
    @justmewendy6461 1 year ago

    Very good. Thank you.

  • @accountname1047
    @accountname1047 2 years ago

    This video is fantastic

  • @tringuyenuc2381
    @tringuyenuc2381 2 years ago +1

    This is a very nice connection between logarithmic perception and biological features of humans. I wonder if there is an analogous explanation for the 70-30 rule?

  • @TheKemalozgur
    @TheKemalozgur 2 years ago +1

    Whenever there is log-normal behaviour, we can think of the connected and combined behaviour of things, namely an evolutionary step. The order of importance of things can only stabilize in a logarithmic fashion.

  • @enricoginelli3405
    @enricoginelli3405 2 years ago

    Super cool video Artem! Keep up!

  • @neutrino9
    @neutrino9 2 years ago

    Truly amazing topics, thank you !

  • @CoolDudeClem
    @CoolDudeClem 2 years ago +2

    I just want to probe the parts of my brain where the picture and sounds form so I can record my dreams and then play them back like a movie.

  • @aaronsmith6632
    @aaronsmith6632 2 years ago +3

    Freaking fascinating. I imagine these properties would transfer to neural network design as well!

  • @knaz7468
    @knaz7468 2 years ago

    Really nice explanation, thanks!

  • @MattAngiono
    @MattAngiono 2 years ago +2

    New to this channel and finding this very intriguing!
    It seems to even parallel the patterns in how we actually think on the macro level.
    Are you familiar with cognitive scientist and YouTuber John Vervaeke?
    I bet you two could have a wonderful conversation that both audiences would enjoy!
    He speaks much more about the big picture of cognition, yet so much of it involves these similar patterns with a split in extending out vs honing in.

  • @anywallsocket
    @anywallsocket 2 years ago

    It’s just a counterbalance to the exponential growth of error or chaos. Having lognormal actions optimally slows entropy growth.

  • @sunkojusurya2864
    @sunkojusurya2864 2 years ago

    Insightful video. 👍 Keep going.

  • @kimchi_taco
    @kimchi_taco 2 years ago

    11:05 Synaptic weights are log-normal. ML should initialize weights log-normally.
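
    A hedged sketch of the commenter's suggestion, assuming numpy; this is an illustration, not an established initialization scheme, and the mu/sigma values are arbitrary. Log-normal draws are positive, so random signs are attached.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def lognormal_init(shape, mu=-2.0, sigma=1.0):
        """Log-normal magnitudes (heavy right tail) with random signs."""
        magnitude = rng.lognormal(mean=mu, sigma=sigma, size=shape)
        sign = rng.choice([-1.0, 1.0], size=shape)
        return magnitude * sign

    w = lognormal_init((256, 128))
    print(f"median |w| = {np.median(np.abs(w)):.4f}, max |w| = {np.abs(w).max():.2f}")
    # A handful of weights end up orders of magnitude larger than the typical one.
    ```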

  • @whiteoutTM
    @whiteoutTM 2 years ago

    fascinating and engaging!

  • @wilsonbohman3543
    @wilsonbohman3543 1 year ago

    I have a heavy background in audio production, and I figured this made a lot of sense given the logarithmic nature of how we perceive sound. It's cool to see that this is just inherent to our brains in general.

  • @AswanthCR7
    @AswanthCR7 2 years ago +19

    Loved the video and the presentation :) Can biasing the weights of an artificial neural network toward such a log-normal distribution provide any advantage?

    • @anonymous_4276
      @anonymous_4276 2 years ago +1

      Exactly what I was wondering!

  • @IIAOPSW
    @IIAOPSW 2 years ago

    I'm going to engage in some rank speculation (except it's not really speculation, because there is *some* research I could cite). Suppose that those brain waves you're picking up aren't a single neuron at all, but actually a self-perpetuating loop signal of several neurons. The longer the loop, the longer it takes for the signal to propagate back to the start, and the lower the frequency. Now if you were to take a bunch of neurons connected to each other completely at random, and then count the number of loops consisting of 2 neurons, 3 neurons, 4 neurons... n neurons, you should find that the number of loops goes down exponentially with length. Loops of length k can be made in n choose k different ways. The lower the frequency, the longer the loop, the fewer possible ways to construct it. Hence the frequencies end up with a log distribution.

  • @QasimAlKhuzaie
    @QasimAlKhuzaie 2 years ago

    A very interesting video. Thank you very much

  • @Mr.Nichan
    @Mr.Nichan 1 year ago

    Log-normal distributions are also pretty common, especially with frequency. For instance, I think that's what the black-body curve is, though I may be wrong.

  • @mapnzap
    @mapnzap 2 years ago

    That was very well done

  • @bovanshi6564
    @bovanshi6564 2 years ago +1

    Great video, really interesting!

  • @matveyshishov
    @matveyshishov 2 years ago

    Beautiful, thank you!

  • @dalliravitejareddy3089
    @dalliravitejareddy3089 2 years ago

    great effort

  • @davispeixoto
    @davispeixoto 7 months ago

    Your videos are awesome!

  • @RanLevi
    @RanLevi 1 year ago

    That was amazing! Great work, Artem - love your videos :-)

  • @maxmyzer9172
    @maxmyzer9172 2 years ago

    3:21 this video so far is more helpful than the statistics course i took

  • @neillamas8929
    @neillamas8929 2 years ago

    Summary: the spiking frequency (aka firing rate) of neurons in the brain follows a log-normal distribution. This can be seen as a small number of generalizer neurons being responsible for most daily neural activity (~10% of neurons produce 50% of the activity), compared to specialist neurons. (Note that this is not a binary classification but rather a continuum.) One of the most promising hypotheses explaining the emergence of this distribution is that the change in size of a synaptic spine is proportional to its size.
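
    A toy sketch of the hypothesis named in this summary, assuming numpy: if each update multiplies a spine's size by a random factor (i.e. the change is proportional to the current size), sizes drift toward a log-normal distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    sizes = np.ones(50_000)                      # synaptic spine "sizes"
    for _ in range(200):                         # growth step proportional to size
        sizes *= np.clip(rng.normal(1.0, 0.05, size=sizes.shape), 0.5, 1.5)
    logs = np.log(sizes)
    skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
    print(f"skewness of log(size): {skew:.3f} (log-normal target: 0)")
    ```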

  • @anthonyhsiao4560
    @anthonyhsiao4560 2 years ago

    Amazing video. Thank you.
    I would guess that one or a mix of the following two is at the (physical) root of this:
    1) Either this is due to the "serial nature" of things, e.g. they are connected in series and hence physically embody a multiplication - one neuron firing triggers the next one, which triggers the next one, etc. Since it's a multiplication, it becomes log-normal.
    2) Alternatively, it could be because of the hierarchical structure of the network (brain). You mentioned there is a spectrum of general (higher-level) vs specialist (lower-level) neurons, and since they are organized hierarchically, there is again this serialness, since a higher-level neuron might be triggered by a lower-level neuron.

  • @ЕвгениГеоргиев-т1я
    @ЕвгениГеоргиев-т1я 2 years ago

    great video analysis

  • @PalCan
    @PalCan 2 years ago

    Awesome video

  • @martinr7728
    @martinr7728 2 years ago

    I'm quite impressed by how you present all the information - very concise and clear