Backpropagation and the brain

  • Published on Sep 5, 2024

Comments • 39

  • @YannicKilcher  4 years ago +7

    Note: This is a reupload. Sorry for the inconvenience.

    • @Stopinvadingmyhardware  1 year ago

      The brain does this thing called axon regulation. In some parts where there are reuptake axons, they self-regulate to reduce the amount of feedback when overstimulated. Basically this means they close and leave the flooded neurotransmitter in the flow stream for the dendrites. This has the effect of down-regulating the signal.
      I saw another video where you covered the direct feedback mechanism and mentioned that the neurons didn't have a back-propagation mechanism, and I wanted to share that with you.

  • @stephanrasp3796  4 years ago +11

    I think at 4:50, the perturbation should be added to w, not x, i.e. f(x, w+n). Awesome content btw!

    • @YannicKilcher  4 years ago +2

      True, you want to jiggle the model itself. Thanks!
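
A quick sketch of the corrected idea (a toy model of my own, not code from the video): perturb the weights with noise n, observe the scalar change in loss between f(x, w+n) and f(x, w), and step along n scaled by that change. For small sigma this is the classic weight-perturbation estimator of the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model f(x, w) = w @ x with a squared-error loss (hypothetical example).
def loss(w, x, y):
    return float(np.sum((w @ x - y) ** 2))

x = rng.normal(size=3)            # the input stays fixed
y = np.array([1.0, -1.0])         # target output
w = rng.normal(size=(2, 3))       # the weights are what we jiggle
sigma, lr = 0.01, 0.02

loss_before = loss(w, x, y)
for _ in range(5000):
    n = rng.normal(scale=sigma, size=w.shape)   # noise on w, i.e. f(x, w + n)
    delta = loss(w + n, x, y) - loss(w, x, y)   # scalar feedback signal
    w -= lr * (delta / sigma**2) * n            # step along n, scaled by delta
loss_after = loss(w, x, y)
```

For small sigma, the expectation of (delta / sigma²)·n approximates the true gradient, which is why this noisy scalar signal is enough to descend the loss, just far less efficiently than backprop in high dimensions.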

  • @redone9553  3 years ago +4

    Thanks for the upload! But who says that we need negative voltage for a signed gradient? Why not assume high frequencies are positive and low are negative?

  • @MikkoRantalainen  1 year ago

    Great video! I think I've seen at least a summary of this algorithm before, and this video makes it much clearer.

  • @dermitdembrot3091  4 years ago +5

    Could it be that perturbation learning is just Hebbian learning where the updates are scaled by the "reward"? So if the "reward" is always 1 it would correspond to Hebbian learning. And for negative rewards the weights are changed to reduce the activations. In the r=-1 vs r=-2 case that would give a negative update for both but a stronger one for the second "action" (comparable to the REINFORCE algorithm).

    • @YannicKilcher  4 years ago +2

      Yes that's exactly what's happening. Basically every unit does RL by itself.

    • @dermitdembrot3091  4 years ago

      @@YannicKilcher Thanks for confirmation!
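
A minimal sketch of this reward-scaled Hebbian view (a toy setup of my own invention, not from the paper): a single linear unit perturbs its output with noise, computes a scalar advantage, and updates with advantage × noise × presynaptic input, the REINFORCE-style product described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# One linear unit trained by node perturbation (hypothetical toy setup):
# the update is a Hebbian-looking product (output perturbation times
# presynaptic input) scaled by a scalar reward signal.
x = rng.normal(size=3)       # fixed presynaptic input
w = np.zeros(3)              # weights of the unit
target = 2.0                 # desired output
sigma, lr = 0.05, 0.05

for _ in range(1000):
    noise = rng.normal(scale=sigma)
    y = w @ x + noise                      # perturbed "action"
    reward = -(y - target) ** 2            # scalar reward, higher is better
    baseline = -(w @ x - target) ** 2      # reward of the unperturbed output
    advantage = reward - baseline          # r = -1 vs r = -2 now differ in strength
    w += lr * advantage * noise * x / sigma**2   # reward-scaled Hebbian update
```

Setting the advantage to a constant +1 would recover a plain Hebbian-style update proportional to noise·x; negative advantages reverse it, exactly the sign behavior discussed in the thread.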

  • @Zantorc  4 years ago +6

    For perturbation learning, excitation and inhibition use completely different mechanisms in the brain: the neurotransmitter is even different, and different cell types are involved. So rather than dampening all weights when the result is wrong, it can selectively dampen the excitation and/or amplify the inhibition. So there is an extra degree of freedom, which is the degree to which the correction falls on the inhibitory vs. excitatory neurons, as well as the magnitude of the correction. So this is at least a 2D correction vector, possibly more, given that individual neuron sub-types may be differently affected. Therefore my claim is that in the brain it's not so much 'scalar feedback' as 'vector feedback', at least for perturbation learning. I suspect it is the lack of distinction between neurons in ML which leads to poor results for perturbation learning.

    • @iuhh  4 years ago +1

      I think the different mechanisms in a single brain neuron could probably be represented by two or more artificial neurons though, maybe in multiple layers that handle excitation and inhibition separately, so I'm not sure how that could relate to the quality of the results.

    • @Zantorc  4 years ago +3

      @@iuhh The more you know about neurons, the less likely you are to think that. The point neuron can't do what a pyramidal neuron can do: it's predictive, and synapse strength isn't the equivalent of a weight. It's one bit at most on distal and apical dendrites and doesn't cause firing; it's part of the pattern-matching process.
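
Purely as an illustration of the 'vector feedback' idea above (my own toy construction, with no claim of biological accuracy): give a unit separate excitatory and inhibitory weight populations and send each population its own scalar correction, so the feedback is a 2-vector rather than a single scalar.

```python
import numpy as np

rng = np.random.default_rng(2)

# Non-negative inputs and two sign-constrained weight populations.
x = np.abs(rng.normal(size=4))        # presynaptic firing rates
w_exc = np.abs(rng.normal(size=4))    # excitatory weights, kept >= 0
w_inh = np.abs(rng.normal(size=4))    # inhibitory weights, kept >= 0

def output(w_exc, w_inh, x):
    return float(w_exc @ x - w_inh @ x)   # inhibition subtracts

target, lr = 0.5, 0.05
for _ in range(300):
    err = output(w_exc, w_inh, x) - target
    # 2D correction vector: one scalar per population. When the output
    # is too high, dampen excitation AND amplify inhibition.
    fb_exc, fb_inh = -err, err
    w_exc = np.clip(w_exc + lr * fb_exc * x, 0.0, None)
    w_inh = np.clip(w_inh + lr * fb_inh * x, 0.0, None)
```

In this toy the two scalars are both tied to the same error; the comment's point is that the brain could set them independently, e.g. weighting the correction more toward inhibitory cells than excitatory ones.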

  • @jyotiswarupsamal1587  2 years ago

    This is a good explanation. I could understand the basics.
    Thank you

  • @terumiyuuki6488  4 years ago +3

    It does sound suspiciously like Decoupled Neural Interfaces. Think you'd like to make a video on that? It would be great.
    Keep up the great work!

  • @Neural_Causality  4 years ago +4

    Does anyone know of an implementation of the idea proposed in the paper?
    Also, thanks a lot for sharing this paper, and your comments on different papers; I think it's quite useful!

    • @YannicKilcher  4 years ago +1

      If you look in the comments here you'll find a link to Bengio's paper about the algorithm, they might have something.

    • @Neural_Causality  4 years ago

      @@YannicKilcher Thanks! Will check it

  • @Murmur1131  3 years ago

    Thanks so much! Super interesting! High class content!

  • @BuzzBizzYou  4 years ago +2

    Won’t the proposed network create a massive IIR filter?

  • @bzqp2  2 years ago

    I like how, as soon as the paper is written by Hinton, you immediately switched from drawing the layers horizontally to drawing them vertically xd

  • @8chronos  2 years ago

    Thanks for this nice video.
    One thing still seems unclear to me: does this only allow for possibly near-biological NN training, or are there also other advantages?
    E.g. is it faster than backprop?

    • @moormanjean5636  2 years ago +1

      This is what I would like to know as well. I would guess it's slower, but perhaps the only way to train networks in a comparable manner given certain assumptions.

  • @victorrielly4588  4 years ago +2

    Here's a link to the arXiv paper on difference target propagation, for anyone like me who doesn't want to pay to read the biology paper. Also, this paper looks like the original work describing the machine learning aspect of this idea.
    arxiv.org/pdf/1412.7525.pdf

  • @joirnpettersen  4 years ago +4

    If the brain uses back-propagation, and we can someday figure out a way to model it mathematically, would adversarial attacks become a thing we might need to worry about? If not, would it be for a lack of information, or is there some difference between the way the brain does it and the way we do it on computers?

    • @YannicKilcher  4 years ago

      very nice question. I think this is as of yet unanswered, but definitely possible.

    • @BrtiRBaws  4 years ago +11

      Maybe we can see optical illusions as a sort of adversarial attack :)

    • @maloxi1472  4 years ago +3

      ​@@BrtiRBaws Yes, absolutely. I would argue that things like optical illusions, ideological belief structures, very elaborate lies, hallucinogens, unhealthy but tasty food... are all adversarial attacks on different substructures of the brain

    • @priyamdey3298  3 years ago +1

      Numenta shows that if information flow (both inputs and weights of neurons) is quite sparse, then a network becomes quite robust to perturbations / random noise. And they say that the brain has very sparse information flow. So maybe yes; we have yet to include more meaningful priors (like sparseness) in the right way to make networks robust.

    • @bzqp2  2 years ago

      Hitting a guy in the head with a shovel can be an adversarial neural network attack.

  • @stefanogrillo6040  9 months ago

    Duper

  • @sehbanomer8151  4 years ago +1

    I thought this was a part 2 or something

    • @YannicKilcher  4 years ago +3

      no, sorry, I deleted it by accident

  • @ThinkTank255  2 years ago +2

    How many times do I have to tell you guys, the brain doesn't "learn"??? The brain *memorizes* verbatim. For prediction, the brain asks, "What matches my memories best?" and chooses that as a prediction. It is as simple as that. Brains are generally *not* as good as backpropagation at generalization, but that feature of brains is actually very useful for nonlinear spatio-temporal patterns, such as doing mathematics and logic. This is why, to date, ML-based methods have not been able to solve extremely complex reasoning-based problems: they overgeneralize when it comes to nonlinear logical processes.
    It is actually extremely easy to prove the brain doesn't use backpropagation. How many times do you have to read a book to give a good summary? Once. Etc. The brain learns *instantly* by rote memorization. Instant learning brings many evolutionary benefits.

    • @DajesOfficial  1 year ago

      How many times have you read a book before it became possible for you to give a good summary on the first read? Let's test your hypothesis by giving a book to an infant and asking them to give a good summary on the first read.

    • @ThinkTank255  1 year ago +1

      @@DajesOfficial You've actually proven my point. The problem is, most humans aren't particularly good at remembering factual information. This is because 99.99% of the information you are getting at any given time isn't factual information. It's random sights, sounds, and smells that your brain deems important for your survival. The reason adults are better than infants is that they have practiced the skill of honing in on factual information.

  • @herp_derpingson  4 years ago

    DEJA VU

    • @YannicKilcher  4 years ago

      yea sorry, I hope YT reinstates the old one

  • @palfers1  4 months ago

    2020 is quite dated.