What is Back Propagation

  • Published on Feb 10, 2025
  • Learn about watsonx→ ibm.biz/BdyEjK
    Neural networks are great for predictive modeling - everything from stock trends to language translations. But what if the answer is wrong? How do they “learn” to do better? Martin Keen explains that during a process called backward propagation, the generated output is compared to the expected output, and the error contributed by each neuron (or “node”) is examined. By adjusting each node’s weights and biases, the error is reduced and the overall accuracy is improved. (A minimal code sketch of this loop follows below.)
    Get started for free on IBM Cloud → ibm.biz/sign-u...
    Subscribe to see more videos like this in the future → ibm.biz/subscri...
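
    A minimal sketch of the loop described above, written in Python with NumPy; the network size, data, and learning rate are illustrative assumptions, not details from the video:

        import numpy as np

        # Illustrative two-layer network: forward pass, compare generated
        # output to expected output, backpropagate the error, then adjust
        # each node's weights and biases to reduce it.
        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)      # input -> hidden
        W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)      # hidden -> output
        x, y = np.array([[0.5, -0.2]]), np.array([[1.0]])  # one training pair
        lr = 0.1                                           # learning rate

        for step in range(100):
            h = np.tanh(x @ W1 + b1)             # forward: hidden activations
            y_hat = h @ W2 + b2                  # forward: generated output
            loss = 0.5 * np.sum((y_hat - y) ** 2)

            d_y = y_hat - y                      # error at the output
            dW2 = h.T @ d_y                      # error contributed per weight
            db2 = d_y.sum(axis=0)
            d_h = (d_y @ W2.T) * (1 - h ** 2)    # pass error back through tanh
            dW1 = x.T @ d_h
            db1 = d_h.sum(axis=0)

            W1 -= lr * dW1; b1 -= lr * db1       # adjust weights and biases
            W2 -= lr * dW2; b2 -= lr * db2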

Comments • 55

  • @vencibushy
    @vencibushy 1 year ago +48

    Back propagation is to neural networks what negative feedback is to closed-loop systems. The understanding comes pretty much naturally to people who studied automation and control engineering.
    However, many articles tend to mix things up - in this case, back propagation and gradient descent. Back propagation is the process of passing the error back through the layers and using it to recalculate the weights. Gradient descent is the algorithm used for the recalculation; there are other algorithms for recalculating the weights (see the sketch below).
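
    A short sketch of that separation (illustrative names and values, assuming NumPy): backpropagation produces a gradient, and the update rule that consumes it is a separate, swappable choice:

        import numpy as np

        # Plain gradient descent is one way to recalculate the weights:
        def sgd_step(w, grad, lr=0.01):
            return w - lr * grad

        # ...and momentum is a different recalculation rule using the
        # same backpropagated gradient:
        def momentum_step(w, grad, v, lr=0.01, beta=0.9):
            v = beta * v - lr * grad
            return w + v, v

        w = np.array([0.5, -1.0])
        grad = np.array([0.2, -0.1])   # pretend this came from backprop
        w_sgd = sgd_step(w, grad)
        w_mom, v = momentum_step(w, grad, np.zeros_like(w))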

  • @Kiera9000
    @Kiera9000 1 year ago +25

    thanks for getting me through my exams, because my professor's script was literally no help in understanding deep learning. Cheers mate

  • @Zethuzzz
    @Zethuzzz 10 months ago +6

    Remember the chain rule you learned in high school? Well, that's what is used in backpropagation (a quick numeric check follows below).
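
    A quick numeric check of that chain rule in Python (illustrative values): for a loss (w*x - y)^2, the chain rule gives dL/dw = 2*(w*x - y)*x, which matches a finite-difference estimate:

        w, x, y = 1.5, 2.0, 1.0
        y_hat = w * x
        analytic = 2 * (y_hat - y) * x    # chain rule: dL/dy_hat * dy_hat/dw
        eps = 1e-6
        numeric = (((w + eps) * x - y) ** 2 - ((w - eps) * x - y) ** 2) / (2 * eps)
        print(analytic, numeric)          # both ~8.0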

  • @hamidapremani6151
    @hamidapremani6151 10 months ago +5

    Brilliantly simplified explanation for a fairly complex topic. Thanks, Martin!

    • @hashemkadri3009
      @hashemkadri3009 9 months ago +1

      marvin u mean, smh

  • @anant1870
    @anant1870 1 year ago +25

    Thanks for this Great explanation MARK 😃

  • @muskansoni581
    @muskansoni581 10 hours ago

    Beautifully explained backpropagation🌸

  • @saisrikaranpulluri1472
    @saisrikaranpulluri1472 11 days ago

    Incredible, Martin's example made me understand the concept exactly. Your real-life examples are great as well as entertaining.

  • @BrianMarcWhittaker
    @BrianMarcWhittaker 2 months ago

    Thank you for explaining this. I'm reading “Architects of Intelligence” and that's the first time I’ve heard the term backpropagation. Your examples and drawings help me better understand the topic.

  • @mercyl2355
    @mercyl2355 2 months ago +7

    Thanks Marlon.

  • @Mary-ml5po
    @Mary-ml5po 1 year ago +8

    I can't get enough of your brilliant videos. Thank you for making what seemed complicated to me before easy to understand. Could you please post a video about loss functions and gradient descent?

    • @im-Anarchy
      @im-Anarchy 1 year ago +1

      What did he even teach, actually?

  • @Adnanuni
    @Adnanuni 4 months ago +4

    Thank you Mariin😃

  • @pleasethink4789
    @pleasethink4789 1 year ago +6

    Hi Marklin!
    Thank you for such a great explanation.
    (btw, I know your name is Martin. 😂 )

  • @ca1790
    @ca1790 8 months ago +3

    The gradient is passed backward using the chain rule from calculus. The gradient is just a multivariable form of the derivative. It is an actual numerical quantity for each "atomic" part of the network, usually a neuron's weights and bias. (A one-neuron sketch follows below.)
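
    A one-neuron sketch of that point, assuming a sigmoid activation and squared-error loss (all names and values are illustrative): the chain rule yields one concrete gradient number per weight, plus one for the bias:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        x = np.array([0.4, -0.7])          # inputs to one neuron
        w, b, y = np.array([0.1, 0.3]), 0.0, 1.0

        y_hat = sigmoid(w @ x + b)
        dL_dyhat = 2 * (y_hat - y)         # d/dy_hat of (y_hat - y)^2
        dyhat_dz = y_hat * (1 - y_hat)     # sigmoid derivative
        dL_dw = dL_dyhat * dyhat_dz * x    # one gradient entry per weight
        dL_db = dL_dyhat * dyhat_dz        # one for the bias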

  • @brpawankumariyengar4227
    @brpawankumariyengar4227 26 days ago

    Very Good Video …. Thank you very much ❤

  • @sakshammishra9232
    @sakshammishra9232 1 year ago +2

    Lovely man... excellent videos, all complexities eliminated. Thanks a lot 😊

  • @joeyoviedo5202
    @joeyoviedo5202 4 months ago

    Thank you so much Morlin! Great video

  • @RadiantNij
    @RadiantNij 5 months ago

    Great work, so easy to understand

  • @EMos48
    @EMos48 4 months ago

    Awesome thank you Marvin.

  • @l_a_h797
    @l_a_h797 9 months ago +1

    5:36 Actually, convergence does not necessarily mean the network is able to do its task reliably. It just means that its reliability has reached a plateau. We hope that the plateau is high, i.e. that the network does a good job of predicting the right outputs. For many applications, NNs are currently able to reach a good level of performance, but in general, what is optimal is not always very good. For example, a network with just 1 layer of 2 nodes is not going to be successful at handwriting recognition, even if its model converges. (A small convergence-check sketch follows this thread.)

    • @mateusz6190
      @mateusz6190 9 months ago

      Hi, you seem to have good knowledge of this, so can I ask you a question, please? Do you know if neural networks would be good at recognizing handwritten math expressions (digits, operators, variables, all elements separated so they can be recognized individually)? I need a program that would do that, and I tried a neural network; it is good on images from the dataset but terrible on anything from outside the dataset. Would you have any tips? I would be really grateful.
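
    A minimal illustration of the convergence point above (function name, tolerance, and values are arbitrary assumptions): "converged" only means the loss has stopped improving; it says nothing about whether the final accuracy is good:

        def has_converged(losses, tol=1e-3):
            # True once the last improvement is below tol (a plateau)
            return len(losses) >= 2 and abs(losses[-2] - losses[-1]) < tol

        losses = [0.9, 0.5, 0.31, 0.301, 0.3005]
        print(has_converged(losses))   # True: plateaued, not necessarily accurate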

  • @KamleshSingh-um9jy
    @KamleshSingh-um9jy 7 months ago

    Excellent session... thank you!!

  • @sweealamak628
    @sweealamak628 10 months ago +1

    Thanks Mardnin!

  • @npomfret
    @npomfret 3 months ago

    This would really benefit from a (simple) worked example

  • @msatyabhaskarasrinivasacha5874
    @msatyabhaskarasrinivasacha5874 9 months ago

    Awesome... awesome, superb explanation, sir

  • @ramuk-
    @ramuk- 5 months ago

    thanks Marvin!

  • @boeng9371
    @boeng9371 1 year ago +1

    In IBM we trust ✊😔

  • @idobleicher
    @idobleicher 10 months ago

    A great video!

  • @1955subraj
    @1955subraj 1 year ago

    Very well explained 🎉

  • @ashodapakian2788
    @ashodapakian2788 9 months ago +1

    Off topic: what drawing board setup do these IBM videos use? It's really great.

    • @boyyang1290
      @boyyang1290 9 months ago

      I'd like to know, too.

    • @boyyang1290
      @boyyang1290 9 months ago

      I found it: he is drawing on glass.

  • @sahanseney134
    @sahanseney134 7 months ago

    cheers Marvin

  • @rishidubey8745
    @rishidubey8745 8 months ago

    thanks marvin

  • @rigbyb
    @rigbyb 1 year ago

    Great video! 😊

  • @stefanfueger3487
    @stefanfueger3487 1 year ago +23

    Wait... the video has been online for four hours... and still no question about how he manages to write mirrored?

    • @Aegon1995
      @Aegon1995 1 year ago +4

      There’s a separate video for that

    • @IBMTechnology
      @IBMTechnology 1 year ago +11

      Ha, that's so true. Here you go: ibm.biz/write-backwards

    • @tianhanipah9783
      @tianhanipah9783 1 year ago +4

      Just flip the video horizontally

  • @somethingdifferent1910
    @somethingdifferent1910 7 months ago

    At 2:20, when he was talking about biases, do they have any relation to hyperparameters or regularization?

  • @harrybellingham98
    @harrybellingham98 5 months ago

    It probably would have been good to mention that this is supervised learning, as the explanation would not translate well for a beginner trying to apply it to other forms of NNs.

  • @guliyevshahriyar
    @guliyevshahriyar 1 year ago

    Thank you!

  • @Ellikka1
    @Ellikka1 10 months ago

    When computing the loss function, how is the "correct" output given? Is it training data that is then compared against another data file with the desired outcomes? In the "Martin" example, how does the neural network get to know that your name was not Mark?

  • @neail5466
    @neail5466 1 year ago +1

    Thank you for the information.
    Could you please tell me if BP is only available and applicable to supervised models, since we have to have a precomputed result to compare against?
    Certainly, unsupervised models could also use this in theory, but would it help in a positive way?
    Additionally, how is the comparison actually performed?
    Especially for information that can't be quantized!

  • @mr.wiksith5091
    @mr.wiksith5091 5 months ago

    thank youu

  • @jaffarbh
    @jaffarbh 1 year ago

    Isn't back propagation used to lower the computation needed to adjust the weights? I understand that doing so in a "forward" fashion is much more expensive than in a "backward" fashion.

  • @gren509
    @gren509 7 months ago +3

    Save yourself 8 minutes... it's a FEEDBACK loop - FFS!

  • @the1111011
    @the1111011 1 year ago

    Why didn't you explain how the network updates the weights?

  • @Justme-dk7vm
    @Justme-dk7vm 10 months ago +1

    ANY CHANCE TO GIVE 1000 LIKES ???😩

  • @mohslimani5716
    @mohslimani5716 1 year ago

    Thanks, but I still need to understand how it technically happens.

    • @AnjaliSharma-dv5ke
      @AnjaliSharma-dv5ke 1 year ago +2

      It’s done by calculating the derivatives of the y-hats with respect to the weights, working backwards through the network and applying the chain rule of calculus (a small sketch follows below).
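
      A small sketch of that reply, assuming just two chained weights (all values illustrative): the derivative of y_hat with respect to each weight is accumulated by walking backwards through the composed functions:

          import numpy as np

          x = 0.8
          w1, w2 = 0.5, -1.2           # y_hat = w2 * tanh(w1 * x)
          h = np.tanh(w1 * x)
          y_hat = w2 * h

          d_w2 = h                     # dy_hat/dw2, computed at the output
          d_h = w2                     # dy_hat/dh, passed backwards
          d_w1 = d_h * (1 - h**2) * x  # chain rule through tanh(w1 * x)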

  • @vservicesvservices7095
    @vservicesvservices7095 6 months ago

    Using more unexplained terminology to explain the terminology you're trying to explain is a source of confusion. 😂 Thumbs down.

  • @tsvigo11_70
    @tsvigo11_70 6 months ago

    A neural network cannot be connected by weights; this is nonsense. It can be connected by synapses, that is, by resistances. The way the network learns is incredibly tricky: not only does it have to remember the correct result, which is not easy in itself, but it has to continue to remember the correct result while remembering a new correct result. This is what distinguishes a neural network from a fishing net.