Back Propagation in Neural Network with an Example | Machine Learning (2019)

  • Published on 3 Oct 2024
  • Backpropagation in a neural network is a supervised learning algorithm for training multi-layer perceptrons (artificial neural networks).
    The backpropagation algorithm searches for the minimum of the error function in weight space using a technique called the delta rule, or gradient descent. The weights that minimize the error function are then considered a solution to the learning problem.
    Visit our website for more Machine Learning and Artificial Intelligence blogs
    www.codewrestl...
    Check out other videos on Machine Learning
    Decision Tree (ID3 Algorithm) : • Decision Tree Solved |...
    Candidate Elimination Algorithm : • Candidate Elimination ...
    Naive Bayes Algorithm : • Naive Bayes algorithm ...
    Check out the best programming language for 2020
    • Top Programming Langua...
    Check out the best laptops for programming in machine learning and deep learning in 2020
    • Best Laptop for Machin...
    10 best artificial intelligence startups in India
    • 10 Artificial Intellig...
    Join us on Telegram for free placement and coding challenge resources, including Machine Learning ~ @Code Wrestling
    t.me/codewrest...
    Ask me A Question: codewrestling@gmail.com
    Refer Slides: github.com/cod...
    Music: www.bensound.com
    For Back Propagation slides comment below 😀
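    The gradient-descent idea summarized above can be sketched in code. Below is a minimal Python sketch of one forward pass and one delta-rule update for a single weight in a 2-2-2 sigmoid network with squared error, in the style of the video's worked example; the inputs, weights, targets, and learning rate are illustrative assumptions, not necessarily the exact values on the slides:

```python
import math

def sigmoid(z):
    # Logistic activation used throughout the worked example
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative 2-2-2 network (all values assumed for this sketch)
i1, i2 = 0.05, 0.10                       # inputs
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input -> hidden weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output weights
b1, b2 = 0.35, 0.60                       # one shared bias per layer
t1, t2 = 0.01, 0.99                       # target outputs
lr = 0.5                                  # learning rate

# Forward pass
h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
o1 = sigmoid(w5 * h1 + w6 * h2 + b2)
o2 = sigmoid(w7 * h1 + w8 * h2 + b2)
E = 0.5 * (t1 - o1) ** 2 + 0.5 * (t2 - o2) ** 2  # total squared error

# Backward pass for one weight, w5, via the chain rule:
#   dE/dw5 = dE/d(o1) * d(o1)/d(net_o1) * d(net_o1)/d(w5)
dE_do1 = -(t1 - o1)          # derivative of 0.5 * (t1 - o1)^2 w.r.t. o1
do1_dnet = o1 * (1.0 - o1)   # sigmoid derivative
dnet_dw5 = h1                # net_o1 = w5*h1 + w6*h2 + b2, so d/dw5 = h1
grad_w5 = dE_do1 * do1_dnet * dnet_dw5

# Delta-rule (gradient descent) update
w5_new = w5 - lr * grad_w5
```

    Repeating this update for every weight, over many passes through the data, is what drives the error function toward a minimum in weight space.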

Comments • 207

  • @ahming123
    @ahming123 5 years ago +147

    Can you remove the background music, otherwise it's awesome

    • @CodeWrestling
      @CodeWrestling  5 years ago +23

      Cannot remove in this video, but will take care of it in coming videos.

    • @coxixx
      @coxixx 4 years ago +7

      @@CodeWrestling every rose has its thorn

    • @Ashajyothi23
      @Ashajyothi23 3 years ago

      I agree

    • @saisudha6512
      @saisudha6512 2 years ago

      very well explained

  • @ahmedpashahayathnagar5022
    @ahmedpashahayathnagar5022 2 years ago

    Clear explanation sir, we understood easily, but if we don't write it down then we forget again after a few days, so I have written it from start to end. Thanks for explaining clearly and in a simple manner.

  • @vishwanath-ts
    @vishwanath-ts 4 years ago +47

    Why do you add music 🎶 in the background, we want the content not the music, it's too irritating....

  • @haadialiaqat4590
    @haadialiaqat4590 2 years ago +1

    very nicely explained. It would be better without the background music.

  • @kushshri05
    @kushshri05 4 years ago +1

    I never knew backpropagation was this easy... thanks a ton 🙂🙂

  • @shaurya478
    @shaurya478 5 years ago

    Finally got the idea about backprop..thanks man

  • @Javeed_Mehdi
    @Javeed_Mehdi 2 years ago +1

    Excellent.............Thanks

  • @sentientbeing9781
    @sentientbeing9781 5 years ago +50

    The song is too low, I can almost hear you. Make the song volume higher next time >3

    • @CodeWrestling
      @CodeWrestling  5 years ago +12

      Sorry for the loud background music, we will make it better next time.

  • @rishi6954
    @rishi6954 4 years ago +3

    Net h1 should be
    b1+(w1*x1)+(w3*X2) as per the network connections... Because it is w3 that is connected to h1 and not w2
    Correct me if I'm wrong... Or else like my comment so that I would know

  • @musfirotummamluah9881
    @musfirotummamluah9881 3 years ago +1

    Thank you so much... finally I understood back propagation... Do you have a video about the Elman Recurrent Neural Network? If you do, please send me the link.

  • @watsonhuang4760
    @watsonhuang4760 4 years ago +6

    Thanks! This video explains back propagation very well! btw i almost passed out when the volume increased in the end

  • @hohongduynguyen6916
    @hohongduynguyen6916 3 years ago

    Thank you, this video is thoroughly useful.

  • @HarishIndia123
    @HarishIndia123 3 years ago +3

    background music was a bit annoying... couldn't hear or focus on what you were saying

  • @redscorpion747
    @redscorpion747 2 years ago

    badly explained

  • @mathhack8647
    @mathhack8647 2 years ago +1

    Great introduction to BNN. Nonetheless, the background music is a little bit loud.

  • @darshanaupadhyay3610
    @darshanaupadhyay3610 5 years ago +21

    finally i understood back propagation.. Thank you so much!

  • @sreenathm4539
    @sreenathm4539 3 years ago

    Excellent video...nicely explained...

  • @huytrankhac8729
    @huytrankhac8729 4 years ago +5

    The most clear explanation I found on youtube. Thank you so much. Please make more concept videos like this about machine learning

  • @BroaderBasicsBuddy
    @BroaderBasicsBuddy 3 years ago +1

    Please remove background song!!!..so far so good

  • @SantoshKumar-fr5tm
    @SantoshKumar-fr5tm 4 years ago +5

    Nice explanation. Actually, I have seen all theory till now, you showed how backpropagation is actually calculating the further weights and biases. Thanks again.

  • @mohammedshuaibiqbal5469
    @mohammedshuaibiqbal5469 4 years ago +1

    Hello sir, you said you would implement the decision tree algorithm in Python, but I didn't find it in your playlist.

  • @shravyaa8023
    @shravyaa8023 5 years ago +7

    this site is just wonderful. Thank you so much for all the videos. It would be fantastic if you could explain their implementations in python also.

    • @dhanushp1680
      @dhanushp1680 2 years ago +1

      this is not a site babe

  • @anirbansarkar6306
    @anirbansarkar6306 3 years ago

    Great explanation. It was really helpful. 😇

  • @syeda8343
    @syeda8343 2 years ago +1

    I need Matlab coding of exactly this manual work bro😐

  • @KwstaSRr
    @KwstaSRr 3 years ago

    masterpiece

  • @sohaigill
    @sohaigill 3 years ago

    can we use ANN and fixed-effect Poisson regression model ? in two steps for better results ?

  • @batuhanartan
    @batuhanartan 4 years ago +3

    Great explanation except the sound issues :) Background music is nice but should be a little lower :) Thank you for video man very helped for me :)

  • @chethankodenkiri2825
    @chethankodenkiri2825 4 years ago +1

    Happy teachers day

  • @FPChris
    @FPChris 2 years ago +1

    Bg music is distracting

  • @nikitasinha8181
    @nikitasinha8181 3 years ago

    Thank u so much

  • @dcrespin
    @dcrespin 1 year ago

    The video shows (with all the advantages as well as the limitations of working with a specific neural graph and particular numerical values) what the BPP of errors in a feedforward network is. But the basic idea applies to much more general cases. Several steps are involved.
    1.- More general processing units.
    Any continuously differentiable function of inputs and weights will do; these inputs and weights can belong -beyond Euclidean spaces- to any Hilbert space. Derivatives are linear transformations and the derivative of a neural processing unit is the direct sum of its partial derivatives, with respect to the inputs and with respect to the weights; this is a linear transformation expressed as the sum of its restrictions to a pair of complementary subspaces.
    2.- More general layers (any number of units).
    Single-unit layers can create a bottleneck that renders the whole network useless. Putting together several units in a unique layer is equivalent to taking their product (as functions, in the sense of set theory). The layers are functions of the inputs and of the weights of the totality of the units. The derivative of a layer is then the product of the derivatives of the units; this is a product of linear transformations.
    3.- Networks with any number of layers.
    A network is the composition (as functions, and in the set theoretical sense) of its layers. By the chain rule the derivative of the network is the composition of the derivatives of the layers; this is a composition of linear transformations.
    4.- Quadratic error of a function.
    ---
    Since this comment is becoming too long I will stop here. The point is that a very general viewpoint clarifies many aspects of BPP.
    If you are interested in the full story and have some familiarity with Hilbert spaces please Google for papers dealing with backpropagation in Hilbert spaces.
    For a glimpse into a deep learning algorithm which is orders of magnitude more efficient, controllable and faster than BPP, search on this platform for a video about: deep learning without backpropagation.
    Daniel Crespin
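
    The composition-of-layers view in the comment above can be checked numerically: if a network is the composition of its layers, the chain rule says the derivative of the network is the product (composition) of the layer derivatives. A small scalar Python sketch, where the two layer functions are arbitrary illustrative choices, with a finite-difference check:

```python
import math

# Two "layers" as continuously differentiable scalar maps (illustrative choices)
def layer1(x):  return math.tanh(x)
def dlayer1(x): return 1.0 - math.tanh(x) ** 2   # derivative of tanh

def layer2(y):  return y ** 2 + 3.0 * y
def dlayer2(y): return 2.0 * y + 3.0

def network(x):
    # The network is the composition of its layers
    return layer2(layer1(x))

def dnetwork(x):
    # Chain rule: derivative of the composition is the product
    # of the layer derivatives, evaluated at the right points
    return dlayer2(layer1(x)) * dlayer1(x)

# Central finite difference agrees with the chain-rule derivative
x, eps = 0.7, 1e-6
numeric = (network(x + eps) - network(x - eps)) / (2.0 * eps)
assert abs(numeric - dnetwork(x)) < 1e-5
```

    The same pattern extends to vector-valued layers, where the scalar product becomes a composition of Jacobians (linear transformations), exactly as described in the comment.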

  • @mustafasalihates2866
    @mustafasalihates2866 3 years ago +1

    You don't wanna mess with backpropagation, ever never. Haha. But he explains it so well

  • @saitejareddy621
    @saitejareddy621 3 years ago

    Fascinating BGM

  • @fidelventura957
    @fidelventura957 4 years ago +4

    Simply the best of the best. Thank you for your hard work, thank you.

  • @codematrix
    @codematrix 1 year ago

    Shouldn't 1/2 be 1/n, where n = the number of nodes for the given layer? Really the sum of all errors for a given layer is just the average. BTW, great explanation.

  • @praveensoni1119
    @praveensoni1119 3 years ago +8

    Thanks a ton, finally understood backpropagation (didn't know the math behind it was this easy)

  • @dedetwinkle9775
    @dedetwinkle9775 2 years ago

    Thank you for creating the video. A small piece of feedback: the background music is too loud; I can barely understand what you are saying because of its volume.

  • @shaistasultana771
    @shaistasultana771 2 years ago +1

    Very good explanation

  • @harishbabuk.s4344
    @harishbabuk.s4344 3 years ago

    content is good, but the background music is a bit annoying. This kind of background score is fine for 2 to 3 minutes, not for videos longer than 15 minutes.
    Please avoid it in the next videos

  • @dr.ramakella2813
    @dr.ramakella2813 3 years ago

    While the content is excellent the background music is loud and disturbing. Please switch it off . Thanks

  • @mohammedsohailuddin9833
    @mohammedsohailuddin9833 3 years ago

    when you're explaining, please don't play music in the background for god sake!

  • @waqarhussain1829
    @waqarhussain1829 3 years ago

    Very Nice

  • @TheSocialDrone
    @TheSocialDrone 4 years ago +2

    You taught it very well to a backbencher like me!

    • @CodeWrestling
      @CodeWrestling  4 years ago +1

      thanks, one of the best comments ever received.

  • @bheeshmsharma4834
    @bheeshmsharma4834 3 years ago

    Thanks

  • @raviteja6106
    @raviteja6106 1 year ago

    Thank you so much bro for giving detailed explanation.

  • @vinayvernekar3634
    @vinayvernekar3634 4 years ago +1

    @10:52, I am unable to understand net_o1 = w5 * out_h1 + w6 * out_h2 + b2 * 1. How does the derivative translate into 1 * out_h1 * w5^(1-1) + 0 + 0 = out_h1 = 0.593269992? Where did w5^(1-1) come from? Shouldn't it just be 1 * out_h1?

    • @amitsahoo1989
      @amitsahoo1989 4 years ago +1

      The exponent of x decreases by one in derivatives... I think that is what he is trying to show...

  • @trondknudsen6689
    @trondknudsen6689 2 years ago

    Nice slides :)

  • @federicopinto9353
    @federicopinto9353 4 years ago +1

    The music was too loud but your explanation was very helpful .

  • @easycoding9095
    @easycoding9095 3 years ago

    Hi, where did 0.1384 come from at the end?

  • @hamedhomaee6410
    @hamedhomaee6410 1 year ago

    If now I can work in the field of data science, it's because of YOU 🙏

  • @crazythoughts656
    @crazythoughts656 3 years ago

    9:05 to 9:20: I didn't understand. Can anyone explain it to me in detail, please?

  • @nishitapatnaik2850
    @nishitapatnaik2850 3 years ago

    There are so many calculation mistakes; please correct them or at least note them in the description.

  • @MrTabraiz961
    @MrTabraiz961 1 year ago

    The first equation at 13:55 seems incorrect, can anyone confirm this?

  • @frenchip4513
    @frenchip4513 3 years ago

    Background music is really disturbing..

  • @ismail8973
    @ismail8973 3 years ago +1

    Well explained the math behind it

  • @ericbig666
    @ericbig666 2 years ago

    It would be nice if you turned off the music or turned it down 50%.

  • @piyushdhasmana2575
    @piyushdhasmana2575 4 years ago

    Bro, instead of using the total error for w5, can we just use the error of out1?
    Because in the hidden layer we anyhow have to use the error of out1 again

  • @shivamdubey4783
    @shivamdubey4783 3 years ago

    great tutorial but music making it hard to understand

  • @jabedomor887
    @jabedomor887 5 years ago +1

    good job

  • @lokeshjhanwar7603
    @lokeshjhanwar7603 3 years ago +1

    best one among all i have seen ever

  • @snehitvaddi
    @snehitvaddi 4 years ago +1

    Bro, whatever comment section may say, I loved the video and understood it. Keep going👍

  • @praneethaluru2601
    @praneethaluru2601 4 years ago +1

    Far better explanation of the math than the other videos on YouTube.

  • @phattailam9814
    @phattailam9814 3 years ago

    why don't you update the bias?

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 4 years ago +1

    Music is a little loud

  • @absolute___zero
    @absolute___zero 4 years ago

    this guy incorrectly drew the bias weights. The bias weight is the actual bias, not the value in the bias node, because in the design most folks use, the bias node always has a constant value of 1.0 and the weight is the parameter adjusted by backprop. It is done this way because it can be computed efficiently on GPUs and requires fewer `if()` statements.

  • @motoreh
    @motoreh 5 years ago +1

    Excellent example! Now I understand backpropagation, and steepest descent with the chain rule, better!

  • @fatemetardasti223
    @fatemetardasti223 4 years ago +1

    what the hell is that background noise!!!!

  • @PraveenKumarkrp
    @PraveenKumarkrp 3 years ago

    Great Video. Had to focus hard to understand because of the background music. Background music is louder than the voice over.

  • @taimoorneutron2940
    @taimoorneutron2940 2 years ago +1

    really helpful and easy to understand thanks

  • @bourbon3587
    @bourbon3587 3 years ago

    please remove music

  • @sharathkumar8422
    @sharathkumar8422 3 years ago +1

    Excellent explanation of the concept! Thank you so much for making this...

  • @islamicinterestofficial
    @islamicinterestofficial 4 years ago +1

    What a Splendid Explanation
    Love you Bro
    Stay Blessed

  • @maheshsb3048
    @maheshsb3048 4 years ago +1

    You are freaking best at explaining thank you

  • @saitanush9453
    @saitanush9453 2 years ago

    tq very much

  • @leonrai2899
    @leonrai2899 4 years ago

    nice music tho

  • @kavun4905
    @kavun4905 1 year ago

    thank you for uploading the video with that music, I couldn't focus at all, but good content though

  • @badiyabhargav8597
    @badiyabhargav8597 3 years ago

    superb ........

  • @aayushbafna7594
    @aayushbafna7594 4 years ago

    @14:15 should there be w2*i2 instead of w3*i2 in the formula of net_h1?

  • @absolute___zero
    @absolute___zero 4 years ago

    you are using a single bias for both neurons; this is going to introduce a limitation in your model. Every neuron has to have its own bias

  • @reachDeepNeuron
    @reachDeepNeuron 4 years ago

    To implement the backpropagation algorithm in a neural network, is there any built-in package available as part of scikit-learn? Or do I have to write a script explicitly to implement backpropagation?

  • @مملكهالحب-م2ض
    @مملكهالحب-م2ض 4 years ago

    Let Out = (2 / (1 + exp(3.5 - o1 - o2 - o3))) - 1. Find: 1. the number of layers in the neural network; 2. the number of neurons; 3. the output if o1 = 1, o2 = 2, o3 = 1.
    Solution please

  • @aryanshridhar8517
    @aryanshridhar8517 4 years ago

    Why did you take mean squared error? A neural network is basically a connection of many logistic regression outputs, so wouldn't it be the log-loss function?

  • @Engr.rizwandawlatzia
    @Engr.rizwandawlatzia 3 years ago

    slides

  • @reachDeepNeuron
    @reachDeepNeuron 4 years ago

    Saw a hell of a lot of videos and blogs, including the mattmazur blog... but you ripped through it like anything, bro... Amazing explanation, awesome job... you literally saved my time and let me catch sleep early... but how did you get 1/(1+e^-x) as e^x/(1+e^x)?

    • @CodeWrestling
      @CodeWrestling  4 years ago

      Thanks a ton. If you differentiate it, you will get the answer. Maybe quickly Google it and you will see.

  • @ragnarT3
    @ragnarT3 3 years ago +3

    Really good, brother... short and effective. If one focuses on the content, the background music hardly matters, but yes, it might be disturbing for others

    • @CodeWrestling
      @CodeWrestling  3 years ago

      We apologize for background music. It was an editing issue. Thanks for your support.

    • @ragnarT3
      @ragnarT3 3 years ago

      @@CodeWrestling nope, no issues, it was fine... I have edited my msg... was misinterpreting a bit before editing

  • @shivaramakrishna4949
    @shivaramakrishna4949 3 years ago

    Wonderfull . Explanation is awesome 👏👏👏

  • @TechTelligence16
    @TechTelligence16 3 years ago +1

    Best explanation ever!

  • @marchansen1965
    @marchansen1965 3 years ago

    Great video, but the music is disturbing ...

  • @raj4624
    @raj4624 3 years ago

    Tysm for crystal clear explanation.. God bless you

  • @absolute___zero
    @absolute___zero 4 years ago

    you didn't explain where the -1 came from (at 9:26) in the formula dE_total/dout_o1 = 2 * (1/2) * (target_o1 - out_o1)^(2-1) * (-1) + 0. The derivative rules for powers don't produce any -1 values

  • @anilraghu8687
    @anilraghu8687 2 years ago

    Confusing

  • @arunmehta8234
    @arunmehta8234 3 years ago

    Can you please send me the slides?

  • @KapilKumar-ke6qf
    @KapilKumar-ke6qf 3 years ago

    Nice video

  • @mahsameymari6411
    @mahsameymari6411 4 years ago

    I cannot hear you; what is the point of loud music in a tutorial video?

  • @blessoneasovarghese9834
    @blessoneasovarghese9834 5 years ago

    If we give one input and output, the weights get adjusted for those. But when another set of inputs and outputs is given, entirely new weights are formed. Then how does training happen in such an algorithm?

  • @samrashafique8580
    @samrashafique8580 5 years ago +2

    Thank u so much ❤️

    • @CodeWrestling
      @CodeWrestling  5 years ago

      Thanks for appreciating!! #CodeWrestling

  • @saribarif189
    @saribarif189 4 years ago

    I have a question about updating the weights. When using the weight-update formula, you have not used the multilayer perceptron rule, in which we take the learning-rate term as positive or negative to increase or decrease the weight. I have a doubt about this rule.

  • @madang9234
    @madang9234 4 years ago +3

    Bro, please do other concepts of machine learning. I'm not getting what my lecturer is teaching; I'm depending on your videos. Can you please make videos regularly?

  • @shihangcheng173
    @shihangcheng173 2 years ago

    very clear!! thank you!!!

  • @davidwarner2491
    @davidwarner2491 3 years ago

    Very nice!!