Backpropagation in 5 Minutes (tutorial)

  • Published on Nov 16, 2024

Comments • 290

  • @Garentei
    @Garentei 5 years ago +1

    Watch the 3blue1brown one first and then come back to this one. If I hadn't done that I would have no idea what I just saw. Siraj is a great teacher for people who already know what they are doing.

  • @teemu828
    @teemu828 4 years ago +21

    These kinds of videos are superb for refreshing your memory, when you already know the stuff but have forgotten it! Thanks!

  • @Tymon0000
    @Tymon0000 7 years ago +353

    I want an explanation in 2 hours, to actually understand it.

    • @1234macro
      @1234macro 7 years ago

      Gotta wait for the next part, though.

    • @Tymon0000
      @Tymon0000 7 years ago +32

      +EBR Read it. Watched it again. And it still was 5 minutes long. Your solution didn't work! xD

    • @kaewmungmuang
      @kaewmungmuang 6 years ago

      hahaha++

    • @driden1987
      @driden1987 6 years ago +7

      Yeah, totally, his videos are on another level. Plus explaining a concept like the gradient of a function shouldn't be done in 5 minutes imo.

    • @BUDA20
      @BUDA20 6 years ago +4

      Other channels "explain" this in 2 hours but fail to deliver; Siraj did a great job here.

  • @vasavisuma
    @vasavisuma 3 years ago

    The best video to get an overview of the backpropagation algorithm.

  • @sakibhossain6136
    @sakibhossain6136 2 years ago

    Short video with huge information + How it's working = Perfect Video

  • @MichaelPerry911
    @MichaelPerry911 7 years ago

    Holy crap, this is amazing. I've been taking a course where the instructor just teaches us random crap and I had no idea how it all connected till I watched this video! Thank you, thank you

  • @pfannkuchengesicht42
    @pfannkuchengesicht42 7 years ago +3

    wow, that was fantastic! Didn't expect such a short video about this topic to be this clear and understandable.

  • @offchan
    @offchan 7 years ago +5

    I think that backpropagation is best explained as a computation graph instead of neural network figures. That way you understand how computers really compute this stuff, and it generalizes to how you think about deep dream or style transfer.
    It's also easier to understand. With neural net figures, it's hard to visualize the process.
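
    A minimal sketch of that computation-graph view (plain Python; the tiny one-weight "network" and its values are purely illustrative):

    # forward pass: build the graph node by node, caching intermediate values
    x, w, y = 2.0, 0.5, 3.0
    a = w * x               # multiply node
    e = a - y               # subtract node
    L = e ** 2              # square node (the loss)

    # backward pass: walk the graph in reverse, applying the chain rule at each node
    dL_de = 2 * e           # d(e^2)/de
    dL_da = dL_de * 1.0     # d(a - y)/da = 1
    dL_dw = dL_da * x       # d(w*x)/dw = x
    print(dL_dw)            # gradient of the loss with respect to the weight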

  • @YakrifZee
    @YakrifZee 4 years ago

    A derivative is the slope of a tangent line.... Man, you explained something I never really understood in 10 years of academic life. I came here to learn backpropagation but ended up learning something I couldn't in my academic life. Now I can die peacefully.
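
    A quick numeric illustration of that idea (plain Python; the function and point are arbitrary): the tangent slope of f(x) = x**2 at x = 3 is 6, and a small finite difference approximates it.

    f = lambda x: x ** 2
    x, h = 3.0, 1e-6
    slope = (f(x + h) - f(x - h)) / (2 * h)   # central difference ~ slope of the tangent line
    print(slope)                              # ~6.0, matching the analytic derivative 2*x at x = 3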

  • @steffff4y
    @steffff4y 7 years ago

    I finally made something that works correctly!! God damn it, it's 2AM and I was banging my head against this for the last 12 hours. I made my first basic network that can solve XOR thanks to your example of back propagation!
    Thank you so much for this video !!

  • @anthonyrebuffo9509
    @anthonyrebuffo9509 3 years ago +1

    just the amount of time that i needed for my presentation, thank you

  • @AhmadM-on-Google
    @AhmadM-on-Google 7 years ago

    Great work Siraj. The wacky profile makes it easier to learn, I don't know how. Thanks for making this subject easier.

  • @aniktahabilder2518
    @aniktahabilder2518 4 years ago

    Siraj, I am your fan. You really represent well. Just give credit if you get this work from someone else. You deserve credit for your awesome presentation.

  • @sulies2824
    @sulies2824 5 years ago

    best video I've found on back-propagation. Thank you so much

  • @emilangelov1641
    @emilangelov1641 7 years ago

    I am the 124,000th subscriber and you convinced me with 2 videos. Looking forward to more interesting and very nicely made videos from you.

  • @yafercorralbarrera9089
    @yafercorralbarrera9089 3 years ago

    Oh my GOD! This is a great summary of so much AI theory!!

  • @jayce511
    @jayce511 6 years ago

    What an awesome video! As someone entering college to be a data scientist who already has some understanding of calc, stats, and compsci, this video perfectly connected all three!

  • @dehehihohu1
    @dehehihohu1 7 years ago

    Thanks Suresh! Finally a clear explanation on Backpropagation!!!
    Keep Going!!!!!

    • @SirajRaval
      @SirajRaval  7 years ago +2

      Siraj* thanks!

  • @MarxSeoul55
    @MarxSeoul55 7 years ago +2

    Hands down your best video so far! Keep up the good work!

  • @frankbraker
    @frankbraker 6 years ago

    That is the FIRST TIME I have ever followed this well enough to feel some inkling of understanding it. Thanks!! Also, to compensate for my lack of understanding, for many years I liked a genetic algorithm for training weights instead. But now I think that might still be the better approach.

  • @celsiusfahrenheit1176
    @celsiusfahrenheit1176 4 years ago

    Wow, thank you for this bird's-eye view of this complicated subject.

  • @michaeledwardlenert3690
    @michaeledwardlenert3690 7 years ago

    This is an awesome explanation of backprop, and I majored in the liberal arts. Thanks, Siraj.

  • @adi_verm
    @adi_verm 7 years ago

    When you have a data mining test tomorrow and Siraj drops this!

    • @PM-st6vu
      @PM-st6vu 7 years ago +1

      aditya verma haha pursuing from?

    • @adi_verm
      @adi_verm 7 years ago

      VIT Vellore... nuf said

  • @chandlersupple3553
    @chandlersupple3553 6 years ago +1

    Great video! There's so much variance in explanations for back propagation that it can be a bit difficult to grasp at first. Most of the time, you get people writing a blog on BP who aren't that well versed in deep learning in terms of the math.

  • @vonderasche2963
    @vonderasche2963 7 years ago +5

    good explanation. I like how you broke down the calculus bits

  • @TabernacFuck
    @TabernacFuck 7 years ago +2

    This is a really great video. I love how the theory is explained alongside a simple example. The best part is that the code is actually runnable!! Kudos!!

  • @georgebockari289
    @georgebockari289 7 years ago +5

    I stand corrected...this was a fucking great video. Doing some late night reading and this couldn't have popped up at a better time... really, you rock

    • @SirajRaval
      @SirajRaval  7 years ago

      thanks George, more to come~

  • @sourabhmeherchandani162
    @sourabhmeherchandani162 7 years ago

    Man, this is awesome! I was going through Andrew's course and felt a bit lost while implementing the back propagation. This helped clear things up. Looking forward to the Deep Learning Nanodegree.

  • @marktensen2626
    @marktensen2626 7 years ago

    Siraj this is terrific! Deep learning has become so easy to implement due to libraries like TensorFlow or PyTorch that I often forget what's really happening under the hood when I make such networks. This video was perfect for refreshing my knowledge and connecting the dots again. Thanks!

  • @Dave-nz5jf
    @Dave-nz5jf 7 years ago

    You should do one of those videos where you teach it in 5 stages : first to a child, then teenager, then college student, then grad student, then expert. There is a lot of foundational stuff that would be interesting to some and the expert stuff could come out too. This is the perfect subject for that kind of format.

  • @kebman
    @kebman 6 years ago +1

    I actually grasped this! *Thank you!* Now on to understanding...

  • @WorldWaterWars14
    @WorldWaterWars14 6 years ago

    Nice, covered everything I’m learning about in my intro to Neural Networks class.

  • @KylePiira
    @KylePiira 7 years ago +57

    You should do more videos like this. I was actually able to keep up with everything first time around without reducing the speed to 0.5. Great work!

    • @SirajRaval
      @SirajRaval  7 years ago +7

      will do thanks Kyle

  • @defenestrated23
    @defenestrated23 7 years ago +10

    Back to basics, excellent!
    At some point, I would LOVE it if you could do a video on how to build a network using LSTMs at a low level, like with Numpy and stuff. I use them all the time, but I can't wrap my head around how they do the unrolling trick or compute the gradient. Thanks!

    • @SirajRaval
      @SirajRaval  7 years ago +3

      definitely. great idea thanks

  • @transam351
    @transam351 7 years ago

    Thank you for talking slower in this video, it is very helpful. And although I enjoy the memes that usually randomly pop up, they can become very distracting while trying to understand the complex subjects discussed in your videos.

  • @Luckasborges
    @Luckasborges 7 years ago +2

    It was an excellent video Siraj. Simple and objective!

  • @TheTushki1
    @TheTushki1 7 years ago

    Hey, that was super-fast and duper-great!

  • @-dialecticsforkids2978
    @-dialecticsforkids2978 4 years ago +3

    4:23 - best sketch ever.

  • @Kevin-cy2dr
    @Kevin-cy2dr 4 years ago

    This is a good revision-type video for those who want to brush up on their concepts. However, it's not wise to consider this a learning video; the best sequence would be to first watch 3Blue1Brown's video (or any other backpropagation video that explains the concept at length), then come to Siraj's video to get a summary.

  • @glautercoelho8939
    @glautercoelho8939 7 years ago

    Awesome explanation !

  • @ddudduru9601
    @ddudduru9601 4 years ago

    This is the only YouTube video where I actually slowed down the playback speed.

  • @jennymandl6910
    @jennymandl6910 6 years ago

    Absolute fire video please make more

  • @chameleonchamlee2551
    @chameleonchamlee2551 1 year ago

    wow, amazing, i think I understand it now

  • @raise7935
    @raise7935 6 years ago

    nice explanation bro. everything is clear now

  • @barulli87
    @barulli87 4 years ago

    awesome explanation!

  • @pedrapapelpesoura4097
    @pedrapapelpesoura4097 4 years ago

    your channel is very good

  • @larryteslaspacexboringlawr739
    @larryteslaspacexboringlawr739 7 years ago +1

    thank you for back-propagation video

  • @orkutachilles
    @orkutachilles 7 years ago

    very good explanation

  • @danielenrique7184
    @danielenrique7184 7 years ago

    Very nice explanation, thanks a lot!

  • @mariedrapalova7365
    @mariedrapalova7365 7 years ago

    Wow... at first I thought, what the hell did I click? Who is this crazy guy? But you are actually very good. That was an excellent explanation. Thank you sir....:) lolz ...

  • @elliotlaurent6303
    @elliotlaurent6303 5 years ago

    Thanks that was very clear!

  • @MohammadHefny_HefnySco
    @MohammadHefny_HefnySco 6 years ago

    Thank you for your great effort ... luv all your videos.... thanks for the effort

  • @Muuip
    @Muuip 5 years ago

    Great concise presentation! I appreciate seeing the code as well. Thanks for the upload! 👍

  • @M0481
    @M0481 7 years ago

    great video! loved it!!

  • @locoland123
    @locoland123 7 years ago

    Great video :) full of energy!

  • @fupopanda
    @fupopanda 5 years ago

    This video is what you watch after already understanding back propagation. That kind of goes for pretty much most of Siraj's videos lol

  • @leosousa7404
    @leosousa7404 7 years ago

    Great explanation

  • @minh1391993
    @minh1391993 7 years ago

    that's very clear, thank you

  •  3 years ago

    Awesome !! Thanks :)

  • @descalzarte1463
    @descalzarte1463 5 years ago

    these videos are awesome! thank you

  • @tyhuffman5447
    @tyhuffman5447 7 years ago

    Siraj, off-topic question: would you be interested in doing a series on the Google A.I. Experiments, specifically the Infinite Drum Machine? We would need to set up AudioNotebooks, get some sounds in .wav form (which there are many of), and use SamplesToFingerprints and t-SNE to map them, but for me it would be a lot of fun to play with sounds and very practical. I think this would yield the most learning about deep neural networks because sounds are messy, so loads of tinkering would be required, and trial and error is how people learn. Are you interested, or is this more of an intermediate topic than an introduction? Would you be interested in presenting this to those that are interested? My guess is this sound project would be a blast to work through. Imagine how jazzed people would be to go out and record snips of sounds and run them through the neural network to see if it can guess the sound or get it close to something similar with some degree of accuracy. Backpropagate to update the net and also attempt to alter the structure of the net to improve accuracy. I think people would love it and that would get people interested in mucking around a lot.

  • @priyankagrawal9517
    @priyankagrawal9517 7 years ago

    Great video

  • @neilslater8223
    @neilslater8223 7 years ago

    @4:40 There is a bit of glossing over detail on a part of the subject I see a number of confused people posting about on Stack Overflow or Data Science Stack Exchange. Namely that you don't backpropagate the *error* value per se, but the gradient of the error with respect to a current parameter. This is made more confusing to many software devs implementing back propagation because the usual design of neural nets is to cleverly combine the loss function and the output layer transform, so that the derivative is numerically equal to the error (specifically only at the pre-transform stage of the output layer). It really matters to understand the difference though, because in the general case it is not true, and there are developers "cargo culting" in apparently magic manipulations of the error because they don't understand this small difference.
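
    A small numpy sketch of that distinction (the values are illustrative): with a sigmoid output unit, the gradient at the pre-activation equals the plain error only for the matched loss.

    import numpy as np

    z, y = 0.8, 1.0                       # pre-activation of the output unit, and the target
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid output

    grad_xent = p - y                     # binary cross-entropy: dL/dz simplifies to the "error"
    grad_mse = (p - y) * p * (1 - p)      # squared error: dL/dz is NOT just the error
    print(grad_xent, grad_mse)            # different values; it is the gradient that gets backpropagated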

  • @AkashMishra23
    @AkashMishra23 7 years ago

    Again an awesome video. You should also do a video on Calc 2 and 3, and Multivariable Calculus. I've already been through tons of definite and indefinite integrals, maxima and minima, limits and continuity in senior high school, but it still seems too vast for me; a concise video would really help me understand them.

    • @SirajRaval
      @SirajRaval  7 years ago +1

      thanks Akash, great points

  • @aneeshjoshi6641
    @aneeshjoshi6641 7 years ago +19

    Hi Siraj,
    at 0:38, shouldn't there be 3 inputs? One for each of the features, including the bias?
    If I understand correctly, the number of input layer neurons is equal to the number of features in our data.
    Am I correct?

    • @bhaveshpandey5280
      @bhaveshpandey5280 7 years ago +1

      Hey Aneesh, I was wondering the same. I do think this is an error. The number of neurons in the input layer needs to equal the number of features in the training data set. The fourth neuron in the input layer would probably be a bias unit.

    • @aneeshjoshi6641
      @aneeshjoshi6641 7 years ago

      Hi.
      He has already made the 3rd feature in all the data points 1, to act as the bias.
      Also, the diagram shows one data point going to one neuron.

    • @beyhan9191
      @beyhan9191 7 years ago +1

      You're right. There should be 3 inputs in the image.

    • @waseemshahwan
      @waseemshahwan 7 years ago

      It's just an example image, but yeah, there should be 3.
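
      A small numpy sketch of the bias-as-extra-feature trick discussed in this thread (the data is made up):

      import numpy as np

      X = np.array([[0., 0.],
                    [0., 1.],
                    [1., 0.],
                    [1., 1.]])
      X = np.hstack([X, np.ones((X.shape[0], 1))])   # append a constant 1 -> 3 features per example
      W = np.random.randn(3, 1)                      # so the input layer needs 3 units/weights
      print(X @ W)                                   # (4, 3) @ (3, 1) -> one pre-activation per example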

  • @victorgaiva8247
    @victorgaiva8247 7 years ago +1

    Hey Siraj. I've asked on your live stream if you were going to implement an LSTM from scratch. What I meant was whether you were going to implement it without using any additional libraries in Python besides the most common ones. So, no TensorFlow or Keras.

  • @rubencristobalgarcia185
    @rubencristobalgarcia185 7 years ago

    great video!!!

  • @pythondoesstuff2969
    @pythondoesstuff2969 4 years ago

    Really helpful video btw, I just didn't understand the error part.

  • @AmirAli-gv7kn
    @AmirAli-gv7kn 6 years ago

    You have earned a subscriber. I am watching Andrew Ng but I can't seem to get the proper intuition. Your video helped me understand a bit. Thank you for that.

  • @CriticalKev
    @CriticalKev 7 years ago

    This is awesome:)

  • @MsRAJDIP
    @MsRAJDIP 6 years ago

    You are my hero. But gradient descent is still a nightmare to me 😂

  • @shivaprasadreddy1395
    @shivaprasadreddy1395 6 years ago

    Nice video Siraj, it helped me, but I felt it was a bit too fast..

  • @vtrandal
    @vtrandal 2 years ago

    If you're talented and you know it, clap your hands.

  • @ranjeethmahankali3066
    @ranjeethmahankali3066 7 years ago +47

    Hi Siraj, can we say that narrow-minded people are that way because they are overfit?

    • @SirajRaval
      @SirajRaval  7 years ago +58

      indeed. they need more dropout. drugs can help

  • @gabryx7
    @gabryx7 7 years ago +1

    Hi Siraj, I just discovered how much I love ML and I love your videos, I watch at least one every day! Without sounding rude I wanted to ask you, did you learn all of this by yourself? Or did you study this topic at university? I ask because your knowledge and teaching skills are awesome and I would love to understand the topic this well :)

    • @SirajRaval
      @SirajRaval  7 years ago +3

      thanks Gabriel! lots of online studying from sources like r/machinelearning and twitter

  • @olfmombach260
    @olfmombach260 7 years ago +6

    So you try to bring the slope to 0 in an effort to hit the local minimum with the lowest value of f(x). How do you know that you don't end up in one, where another is way lower, but you kind of roll down the slope and can't get up again?

    • @calebkirschbaum8158
      @calebkirschbaum8158 7 years ago

      You don't. That is one problem with this type of NN. However, if your program is simple enough, then this will work fine.

    • @LapajHaver
      @LapajHaver 7 years ago

      In case you want to optimize on more complex surfaces (which is almost always the case in real life), you can avoid getting stuck in a local minimum by using gradient descent optimization algorithms like "momentum" (see the sketch after this thread).

    • @silasalberti3524
      @silasalberti3524 7 years ago +1

      I'm not an expert, but consider looking up "non-convex loss functions". They are exactly the problem you are addressing. Most loss functions are convex though, which means they have only one global minimum. Being trapped in a local minimum can't happen in these cases.

    • @olfmombach260
      @olfmombach260 7 years ago

      Silas Alberti
      So convex means the degree of the polynomial is less than 3?
      And just to make sure, gradient descent is a loss function?

    • @silasalberti3524
      @silasalberti3524 7 years ago

      Olf Mombach The mathematical definition of convexity is a bit more complicated, but quadratic polynomials are indeed convex.
      Gradient descent isn't a function though. It's a method used to minimize the loss function.
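
      A rough sketch of the "momentum" update mentioned above (plain Python; the toy loss, learning rate, and momentum coefficient are just for illustration):

      grad = lambda w: 2 * (w - 3.0)     # gradient of the toy loss f(w) = (w - 3)^2
      w, v = 0.0, 0.0
      lr, beta = 0.1, 0.9
      for _ in range(200):
          v = beta * v - lr * grad(w)    # velocity remembers a decaying sum of past gradients
          w = w + v                      # the accumulated velocity can carry w through shallow dips
      print(w)                           # -> close to the minimum at 3.0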

  • @razkarl
    @razkarl 6 years ago

    Great video!
    Small mistake at 3:30 - you said 'the derivative of f(g(x)) is equal to the derivative of f(x) times the derivative of g(x)', where you meant to say what is actually written on the slide: (f(g(x)))' = f'(g(x)) * g'(x), not f'(x) * g'(x).
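
    A quick numeric check of the corrected chain rule (plain Python; the functions and point are arbitrary):

    import math

    f, fp = math.sin, math.cos                   # f and its derivative f'
    g, gp = lambda x: x ** 2, lambda x: 2 * x    # g and its derivative g'

    x, h = 1.3, 1e-6
    analytic = fp(g(x)) * gp(x)                        # f'(g(x)) * g'(x)
    numeric = (f(g(x + h)) - f(g(x - h))) / (2 * h)    # finite difference of f(g(x))
    print(analytic, numeric)                           # these two agree; f'(x) * g'(x) would not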

  • @terigopula
    @terigopula 6 years ago

    You get my undivided attention right from the time of ' HELLO WORLD IT'S SIRAJ ' :P

  • @secondsandthings
    @secondsandthings 7 years ago +6

    Siraj, could you do another video explaining batch normalization and the math behind that?

  • @nkdms.2031
    @nkdms.2031 7 years ago +1

    Great work Siraj, you are the "Greek Freak" of AI tutorials!
    Are there any plans for a longer and more in-depth video on this algorithm?

  • @swanbosc5371
    @swanbosc5371 7 years ago +1

    Hello Siraj, it's world... Great video as always!😊 What do you think about tf-slim as an interface for TensorFlow?
    (thanks for the content BTW 👍)

  • @harshaldeore
    @harshaldeore 5 years ago

    I think the way you have shown the input arrays or matrix at the beginning is confusing. We convert the matrix of pixels of the input image to a single column, then feed each number to a node in the input layer.
    What I mean is that at a time we input one image, and one number goes to one node.
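
    A small numpy sketch of that flattening step (the 28x28 image size is just illustrative):

    import numpy as np

    image = np.random.rand(28, 28)   # a grayscale image as a matrix of pixels
    x = image.reshape(-1)            # flatten it into a single column/vector of 784 numbers
    print(x.shape)                   # (784,) -> one number per input-layer node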

  • @Spearced
    @Spearced 7 years ago

    So because I've never been so hot on calculus, just to clarify: the delta of a particular neuron (from which we get the adjustment for the weight of each input from the previous layer by multiplying this delta by each input value, right?) is found by multiplying the sum of the weighted deltas of the next layer by the gradient of this layer (which in the case of the sigmoid function would be x*(x-1) where x is the output of the current neuron). Is this right? I'm building a simple neural network in Max/MSP and trying to wrap my head around how multiple layers work.
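
    A hedged numpy sketch of the delta computation asked about here (shapes and names are illustrative; note that the sigmoid derivative in terms of the layer's output a is a * (1 - a)):

    import numpy as np

    a_prev = np.random.rand(1, 4)        # activations feeding into this layer
    a_curr = np.random.rand(1, 3)        # this layer's sigmoid outputs
    W_next = np.random.randn(3, 2)       # weights from this layer to the next layer
    delta_next = np.random.randn(1, 2)   # deltas already computed for the next layer

    delta_curr = (delta_next @ W_next.T) * a_curr * (1 - a_curr)   # weighted next-layer deltas times local gradient
    dW = a_prev.T @ delta_curr           # weight adjustments: previous-layer inputs times this delta
    print(delta_curr.shape, dW.shape)    # (1, 3) (4, 3)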

  • @trulyspinach
    @trulyspinach 7 years ago +1

    Hey man, really love your video. Just a simple question: at the end of this video you said you're going to calculate the derivative of life. Just wondering, have you done that yet? :)

  • @playonce4186
    @playonce4186 3 years ago +1

    siraj bro what can i do with these partial derivative garbage formulas, they look so complicated

  • @ebird97
    @ebird97 7 years ago

    awesome!

  • @ruairim7551
    @ruairim7551 6 years ago +16

    Great. I think I got it, but just in case, tell me the whole thing again. I wasn't listening.

    • @ddudduru9601
      @ddudduru9601 4 years ago

      I think I heard it. But I don't get it.

  • @mikaelsitruk
    @mikaelsitruk 7 years ago

    In the derivative example (at approx. 2:53), the slope is indeed 4, but the graph is incorrect (the black line representing the slope is wrongly plotted).

  • @cricketspike
    @cricketspike 7 years ago

    I'm a bit lost at the dot product example at 1:21.
    Why are we multiplying a row of inputs on the same input node by a column of different weights? Wouldn't the value of each node in the next layer be based on a column of inputs (the current value of each node) * the weight of each node's connection to the next one?
    How many output nodes are there in this equation?
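
    A small numpy sketch of the shapes involved in that dot product (sizes are illustrative):

    import numpy as np

    X = np.random.rand(4, 3)    # 4 training examples as rows, 3 input features each
    W = np.random.randn(3, 1)   # one weight per input feature, for a single output node
    out = X @ W                 # each ROW of inputs is dotted with the COLUMN of weights
    print(out.shape)            # (4, 1): one output value per example; more output nodes = more columns in W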

  • @truliapro7112
    @truliapro7112 7 years ago +27

    Siraj, please teach us a bit slower. Why do you teach and explain it so fast?

    • @SirajRaval
      @SirajRaval  7 years ago +12

      will go slower thanks

    • @smartinezai
      @smartinezai 7 years ago +3

      because he has to go fast, like sonic.
      also he was trying to do it in 5 minutes heahahha

    • @nahiyanalamgir7614
      @nahiyanalamgir7614 6 years ago

      So he can make eye-catching titles and get more views?

    • @andrewgregg5873
      @andrewgregg5873 6 years ago +2

      I actually like the speed. If I don't quite get something instantly, I can pause or rewatch that part.

    • @nahiyanalamgir7614
      @nahiyanalamgir7614 6 years ago +2

      For a guy who is revising his knowledge, sure. But for someone who is learning step by step, this video is a nightmare.

  • @JuaniPisula
    @JuaniPisula 7 years ago

    Hello Siraj, around 3:26 you say that the derivative of f(g(x)) equals (df/dx)*(dg/dx), which is wrong, but on the screen it is stated correctly.
    Amazing videos bro!

  • @tesfayzemuygebrekidan9917
    @tesfayzemuygebrekidan9917 7 years ago

    I have a question. The weights are common for all data sets, I think. When they are adjusted for the first data set, they will be readjusted for the next data set, if there is one. Doesn't it forget its previous adjustment?

  • @Mech288
    @Mech288 5 years ago

    For the final Backpropagation equation, shouldn't the current weights subtract, not add, the deltaWeights * learningRate? Since we want to move down the gradient instead of up.
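
    A hedged sketch of why the sign depends on how the error is defined (plain Python; toy numbers). With E = 0.5 * (target - out)^2, dE/dw = -(target - out) * dout/dw, so subtracting the true gradient is the same as adding a delta built from (target - out):

    target, out, dout_dw = 1.0, 0.7, 0.4
    lr, w = 0.1, 0.5

    dE_dw = -(target - out) * dout_dw
    w_minus = w - lr * dE_dw                      # textbook "subtract the gradient"
    w_plus = w + lr * (target - out) * dout_dw    # "add the delta" form often seen in code
    print(w_minus, w_plus)                        # identical; both move down the gradient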

  • @eliecerecology
    @eliecerecology 7 years ago

    Hi Siraj,
    awesome videos. I got a question, in your function:
    "def nonlin(x, deriv=False):
         if deriv == True:
             return x*(1-x)
         return 1/(1+np.exp(-x))"
    The derivative should be = -e^x / (e^x + 1)^2 ??
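
    A hedged numpy sketch of the identity behind that deriv branch (the input value is arbitrary): the sigmoid's derivative e^-z / (1 + e^-z)^2 equals sigma(z) * (1 - sigma(z)), which is why x*(1-x) is used when x already holds the sigmoid output rather than the raw input.

    import numpy as np

    z = 0.7
    s = 1.0 / (1.0 + np.exp(-z))                 # sigmoid output
    print(np.exp(-z) / (1.0 + np.exp(-z)) ** 2)  # derivative written out directly in z
    print(s * (1 - s))                           # same value, the form used in the code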

  • @afshinmeh
    @afshinmeh 7 years ago

    Thanks Raj. I think there is a problem with the first example (binary operation example): the inputs are 3 but the graph shows 4.

  • @hiteshvaidya3331
    @hiteshvaidya3331 7 years ago

    Please make a video on how to do backpropagation on a Convolutional Neural Network.

  • @TasosPapastylianou
    @TasosPapastylianou 7 years ago

    Nice video! Can I ask what software(s) you used to create the animation at 2:00? (This looks like the same one that 3Blue1Brown uses, to me.)

  • @maanvis81
    @maanvis81 7 years ago

    With regards to deriving the meaning of life, if the answer to all questions is 42, what would the weights of each input neuron be so that every question we ask the network results in 42?

  • @divyanshu30gupta
    @divyanshu30gupta 6 years ago

    URGENT HELP NEEDED!!!!!
    function [pred,t1,t2,t3,a1,a2,a3,b1,b2,b3] = grDnn(X,y,fX,f2,f3,K)
    %neural network with 2 hidden layers
    %t1,t2,t3 are thetas for every layer and b1,b2,b3 are biases
    n = size(X,1);
    Delta1 = zeros(fX,f2);
    Db1 = zeros(1,f2);
    Delta2 = zeros(f2,f3);
    Db2 = zeros(1,f3);
    Delta3 = zeros(f3,K);
    Db3 = zeros(1,K);
    t1 = rand(fX,f2)*(2*.01) - .01;
    t2 = rand(f2,f3)*(2*.01) - .01;
    t3 = rand(f3,K)*(2*.01) - .01;
    pred = zeros(n,K);
    b1 = ones(1,f2);
    b2 = ones(1,f3);
    b3 = ones(1,K);
    %Forward Propagation
    wb = waitbar(0,'Iterating...');
    for o = 1:2
        for i = 1:n
            waitbar(i/n);
            a1 = X(i,:);
            z2 = a1*t1 + b1;
            a2 = (1 + exp(-z2)).^(-1);
            z3 = a2*t2 + b2;
            a3 = (1 + exp(-z3)).^(-1);
            z4 = a3*t3 + b3;
            pred(i,:) = (1 + exp(-z4)).^(-1);
            %Backward Propagation
            d4 = (pred(i,:) - y(i,:));
            d3 = ((d4)*(t3')).*(a3.*(1-a3));
            d2 = ((d3)*(t2')).*(a2.*(1-a2));
            Delta1 = Delta1 + (a1')*d2;
            Db1 = Db1 + d2;
            Delta2 = Delta2 + (a2')*d3;
            Db2 = Db2 + d3;
            Delta3 = Delta3 + (a3')*d4;
            Db3 = Db3 + d4;
            for l = 1:100
                t1 = t1*(.999) - .001*(Delta1/n);
                b1 = b1*(.999) - .001*(Db1/n);
                t2 = t2*(.999) - .001*(Delta2/n);
                b2 = b2*(.999) - .001*(Db2/n);
                t3 = t3*(.999) - .001*(Delta3/n);
                b3 = b3*(.999) - .001*(Db3/n);
            end
        end
    end
    delete(wb);
    end
    %I can't seem to understand the fault. Is it the matrix multiplication? The code does run successfully, but when I test t1,t2,t3 with some testing examples, the predictions for all examples are exactly the same, equal to the predicted vector for the last trained example.
    %Please help, I have been stuck here for over a month now, thanks!!

  • @vishalbgp
    @vishalbgp 7 years ago

    Helpful video. Looks like Varun Dhawan. 😁

    • @SirajRaval
      @SirajRaval  7 years ago

      thx correction: he looks like me