Clear, simple, with an easy example... Just brilliant! Thank you so much, I wish more people made things simple :D
I'm sharing this!
thanks so much!
@@patloeber Thank you man: you're doing a great job!
This is the best video I've seen for backprop, forward pass and how the values get updated. Classic!!
you're a genius, I've been stuck on this step since ages. Thanks and keep going!!
Glad you like it!
i guess it's kind of off topic but does anybody know a good website to stream new tv shows online ?
@Jayceon Santana Flixportal :D
@Leonidas Preston Thanks, I signed up and it seems like a nice service :) Appreciate it !
@Jayceon Santana No problem xD
Amazingly well-explained. In ten minutes I understood perfectly, with these three essential steps, how to compute the backward pass. Thank you so much for helping me on my path of self-learning.
Best explanation of backprop I've ever seen! Phenomenal!
this is the best explanation of backpropagation on youtube, thank u so much
Glad to hear that!
You really took me by the hand there and explained everything superbly! Thank you very much!
Your video is clearer than my teacher's lecture. I subscribed right away.
thanks! glad to hear that
You just made it look so simple. One of the best tutorials in a long time. Many thanks.
Glad you liked it!
This is wonderful. I appreciate the simple example and translating to PyTorch. Too few videos connect the two. Thank you!
First time I finally understand the chain rule in detail....thx!
7:30 That's the craziest 1 I have ever seen.
That's how we write 1 in Germany, assuming the tutor is from Germany.
I believe this is done to keep a clear visual difference between 1 (one), I (uppercase i), l (lowercase L), etc.
One quick note about linear regression @4:46: gradient methods are almost never used for it in practice because there is a closed-form solution using pure linear algebra.
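For anyone curious, here is a minimal sketch of that closed-form solution (the normal equation); NumPy and the toy data are my own additions, not something from the video:

    import numpy as np

    # toy data for y = 2x, with a bias column so the intercept is fitted too
    X = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])      # first column is the constant 1 (bias term)
    y = np.array([2.0, 4.0, 6.0])

    # normal equation: w = (X^T X)^(-1) X^T y, solved without an explicit inverse
    w = np.linalg.solve(X.T @ X, X.T @ y)
    print(w)                        # roughly [0., 2.]  ->  intercept 0, slope 2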
I found your explanation of backpropagation very intuitive. I'm loving this playlist.
Thank you 😊
I have always noticed that even complicated topics can be explained easily when we use simple numbers instead of abstract letters. Thanks, Python Engineer!
glad to hear that :)
The best so far
The best video I could've ever found! Thanksss!
Amazing explanation! good job man.
Great video on the topic!
Simple and precise, go ahead with your great work.
Excellent clarity and explanation!
8:10 I need to learn some calculus to figure out why the answer is 1.
😱 what an explanation! Thanks!!
Short, sweet and to the point, thank you!
Thank you so much for your extraordinary explanation. Please keep making these informative videos.
9:53 final gradient = -2, i.e. dLoss/dw = -2
12:36 w.grad means dLoss/dw, h.grad means dLoss/dh, g.grad means dLoss/dg
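As a cross-check, here is a minimal PyTorch sketch of the toy example (the values x = 1, y = 2, w = 1 are my recollection of the ones used in the video); w.grad comes out as the -2 noted above:

    import torch

    x = torch.tensor(1.0)
    y = torch.tensor(2.0)
    w = torch.tensor(1.0, requires_grad=True)

    y_hat = w * x               # forward pass
    loss = (y_hat - y) ** 2     # squared-error loss
    loss.backward()             # backward pass fills w.grad

    print(w.grad)               # tensor(-2.)  ->  dLoss/dw = -2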
That was very useful, thank you so much.
Thank you for the explanation.
Thank you so much!
It's really helpful! Thank you!
I request you to make a Udemy course out of this video series; it's very nicely taught.
thanks! maybe in the future! What kind of course would you be interested in?
@@patloeber machine learning from zero to hero?
@@patloeber a full course. Please. It'd go really well. Please make it free tho...
@@lambsauce5445 a free course is the one you watch LOL
extremely well explained.
Thanks 😊
Thanks, I like this tutorial
That was a great tutorial and I enjoyed it. Before watching this video I used to calculate gradients through painful manual operations!!! It would be interesting if you could show other examples with slightly more complicated functions, like piecewise functions (say, the absolute value function). Moreover, if you gave an exercise to do and attached a solution, it would be great for learners to practice, I guess.
Keep on going ... Very very useful ...
thanks :)
love it ,good
thanks a lot for this video, keep going!!!
Glad you like it :)
Thanks, very detailed explanation, helped a lot!
Great to hear!
Why is the calculation of d(y_pred - y)/dy_pred = 1 and not -1?
due to square, (-1)^2 = 1
This is the best explanation of backpropagation on all of YouTube
thanks so much!
So after the backward pass in the first iteration, we first need to reset the gradient, right? And then start a new iteration: update the weight, forward pass, and so on?
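For what it's worth, a minimal sketch of one common ordering of those steps (update under no_grad, zero the gradient, then the next forward pass); the toy values and the learning rate are placeholders of mine:

    import torch

    x = torch.tensor(1.0)
    y = torch.tensor(2.0)
    w = torch.tensor(1.0, requires_grad=True)
    lr = 0.01

    for epoch in range(3):
        y_hat = w * x                  # forward pass
        loss = (y_hat - y) ** 2
        loss.backward()                # backward pass fills w.grad

        with torch.no_grad():          # don't track the update itself
            w -= lr * w.grad           # gradient descent step

        w.grad.zero_()                 # reset the gradient before the next iteration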
Great video, nice coding examples, but at 7:57 ds/dy^ should be -1 and not 1
It's 1. In this case we are taking the derivative with respect to y^, not y. This means y is treated as a constant with regard to the derivative. The y gets dropped and the y^ turns into just its coefficient, which is 1.
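A quick sanity check with autograd (the numbers are placeholders of mine):

    import torch

    y = torch.tensor(2.0)
    y_hat = torch.tensor(1.0, requires_grad=True)

    s = y_hat - y
    s.backward()
    print(y_hat.grad)   # tensor(1.)  ->  ds/dy_hat = 1; the -1 would be ds/dy instead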
great videos, thanks for doing these!
glad you like it!
Great! So easy to understand. Thank you so much
Glad you like it!
Great explanation!. Thank you very much for this video. Your channel is fantastic!
thanks for the feedback :)
Thanks. Good explanation
Thanks :)
It was brilliant. Can you please add a playlist for PyTorch with NLP?
Very well taught
thanks!
Very good!
thanks!
thanks man big help
Glad it is helpful!
love it
Nice to have a use for calculus
Indeed!
Thank you!!
glad you like it!
Hi Patrick, you are great, thanks for this. At 6:35, doing the backward pass, you mentioned dloss/dy^; I think it should be ds/dy^, shouldn't it? Please clear up my confusion.
No, what I mentioned is correct. Maybe it gets clearer at minute 9 when I explain the steps in more detail. While doing the backward pass we need dloss/dy^ as an intermediate result for the chain rule.
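For reference, the chain being used is dloss/dy^ = dloss/ds * ds/dy^, and then dloss/dw = dloss/dy^ * dy^/dw. If it helps, that intermediate gradient can even be inspected in PyTorch; retain_grad and the toy values x = 1, y = 2, w = 1 are my own assumptions here:

    import torch

    x = torch.tensor(1.0)
    y = torch.tensor(2.0)
    w = torch.tensor(1.0, requires_grad=True)

    y_hat = w * x
    y_hat.retain_grad()            # keep the gradient of this intermediate node
    loss = (y_hat - y) ** 2
    loss.backward()

    print(y_hat.grad)              # tensor(-2.)  ->  dloss/dy_hat = 2*s = 2*(-1)
    print(w.grad)                  # tensor(-2.)  ->  dloss/dw = dloss/dy_hat * x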
Thanks! How simple everything is when you draw pictures with a pen and explain it.
I have installed torch on my system, but in VS Code it shows "No module named torch". If I create a virtual env and then pip install torch, it works, but not in VS Code. Please help. Thanks!
I'm 13 years old and I can confirm that with some experiments in PyTorch, help from ChatGPT, and a lot of thinking, I managed to follow along (to be noted: I haven't even been introduced to calculus or anything similar, and I'm not a native speaker, as you can see).
Great video! The only thing complicated is this guy's "1".
PyTorch should have this as their official tutorial..
thanks!
Hey!! Your videos are very clear! But I have a small question here: if x and y are two separate variables, shouldn't dz/dx = x'y + xy' by the product rule? Why did you consider the other variable as a constant? At 2:55.
Thanks for watching! In this toy example the function is x*y, so df/dx = y. We are using partial derivatives here, which means the other variable is treated as a constant. But then we also have to take the partial derivative with respect to y.
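A tiny autograd sketch of exactly that (the example values are my own):

    import torch

    x = torch.tensor(2.0, requires_grad=True)
    y = torch.tensor(3.0, requires_grad=True)

    z = x * y
    z.backward()
    print(x.grad)   # tensor(3.)  ->  dz/dx = y
    print(y.grad)   # tensor(2.)  ->  dz/dy = x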
This is a very useful tutorial, but I need a bit more explanation. When you said (4:47) dLoss/ds = d(s^2)/ds = 2s, it was not obvious to me why it equals 2s??? Please give me a way or a route to understand it.
OH, I found my own answer
Glad you like it!
Could either of you explain this to me please? I don't understand how we'd get 2s.
Oh wait. dLoss/ds where loss = s^2, so by the power rule dLoss/ds = d(s^2)/ds = 2s.
Strange that it makes you disable gradient tracking with no_grad() in order to actually subtract them. The software doing the calculus for you makes it much easier than writing the code manually in C++.
I did just that. It helps if you use a good linear algebra library; without one I got completely lost in endless for loops and keeping track of indices. Using the Eigen library, as if by magic, all the equations reduced to about 8 lines of code. It was one of the most gratifying pet projects I ever did, seeing four screens of unreadable code get reduced to a golden nugget. It took me about 8 days, including doing it the wrong way first and learning the new library. I can recommend it!
Why is your 1 an upside down v? It makes things difficult to follow sometimes
I didn't clearly understand how we got -2 in the backward pass.
it is based on the chain rule. Can you explain exactly what you didn't understand? I will try to help.
Thank you so much (T^T), it made me understand.
Hi Patrick my friend, could you do C++ OpenCV tutorials also? Thanks.
Good suggestion! Maybe in the future :)
Can anyone please tell me how to remove that extra path in the output terminal in VS Code?? How do I get clean output?
It can be found in the settings for the Code Runner extension.
Your 1 doesn't seem to be a 1... it looks like a big lambda... please try to correct it.
My ears are bleeding when I hear you pronounce Z like C. Please, it is Zeeeeee, not Ceeeee....... Thank you for the great video.
It could be better with proper naming
How is that a 1? More like ^
Amazing explanation. Thank you so much!
This is really helpful. Thank you!