As a CS grad student myself, I've sat through many lectures. This professor is really, really good.
Animations:
33:41 SGD
38:06 SGD + Momentum
45:05 Nesterov
50:23 RMSprop
55:27 Adam
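(For anyone wanting to review after the video: a minimal NumPy sketch of the standard textbook update rules behind these timestamps. The function names, hyperparameter values, and the particular Nesterov formulation are my own illustrative choices, not taken from the lecture.)

```python
import numpy as np

def sgd(w, grad, lr=1e-2):
    # Plain SGD: step opposite the gradient.
    return w - lr * grad

def sgd_momentum(w, v, grad, lr=1e-2, rho=0.9):
    # Momentum: accumulate a velocity, then step along it.
    v = rho * v + grad
    return w - lr * v, v

def nesterov_momentum(w, v, grad_at_lookahead, lr=1e-2, rho=0.9):
    # Nesterov: same shape as momentum, but the gradient is evaluated at the
    # look-ahead point (roughly where the momentum step would carry the weights);
    # exact conventions differ across sources.
    v = rho * v + grad_at_lookahead
    return w - lr * v, v

def rmsprop(w, s, grad, lr=1e-3, decay=0.99, eps=1e-8):
    # RMSprop: per-parameter step sizes from a running average of squared gradients.
    s = decay * s + (1 - decay) * grad * grad
    return w - lr * grad / (np.sqrt(s) + eps), s

def adam(w, m, v, t, grad, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: momentum plus RMSprop-style scaling, with bias correction (t starts at 1).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```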
great lecture again, even though I did not understand anything.
love these lectures
thanks man, you are an amazing professor.
It's an eye-opener; I have to download all of the videos.
This is the best lecture on optimization, with very clear explanations of SGD, SGD+Momentum, Adagrad, RMSProp, and Adam.
If you have doubts, I would suggest watching this alongside the Andrew Ng Deep Learning Specialization lectures to get a clear picture of optimization.
I agree, the second course of Andrew Ng's DL Specialization goes really slowly, which helps you understand.
25:06 Isn't the loss in SGD computed for a single example? What difference would SGD have from minibatch gradient descent then?
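(Not the lecturer, but my understanding: "true" SGD uses the loss and gradient of a single example per update, while minibatch gradient descent averages the gradient over a small batch; in deep learning "SGD" usually refers to the minibatch version. A rough sketch, where grad_fn is a hypothetical helper returning the mean gradient over the given batch:)

```python
import numpy as np

def minibatch_sgd_step(w, X, y, grad_fn, lr=1e-2, batch_size=32):
    # Sample a minibatch and step on its averaged gradient.
    # With batch_size=1 this reduces to "true" single-example SGD.
    idx = np.random.choice(len(X), size=batch_size, replace=False)
    return w - lr * grad_fn(w, X[idx], y[idx])
```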
Absolutely brilliant!!
Since you went through the details (instead of jumping over them), I finally understood how derivatives are taken in ML - Kudos!! 😊
Great lecture, Justin. Wonder what you think about MADGRAD?
50:00 RMSprop
51:40 Adam
Great lecture, I came back to watch again
Thanks a lot, professor. I would be grateful if you added a video on proximal gradient methods.
46:00 The students have good eyes
lost my way in lecture 4
rewatch again...going through it once is not enough
@ijaAI yes, I've only watched it once, but I need quite a lot of time to digest the math 🥲
Please fix the audio in the next recordings...