This is simply the most knowledgeable session I have seen so far on deep learning optimizers.
This guy is insane; the theoretical knowledge he has provided is on par with what I saw in MIT OCW courses. I can't believe he has covered all the crucial fundamentals of DL in 4-5 classes. Learning TensorFlow becomes easy after watching this entire playlist.
best session on optimizers I have watched. Thanks for the lecture.👍
You are remarkable Krish...the way you explained this topic and made it understandable is really appreciated.
Crystal clear!!!! Thanks for sharing, and congratulations!!!
Amazing..!!! 🤩🤩 Got to know it clearly in one shot... This is the first time I have listened to these concepts, and I understood them really well... Thank you so much @Krish Naik
Hey, can you share the notes?
Amazing explanation Krish. All at one place. Thanks
best session on optimizers I have watched. loved the concept :)
simply awesome explanation!!! Thank you!
This is a masterpiece of a session!!! Loved it 😍
Amazing explanation of the Exponentially Weighted Average for smoothing out the noise to reach the global minimum!
Very nice explanation sir❤
you are doing an amazing contribution to the society!
Love you krish .... you are Great.....Thank you very much
Such a simple n clear explanation...very helpful...thank you so much
Awesome explanation! understood very well.
your explanation is sooooo gooood ! HUGE THANKS TO U sir!
Your way of teaching is exceptional sir.❤❤
Very Very great explanation sir, it helps us a lot
WoW!!! Simply amazing!
You are amazing, Sir. Can you please make a video on implementing Particle Swarm Optimization in an ML model using Python?
You explain so well!! Thank you for this session
Crazy Crazy maths behind these things I loved it....
Thank you Krish.
Thank you sir for this amazing session
Great content, very educative and precise. Those handwritten notes would be very helpful 😍 moving forward, but I am unable to access them 😔. So just wondering if they are available somewhere.
Sir, please help us understand by taking a curve with local minima and showing how our best optimizer is able to find the global minimum there.
And thus a new recipe came to me from Krish sir... my target now is to finish it as early as possible. Thank you sir for uploading such great content for us!!!
Very nice session, excellent... thank you
Thank you so much Sir !
I loved this session 👍👍👍💪
Hey Krish, can you please make an UPDATED roadmap on machine learning with the updated playlists?
Hi sir, there are a crazy number of ads in the middle of the videos. Ads play every 3 minutes, so in one hour of the session 20+ ads came up, which is highly distracting. Could you please reduce this in future revisions? Nevertheless, thank you so much for such a wonderful explanation. This was really helpful.
Amazing class
Hard topic, but taught as smoothly as an exponential average :-)
Great and amazing lecture sir, hats off to you sir
Great Tutorial...
Excellent !!!!!
great course
Sir, you always deserve a 10/10 rating. Anyone who gives a lower rating is just looking for faults where there are none.
Brilliant ❤
17:43, SGD training is slower, but convergence is faster compared to GD and BGD due to the frequent weight updates
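On the point at 17:43 about update frequency: a minimal Python sketch (toy 1-D linear regression with made-up data, not from the video) contrasting one batch-GD weight update per epoch with one SGD update per sample.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)                       # 100 samples (made-up data)
y = 3.0 * X + rng.normal(scale=0.1, size=100)  # true weight is 3.0

def batch_gd_epoch(w, lr=0.1):
    # One epoch of batch GD: a single weight update using ALL samples.
    grad = -2.0 * np.mean((y - X * w) * X)
    return w - lr * grad                       # 1 update per epoch

def sgd_epoch(w, lr=0.1):
    # One epoch of SGD: one weight update PER sample (100 updates here).
    for i in rng.permutation(len(y)):
        grad = -2.0 * (y[i] - X[i] * w) * X[i]
        w = w - lr * grad
    return w

w_gd = batch_gd_epoch(0.0)   # one update: still far from 3.0
w_sgd = sgd_epoch(0.0)       # a hundred updates: much closer to 3.0
print(w_gd, w_sgd)
```

Both see the same data once, but SGD has already taken 100 steps by the end of the epoch, which is what the comment means by faster convergence despite noisier training.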
But sir, now a question arises. We are trying to reach the minimum, but what about the case where the curve is so irregular that it has multiple minima, some local minima and one global minimum? How do we make sure that we converge to the global minimum, since there is a chance we will reach a local minimum instead? Also, what factors need to be considered to reach the global minimum?
At a minimum the slope is approximately zero, so the weight update stops, and the loss is minimal at that point. Whether there are local minima depends on the loss function we use.
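On the local-versus-global-minimum question above: a minimal Python sketch (toy 1-D loss, purely illustrative) showing that plain gradient descent stops wherever the slope reaches zero, so the starting point decides which minimum it lands in. This is one reason initialization, momentum, and learning-rate schedules matter.

```python
def loss(w):
    # Toy non-convex loss (hypothetical): local minimum near w = +1,
    # global minimum near w = -1 because of the tilt from the 0.3*w term.
    return (w**2 - 1)**2 + 0.3 * w

def grad(w):
    return 4.0 * w * (w**2 - 1) + 0.3

def gradient_descent(w, lr=0.01, steps=1000):
    for _ in range(steps):
        w -= lr * grad(w)   # stops moving wherever the slope reaches ~0
    return w

w_a = gradient_descent(2.0)    # initialized to the right: local minimum
w_b = gradient_descent(-2.0)   # initialized to the left: global minimum
print(w_a, loss(w_a))
print(w_b, loss(w_b))
```

Both runs satisfy "slope approximately zero", yet only the second finds the lower loss, which is exactly the trap the question describes.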
love the session ..
finished watching
Thank you so much sir ❤
Sir, can I know what features I will get if I become a member of your channel?
Brand symbol for teaching and energy, torch bearer for MI
==> my reaction to the explanation of AdaGrad's adaptive learning rate
Thanks krish
Thank you sir.
Sir please do live sessions on NLP also
Krish, I could not understand the exponentially weighted average part from this video
Thank u dear sir😍😊
thanks
thank u dear sir
love this
Hi sir, your session is good, but I need the course materials. I visited the link, but it is not showing properly. Please, can you share the course materials?
Though the content is good, there are loads of ads in this video, and it seriously hinders studying.
Hey Krish! Can I have the name of the software you take notes in?
Scrble ink... Available in Microsoft store
@@krishnaik06 Sir, where can I find the notes for this?
@@nandinijaiswal5571 Check the description
Awesome, loved your classes. Maths is a cakewalk because of you. Thank you.
But why is the exponentially weighted average called exponential? Why that term?
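On the question above: unrolling the recurrence v_t = beta*v_{t-1} + (1-beta)*x_t shows that a value observed k steps ago is weighted by beta^k, so the weights decay exponentially with age, which is where the name comes from. A small Python sketch with made-up numbers:

```python
def ewa(xs, beta=0.9):
    # Recurrence form: v_t = beta * v_{t-1} + (1 - beta) * x_t
    v = 0.0
    for x in xs:
        v = beta * v + (1 - beta) * x
    return v

def ewa_unrolled(xs, beta=0.9):
    # Unrolled form: v_t = (1 - beta) * sum over k of beta**k * x_{t-k}.
    # The weight on a value k steps in the past is beta**k: it decays
    # EXPONENTIALLY with age, hence the name.
    t = len(xs)
    return sum((1 - beta) * beta**(t - 1 - i) * x for i, x in enumerate(xs))

xs = [4.0, 7.0, 2.0, 9.0, 5.0]   # hypothetical noisy observations
assert abs(ewa(xs) - ewa_unrolled(xs)) < 1e-12
# Newest value 5.0 gets weight (1 - beta) * beta**0 = 0.1,
# the one before it gets (1 - beta) * beta**1 = 0.09, and so on.
```

The two forms compute the same number; the unrolled one just makes the exponentially shrinking weights visible.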
It's an awesome session ..
That was great session!
Where are the notes for this session?
I can't find the dataset, so can you please tell me where it is?
Hi Krish, with AdaGrad, doesn't it still have the issue of gradient descent, that it takes a lot of resources because we are using all the data like GD? I understand that it will converge faster because of the adaptive learning rate, but in terms of the resources needed for every epoch, won't it be exactly the same as GD? Kindly advise.
Because it is able to find the global minimum faster, you don't have to pass every sample again and again to calculate the gradient.
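To add to the reply above: AdaGrad only changes how the step size is computed per parameter; it says nothing about how much data goes into each gradient, so it can be paired with full-batch, mini-batch, or single-sample gradients. A hedged sketch of the update rule on a toy quadratic loss (hypothetical, not from the video):

```python
import numpy as np

def adagrad_step(w, grad, cache, lr=1.0, eps=1e-8):
    # One AdaGrad update: each parameter gets its own effective learning
    # rate, lr / sqrt(accumulated squared gradients for that parameter).
    cache = cache + grad**2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy quadratic loss: L(w) = 0.5 * w**2, so grad = w. The gradient fed in
# could equally come from one sample (SGD-style) or the whole dataset;
# AdaGrad itself only adapts the step size.
w = np.array([5.0])
cache = np.zeros(1)
for _ in range(200):
    w, cache = adagrad_step(w, w, cache)
print(w)   # approaches the minimum at 0
```

So the per-epoch data cost is whatever gradient flavor you pair it with, not something AdaGrad fixes on its own; in practice it is usually combined with mini-batch gradients.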
Sir, I am unable to get a membership of your channel. As soon as I fill in my debit card details it says "invalid or unsupported method."
How do I proceed?
Can anyone tell me why there is a square in the cost function formula?
12:38
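On the question above, assuming the cost in the video is mean squared error: squaring makes every error positive, so positive and negative errors cannot cancel each other, and larger errors are penalized more heavily than small ones. A tiny Python sketch with made-up numbers:

```python
y_true = [3.0, 5.0, 2.0]
y_pred = [4.0, 3.0, 3.0]    # hypothetical predictions

errors = [t - p for t, p in zip(y_true, y_pred)]    # [-1.0, 2.0, -1.0]
mean_error = sum(errors) / len(errors)              # 0.0: errors cancel
mse = sum(e**2 for e in errors) / len(errors)       # 2.0: no cancellation

print(mean_error, mse)
```

Without the square the model above would look perfect (average error zero) despite being wrong on every sample; the squared version correctly reports a nonzero cost.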
Can someone with the materials saved please share it? The link in the description seems to have expired
Yeah! If you get it, don't forget to share it with me, please. Please ping here if you get the materials.
I am inspired to write research paper....
My first paper was accepted today at the International Journal for Research in Applied Science and Engineering Technology
@@waizkhan1452 wow that's great 😍
Congratulations brother, can you share the paper for study?
@@sathegdeep It is not published yet... I will share the link after it is published, in a week or two. The topic was a robust, business-specific, real-time sign language translator.
Best
krish sir OP!
Thankyou sir❤️🙌
finished watching