Despite being an AI/ML professional for a few years now, I love to spend time on your channel. Every word you speak makes me think "if only someone had taught me like this 3-4 years ago", when I was new and finding my way in. I am waiting for a special topic, "Batch Normalisation". Great job, Nitish Sir.
Thanks Piyush :)
@@campusx-official Hello sir, can you please complete your DL playlist, or at least give an estimated date for when it will be finished?
Piyush bhaiya, please give me a referral.
I have binge-watched all your deep learning videos in the last two days... you have explained everything so easily and with examples as well. Thank you, sir.
@CampusX, great content. After spending so much time and money on online courses, I am now finally getting deeper into the subject here. I must say you will be remembered by the AI/ML generation. 💯
Your teaching motivates me to learn more about Machine Learning and Deep Learning. Thanks a lot for such an easy explanation.
This guy is GOD !!
Indeed sir, your way of explaining things is awesome... Great work... Thanks for sharing... Stay blessed.
Thank you, man. I have never seen such a good explanation. You explained the theory as well as the math part so effortlessly.
Thank you, Sir! One of the great tutors I have come across :) Any plans to expand the NLP series?
indeed 🙂
Best video on activation function.
Thanks for the video; waiting for a 2-3 hour coding video on ReLU and Tanh.
Superb demonstration
Good Explanation
Thanks for your efforts in machine learning teaching field
In general, when observed from the sigmoid and tanh graphs, the derivative (i.e., the slope of the tangent line to the curve at a point) of tanh is larger than sigmoid's around zero (it peaks at 1.0 versus sigmoid's maximum of 0.25), and as x tends to infinity both derivatives shrink toward zero. Since dW is directly proportional to the derivative of the activation function, when tanh is used in the hidden layers its larger derivative gives faster weight updates, so the network converges faster than with sigmoid. The problem with ReLU is that whenever a pre-activation goes negative, ReLU outputs zero, so the gradient for that neuron becomes zero as well (the dying ReLU problem).
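A quick numeric check of the derivative comparison above (my own NumPy sketch, not from the video):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)           # peaks at 0.25 at x = 0

def d_tanh(x):
    return 1.0 - np.tanh(x) ** 2   # peaks at 1.0 at x = 0

xs = np.linspace(-1.5, 1.5, 61)
print(d_sigmoid(0.0), d_tanh(0.0))                 # 0.25 1.0
print(bool(np.all(d_tanh(xs) >= d_sigmoid(xs))))   # True: tanh dominates near zero
print(d_tanh(10.0) < 1e-6)                         # True: both saturate for large |x|
```

Note that far from zero both derivatives are tiny, which is exactly the vanishing-gradient regime the video discusses.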
Also, the softmax activation is used in the output layer for multiclass classification problems.
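For completeness, a minimal softmax sketch (my own illustration, assuming NumPy; the max-subtraction trick is a standard stability measure, not something from the video):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; outputs are positive and sum to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs.sum())     # 1.0 (up to float rounding)
print(probs.argmax())  # 0: the largest logit gets the largest probability
```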
Great video sir!! Thank you
Sir, please make a complete tutorial on how to develop a content generator and a text summarizer using ML and NLP. Love and respect from Kathmandu, Nepal.
Great content, thanks sir for providing it to us 🧡. It helps us a lot.
Hi Nitish, please make a video on which videos or books you follow to learn ML, DL, NLP, etc.!
Great 🙏🙏
Great video...
Thanks for everything
Please increase the upload frequency of the DL playlist, thanks 🏆
amazing video
God level teacher
Thank you soo much sir ....
Amazing tutorial sir.
Nice sir
thanks sir jee!
thanks bro
33:41 Couldn't be a better example than this, Nitish 👏🏻
One thing I don't understand: this video has 27k views and sir teaches so well, so why don't people hit like?
God level teacher ❤️🤌🏻
We differentiate the loss function in gradient descent, right? So why should we differentiate the activation function?
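On the question above: by the chain rule, the loss derivative gets multiplied by the activation's derivative on the way back to each weight, which is why both are differentiated. A minimal single-neuron sketch (my own illustration, squared-error loss assumed, verified by finite differences):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# one neuron: a = sigmoid(w*x), loss L = (a - y)^2
w, x, y = 0.5, 2.0, 1.0
a = sigmoid(w * x)

dL_da = 2.0 * (a - y)           # derivative of the loss
da_dz = a * (1.0 - a)           # derivative of the ACTIVATION
dz_dw = x
dL_dw = dL_da * da_dz * dz_dw   # chain rule: all three factors multiply

# numerical check by central finite differences
eps = 1e-6
L = lambda w_: (sigmoid(w_ * x) - y) ** 2
numeric = (L(w + eps) - L(w - eps)) / (2 * eps)
print(abs(dL_dw - numeric) < 1e-6)  # True
```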
You are really amazing Sir.
Great work sir!!
Sir, please develop an ML system like Grammarly's 🙏
Thanks!
🥰🥰🥰 great bro
Superb...
@32:15 Hi Nitish, please explain how gradients become negative when the sigmoid derivative lies between 0 and 0.25.
I guess the activations are positive (0-1), but when they get multiplied by the other gradients during backpropagation, the result can become either positive or negative.
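That reading checks out numerically; a tiny sketch (my own, not from the video): the sigmoid's local derivative is always positive, so any negative sign must come from the gradient flowing in from later layers.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

a = sigmoid(0.8)
local_grad = a * (1.0 - a)   # always in (0, 0.25]: strictly positive
upstream = -1.3              # gradient flowing back from later layers; can be negative
downstream = upstream * local_grad
print(local_grad > 0)   # True: the sigmoid derivative itself is never negative
print(downstream < 0)   # True: the product's sign comes from the upstream gradient
```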
Sir, at 28:54, sigmoid only outputs values in 0-1, so how did you say there are negative and positive values?
8:45 How will the last layer be linear if it uses sigmoid... how would you apply a linear activation in the last layer?
Sir, please provide the link to your Microsoft OneNote notes.
Love from Pakistan🥰🥰
Sir, on your GitHub I can't find any code or notes for these 50 lectures.
finished watching
tq🥰
softmax?
Sir, can you upload the code so we can practice whatever we learn?
I think the data there should be standardized instead of normalized.
Please explain the scenario.
Standardization and normalization are two different techniques, as you will find on Google, but there he just interchanged the two terms.
@@sanskarkurude I know that but in what scenarios we should use standardization
@@samrat_chauhan Sir mentioned in earlier videos that we use standardization when there is no definite upper or lower limit on the data, while normalization is applied when we know the data's upper and lower limits, e.g. age would mostly be in the range 0-110 or so.
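To make that distinction concrete, a small NumPy sketch with made-up ages (my own illustration, not from the video):

```python
import numpy as np

ages = np.array([2.0, 25.0, 40.0, 67.0, 95.0])

# Min-max normalization: use when hard bounds are known (e.g. age roughly 0-110)
normalized = (ages - ages.min()) / (ages.max() - ages.min())  # squashed into [0, 1]

# Standardization: use when there is no definite upper/lower limit
standardized = (ages - ages.mean()) / ages.std()              # mean 0, std 1

print(normalized.min(), normalized.max())  # 0.0 1.0
print(round(float(standardized.std()), 6)) # 1.0
```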
best
OneNote notes link?
🙏
Sir please upload video