When will you cover RNN, encoder-decoder & transformers?
Also, if you could make mini projects on these topics, it would be great.
Keep doing this great work of knowledge sharing, hope your tribe grows more. 👍
Your content delivery is truly outstanding, sir. Although the numbers don't do justice to your teaching talent, let me tell you I came here after watching many paid courses and became fond of your teaching method. So please don't stop making such fabulous videos. I am pretty sure this channel will soon be among the top channels for ML and data science!!
Every word and every minute of what you say is worth a lot!
Fantastic Explanation Sir ! Absolutely brilliant ! Way to go Sir ! Thank you so much for the crystal clear explanation
Great lecture as usual. Just one small clarification: binary cross entropy is convex (though it has no closed-form solution), hence it has only a single global minimum and no local minima. This can be proved with simple calculus by checking that the second derivative is always greater than 0. So the statement that there are multiple local minima is not right. But thanks for your comprehensive material, which is helping us learn such complex topics with ease!
Good Content, great explanation and an exceptionally gifted teacher. Learning is truly made enjoyable by your videos. Thank you for your hard work and clear teaching Nitish Sir.
One day this channel will become the most popular for deep learning ❤️❤️
The only channel I have ever seen on YouTube that is this underrated! Best content seen so far... Thanks a lot
Nowadays my mornings and nights end with your lectures, sir 😅. Thanks for putting in so much effort.
You have truly excellent teaching skills, sir! It's like a college senior explaining a concept to me in the hostel room.
Please continue the "100 days of deep learning" series, sir, it's a humble request. This playlist and this channel are the best on all of YouTube for machine learners ❤❤❤❤
These loss functions are the same as those taught in machine learning; the differences are in the Huber, binary, and categorical loss functions.
This is the best explanation of the whole basics of losses; all doubts are cleared. Thank you so much for this video.
At 21:06 [Mean Squared Error]: when calculating the total error with [y - ŷ], some values may be negative and cancel out part of the error (which we don't want); that is why we square after subtracting, as you said. So my doubt is: can't we just make those negative values positive? Then there would be no need to square. Please explain this. Thank you. :)
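Making the negatives positive with an absolute value is exactly what Mean Absolute Error does. A small sketch (the arrays are made-up values, not from the video) comparing the two:

```python
import numpy as np

# Made-up targets and predictions for illustration.
y = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.array([2.5, 0.0, 2.0, 8.0])

# Taking the absolute value is Mean Absolute Error (MAE).
mae = np.mean(np.abs(y - y_hat))
# Squaring also removes the sign, but additionally penalizes large
# errors more and is differentiable everywhere (|x| is not
# differentiable at 0), which is convenient for gradient descent.
mse = np.mean((y - y_hat) ** 2)

print(mae)  # 0.5
print(mse)  # 0.375
```

So both choices exist; MSE is preferred when a smooth gradient and stronger punishment of large errors are wanted, MAE when robustness to outliers matters more.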
Great content for me... now everything about loss functions is clear... thank you
Hi, I think the Huber loss example plot @ 36:59 is for a classification example rather than a regression example. A regression line should pass through the data points instead of separating them.
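Whatever the plot shows, the Huber loss itself is easy to sketch: quadratic for small residuals (like MSE), linear for large ones (like MAE). A minimal NumPy version with invented data points:

```python
import numpy as np

def huber(y, y_hat, delta=1.0):
    # Quadratic (MSE-like) within +/- delta, linear (MAE-like) outside,
    # so a single outlier cannot dominate the total loss.
    r = np.abs(y - y_hat)
    return np.where(r <= delta,
                    0.5 * r ** 2,
                    delta * (r - 0.5 * delta))

y = np.array([1.0, 1.0, 1.0])
y_hat = np.array([1.5, 1.0, 4.0])  # last prediction is an outlier
# The residual of 3.0 contributes 2.5, not the 4.5 that MSE/2 would give.
print(huber(y, y_hat))
```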
Thanks for the timestamps It's really helpful
Thank you so much sir for another amazing lecture ❤😊
It was a great Explanation . Thank you so much for such amazing videos.
44:52 Binary cross entropy loss is a convex function; it has only one minimum, which is the global minimum.
My Morning begins with campusX...
Gentlemen, you are on the right track.
same bro
@@santoshpal8612 Gentleman, learn English first.
@@CodeyourLife32 Hi bro, are you still learning or have you got a job?
Superb video, sir! Can you tell me which stylus you are using? And what is the name of the drawing/writing pad you use? I want to buy one too.
Galaxy tab s7+
Simple and easy to understand.
Sir, you are really amazing. I have learned a lot of things from your YouTube channel.
At timestamp 44:40: sir, you said that binary cross entropy may have multiple minima, but binary cross entropy is a convex function, so it won't have multiple minima, I think.
Can you please create videos for the remaining loss functions too, for autoencoders, GANs, and transformers? Thanks.
With all respect....thank you very much ❤
Very well explained, Thanks
amazing content as usual. thanks
Can't we first handle the outliers and then apply MSE? Then it is not such a big disadvantage, no?
Learning DL and Hindi together, respect from Afghanistan Sir!
Thank you so much, sir, clear explanation.
If the difference (yᵢ - ŷᵢ) is in decimals, then the loss value is diminished rather than magnified, so maybe an improvement would take this into account.
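This is a known property of squaring rather than an oversight, and it is one motivation for MAE or Huber loss. A tiny sketch:

```python
# Squared error shrinks errors with magnitude below 1 and magnifies
# those above 1; the absolute error keeps the raw scale in both cases.
for e in [0.1, 0.5, 2.0, 10.0]:
    print(e, abs(e), e ** 2)
```

In practice this is often harmless, because gradient descent cares about the gradient 2(ŷ - y), which still scales linearly with the error even when the error is small.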
Can we use the step function as the activation for the last layer/prediction node in a classification problem using binary cross entropy, for 0 and 1 outputs?
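The usual answer is no, and a small sketch shows why: the step function's gradient is zero everywhere it is defined, so backpropagation gets no learning signal, and BCE takes the log of a probability, so a hard 0 output would produce log(0). Sigmoid is the standard choice:

```python
import numpy as np

def step(z):
    # Hard 0/1 output; derivative is 0 wherever it is defined,
    # so no gradient can flow back through it during training.
    return (z >= 0).astype(float)

def sigmoid(z):
    # Smooth output strictly inside (0, 1); safe inside BCE's log
    # and has a nonzero gradient for learning.
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.5, 3.0])
print(step(z))     # hard labels: a confident mistake makes BCE infinite
print(sigmoid(z))  # probabilities strictly between 0 and 1
```

The step function can still be applied *after* training, as a 0.5 threshold on the sigmoid output, to turn probabilities into class labels.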
Amazing as always, sir!
At 36:27, shouldn't the line be nearly perpendicular to what you drew? Seems like a case of Simpson's paradox.
One disadvantage of MSE that I can figure out: if there are multiple local minima, there might be a case where the MSE loss function leads to a local minimum instead of the global minimum.
Such a wonderful learning experience
Great work sir. Amazing 😍
I was able to understand each and every word and concept just because of you, sir. Your teaching has brought me to a place where I can understand such concepts easily. Thank you very much, sir. Really appreciate your hard work and passion. ❣🌼🌟
Great concise video. Loved it.
A small question 💡:
Sometimes we set drop='first' to remove the redundant first column during one-hot encoding. So does that make a difference when using either of these categorical losses?
I think this might happen automatically, or it is not needed, because that way we could not get the loss for that category.
Yes, it affects the model, because you should keep the number of parameters as low as possible for an optimized model. But we don't always; it depends on the variables or input. For example, 2 categories can be represented by just one variable (2¹ = 2). 3 categories require at least 2 variables, and since 2² = 4 covers them, we can drop one column.
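For the *target* labels specifically, the usual practice is to keep the full one-hot encoding: with categorical cross entropy only the true class's column contributes to the loss, so a dropped class would contribute nothing. A numeric sketch with invented probabilities:

```python
import numpy as np

labels = np.array([0, 2, 1])
one_hot = np.eye(3)[labels]  # full 3-column encoding, nothing dropped

# Invented model outputs (each row sums to 1, as after a softmax).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.2, 0.5, 0.3]])

# Categorical cross entropy: only the true class's log-probability
# survives the one-hot multiplication. Dropping a label column would
# silence the loss for that class entirely.
loss = -np.mean(np.sum(one_hot * np.log(probs), axis=1))
print(round(loss, 4))  # mean of -log(0.7), -log(0.6), -log(0.5)
```

So drop='first' remains a trick for *input* features (to avoid redundant columns), while the loss-side encoding stays full.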
nice explanation sir
thank you so much
This is so very important
Beautiful explanation
How beautiful this is 🥰
Mind-boggling!!
Excellent teaching skills. Sir, please provide the notes PDF.
Very well explained
Great explanation. Can you tell me why we need bias in a NN, and how it is useful?
amazing lectureeeeeeee
Thank You Sir.
thank you for your hard work
Sir, which tool are you using for the explanation in this video?
Thank you sir 😁😊
awesome man just amazing ... ! ! !
Great work
I wanted this video and got it. Thank you.
Amazing sir 🙏🏻
this playlist is a 💎💎💎💎💎
Wouldn't categorical and sparse cross entropy become the same? After OHE, all log terms become zero except the current one, which gives the same result as sparse.
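Yes, the two compute the same number; sparse cross entropy just skips materializing the one-hot matrix, which matters when there are many classes. A quick sketch with invented probabilities:

```python
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],   # invented softmax outputs
                  [0.1, 0.3, 0.6]])
labels = np.array([0, 2])            # integer class labels

# Sparse CE: index the true class's probability directly.
sparse = -np.mean(np.log(probs[np.arange(len(labels)), labels]))

# Categorical CE: one-hot multiply; every other log term is zeroed out.
one_hot = np.eye(3)[labels]
categorical = -np.mean(np.sum(one_hot * np.log(probs), axis=1))

print(np.isclose(sparse, categorical))  # True
```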
Amazing
The ML MICE sklearn video is still pending, sir, please make that video. The other playlists are also very helpful, thanks for all the content.
I am enjoying your video like a web series sir
Great video sir as expected
great content
Great content!
🦸♂Thank you Bhaiya ...
Thanks sir
What is the difference between:
1) updating the weights and bias on each row, for all epochs, and
2) updating per batch (all rows together), for all epochs?
Can you give scenarios where one is better than the other?
+1
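These two schemes are stochastic (per-row) versus batch gradient descent. A toy sketch (made-up linear data, arbitrary hyperparameters) where the only difference is the batch size: per-row updates are noisy but cheap per step and can escape shallow local minima, while full-batch updates are smooth and stable but each step must see all the data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32,))
y = 3.0 * X + 1.0  # noiseless line: true w = 3, b = 1

def train(batch_size, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for i in range(0, len(X), batch_size):
            xb, yb = X[i:i + batch_size], y[i:i + batch_size]
            err = (w * xb + b) - yb
            # MSE gradient over this batch: batch_size=1 is stochastic
            # (per-row) GD, batch_size=len(X) is full-batch GD.
            w -= lr * np.mean(err * xb)
            b -= lr * np.mean(err)
    return w, b

w_sgd, b_sgd = train(batch_size=1)        # noisy path, many cheap steps
w_batch, b_batch = train(batch_size=32)   # smooth path, few costly steps
```

In practice, mini-batches (e.g. 32 rows per update) are the usual compromise between the two extremes.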
Sir, how should I study while making notes? I feel like things get jumbled in my brain.
Welcome Back Sir 🤟
Awesome
thanks sir
Awesome sir!
Can someone explain how 0.3, 0.6, 0.1 are obtained @ 52:37? I want to know how to get these values and which formula is used.
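Values like those come from applying the softmax function to the last layer's raw scores (logits). A sketch with invented logits (not the actual numbers behind the video's example):

```python
import numpy as np

def softmax(z):
    # Subtracting the max is for numerical stability; the result
    # is mathematically unchanged.
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([1.0, 2.0, 0.5])  # hypothetical raw scores for 3 classes
p = softmax(z)
print(p)        # three probabilities, largest for the largest logit
print(p.sum())  # sums to 1 (up to float rounding)
```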
22:25 unit^2
Thank you
43:32
cost function = (1/n) ∑ (loss function)
Sir, please carry on this series.
As usual, crystal clear explanation, Sir ji ❤❤🙌 @CampusX
Easy, thanks!
best
Please put timestamps for each topic in this video.
Thank you, sir, for this great content.
13/05/24
Please share the whiteboard @CampusX
Thank you!!!
Respect
Thoroughly enjoyed it!
9:38 this part was seriously killer
Great!
Why have you stopped posting videos in this playlist?
Creating the next one right now... backpropagation
@@campusx-official Please upload at least one video every 3-4 days to maintain continuity. By the way, this playlist is going to be a game changer for most learners, because comprehensive video content for deep learning is not available on YouTube!
Your method of teaching is very simple and understandable. Thank you for providing credible content!
Revising my concepts.
August 04, 2023 😅
❤
finished watching
Please take care of the background noise.
Isn't logloss convex?
Thank you sir for resuming
❤
Hi sir
I want a complete end-to-end project video. Please share it with me.
If you explain like this, I'll have to hit like, won't I...