This guy will become a brand in aiml
Thanks, sir ji, for such videos! Please start a BERT/transformer series!
I think you nailed it with the random forest analogy. That's just brilliant.
@Nitish Shing. I really loved all your videos and the way you explain the concepts. Looking forward to more videos of this nature in the future. I am learning AI through your videos. Thanks for the amazing work you are doing. Lots of support from Bhutan.
Amazing you are the king of deep learning
Brilliantly explained. May I please know what pen and drawing pad you use to write the mathematics and explanations? Thanks, and keep uploading videos; I am sure you will soon have millions of subscribers.
Amazing video again ...
Very good tutorial. Thank you very much for such a good tutorial.
very simple and precise explanation. Thanks
Thanks for this simple explanation.
Very well explained.
amazing video thank you, learned so much!
Wonderful videos all
finished watching
Amazing❤❤
In the last topic, on how prediction works, you said p = 0.25 means only one node will be ignored during training in every epoch (according to you). But I think if you want to drop only one node from a layer made up of 4 nodes, the p value should be 0.75, using this formula:
p = neurons to keep / total neurons
If I am wrong please correct me, and if we are both right then please let me know 🙂
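Both readings can be consistent, depending on the convention: in the original dropout paper p is the probability of *keeping* a node, while in Keras the `rate` argument of `Dropout` is the fraction *dropped*. A minimal NumPy sketch (not the video's code, just an illustration of the drop-rate convention) showing that a drop rate of 0.25 on a 4-node layer removes one node per step on average:

```python
import numpy as np

rng = np.random.default_rng(0)

drop_rate = 0.25           # probability each node is dropped (Keras convention)
keep_prob = 1 - drop_rate  # probability each node is kept (paper convention)

# One training step on a layer of 4 nodes: each node is dropped
# independently with probability 0.25.
activations = np.array([1.0, 2.0, 3.0, 4.0])
mask = rng.random(4) >= drop_rate     # True = keep the node
dropped_out = activations * mask

# Averaged over many steps, the number of dropped nodes per step
# approaches 4 * 0.25 = 1 (it is not exactly one node every step).
n_steps = 100_000
masks = rng.random((n_steps, 4)) >= drop_rate
avg_dropped = (~masks).sum(axis=1).mean()
print(round(avg_dropped, 2))  # close to 1.0
```

So "p = 0.25 drops about one node of four" and "keep probability 0.75" describe the same setup in two different conventions.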
hidden gem
sir you are a genius
Awesome teaching...
Thank You Sir.
I have a request. Can you please make a video about Monte Carlo (MC) Dropout too?
best video.
Great
If the weights were learned while nodes were available at a ratio of 0.75, then during testing we use those same weight values to predict the output. Why multiply by the availability ratio of 0.75 during testing?
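The reason is expectation matching: during training each node's output reached the next layer only 75% of the time, so at test time, when all nodes are present, multiplying by 0.75 keeps the expected input to the next layer the same as what the network saw during training. A rough NumPy illustration with made-up weights and activations (hypothetical values, not from the video):

```python
import numpy as np

rng = np.random.default_rng(42)

keep_prob = 0.75
w = np.array([0.5, -1.0, 2.0, 0.25])  # hypothetical trained weights
x = np.array([1.0, 1.0, 1.0, 1.0])    # hypothetical activations

# Training: each node survives with probability 0.75, so the average
# pre-activation over many steps is keep_prob * (w @ x), not w @ x.
n_steps = 200_000
masks = rng.random((n_steps, 4)) < keep_prob
train_avg = (masks * x * w).sum(axis=1).mean()

# Testing: all nodes are present, so we scale by keep_prob to match
# the expectation the next layer was trained against.
test_value = keep_prob * (w @ x)

print(round(train_avg, 2), round(test_value, 2))  # both close to 1.31
```

Modern frameworks usually use "inverted dropout" instead: they divide by the keep probability during training, so no scaling is needed at test time, but the expectation argument is the same.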
Thank you soo much Sir ....
Thanks for providing the paper
Great Video
Is it more accurate to say "for every combination of a forward and backward pass" instead of "for each epoch"? One epoch may consist of several mini-batches. Also, is it common to apply dropout to the input layer? Thanks...
I guess we usually don't apply dropout to the input layer
yea you are right
addicted
Hello Sir, you mentioned in the video that the probability of each weight being present during testing will be 0.75, given that p is 0.25. But since the nodes to drop out are selected randomly, isn't it possible in the worst case that one node is present all the time, e.g. that a node was present for all 100 epochs?
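That worst case is indeed possible, since each step draws a fresh independent mask, but its probability shrinks geometrically: with keep probability 0.75, one specific node survives 100 consecutive steps with probability 0.75^100, roughly 3e-13. A quick NumPy check (assuming one mask per step, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

keep_prob = 0.75
n_steps = 100

# Probability that one particular node survives all 100 steps.
p_always_present = keep_prob ** n_steps
print(p_always_present)  # on the order of 3e-13

# Simulate: over many independent 100-step runs, count how often a
# given node was never dropped even once.
runs = 10_000
survived_all = int((rng.random((runs, n_steps)) < keep_prob).all(axis=1).sum())
print(survived_all)  # effectively always 0
```

So the 0.75 at test time is about the expectation over masks, not a guarantee about any single training run, and the pathological run is astronomically unlikely.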
Thanks awesome
Sir pls send the notes link
sir please share the one note link for notes
Sir how do you make these amazing tutorials? What do you use to write on the screen? Just curious
With a pen tablet
Off-topic: do you have any connection with the Bengali language? I am a Bengali and, for some reason, I think you do. Just curious. :)
💚
1st
Good
Revising concepts.
August 15, 2023😅
finished watching
now sleep sweet dreams.
@@umersiddiqui9580 with excitement