If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.
Excellent video! Bro, why do we have multiple neurons in every hidden layer? Is it from the point of view of introducing non-linearity?
@@arpit743 Yes, but not entirely. Multiple neurons allow us to capture complicated patterns. A single neuron won’t be able to capture complicated patterns from the dataset.
@@CodingLane Thanks a lot! But why is it that multiple neurons allow for complicated boundaries?
@@arpit743 More refined outputs let you see the limitations of your network's decision boundaries, so you can pinpoint exactly where it goes wrong and correct it as needed. It doesn't so much allow for complicated boundaries as it allows you to SEE your complicated boundaries, and hence work through them.
Sir, there is a mistake at timestamp 0:41: a2[1] is wrong. The second activation you wrote should be A2[2], because the values of a you multiply with the weights in the weighted sum come from the first hidden layer, and they are used to find the second hidden layer's value A2[2], not a2[1]. But you really teach great. Thank you...
Thanks a lot for This Amazing Introductory Lecture 😊
Lecture - 2 Completed from This Neural Network Playlist
This was more helpful than my lectures!
so glad I found this channel!!
Thank you! I appreciate your support 😇
Absolutely loved the way you explain. So easy to understand. Thank you
Very informative video. Explained all the terms in a simple manner. Thanks a lot.
At 0:58, in a1[1] = activation(....), the last term of the sum should be W13[1]*a3[0], not W13[1]*a3[1].
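That single-neuron equation can be sanity-checked with a tiny sketch. This is a minimal NumPy illustration (the weights, inputs, bias, and the choice of sigmoid are all assumptions for the example, not values from the video): every weight in the first hidden layer multiplies a layer-0 activation.

```python
import numpy as np

def sigmoid(z):
    # Standard logistic activation
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 3-input layer: a0 holds a1[0], a2[0], a3[0]
a0 = np.array([0.5, -0.2, 0.1])
# First row of W[1]: weights W11[1], W12[1], W13[1] feeding neuron 1 of layer 1
w_row1 = np.array([0.4, 0.7, -0.3])
b1_1 = 0.1  # bias for that neuron

# a1[1] = activation(W11[1]*a1[0] + W12[1]*a2[0] + W13[1]*a3[0] + b1[1])
z1_1 = w_row1 @ a0 + b1_1
a1_1 = sigmoid(z1_1)
```

Note that every `a` on the right-hand side carries the layer-0 superscript, which is exactly the correction in the comment above.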
best video on youtube for this topic
Thank you so much. Much appreciate your comment! 🙂
best explanation, best playlists
I don't usually interact with the algorithm by giving likes or dropping comments, but you beat me into submission with this. Hopefully I understand the rest of it too lol.
Super sir. I have learned more information from this and also calculation way. It's very useful to our study. Thank you sir
Happy to help!
You are great. It will be very good if you continue.
Thank you for your support! I will surely continue making more videos.
Best explanation I've seen so far
Your videos on neural networks are really good. Can you please also upload videos for generalized neural networks too, that would really be helpful P.S Keep Up the good work!!!
Thank you so much for your feedback. I will surely consider making videos on generalized neural networks.
This is so well explained.. thank you
Very helpful and to the point and correct!
I'm a bit confused by the exponent notations, since some of them don't correspond to the others.
Nicely explained. Keep up the good job!
Isn't the equation Z = W.X + B = transpose(W)*X + B? Hence the weight matrix you have given is wrong, right?
Hi... I have taken the shape of W as (n_h, n_x). Thus equation will be Z = W.X + B. But if you take W as (n_x, n_h), then equation of Z = transpose(W).X + B.
Both represent the same thing. Hope it helps you.
@@CodingLane Thanks for the quick clarification. Makes sense now. Keep up the great work!!
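The two shape conventions discussed above can be checked in a few lines. This is a hedged NumPy sketch (sizes and random values are illustrative assumptions): storing W as (n_h, n_x) gives Z = W.X + B, while storing it as (n_x, n_h) requires the transpose, and both produce the same Z.

```python
import numpy as np

n_x, n_h = 3, 2  # input size, hidden size (illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(n_x, 1))
b = np.zeros((n_h, 1))

# Convention used in the video: W has shape (n_h, n_x)
W = rng.normal(size=(n_h, n_x))
Z_a = W @ X + b

# Alternative convention: W stored as (n_x, n_h), so transpose first
W_alt = W.T              # shape (n_x, n_h)
Z_b = W_alt.T @ X + b

# Both conventions yield the same pre-activation Z
assert np.allclose(Z_a, Z_b)
```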
Great video, and great explanation thanks dude!
You're welcome!
Your videos are very helpful. It would be great if you sorted the videos in the playlist. Thank you😇😇😇
great video,
Please also make a video on SVM as soon as possible
Okay, sure! Thank you so much for your suggestion. I have been asked a lot to make a video on SVM, so I will try to make it just after finishing this Neural Network playlist.
Fantastic explanation. Thank you
Good Explanation !!
Thank you!
Brother, your explanation was great, but there are some mistakes I have pointed out.
Amazing work, keep it going :)
Thank You!
Super Bro❤❤❤❤
Are B1 and B2 initialized randomly too?
Small doubt: what is f(z1)? I am assuming these are just different types of activation functions, where the input is the current layer's weights times the inputs from the previous layer. Is that correct?
Yes, correct… but do check the equations carefully. They include a bias term as well.
@@CodingLane Thanks for your prompt response.
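The exchange above can be sketched in a few lines. This is a minimal NumPy illustration (ReLU and all numeric values are assumptions for the example): f is the activation function, and the bias b is part of the weighted sum z, which is the point the reply emphasizes.

```python
import numpy as np

def relu(z):
    # One common choice for the non-linear activation f
    return np.maximum(0, z)

# Activations from the previous layer (illustrative values)
a_prev = np.array([[0.2], [0.8]])
W = np.array([[0.5, -1.0],
              [1.5,  0.3]])    # weights of the current layer
b = np.array([[0.1], [-0.2]])  # bias terms: easy to forget, but part of z

z = W @ a_prev + b   # weighted sum of previous activations, plus bias
a = relu(z)          # f(z): the current layer's output
```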
Sir, it's W¹¹[¹] * a⁰[1], right? You've written it as W¹¹[¹] * a¹[1] in the matrix multiplication. Can you verify whether I'm wrong?
Yes… there is a typo error
great video
Can A* actually be Z*, e.g. A1 = Z1?
No, we need to apply a non-linear activation function. So A1 must be = some_non_linear_function(Z1)
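The reply above can be demonstrated numerically. This is a hedged NumPy sketch (shapes, seed, and the tanh choice are illustrative assumptions): if A1 were just Z1, two layers would collapse into a single linear map, so the non-linearity is what gives the extra layer any power.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 4))      # 4 samples, 3 features (illustrative)
W1 = rng.normal(size=(5, 3))
W2 = rng.normal(size=(2, 5))

# If A1 = Z1 (no non-linearity), two layers collapse into one linear map:
A1_linear = W1 @ X
out_two_layers = W2 @ A1_linear
out_one_layer = (W2 @ W1) @ X    # a single equivalent layer
assert np.allclose(out_two_layers, out_one_layer)

# With a non-linearity such as tanh, the collapse no longer holds:
A1 = np.tanh(W1 @ X)
out_nonlinear = W2 @ A1
assert not np.allclose(out_nonlinear, out_one_layer)
```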
hi, how to calculate the cost?
You will get all the information in upcoming videos that I have already uploaded in this series.
If you still have questions, then you can write me mail on : codeboosterjp@gmail.com
Please share code algorithm backpropagate
let bro cook
Extremely confusing tutorial, and there's a mistake at 5:04. This should be A[3]⁰, not A[3]¹.
This video should be titled "Explain Forward and Backward Propagation to Me Like I'm Five". Thanks man, you saved me a lot of time.
One of the Best Comments I have seen. Thank you so much! And thanks for the title idea 😂😄
I've always felt as if I was on the cusp of understanding neural nets but this video brought me past the hump and explained it perfectly! Thank you so much!
I am really elated hearing this. Glad if helped you out. Thank you so much for your appreciation. 🙂
You drop something ... 👑
haha.. what is it? Thanks btw
you explained in very clear and easy ways. Thank you, this is so helpful!
You're welcome!
wait you haven't explained backpropagation at all
Literally best. Crisp and clear!! Thank you
hi can you put caption option
Hi.. somehow captions were not generated for this video. All my other videos do have captions. I will change the settings to bring captions to this video as well. Thanks for bringing this to my attention.
Lord Jay Patel
Why no subtitles?
what is this B1
Good job. But in gradient descent, W2 and W1 must be updated simultaneously.
Thank you! Yes they should be updated simultaneously.
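The point in this exchange can be made concrete with a short sketch. This is a hedged NumPy illustration (the learning rate and gradient values are assumptions for the example): both updates are computed from the current parameters before either weight matrix is overwritten, which is what "updated simultaneously" means in practice.

```python
import numpy as np

def gradient_step(W1, W2, dW1, dW2, lr=0.1):
    # Compute both updates from the *current* parameters, then assign.
    # Overwriting W1 before computing W2's update would be wrong if the
    # gradients depended on W1.
    new_W1 = W1 - lr * dW1
    new_W2 = W2 - lr * dW2
    return new_W1, new_W2

W1 = np.ones((2, 3))
W2 = np.ones((1, 2))
dW1 = np.full((2, 3), 0.5)   # illustrative gradients
dW2 = np.full((1, 2), 2.0)
W1, W2 = gradient_step(W1, W2, dW1, dW2)
```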
Where did the algorithm that calculates the next W at 5:30 come from? I know it is intuitive, but does it have something to do with Euler's method? Or another one?
Thank you so much for these incredible videos
Top Class Explanation!
Glad it was helpful!
This was actually pretty straightforward
Glad if it helped you!
You explain better than the popular course instructors on deep learning
Thanks for the compliment 😇
I am sure he is talking about Andrew Ng Lol. His explanation on that video is too detailed and the notations are too confusing lol. But the same explanation in his Machine Learning Specialization course is much better.
Thank you sir, it was really helpful
You're welcome!
Thanks man. The slides were amazingly put up.
Thank you so much!
Awesome, really helpful! Thank you
You're welcome!
Such a simple and neat explanation.
Thank you!
Excellent explanation jazakallah bro
great video as always
Thank You soo much !!!
you are really awesome. love your teaching ability
Thank you so much !
@@CodingLane, you are most welcome bro. Please make a video on the implementation of Multiclass Logistic Regression using the One-vs-All/One-vs-One method.
@@mdtufajjalhossain1246 Okay! Thanks for suggesting!