#1 Solved Example Back Propagation Algorithm Multi-Layer Perceptron Network by Dr. Mahesh Huddar
- Published 1 Jul 2024
- #1 Solved Example Back Propagation Algorithm Multi-Layer Perceptron Network Machine Learning by Dr. Mahesh Huddar
Back Propagation Algorithm: • Back Propagation Algor...
Derivation of Back Propagation Algorithm: • Derivation of Back Pro...
#1 Solved Example Back Propagation Algorithm: • #1 Solved Example Back...
#2 Solved Example Back Propagation Algorithm: • #2. Solved Example Bac...
#3 Solved Example Back Propagation Algorithm: • #3. Backpropagation So...
#4 Solved Example Back Propagation Algorithm: • Backpropagation Solved...
Back Propagation Algorithm with bipolar weights: • 16. Update weights usi...
Multi-Layer Perceptron Learning Solved Example: • Solved Example Multi-L...
Multi-Layer Perceptron Learning: • Multi-Layer Perceptron...
Gradient Descent Algorithm: • 2. Gradient Descent Al...
Gradient Descent and Delta Rule: • 1. Gradient Descent | ...
The following concepts are discussed:
______________________________
Solved Example Back Propagation Algorithm,
Back Propagation Algorithm Solved Example,
Back Propagation Algorithm,
Multi-Layer Perceptron Network,
Back Propagation Algorithm Machine Learning,
Back Propagation Algorithm Multi-Layer Perceptron Network
Machine Learning - • Machine Learning
Big Data Analysis - • Big Data Analytics
Data Science and Machine Learning - • Machine Learning
Python Tutorial - • Python Application Pro...
********************************
1. Blog / Website: www.vtupulse.com/
2. Like Facebook Page: / vtupulse
3. Follow us on Instagram: / vtupulse
4. Like, Share, Subscribe, and Don't forget to press the bell ICON for regular updates
Man, I looked at many backpropagation explanations and most of them were telling rocket-science stuff. This is the easiest and clearest one ever. Thanks for sharing.
Sir, you deserve a big thanks. My teacher gave me an assignment and I was searching on YouTube for 2 days for the weight calculation. Finally your video has done the work. It was really satisfying. Thank you, sir.
Welcome
Do like share and subscribe
Damn sir, you are the best YouTube teacher in AI. Love you, sir.
Thank you. I've been trying to implement a reinforcement algorithm from scratch. I understood everything except backpropagation, and every video on it that I've watched has been vague until I saw this one. Good stuff!
Welcome
Do like share and subscribe
This makes the math very clear. I now know the math and have some intuition, so I hope to fully connect the two soon. Thanks for the great video!
Thank You
Do like share and subscribe
It looks like there is an error in the ∆wji notation; you followed just the opposite convention.
I have an Exam in 2 days and your videos just saved me from failing this module.
Thank you so much and much love from 🇩🇿🇩🇿.
Welcome
Do like share and subscribe
I got a crystal-clear understanding of this concept only because of you, sir. The flow of the video is excellent; I appreciate your efforts!! Thank you and keep up the good work!!
Welcome
Do like share and subscribe
Very clear! How about bias b? What is the formula in case we add a bias?
Simple lucid example illustrated. Please continue.
Thank You
Guys! The threshold or bias is a tuning parameter: you can select something low like 0.01 or 0.02, or high like 0.2, and check whether the error is getting lower. I hope this will help you.
Best video explanation on ANN back propagation. Many thanks sir
Thank You
Do like share and subscribe
how did you update the weights of connections connecting input layer and the hidden layer?
Great, sir. It is a very clear example of how to calculate an ANN. Thanks, keep being productive.
Thank You
Do like share and subscribe
Thank you so much. This video came right when I was feeling bad at machine learning and wanted to give up, but now I think it is not as hard as I thought.
Welcome
Do like share and subscribe
Mr.Huddar, thanks a lot for perfect explanation. One thing though, how do I calculate the change of bias term for each neuron in my neural network?
Very clear. Thanks for uploading this video.
Welcome
Do like share and subscribe
You taught Very good. Today is my exam. Your videos were really helpful. I hope I pass well without getting a backlog in this subject. 👍👍
Welcome
Do like share and subscribe
you are a life saver, thank you soooo much.
Thank You
Do like share and subscribe
Amazing stuff just to the point and clear.
Thank You
Do like share and subscribe
very clear thank you for the content
Welcome
Do like share and subscribe
Good job, but why not account for the bias term before applying the sigmoid function?
Thank you, sir, for explaining each and every point.
Welcome
Do like share and Subscribe
Great work Dr. Mahesh. Thanks from Pakistan.
Welcome
Do like share and subscribe
Easy to understand. Please make the next one on CNN.
Awesome explanation.
Thanks
Welcome
Do like share and subscribe
Nice video, easily understood the topic, thank you
Welcome
Do like share and subscribe
I believe for hidden units, the w_kj in delta(j) formula should have been w_jk. Namely, other way around.
And delta(w_ji) should have been delta(w_ij), again the other way around.
yeh i was about to comment the same thing
It's an awesome explanation, sir. No words to thank you, sir.
You are most welcome
Do like share and subscribe
Can you please do videos on CNN with the mathematical concepts? Your videos are very useful and understandable. Thank you.
You saved me you're a hero thank you
Welcome
Do like share and subscribe
Awesome explanation! Thanks a lot!
Thank You
Do like share and subscribe
Thank you sir, all of you machine learning videos have helped us students a lot
Welcome
Do like share and subscribe
@@MaheshHuddar Sir Can you solve problems on HMM , CNN pls
Last night!!
Today is the exam. Well explained, boss.
thank you! understood the concept smoothly with your video!
Thank You
Do like share and subscribe
@@MaheshHuddar Did that. A question though: for the forward pass, what about biases? They are values with their own weights, right? Were they just not included for this example?
ah nevermind, got it from your next video in this series lol
Not bad, sir!
Thank you so much, sir. These videos gave us a clear understanding of all the machine learning concepts.
Welcome
Do like share and subscribe
Thank you!
Welcome
Do like share and subscribe
Sir, thank you for this great video! It was really helpful. I appreciate the clear explanation
Glad it was helpful!
Do like share and subscribe
GOATED explaining, awesome, awesome!
Very useful, thanks!
Welcome
Do like share and subscribe
Very Well Explained ...Keep up the Good Work
Thank You
Do like share and subscribe
thanks a lot for your explanation!
Welcome
Do like share and subscribe
Thank you so much! You saved me. I subscribed. Thanks
Welcome
Please do like share
Clear explanation. Recommended.
Thank You
Do like share and subscribe
Finally one that makes sense
Thank You
Do like share and subscribe
Today is my exam. Again, well explained, sir.
Concept is clear , i got confidence in this concept sir,thank you👍👍👍👍
Welcome
Do like share and subscribe
@@MaheshHuddar sir can you provide videos in gaussian process in machine learning
How are you deriving the delta_j formula? You could include the derivation of the sigmoid function.
Thank you!
Welcome
Do like share and subscribe
THANK YOU SIR, BRILLIANT INDIAN MIND
Welcome
Do like share and subscribe
Good stuff, thank you.
Welcome
Do like share and subscribe
thank u sir
Welcome
Do like share and subscribe
awesome.
Thanks!
Do like share and subscribe
thank you sir.
Most welcome
Do like share and subscribe
Thanks a lot Sir!!!
Welcome
Do like share and subscribe
Thank you so much sir
Welcome
Do like share and subscribe
Nice video, sir. Where is the bias here?
Great video, easy to understand
Thank You
Do like share and subscribe
Thanks for solution
Welcome
Do like share and subscribe
@@MaheshHuddar sure. Thanks 😊
Thanks a lot sir 🙏🙏🙏🙏🙏🙏🙏
Most welcome
Do like share and subscribe
thank you
Welcome
Do like share and subscribe
You are a king, sir. Thank you for saving me from my exam tomorrow.
Welcome
Do like share and subscribe
All the very best for your exams
thank you man
Welcome
Do like share and subscribe
thank you sir
Most welcome
Do like share and subscribe
Thank you
Welcome
Do like share and subscribe
Very helpful
Do like share and subscribe
In one epoch, how many times does backpropagation take place?
youre a legend
Thank You
Do like share and subscribe
you are a legend
Welcome
Do like share and subscribe
And where is the bias b?
There should be some constant also, right?
Soooooo Good❤
Thank You
Do like share and subscribe
YOU ARE THE BEST
well done sir
Thank You
Do like share and subscribe
What about the bias terms ?
How do you update the weights if you have more input data? In this case he only has 1 input; how do you do it with 2 inputs? Do you do the same twice?
Everyone says "thank you", but only a few understand that this video is useless if there are more neurons on one layer. Those who say "thank you" do not even plan to build a neural network.
👍👍👍👍👍👍👍👍
Thank you! This helped me a lot!
Welcome
Do like share and subscribe
The formulas for delta w only work because of the nature of the activation function, right? If it is a hyperbolic tangent or ReLU, the formulas change, right?
yes
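The reply above is right: the Δw formulas in the video fold in the sigmoid's derivative, so a different activation changes the delta terms. A minimal Python sketch of the two derivatives (the function names and test values are illustrative, not from the video):

```python
import math

def sigmoid(net):
    """Logistic activation, as used in the video."""
    return 1.0 / (1.0 + math.exp(-net))

def sigmoid_derivative(net):
    """f'(net) = f(net) * (1 - f(net)) -- the term inside the video's deltas."""
    out = sigmoid(net)
    return out * (1.0 - out)

def relu_derivative(net):
    """ReLU derivative: 1 for positive net input, 0 otherwise."""
    return 1.0 if net > 0 else 0.0

print(sigmoid_derivative(0.0))   # 0.25
print(relu_derivative(0.5))      # 1.0
print(relu_derivative(-0.5))     # 0.0
```

Swapping the activation only swaps this derivative term; the rest of the backward pass is unchanged.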
Thanks for the video, sir. I have a doubt. How did you update weights without Gradient Descent (GD) or any other optimization technique, sir? Because I read in blogs that Networks don't get trained without GD and by only using backpropagation. In other words, my doubt is how does the calculation change if we also implemented GD in this? I'm a rookie; kindly guide me, sir.
Gradient descent has been used in this video while updating the weights: the change in weights is the gradient-descent step. He just has not written out the derivative math.
How did he get y_target = 0.5? @@adityachalla7677
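As the reply above notes, the weight update itself is the gradient-descent step, Δw = η · δ · x. A minimal sketch with made-up illustrative values (η, δ, x, and the old weight are assumptions, not numbers from the video):

```python
eta = 0.25       # learning rate (assumed)
delta_j = 0.1    # error term of the downstream unit (assumed)
x_i = 0.6        # output of the upstream unit feeding this weight (assumed)
w_old = 0.4      # current weight (assumed)

# Gradient-descent update: the change in weight is proportional to the
# error term times the input that flowed through this connection.
delta_w = eta * delta_j * x_i
w_new = w_old + delta_w

print(round(delta_w, 4))  # 0.015
print(round(w_new, 4))    # 0.415
```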
good video
Thank You
Do like share and subscribe
I have a question: what if we have a bias term and some bias weights? Do we need to account for those, or would they be 0?
Yes, you have to consider them.
Follow this video: th-cam.com/video/n2L1J5JYgUk/w-d-xo.html
From which book have you taken the problem?
Can you explain this?
The input of a neuron is 0.377.
The output of the same neuron is 0.5932.
How does this happen?
How does 2.71^(-0.377) give the answer 0.6867?
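The question above can be checked numerically: the output is the sigmoid of the net input, 1/(1 + e^(−0.377)), and 2.71^(−0.377) ≈ 0.686 is just the intermediate e^(−0.377) term, not the output itself. A quick check (the video's 0.5932 agrees up to rounding):

```python
import math

net = 0.377                 # net input to the neuron
e_term = math.exp(-net)     # e^(-0.377), the commenter's "0.6867" up to rounding
out = 1.0 / (1.0 + e_term)  # sigmoid output

print(round(e_term, 4), round(out, 4))  # 0.6859 0.5931
```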
GODDD!!!!!!!!!!!!!!!!!!!
Thank You
Do like share and subscribe
txs
Welcome
Do like share and subscribe
I have a doubt. In many places, I have seen that the error calculation is done using the formula E = 1/2 (y - y*)^2, but you have calculated by using subtraction. Which is correct?
same doubt .. what is the correct method?
@@keertichauhan6221 I think there are different methods to compute the error. The one that this guy above has mentioned is mean square error. The one shown in the video is also correct, but using MSE or RMSE, are generally regarded as better measures.
For multiple data points, you use the error function; for a single output, you use the loss function.
Loss function: error = actual − target.
Error function: E = 1/2 Σ (y_actual − y_target)².
The errors are made positive by squaring.
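The distinction in the thread above can be shown in a few lines; the numbers are illustrative, not taken from the video:

```python
y_actual, y_target = 0.5932, 0.5   # illustrative output and target (assumed)

# Plain difference, as the video uses for a single output:
loss = y_actual - y_target

# Squared-error form, common in textbooks; the 1/2 is there so the 2 from
# the derivative cancels during backpropagation:
error = 0.5 * (y_actual - y_target) ** 2

print(round(loss, 4))      # 0.0932
print(round(error, 6))     # 0.004343
```

Both lead to the same weight-update direction for a single output; the squared form is the one that generalizes to summing over many outputs.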
I have a confusion. We use ReLU on the hidden layer, not sigmoid. Shouldn't we calculate the hidden layer's activation using ReLU instead of sigmoid?
Yes, you have to.
Do the calculation based on the activation function.
@Mahesh Huddar I know. You have used the sigmoid function on the hidden layer. This will result in an error.
🙏🙏🙏🙏🙏🙏
Do like share and subscribe
What about the bias factor??
He did not add it to keep it simple i guess.
But you can add the bias by making it a third input. Then the method doesn't change; you just have a new (constant) input for each layer, normally set to 1, and the associated weight acts as your bias, since 1 * biasWeight = biasWeight. I just added 1 to my input vector and generated an additional weight for the weight matrix. But I'm also just learning and not sure if I'm 100% correct.
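The constant-input trick described above can be sketched in a few lines; all numbers here are made up for illustration:

```python
# Bias as an extra constant input: append 1.0 to the input vector and a
# bias weight to the weight vector; the weighted sum then includes the bias.
inputs = [0.35, 0.9]    # illustrative inputs (assumed)
weights = [0.1, 0.8]    # illustrative weights (assumed)
bias_weight = 0.2       # acts as the bias, since 1.0 * bias_weight = bias_weight

inputs_with_bias = inputs + [1.0]
weights_with_bias = weights + [bias_weight]

# Net input = w1*x1 + w2*x2 + bias_weight
net = sum(x * w for x, w in zip(inputs_with_bias, weights_with_bias))
print(round(net, 4))  # 0.955
```

During backpropagation the bias weight is then updated exactly like any other weight, with its "input" fixed at 1.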
Hello, thanks for your lecture. Please let me ask: after we calculate delta_5 in backpropagation, why don't we calculate the new w35 and w45 and then substitute them into the formulas for delta_3 and delta_4? Instead we have to use the old w35 and w45. Please reply; have a nice day.
The error was due to the old weights; hence compute the deltas and updates using the old weights.
@@MaheshHuddar Thank you so much!!
Sir, please upload the gradient descent videos a bit faster.
Videos on Gradient Decent:
th-cam.com/video/ktGm0WCoQOg/w-d-xo.html
th-cam.com/video/5hB4_8o34GU/w-d-xo.html
th-cam.com/video/ibKP0nIT7YU/w-d-xo.html
based
Based...?
sir, also each perceptron has bias with it right ?
Yes
Follow this video for bias: th-cam.com/video/n2L1J5JYgUk/w-d-xo.html
Thank u sir a lot!! @@MaheshHuddar
Sir how to update bias in back propagation
Refer this video: th-cam.com/video/n2L1J5JYgUk/w-d-xo.html
Great
Thank You
Do like share and subscribe
@@MaheshHuddar done 👍
At what stage do we have to stop the forward and backward passes? Until the error becomes zero?
This is specified as the number of epochs and must be given in a question. There is no universal stopping condition; it really depends on your needs.
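The reply above can be sketched as a training loop with the two common stopping criteria: a fixed epoch budget or an error tolerance. The error decrease here is a stand-in for an actual forward/backward pass, and all values are illustrative:

```python
max_epochs = 100   # fixed epoch budget (assumed)
tolerance = 1e-3   # stop early once the error is small enough (assumed)

error = 0.0932     # starting error (assumed)
for epoch in range(max_epochs):
    # ... one forward pass and one backward pass would go here ...
    error *= 0.9   # stand-in for the error actually decreasing each epoch
    if error < tolerance:
        break      # early stop: error tolerance reached

print(epoch + 1)   # number of epochs actually run
```

In an exam question the epoch count is given; in practice one usually stops on a validation-error criterion rather than waiting for zero error.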
what about bias
It is assumed to be zero here
It will be a3, a4, and a5 instead of a1, a2, a3 while finding the forward pass.
The notations of delta w are also wrong; he mixed up the notations.
But honestly, the teaching is good.
Hi, can I contact you? I need to solve my assignment with 2D vector data and a mixture model; I need to perform EBP. Can you please tell me how I can contact you?
#Mahesh Huddar, don't we need bias? Just curious.
Follow this video
th-cam.com/video/n2L1J5JYgUk/w-d-xo.html
@@MaheshHuddar @6:31 min Just curious, can you share the web page for these formulae?
mahesh daale