wow, wow...I'm only five minutes into this video and I am in awe of how you explain this stuff; you are a great teacher; thanks so much and I will definitely start watching more of your other videos; be safe my friend.
I'm pursuing a course in AI/ML but didn't understand this concept there. Then I started this video and the idea became clear.
Thanks Krish sir, you are really amazing.
Hello sir,
I'm a .NET dev, on the path to pursue my dream, i.e. machine learning.
I'm amazed by this explanation. Thanks a lot. Will regularly watch your videos now. These videos are super crisp.
I have been waiting for this video.
I didn't understand it from other videos; your explanation is really clear, crisp, and easy to understand.
I've never seen such a clear explanation. I have seen all your videos. Very good. Please keep going. We need your support. Thank you.
Thanks Krish... I requested you to make this video and within two days you accepted my request... Once again, thanks a lot for making this video...
I really mean it, sir. Your session is simply super and your way of speaking is very engaging
This is the best explanation I've come across to date on SVM. Thank you for putting together this video and explaining the concepts clearly and concisely. Appreciate it!
By far the best explanation I have come across till date for SVM. Would be brilliant if I could learn in a class from you sir.
Krish naik sir, you are an excellent teacher. You are transforming the teaching industry and how a topic should be taught and its relevance in industry.
You're such a great teacher, thanks a lot, and please upload the remaining part soon, I can't wait. God bless you
Best explanation on a YouTube channel, I'm speechless sir... Thanks sir
Krish always coming through with the best explanations! Thank you!
Sir, you made me think that maybe I can do ML.
I am a full-stack developer and I was really frustrated with the math behind ML, but you made it learnable.
Bad instructors and teachers should learn from this guy, my homie.
Thank you sir
After watching the MIT video on this topic, I can say that you deserve to be at MIT
What are you talking about... Mr. Krish is explaining it in layman's terms... he is not at all talking about the optimization of the hyperplane
I bought a bunch of books on ML and SVM, but they all had difficult words and equations in them. Krish made it so simple that I wish I had found this tutorial earlier.
Your teaching makes my understanding clearer. Thank you sir.
One thing I wonder about Krish sir is: where does the energy he has while teaching come from? He is a next-level teacher
You are the best person for ML and DS on YouTube
This is the best explanation I found regarding SVMs and Kernels.
One of the best channels for data science!
You have a unique style of teaching
The best explanation of SVM, appreciate your help
Hi, in the SVM model there is something called the soft margin, and you didn't talk about that concept in this tutorial. Hope you will in the future. Your videos are awesome. Keep up the good work Krish.
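For anyone curious, the soft-margin idea mentioned above can be sketched in a few lines with scikit-learn (illustrative only; the dataset and parameter values are made up): the `C` parameter decides how strongly margin violations are penalized, so a small `C` gives a wider, more forgiving margin:

```python
# Soft margin in scikit-learn: C controls how strongly misclassified
# or margin-violating points are penalized. Small C -> wider, more
# forgiving margin; large C -> stricter, close to a hard margin.
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

# Two overlapping clusters, so a hard margin is impossible
X, y = make_blobs(n_samples=100, centers=2, cluster_std=2.5, random_state=0)

soft = SVC(kernel="linear", C=0.01).fit(X, y)    # tolerant of violations
hard = SVC(kernel="linear", C=1000.0).fit(X, y)  # nearly hard margin

# A softer margin typically leans on more support vectors
print(len(soft.support_), len(hard.support_))
```

The softer model's wider margin pulls in more points as support vectors, which is exactly the trade-off the soft margin exists to tune.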
I just had to pause the video and thank you for your wonderful teaching.. really can't thank you enough
Good explanation for a newbie like me.
Thanks for summarising all the points covered at the end
Thanks very much, waiting for the next part. The explanation is crisp and clear
This is phenomenal, you are really helping so many people with these kinds of videos, especially me haha. Keep up the great work man, really
Hi.. I think kernel SVM tries to transform points from SOME dimensions to OTHER dimensions where the points are linearly separable, or almost separable, but not necessarily from lower to higher dimensions. We try different functions... Sometimes in the real world we can't visualize the data, so we believe that in SOME dimensions the data might be separable... or might not be. If it gives good results according to the metrics we have chosen, that's nice; else we go for other algorithms. Thanks for making this algorithm simple. Nice explanation brother ...👌👌👌
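The kernel point above can be demonstrated with a small sketch (scikit-learn, not from the video; the concentric-circles dataset is a toy example): the circles are not linearly separable in 2-D, but the RBF kernel's implicit transformation finds a space where they are:

```python
# Kernel SVM: data that is not linearly separable in its original
# space (concentric circles) becomes separable after the RBF kernel's
# implicit mapping into a different feature space.
from sklearn.svm import SVC
from sklearn.datasets import make_circles

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)

# The linear SVM is near chance; the RBF SVM separates the classes
print(linear.score(X, y), rbf.score(X, y))
```

If the RBF kernel did not give good results here, we would try other kernels or other algorithms, exactly as the comment describes.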
Super, sir. I never expected the SVM concept to be explained this way.
Your videos are always easy to understand
This is one of the best explanations I have seen !
You have explained the solution, but not the problem or why this approach is used.
Wow, this is very impactful, you are a good teacher. Keep it up
Thank you so much sir.. i was waiting for your lecture on SVM for so long.. waiting for the next parts too.. 🙏
Amazing explanation, thank you very much for sharing this kind of knowledge
Fun Fact: You have used "Particular" word for 96 times. Getting close to a Century!! :) :)
Good work by the way. Very helpful
Thanks for counting :).. but I think you should focus on the class :)
The best explanation I've come so far! Thank you!
wonderful explanation...from Pakistan
Great Explanation
You are not the best, You are best of all the bests. ;)
Thank you so much for the clear & concise explanation
Sir has good teaching skills
Very good explanation. Understood.
You unraveled the whole thread for us, sir ❤️
Salute to you, sir!! Now I just want to be a guide like you, sir.. how do you make our studies so easy!!😍😍😍😄😄
Very good explanation. Thank you.
Thanks for this video krish naik sir😍
short and to the point...Loved it
You are a legend, bro!
Nice explanation, I especially came for the mathematical part..
waooo........ great explanation
You are really good Krish
You helped me learn by understanding, not by rote
Thank you! Very well explained!
Excellent. Superb explanation
Thanks a lot sir, I am waiting for part 2 so please do as soon as possible
May I know where have you learned all this from with so good intuition? I've heard that you're a self-taught data scientist. Please shed some light on this.
After watching Rohit (Koi Mil Gaya), Krish sir got inspired, learnt on his own, and became a Data Scientist. Hope this answer is useful to you. Thanks.
@@anythingit2399 lmao
Why do you need to know, you crazy p.. ka
May Allah bless you with what you want, need, or just think.... ❤❤❤❤
Thank you very much for clear explanation !!!
Awesome way of teaching sir.. looking forward to more sessions.
Thank you krish sir for this❤
Thank you. SO simple and understandable
Excellent....👏👏
That's awesome, clearly explained ✌️
nice video. have a nice day sir
Very Clearly explained
Going to higher dimensions increases the volume of the space exponentially while the number of points stays the same, so your search space has grown. So it's better to project back down using methods like PCA (dimensionality reduction)
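That projection step can be sketched in a few lines (scikit-learn, purely illustrative; the 10-D data here is synthetic): points that really live on a 2-D plane embedded in 10 dimensions lose almost nothing when PCA projects them back down:

```python
# Projecting high-dimensional points back down with PCA: keep only
# the components that explain most of the variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 100 points that truly live on a 2-D plane, embedded in 10-D space
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 10))

pca = PCA(n_components=2).fit(X)
X_reduced = pca.transform(X)

print(X_reduced.shape)                      # back to 100 points in 2-D
print(pca.explained_variance_ratio_.sum())  # nearly all variance kept
```

Because the data is exactly rank 2, two components recover essentially all the variance; real data usually needs a judgment call on how many components to keep.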
Thanks sir for making such videos
Hey! I'm Alexander.. I have seen most of your tutorials. They are good, with very nice explanations.. Please post tutorials related to ANNs and applications of this machine learning algorithm to different remote sensing data
Very nice video, but I have a few doubts:
1. What is the actual significance of support vectors? Are they just the data points which help create the margins, and is that their only role?
2. The hyperplane (the ML model) can classify data into two labels based on whether a point lies above or below the hyperplane. So what role do the margins play? In any case the data will be classified by its position above or below the hyperplane; the hyperplane plays the critical role.
3. How does increasing the margin reduce the generalization error?
Thanks for the video....it really helps a lot.. :)
wonderful, got a like and subscription
very nice explanation
Best explanation !! thanks a lot
God send. Thank you so much sir. Charan sparsh.
Amazing video.. waiting for the next one
The explanation is very clear. Thanks for the video. Subscribing to your channel
superb explanation. Thanks
Thank you sir, much appreciated!
Thank you very, very much sir!!
AMAZING TEACHER
Superb explanation
excellent explanation!
Thank you sir, helped a lot ❤
Thanks for your explanation on this topic
Thank you Krish Sir
Good explanation, thanks
Best clarification. Please, do more videos. Waiting to learn..
Thank you krish, waiting for next part.
Excellent, sir. Thank you.
Great Video!
I really appreciate all the effort in the different machine learning topics that you make in your videos. Good explanation.
Very nicely explained!
Thank you sir!
Eagerly waiting for next video .
Thank you so much😊😊
Great video
amazing video
Had one question here. Is there any relation between d+ and d-?
As in the first figure, d+ is the distance from the positive support vector to the hyperplane and d- is the distance from the negative support vector to the hyperplane.
So is d+ always equal to, greater than, or less than d-, or is there no specific relation as such?
As always, great video and great explanation. Thank you..
The marginal lines are parallel to the hyperplane, so ideally the distance between the hyperplane and the +ve marginal line and the -ve marginal line is the same. The idea here is to create a generalized model: the greater the distance between the marginal lines, the more generalized the model will be for new/unseen data
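To see this in code (a small sketch with scikit-learn, not from the video; the toy data is made up): for a trained linear SVM the margin lines sit at w·x + b = +1 and -1, so d+ and d- are both exactly 1/||w||, and the support vectors are the points lying on those lines:

```python
# For a linear SVM with decision function f(x) = w.x + b, the margin
# lines are f(x) = +1 and f(x) = -1. Both are parallel to the
# hyperplane f(x) = 0 and lie at distance 1/||w|| from it, so d+ == d-.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated toy clusters (hypothetical data, for illustration)
X = np.vstack([rng.normal(-3, 0.6, size=(30, 2)),
               rng.normal(3, 0.6, size=(30, 2))])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]

half_margin = 1.0 / np.linalg.norm(w)  # d+ and d-, equal by construction
# Support vectors sit on the margin lines, i.e. |w.x + b| == 1 for them
sv_scores = np.abs(X[clf.support_] @ w + b)
print(half_margin, sv_scores)
```

Maximizing the margin is therefore the same as minimizing ||w||, which is exactly the optimization the SVM solves.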
thank you for doing this..