Soft Margin SVM : Data Science Concepts

  • Published on Jul 2, 2024
  • SVM for REAL data.
    SVM Intuition Video: • Support Vector Machine...
    Hard-Margin SVM Video: • SVM (The Math) : Data ...
    Hinge Loss Video: • Loss Functions : Data ...

Comments • 107

  • @paulbrown5839 · 3 years ago +79

    This guy deserves to be paid for this stuff. It's brilliant.

    • @ritvikmath · 3 years ago +3

      Haha, glad you think so!

    • @nitishnitish9172 · 1 year ago

      Absolutely, I have the same thing in my mind

  • @caiocfp · 3 years ago +68

    You are a great teacher, hope this channel thrives!

    • @ritvikmath · 3 years ago +4

      I hope so too!

  • @blairt8101 · 18 days ago

    You saved my life. I will watch all your videos before my exam on machine learning.

  • @xxshogunflames · 3 years ago +2

    Awesome video, thank you for clarifying these topics for us. The format is pristine, and I get a lot from the different ways you present information: by the second or third video I have a good foundation for the tougher parts to chew. Again, thank you!

  • @maurosobreira8695 · 2 years ago +9

    Third video on SVM from this guy and I'm now a subscriber. Best explanation so far, and I watched a bunch before getting to these videos! Two thumbs up!

  • @random_uploads97 · 2 years ago +4

    Loved both the hard margin and soft margin videos; everything is clear in 25 minutes collectively. Thanks a lot Ritvik! May your channel thrive; I will spread the word about you.

  • @santiagolicea3814 · 7 months ago +1

    You're the absolute best at explaining complex things in such an easy way, it's even relaxing

  • @akshiwakoti7851 · 2 years ago +4

    Thanks for making SVM easy. You’re a great communicator.

  • @harshithg5455 · 3 years ago +2

    Came here after Andrew Ng's videos. Found yours to be way more intuitive. Brilliant.

  • @rohit2761 · 2 years ago

    What an amazing video. Absolutely gold. Please make more videos; never stop.

  • @jackli8603 · 1 year ago

    Thank you so much!!!! You are a life saver!!! I had been troubled by the soft margin SVM for a week until your video explained it very clearly. What I didn't understand was the lambda part, but now I do!!! THANKS!

  • @giovannibianco5996 · 3 months ago

    Definitely the best video about SVM I've found online; better than my university lectures (sadly). Great job!

  • @adithyagiri7933 · 2 years ago

    Great job, man... keep bringing us this kind of amazing stuff.

  • @johnmosugu · 1 year ago

    Thank you very much, Ritvik, for simplifying this topic and even ML. God bless you more and more

  • @ashhabkhan · 2 years ago

    Explaining complex concepts in a simple manner. That is how these topics must be taught. Wow!

  • @stanlukash33 · 3 years ago +4

    You deserve more subs and likes. Thank you for this!

    • @ritvikmath · 3 years ago

      I appreciate that!

  • @mikeyu6347 · 8 months ago

    Best teacher, very articulate. Looking forward to more videos.

  • @Pazurrr1501 · 2 years ago

    These videos are real hidden gems, and they deserve not to be hidden any more.

  • @RiteshSingh-ru1sk · 3 years ago +1

    Gem of lectures!

  • @ledinhanhtan · 6 months ago

    Brilliant explanation! Thank you!

  • @bytesizedbraincog · 1 year ago

    You are a gem to the data science community!

  • @Rohit-fr2ky · 1 year ago

    Thanks a lot; I might not have been able to understand SVM without this.

  • @nukagvilia5215 · 2 years ago

    Your videos are the best!

  • @58_hananirfan45 · 1 year ago

    This man has single-handedly saved my life.

  • @cyanider069 · 9 months ago

    You are really good at this, man.

  • @axadify · 2 years ago

    Such a brilliant explanation!

  • @Greatasfather · 2 years ago

    I love this. Thank you so much. Helped me a lot

  • @huyvuquang2041 · 11 months ago

    Thanks so much for your amazing work. Keep it up.

  • @xintang7741 · 7 months ago

    Well explained! Very helpful!

  • @thankyouthankyou1172 · 7 months ago

    This teacher deserves a Nobel prize!

  • @danalex2991 · 2 years ago

    AMAZING VIDEO! You are so awesome.

  • @yaadwinder300 · 2 years ago

    The search for a good YouTube video on SVM has finally ended; gotta watch the other topics too.

  • @ahmetcihan8025 · 3 years ago

    Just perfect, mate.

  • @chunqingshi2726 · 1 year ago

    Crystal clear, thanks a lot.

  • @josephgill8674 · 3 years ago +1

    Thank you from an MSc Data Science student at Exeter University in exam season.

  • @mv829 · 2 years ago

    Thank you for this video, very helpful!

  • @FEchtyy · 2 years ago

    Great explanation!

  • @aashishprasad9491 · 3 years ago

    You are a great teacher; I don't know why YouTube doesn't recommend your videos.
    Also, please try some social media marketing.

  • @houyao2147 · 3 years ago +1

    Perfect!

  • @xiaoranlin8918 · 2 years ago

    Great clarification video

  • @user-wr4yl7tx3w · 2 years ago

    Awesome.

  • @moatasem444 · 1 year ago

    Wonderful explanation ❤❤❤

  • @vantongerent · 2 years ago +1

    So good.

  • @venkat5230 · 3 years ago

    Wow, great lecture, clear explanation... thank you, Ritvik.

  • @jaivratsingh9966 · 9 months ago

    Excellent

  • @e555t66 · 1 year ago

    Really explained well. If you want the theoretical concepts, you could try the MIT MicroMasters. It's rigorous and demands 10 to 15 hours a week.

  • @stevenconradellis · 1 year ago

    These explanations are so brilliantly and intuitively given, making daunting-looking equations and concepts understandable.
    Thank you @ritvikmath, you are truly a gift to data science.

  • @xt.7933 · 4 months ago

    This is clearly explained!! Love your teaching. One question here: how do you choose lambda? What is the impact of a higher or lower lambda?
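
    A common practical answer, sketched below: treat lambda as a hyperparameter and pick it by cross-validation. Note that scikit-learn's SVC is parameterized by C, which acts roughly like 1/lambda (larger C means weaker regularization); the grid values here are only illustrative.

    ```python
    # Cross-validating the soft-margin trade-off parameter (a sketch).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    # Larger C ~ smaller lambda: less regularization, fewer margin violations.
    grid = GridSearchCV(SVC(kernel="linear"),
                        param_grid={"C": [0.01, 0.1, 1, 10, 100]},
                        cv=5)
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)
    ```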

  • @user-ug8uy2cv3s · 1 year ago

    Great explanation, thank you.

  • @maheshsonawane8737 · 10 months ago

    🌟 Magnificent 🌟 Very nice, thanks; this helps with interview questions.

  • @honeyBadger582 · 3 years ago +1

    Great video! I have a question. The optimisation formula for soft-margin SVM that I usually see in textbooks is: min ||w|| + c * sum over theta. How does the equation in your video relate to this one? Is it pretty much the same just with different symbols, or is it different? Thanks!
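
    A sketch of how the two forms line up, assuming the video's objective is hinge loss plus lambda * ||w||^2 and writing the textbook version with slack variables xi_i (the "theta" above):

    ```latex
    % Textbook form (C weights the slack/hinge term):
    \min_{w,b}\ \|w\|^2 + C \sum_{i=1}^{n} \xi_i,
    \qquad \xi_i = \max\bigl(0,\ 1 - y_i(w^\top x_i + b)\bigr)

    % Video's form (lambda weights the norm term):
    \min_{w,b}\ \sum_{i=1}^{n} \max\bigl(0,\ 1 - y_i(w^\top x_i + b)\bigr) + \lambda \|w\|^2

    % Dividing the first objective by C leaves its minimizers unchanged and
    % recovers the second with \lambda = 1/C, so they are the same problem
    % up to how the trade-off knob is named.
    ```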

  • @aminr23 · 3 months ago

    Greatest teacher ever.

    • @ritvikmath · 3 months ago

      wow thanks!

  • @gdivadnosdivad6185 · 7 months ago

    You are the best! Please consider teaching at a university!

  • @gabeguo6222 · 1 year ago

    GOAT!

  • @arundas7760 · 3 years ago

    Very good, thanks.

  • @lilianaaa98 · 1 month ago

    Thanks a lot!

  • @MrGhost-do1rw · 1 year ago

    I came here to understand lambda and I am not disappointed. Thank you.

  • @504036465 · 2 years ago

    Nice video. Thank you.

  • @kankersan1466 · 3 years ago +2

    Underrated channel.

    • @ritvikmath · 3 years ago

      Hopefully not for long :D

  • @vantongerent · 2 years ago

    How do you choose your support vectors if they are no longer the closest vectors to the decision boundary? Does the value of "1" get generated automatically when you plug in the values of X and Y? Or is there some scaling that takes place to set one of the vectors' values to "1"?
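
    A hedged note on where the "1" comes from: the decision boundary is unchanged by positively rescaling (w, b), so the "1" is a normalization convention rather than something generated from the data.

    ```latex
    % For any c > 0, (cw, cb) defines the same boundary:
    %   cw^\top x + cb = 0 \iff w^\top x + b = 0.
    % Hard margin: choose the scale so the closest points satisfy
    \min_i\ y_i(w^\top x_i + b) = 1
    % Soft margin: the same "1" is kept as the hinge-loss threshold,
    % but points are now allowed to violate it at a cost.
    ```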

  • @shriqam · 2 years ago

    Hi Ritvikmath, many thanks for the wonderful video. I really love the simple notation you used for the equations, which makes them very easy to understand. Can you suggest any books/courses that follow notation similar to yours, or share the sources that helped you create this content? Thanks in advance.

  • @yifanzhao9942 · 3 years ago +1

    Shoutout to my previous TA!! Also, do you mind uploading pictures of the whiteboard only for future videos, as it might be easier for us to check notes? Thank you!

    • @ritvikmath · 3 years ago +2

      Hi Yifan! Hope you're doing well. Yes for the newer videos I am remembering to show the final whiteboard only

  • @adilmuhammad6078 · 1 year ago

    Very nice!!!

    • @ritvikmath · 1 year ago

      Thank you! Cheers!

  • @sergecliverkana4694 · 3 years ago +1

    Awesome, thank you very much!

    • @ritvikmath · 3 years ago

      You are very welcome

  • @xviktorxx · 3 years ago +7

    Great videos, will you also be talking about the kernel trick?

    • @ritvikmath · 3 years ago +10

      Yes I will! It is on the agenda

  • @user-wr4yl7tx3w · 2 years ago

    What if you made observations based upon latent variables? Could that remove the need for the parameter lambda as a prior?

  • @amankushwaha8927 · 2 years ago

    Thanks

  • @dawitabdisa7262 · 1 year ago

    Hello, thank you for the tutorials. How would you apply an SVM model to classify alpha data, to realize the detection of driver drowsiness? Very much looking forward to your reply.

  • @caseyglick5957 · 3 years ago +10

    Your board work is great!
    Why are you using an L2 loss for w, rather than L1, based on what showed up in the previous video?

    • @vldanl · 2 years ago +4

      I guess it's because the L2 loss is much easier to differentiate than L1. Also, L1 is not differentiable at w = 0.

    • @caseyglick5957 · 2 years ago

      Thanks! Having smooth derivatives does help a lot.

    • @DerIntergalaktische · 2 years ago +1

      @vldanl Isn't the hinge loss part already pretty hard to differentiate, compared to ||w||?
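
    A sketch of the differentiability point being discussed (standard facts, not from the video):

    ```latex
    % The squared L2 norm is smooth everywhere:
    \nabla_w \|w\|_2^2 = 2w
    % The L1 norm is not differentiable wherever a coordinate is zero.
    % The hinge term is also nonsmooth, but only on the margin boundary
    % y_i(w^\top x_i + b) = 1, where a subgradient can be used instead.
    ```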

  • @loveen3186 · 1 year ago

    Amazing!

    • @ritvikmath · 1 year ago

      Thank you! Cheers!

  • @achams123 · 3 years ago +2

    What was Vapnik on when he invented this?

  • @DerIntergalaktische · 2 years ago

    The margin is taken into account twice in a weird way. The obvious one is the lambda ||w||. But the hinge loss has the margin as a unit of measurement. So if a data point is at distance five from the support vector, the hinge loss can drastically change depending on the size of the margin. Is this double counting of the margin intended? Should there be a normalization for this? I believe dividing the hinge loss by ||w|| should work.
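
    For context, a sketch of the functional/geometric margin relationship this question is circling (one standard way to read the objective):

    ```latex
    % Functional margin (what the hinge loss sees):
    \hat{\gamma}_i = y_i\,(w^\top x_i + b)
    % Geometric margin (actual distance to the boundary):
    \gamma_i = \frac{y_i\,(w^\top x_i + b)}{\|w\|}
    % The \lambda\|w\|^2 term penalizes large \|w\|, which pins down the scale
    % at which the hinge threshold "1" is measured, rather than counting the
    % margin a second time.
    ```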

  • @PF-vn4qz · 2 years ago

    So can we mathematically solve the soft-margin SVM optimisation problem for the vector w and the value b? And if so, can anyone point to where to read up on this?

  • @user-ws8jm8uq4c · 1 year ago

    How can we still have some data between the margins even after rescaling the w vector so that min |w^T x + b| = 1? Doesn't that mean we find the closest possible data points to the hyperplane and rescale w so that the distance from the closest data points to the hyperplane becomes 1? That way, there shouldn't be any points between the margins... could you help correct me?
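
    A sketch of the resolution: the rescaling intuition is a hard-margin one; the soft-margin constraints add slack that lets points sit inside the margin.

    ```latex
    % Hard margin: every point must satisfy the canonical constraint
    y_i\,(w^\top x_i + b) \ge 1
    % Soft margin: slack variables \xi_i \ge 0 relax it,
    y_i\,(w^\top x_i + b) \ge 1 - \xi_i
    % paid for in the objective; points with 0 < \xi_i < 1 lie inside the
    % margin, and points with \xi_i > 1 are on the wrong side entirely.
    ```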

  • @mashakozlovtseva4378 · 3 years ago +2

    Very detailed explanation! I'd like to know, how are we going to find the w and b params? Using gradient descent or another technique?

    • @stanlukash33 · 3 years ago

      I had the same question

    • @COMIRecords · 2 years ago

      I think you can find the optimal params in 2 ways: the first consists of minimizing the primal formulation with respect to w and b, and the second consists of maximizing the dual formulation with respect to a certain alpha (a Lagrange multiplier). In the second case, once you have computed the optimal alpha, you can substitute it into the equation for w (written as a function of alpha) to find the optimal w. To find the best b you have to rearrange some conditions, but I am not sure about that.

    • @eltonlobo8697 · 2 years ago +1

      You can use gradient descent and update the weights and bias for every example, like shown in this video: th-cam.com/video/UX0f9BNBcsY/w-d-xo.html
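
    A minimal sketch of the gradient-descent route, assuming the per-example objective is hinge loss plus lambda * ||w||^2 (the function name and hyperparameter values are made up for illustration):

    ```python
    import numpy as np

    def soft_margin_svm_sgd(X, y, lam=0.01, lr=0.05, epochs=200, seed=0):
        """Subgradient descent on max(0, 1 - y_i(w.x_i + b)) + lam*||w||^2.

        X: (n, d) features; y: (n,) labels in {-1, +1}.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        for _ in range(epochs):
            for i in rng.permutation(n):
                # Hinge subgradient is -y_i * x_i only when the margin is violated.
                if y[i] * (X[i] @ w + b) < 1:
                    w -= lr * (2 * lam * w - y[i] * X[i])
                    b += lr * y[i]
                else:
                    w -= lr * (2 * lam * w)
        return w, b

    # Toy usage: two Gaussian blobs.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
    y = np.array([-1] * 20 + [1] * 20)
    w, b = soft_margin_svm_sgd(X, y)
    print("train accuracy:", np.mean(np.sign(X @ w + b) == y))
    ```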

  • @tule3835 · 8 months ago

    Question about lambda:
    Does that mean when lambda is LARGE we care more about misclassification error, and when lambda is SMALL we care about minimizing the weight vector and maximizing the margin?
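
    For reference, a sketch assuming the objective puts lambda on the norm term; in that form the roles run the other way around:

    ```latex
    \min_{w,b}\ \underbrace{\textstyle\sum_i \max\bigl(0,\ 1 - y_i(w^\top x_i + b)\bigr)}_{\text{margin violations}}
    \;+\; \underbrace{\lambda\,\|w\|^2}_{\text{small } \|w\| \Rightarrow \text{wide margin}}
    % Large \lambda: the norm term dominates, favoring a wide margin while
    % tolerating more violations.
    % Small \lambda: the hinge term dominates, favoring fewer violations at
    % the cost of a narrower margin.
    ```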

  • @user-wr4yl7tx3w · 2 years ago

    Where does the kernel come in?

  • @Cerivitus · 2 years ago +1

    Why are we minimizing ||w|| to the power of 2 for soft SVM, but only ||w|| for hard SVM?
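
    A sketch of why the two are interchangeable: squaring is strictly increasing on nonnegative values, so both objectives select the same w, and the squared version is easier to differentiate.

    ```latex
    \arg\min_{w}\ \|w\|
    \;=\;
    \arg\min_{w}\ \|w\|^2
    \quad \text{(under the same constraints), since } t \mapsto t^2
    \text{ is increasing for } t \ge 0.
    ```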

  • @matthewcarnahan1 · 2 months ago

    The margin for a hard margin SVM is pretty intuitive, but not for a soft margin SVM. With hard margin, it's a rule that each margin line must pass through at least one of its respective points. I think with soft margin, for any value of lambda at least one of the margin lines must pass through one of its respective points, but it's not mandatory that both do. Do you concur?

  • @muralikrishna2691 · 1 year ago

    Is hinge loss differentiable?
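
    A sketch of the standard answer: not everywhere, but it has a usable subgradient at the kink.

    ```latex
    \ell(z) = \max(0,\ 1 - z), \qquad z = y\,(w^\top x + b)
    % Differentiable away from the kink:
    \ell'(z) = \begin{cases} -1 & z < 1 \\ 0 & z > 1 \end{cases}
    % At z = 1 any value in [-1, 0] is a valid subgradient, which is why
    % subgradient methods still apply.
    ```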

  • @codeschool3964 · 3 months ago

    Explained a 3-hour lecture in less than 1 hour.

  • @Ranshin077 · 3 years ago +1

    I love your board work, but you should really have an image of the board without you in it, or just delay your walk into the picture by a second or two at the beginning so I can snag a shot for my notes a bit easier, lol.

    • @ritvikmath · 3 years ago +1

      Noted! I'm starting to remember this for my new videos. Thanks!

    • @redherring0077 · 2 years ago

      Haha. I have dedicated a whole hard disk to Ritvik's data science videos. I just hope he is going to write a book, or even better, do an end-to-end data science course on Coursera 😍😍

  • @juanguang5633 · 1 year ago

    It would be nicer if you talked about slack variables.

  • @sushantpatil2566 · 10 months ago

    EUREKA!

  • @junli1865 · 2 years ago

    Thank you!

  • @philosopher_sage_07 · 10 months ago
