Mathematics of SVM | Support Vector Machines | Hard margin SVM

  • Published on 23 Jan 2025

Comments • 134

  • @atharvasuryawanshi2455
    @atharvasuryawanshi2455 1 year ago +58

    This video made me emotional... I couldn't understand my professor's slides at all, but now it's damn easy. This channel is underrated for sure.

  • @samiddhachakrabarti4461
    @samiddhachakrabarti4461 2 years ago +15

    All other YouTube videos just explain the basics; no one explains the maths behind SVM. But you explained the whole mathematics of SVM. I was stuck on understanding the logic behind SVM, and even though I read many books on ML, I never got a clear explanation. Your video helped me a lot.

  • @Jaweddddddd
    @Jaweddddddd 2 years ago +10

    Making videos late at night has given you dark circles, but really, thanks for this content. Well explained.

  • @theubiwhovian
    @theubiwhovian 1 year ago +4

    It's commendable how painstakingly you explain every single detail! Wish we had more professors like you.
    Thanks a ton for these videos, you're a life saviour..

  • @tanishdogra8815
    @tanishdogra8815 7 months ago +15

    04:22 The main goal of SVM is to find the equation of a hyperplane with the maximum margin between positive and negative hyperplanes.
    08:44 In machine learning, a decision rule can be derived using the equation w * u + b > 0.
    13:06 The equation of the hyperplanes is assumed to be W transpose x + b = 1 for the positive hyperplane and W transpose x + b = -1 for the negative hyperplane.
    17:28 The distance calculation involves simplifying equations to x squared and creating positive and negative lines for convenience.
    21:50 Multiplying by a factor greater than 1 shrinks the margin, while multiplying by a factor less than 1 expands the margin.
    26:12 Maximize distance between positive and negative hyperplanes with constraints
    30:34 The distance between two support vectors in SVM can be calculated using the dot product of the vectors and the unit vector
    34:54 The expression for distance calculation and the optimization function in SVM
    Crafted by Merlin AI.

    • @deebafarheen2270
      @deebafarheen2270 2 months ago

      How did you do this using AI?
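
The summary above compresses the whole derivation, so here is a minimal runnable sketch of the same ideas (my own illustration, not code from the video; scikit-learn, the toy dataset, and the large-C trick for approximating a hard margin are all assumptions on my part):

```python
# Hedged sketch: fit a (near) hard-margin linear SVM on a tiny separable
# dataset, read off w and b, apply the decision rule sign(w.x + b), and
# check that the margin between the +1 and -1 planes equals 2/||w||.
import numpy as np
from sklearn.svm import SVC

# Two tiny, linearly separable clusters (made-up data).
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # negative class
              [4.0, 4.0], [4.5, 5.0], [5.0, 4.5]])  # positive class
y = np.array([-1, -1, -1, 1, 1, 1])

# A very large C approximates the hard-margin problem:
#   minimize (1/2)||w||^2   subject to   y_i (w.x_i + b) >= 1
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]        # normal vector of the separating hyperplane
b = clf.intercept_[0]   # offset, so the hyperplane is w.x + b = 0

# Decision rule from the video: classify an unknown point u by sign(w.u + b).
u = np.array([3.0, 3.0])
print("decision value:", w @ u + b, "-> class", np.sign(w @ u + b))

# Support vectors lie on w.x + b = +1 and w.x + b = -1, so the margin
# (the distance between those two planes) is 2 / ||w||.
print("margin 2/||w|| =", 2.0 / np.linalg.norm(w))
print("support vectors:\n", clf.support_vectors_)
```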

  • @chalmerilexus2072
    @chalmerilexus2072 2 years ago +20

    What an explanation! Beyond words. Thank you for knowledge spreading

  • @saurabhnandsrivastava7474
    @saurabhnandsrivastava7474 4 years ago +10

    Your explanations are simply the best. If you follow them patiently, point by point, you can answer questions on them even in your sleep. Thanks, bro.

  • @bhupendraahirwar3456
    @bhupendraahirwar3456 2 years ago +65

    I am a PhD scholar in Mathematics at NIT. This video covers the actual working of SVM, unlike those 10-minute videos whose aim is just views while the viewer learns nothing.

    • @editor_real
      @editor_real 1 year ago +5

      I am in my first year of Economics Hons, and what he is trying to teach is easily understandable. Most underrated channel.

  • @pratimbaidya1480
    @pratimbaidya1480 28 days ago

    Sir I really like your in-depth explanations. It is actually needed rather than a 10 min short overview where we get no depth on the model. Thank you for making these in-depth mathematical videos.

  • @apoorva3635
    @apoorva3635 3 years ago +11

    Beautifully explained. Stumbled upon this video after a long search. You are better than many of the top ML teachers on YT. Thank you very much :).

  • @ahbazmemon9369
    @ahbazmemon9369 4 years ago +7

    Best explanation I ever seen for such complex topic. Thankyou sir👏

  • @tarunnegi3045
    @tarunnegi3045 7 months ago +3

    04:22 The main goal of SVM is to find the equation of a hyperplane with the maximum margin between positive and negative hyperplanes.
    08:44 In machine learning, a decision rule can be derived using the equation w * u + b > 0.
    13:06 The equation of the hyperplanes is assumed to be W transpose x + b = 1 for the positive hyperplane and W transpose x + b = -1 for the negative hyperplane.
    17:28 The distance calculation involves simplifying equations to x squared and creating positive and negative lines for convenience.
    21:50 Multiplying by a factor greater than 1 shrinks the margin, while multiplying by a factor less than 1 expands the margin.
    26:12 Maximize distance between positive and negative hyperplanes with constraints
    30:34 The distance between two support vectors in SVM can be calculated using the dot product of the vectors and the unit vector
    34:54 The expression for distance calculation and the optimization function in SVM

  • @AMANVERMA-bq8hj
    @AMANVERMA-bq8hj 1 year ago +1

    Don't have words to thank you sir for this more than amazing explanation ! Tried multiple sources but it became more of rote learning but your video actually made me understand how SVM works ! Thanks a Ton Sir ! :)

  • @sober_22
    @sober_22 1 year ago +14

    India needs teachers like you.

    • @adbtheribd
      @adbtheribd 4 months ago

      world*

  • @madhavilathamandaleeka5953
    @madhavilathamandaleeka5953 3 years ago +2

    That explanation with the help of Graph tool is just fabulous.... beginners can easily grasp....Thank you so much 👏

  • @Adarshhb767
    @Adarshhb767 7 days ago

    Just watched the same lecture from MIT, and you covered every aspect of what they taught in a much better way.
    Good to know Indians are competing nationwide.

  • @narendraparmar1631
    @narendraparmar1631 1 year ago

    I am having very good learning experience from you
    Thanks for your efforts.

  • @uditmodi4-yearb.tech.minin381
    @uditmodi4-yearb.tech.minin381 1 year ago +1

    i can bet, you can't have a better playlist than this🤲
    This guy is damn good

  • @reel_viewer
    @reel_viewer 1 year ago

    What an explanation... what an explanation!!! Hats off, boss 🙌

  • @saijena8552
    @saijena8552 1 year ago

    This is the best explanation of SVM....Keep doing good work

  • @Kashifspeaks_
    @Kashifspeaks_ 1 year ago

    Amazing guy. I just wanted to learn the maths of SVM to write in my sem exam (B.E.), and now I know much more than just that. Please keep doing the great work.

  • @jiteshsingh98
    @jiteshsingh98 3 years ago +3

    Oh Wow Wow wow I Found This Amazing Channel 🔥 🔥

  • @chromellama7587
    @chromellama7587 2 days ago

    Hey thanks for saving my family from that fire, and for teaching me SVM's too.

  • @playwithcode_python
    @playwithcode_python 2 years ago +1

    What an explanation, bro. God bless you.

  • @meenatyagi9740
    @meenatyagi9740 1 year ago

    Thanks for giving such a good explanation of this topic; I was struggling to get clarity. It really helped me get a good picture of the mathematical concept behind SVM.

  • @satyaprakashmahalik6370
    @satyaprakashmahalik6370 2 years ago

    Best explanation of SVM I have ever seen till now...Amazing man...keep working

  • @khushboobansal3894
    @khushboobansal3894 1 year ago +2

    Loved the explanation! Thank you so much

  • @GAURAVKUMAR-mf6lq
    @GAURAVKUMAR-mf6lq 1 year ago +2

    Loved your Explanation i am Mtech cse from NIT

  • @princepandey7390
    @princepandey7390 1 year ago +1

    Thank you, my ❤️ brother 😘😘😘😘😘😘
    Honestly, I dabbled in many videos, but this is the only one where it finally made sense. I love you.

  • @AbdusSamad-ts7yg
    @AbdusSamad-ts7yg 1 year ago +1

    great explanation Sir

  • @AniRec-e8u
    @AniRec-e8u 2 months ago

    Same as mit video but the intuition made it clearer. Thanks for this

  • @finaz_18
    @finaz_18 11 months ago

    data science aspirant needs a teacher like you

  • @Sandesh.Deshmukh
    @Sandesh.Deshmukh 3 years ago +1

    Best explanation of SVM I ever seen ❤🙌 #NitishSir

  • @deveshnandan323
    @deveshnandan323 1 year ago +2

    Next level. Who else teaches in this much detail... you are a
    while(1) GEM :)

  • @mdsihabuddin9856
    @mdsihabuddin9856 4 years ago

    It was really very helpful lesson. Thanks for your support.

  • @mdkallis
    @mdkallis 1 year ago

    superr bro ......Amazing explanation..

  • @kadambalrajkachru8933
    @kadambalrajkachru8933 2 years ago +1

    Nice explanation keep going bro...

  • @sahilmahajan421
    @sahilmahajan421 1 year ago +1

    hats off man. this video is amazing

  • @dhananjaygurav6335
    @dhananjaygurav6335 1 year ago +1

    It's amazing one 🤩👏... you are requested to recreate one dedicated video for SVM in which you will explain SVM theory as well as one small example

  • @Loki-yg9lq
    @Loki-yg9lq 10 months ago

    Bro, your video is extremely good, kudos to you. Just improve the sound quality, as everything sounds a bit hazy while listening, though that is manageable. But overall awesome ❤❤❤

  • @divyanshugera3187
    @divyanshugera3187 1 year ago

    Coming here after MIT openware SVM video and bro let me tell you one thing, you are amazinnnggg

  • @AmanKumar-jv4sv
    @AmanKumar-jv4sv 2 years ago +1

    take a bow brother.....brilliant explanation

  • @AkshatMishra-m8v
    @AkshatMishra-m8v 6 months ago +1

    Hello sir, why are we taking the projection of one vector onto another?
    And how do we get that equation at 03:54?

    • @RajorshiAdhikary
      @RajorshiAdhikary 4 months ago

      To determine the distance of that point along that perpendicular direction... and the whole later part of the video is based on that: what happens if that distance is greater, and what happens if it's less...
      If I'm wrong, feel free to correct me 🎉
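
For the projection question above, a short note on the standard identity the lecture leans on (my notation, which may differ from the video's): the scalar projection of the unknown point u onto the hyperplane's normal w is

$$
\operatorname{proj}_{\vec w}\vec u \;=\; \lVert\vec u\rVert\cos\theta \;=\; \frac{\vec w\cdot\vec u}{\lVert\vec w\rVert},
$$

and asking whether that projection exceeds some threshold t is the same as asking whether $\vec w\cdot\vec u \ge t\lVert\vec w\rVert = c$, i.e. whether $\vec w\cdot\vec u + b \ge 0$ with $b := -c$. That is where the decision rule around 03:54 comes from.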

  • @akhil7895
    @akhil7895 1 year ago

    Simply explained. ❤

  • @urviupadhyay1639
    @urviupadhyay1639 1 year ago +1

    Great explanation

  • @shwetangacharya
    @shwetangacharya 9 months ago

    fantastic math u explicated.

  • @2Kunal-e6t
    @2Kunal-e6t 2 days ago

    You are theeeee best 😊😊❤

  • @adityamishra6954
    @adityamishra6954 2 years ago +2

    That was really fun.

  • @subodhkedar2221
    @subodhkedar2221 3 years ago +1

    Thanks for such nice efforts

  • @geetanshusinghnegi2672
    @geetanshusinghnegi2672 1 year ago

    Thanks a lot for this easy explanation :)

  • @mohsinkamal2638
    @mohsinkamal2638 1 year ago

    lovely explanation. Thanks alot.

  • @anozatix1022
    @anozatix1022 8 months ago

    Hey, isn't the projection formula of w on u the dot product of w and u divided by the magnitude of u?

  • @rishavraj4401
    @rishavraj4401 2 years ago +1

    👍👍👍👍 Best

  • @inquisitivelearner8649
    @inquisitivelearner8649 1 year ago +1

    What is w bar? What is u bar? Can you please tell? I understood c as the linear distance from the origin to the hyperplane.

  • @ramishsaeedpk
    @ramishsaeedpk 8 months ago

    this playlist saved my life

  • @abhishektehlan7814
    @abhishektehlan7814 2 years ago +1

    That was great fun, man.

  • @manudasmd
    @manudasmd 1 year ago

    You are a legend bro🎉🎉

  • @techteens694
    @techteens694 11 months ago

    I have a doubt: pi+ and pi- are basically hyperplanes passing through the points closest to the hyperplane wx + b = 0 (let's call it plane p). What if the distance between pi+ and p is not the same as between pi- and p? There can be a case in which the nearest point of one class and the nearest point of the other class are at different distances from p. What do we do in that kind of case, and how do we optimize the distance function?

    • @MaindolaMusic
      @MaindolaMusic 10 months ago

      No, the distances between the hyperplane and margin planes are always equal as the hyperplane is in a way the middle plane of both margin planes
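
A quick check of the reply above, using the standard point-to-hyperplane distance formula (notation assumed): any point $x_+$ on $\pi_+$ satisfies $w\cdot x_+ + b = 1$ and any point $x_-$ on $\pi_-$ satisfies $w\cdot x_- + b = -1$, so their distances to the separator $w\cdot x + b = 0$ are

$$
\frac{|w\cdot x_+ + b|}{\lVert w\rVert} \;=\; \frac{1}{\lVert w\rVert}
\qquad\text{and}\qquad
\frac{|w\cdot x_- + b|}{\lVert w\rVert} \;=\; \frac{1}{\lVert w\rVert},
$$

which are equal by construction. If a candidate separator were closer to one class's nearest point than to the other's, shifting b toward the farther class would enlarge the smaller distance, so an unequal plane cannot be the maximum-margin one.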

  • @MaindolaMusic
    @MaindolaMusic 10 months ago

    Best video for svm

  • @sachinraj6875
    @sachinraj6875 9 months ago

    Please tell me how the perpendicular vector w and the coefficient vector w of the line are the same. How?
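
On the question above (and a similar doubt further down about the plane passing through the origin), a one-line argument in my own notation for why the coefficient vector is the normal: if $x_1$ and $x_2$ both lie on the plane $w\cdot x + b = 0$, then

$$
w\cdot x_1 + b = 0 \quad\text{and}\quad w\cdot x_2 + b = 0
\;\;\Longrightarrow\;\;
w\cdot(x_1 - x_2) = 0,
$$

so w is orthogonal to every direction lying inside the plane, whether or not the plane passes through the origin (the offset b cancels).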

  • @kislaykrishna8918
    @kislaykrishna8918 3 years ago +2

    thanks sir 🙏

  • @mr.luvnagpal7407
    @mr.luvnagpal7407 3 years ago +2

    best best best!!!!

  • @rockykumarverma980
    @rockykumarverma980 3 months ago

    Thank you so much sir 🙏🙏🙏

  • @sandippatel6999
    @sandippatel6999 3 years ago +1

    great

  • @adityadongapure6935
    @adityadongapure6935 18 days ago

    GOATED Explanation

  • @mainak222
    @mainak222 1 year ago

    amazing video!

  • @bishwajeetsharma3498
    @bishwajeetsharma3498 4 years ago +2

    Why did we convert W into a unit vector to calculate the distance d?
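
On the unit-vector question above, a hedged restatement of the 30:34 step in my own notation: to turn the vector from a negative support vector $x_-$ to a positive one $x_+$ into a width, you take its component along the direction perpendicular to the planes, and only a unit vector gives a plain length rather than a quantity scaled by $\lVert w\rVert$:

$$
d \;=\; (x_+ - x_-)\cdot\frac{w}{\lVert w\rVert}
\;=\; \frac{(1-b) - (-1-b)}{\lVert w\rVert}
\;=\; \frac{2}{\lVert w\rVert},
$$

using $w\cdot x_+ = 1 - b$ and $w\cdot x_- = -1 - b$ from the support-vector constraints.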

  • @sandipansarkar9211
    @sandipansarkar9211 2 years ago +1

    finished watching

  • @jaisonphilip251
    @jaisonphilip251 1 year ago

    How did that -c change to +b at 4:40? I really didn't get it. Can anybody help?

    • @NischitSapkota
      @NischitSapkota 3 months ago

      Just suppose b = -c, because wx + b = 0 looks more aesthetically pleasing than wx - c = 0 (yes, I oversimplified it... but that's the gist).
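
In symbols, the substitution the reply describes (nothing actually changes sign; b is simply defined as a new constant):

$$
\vec w\cdot\vec u \;\ge\; c
\;\Longleftrightarrow\;
\vec w\cdot\vec u - c \;\ge\; 0
\;\Longleftrightarrow\;
\vec w\cdot\vec u + b \;\ge\; 0,
\qquad b := -c,
$$

so if c happened to be positive, b is just the corresponding negative number; the inequality itself is untouched.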

  • @shauryatiwari7462
    @shauryatiwari7462 1 year ago

    For x1, shouldn't the equation y(wx+b) be -1? You have kept it +1. Am I missing something?
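
A possible clarification of the question above, assuming the usual labels with $y = -1$ for the negative class: for a negative-class support vector $x_1$ the plane value is $w\cdot x_1 + b = -1$, but the constraint multiplies it by the label, so

$$
y_1\,(w\cdot x_1 + b) \;=\; (-1)\times(-1) \;=\; +1,
$$

which is why the combined constraint $y_i(w\cdot x_i + b) \ge 1$ reads the same for both classes.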

  • @themlguyyy
    @themlguyyy 3 months ago

    But how was -c replaced by +b in the equation at 5:18? Please, someone explain.

    • @hemadevi7106
      @hemadevi7106 7 days ago +2

      I am also confused there

  • @ParthivShah
    @ParthivShah 9 months ago +1

    Thank You Sir.

  • @fatehpreetsingh1440
    @fatehpreetsingh1440 2 years ago +1

    thanks it was amazing

  • @HeyLook_
    @HeyLook_ 2 years ago +1

    Hi sir, in logistic regression we were trying to find a hypothesis line that achieves a kind of equilibrium among the data points, and in SVM we seem to be doing the same thing. So what is the difference? How does SVM solve logistic regression's problem, or what is the problem with logistic regression once we have achieved that equilibrium hypothesis line?
    @CampusX

    • @Tusharchitrakar
      @Tusharchitrakar 11 months ago

      The optimization problem used to obtain the plane is different in each. In SVM the loss function has two terms: a margin component and a misclassification component. In logistic regression it is the binary cross-entropy. So the method by which we obtain the hyperplane is different.
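
To make the reply above concrete, here is a small numeric sketch (my own, not from the video; the function names are just illustrative) comparing the two per-sample losses as a function of the signed margin m = y(w.x + b). The hinge loss used by the soft-margin SVM is exactly zero once m >= 1, while the logistic (binary cross-entropy) loss keeps rewarding ever-larger margins:

```python
# Hedged sketch: per-sample loss as a function of the signed margin
# m = y * (w.x + b), for an SVM (hinge) and logistic regression (log loss).
import numpy as np

def hinge_loss(m):
    # zero once the point is on the correct side with margin >= 1
    return np.maximum(0.0, 1.0 - m)

def log_loss(m):
    # binary cross-entropy written in terms of the signed margin
    return np.log(1.0 + np.exp(-m))

margins = np.array([-2.0, 0.0, 0.5, 1.0, 2.0, 5.0])
for m, h, l in zip(margins, hinge_loss(margins), log_loss(margins)):
    print(f"margin {m:5.1f}   hinge {h:6.3f}   log-loss {l:6.3f}")

# The full soft-margin SVM objective adds the margin term on top of the hinge:
#   (1/2) * ||w||^2  +  C * sum_i hinge(y_i * (w.x_i + b))
```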

  • @Sanki-04
    @Sanki-04 2 years ago +1

    😍😍😍👍👍👍

  • @abhishek171278
    @abhishek171278 7 months ago

    Sir can you please make another svm lecture including mathematics of primal and dual formulation and why dual is important

  • @ayu1323
    @ayu1323 2 years ago +1

    godly loved it

  • @yashsaxena7754
    @yashsaxena7754 2 years ago

    Great video! One question, at 5:00 you made the decision rule vector(w).vector(u) + b >= 0. How did you put 'b' here which is a characteristic of the hyperplane? Thanks for clarifying in advance.

    • @kifayatUllah712
      @kifayatUllah712 1 year ago

      Bismillah
      Obviously, you need to know the intercept of the line. Only knowing the vector w will not help, because without b you don't know where the line is actually situated. With vector w you know the direction of the line but not its exact position. The line could have a gradient of 3, for example, but if you don't know where the line cuts the y-axis, how can you make your decision?

    • @nikipatel7602
      @nikipatel7602 1 year ago

      did you find the answer?

    • @rishabhinc2936
      @rishabhinc2936 1 year ago

      @@nikipatel7602 it's similar to theta0 in linear regression... exactly what justcook commented above
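
A tiny illustration of the replies above, with made-up numbers: in one dimension with w = 1, the rule $\operatorname{sign}(w\,u)$ always splits the line at the origin, whereas $\operatorname{sign}(w\,u + b)$ splits it at $u = -b$:

$$
\operatorname{sign}(u) \text{ changes at } u = 0,
\qquad
\operatorname{sign}(u - 3) \text{ changes at } u = 3,
$$

so without b every candidate hyperplane would be forced through the origin, regardless of where the data sit.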

  • @sameerabanu3115
    @sameerabanu3115 1 year ago

    👏👏👏👏👏👏

  • @arshiyabegum6644
    @arshiyabegum6644 1 year ago +3

    Feel like clicking the like button a million times 🙏

  • @jaybhanushali8559
    @jaybhanushali8559 1 year ago

    Understood. Why didn't I watch this video earlier?

  • @mrinmoynil8145
    @mrinmoynil8145 3 years ago +1

    Why do we consider y = wTx + c over y = mx + c?

  • @Byte2Insight
    @Byte2Insight 5 months ago +1

    You very casually said that the projection is the dot product. Can you please provide any source where it's written that the dot product is actually the projection of one vector onto another?

  • @SujeethKasukurthi
    @SujeethKasukurthi 7 months ago +1

    Only when the hyperplane passes through the origin does the w vector become perpendicular to the decision boundary.
    But what are you teaching???

  • @good114
    @good114 2 years ago +1

    💕❤️

  • @zee4654
    @zee4654 3 years ago +3

    When we replace c with b, why does the sign change from (-) to (+)? :(

    • @JawwadRafiq
      @JawwadRafiq 3 years ago +1

      same question

    • @JawwadRafiq
      @JawwadRafiq 3 years ago

      Please let me know the answer

    • @zee4654
      @zee4654 3 years ago

      I don't know myself yet either, as of now.

    • @AbdulRahman-zp5bp
      @AbdulRahman-zp5bp 3 years ago

      He stored -c in b.

    • @JawwadRafiq
      @JawwadRafiq 2 years ago

      But why

  • @saptarshisanyal4869
    @saptarshisanyal4869 2 years ago +1

    @4:42 You have replaced -C with +b but did not explain the reason. Pls explain it.

    • @ritwiksingh4937
      @ritwiksingh4937 2 years ago +1

      Perhaps there was a slight mistake; it should be w.u - c = 0, such that when the value for any point is greater than 0 it must lie in the positive region, otherwise in the negative region... b and c are just constants for the positive and negative hyperplane respectively.

  • @thatsfantastic313
    @thatsfantastic313 2 years ago +1

  • @satyamsahoo9396
    @satyamsahoo9396 1 year ago

    Thanks a lot SIR

  • @prakhartrivedi4442
    @prakhartrivedi4442 9 months ago +1

    Please, can anyone answer these questions?
    1) The planes pi(+) and pi(-) are decided by the support vectors, so how can their equations be guaranteed to be wx+b=1 and wx+b=-1?
    2) If the equations are wx+b=1 and wx+b=-1, then the margin d we are aiming to maximize is basically just the distance between these two planes, and that distance looks like the constant 2, so how can we maximize a constant?

    • @NischitSapkota
      @NischitSapkota 3 months ago

      2) You're confused about the chronological order, mate. FIRST we find the hyperplane (pi neutral), THEN we find pi+ and pi-, AND THEN we calculate the distance between the parallels (pi+ and pi-).

    • @NischitSapkota
      @NischitSapkota 3 months ago

      1) The hyperplane SHOULD be equidistant from pi+ and pi-, so the values SHOULD be opposite to each other... and as for why +1 and -1 rather than our favourite custom numbers like +69 and -69?
      It's because we essentially care about the relationship between the numbers, not the numbers themselves, because the numbers are arbitrary... this is the fundamental essence of feature engineering...
      100 and 1000 on a log scale are 2 and 3; 100 and 1000, and 2 and 3, are arbitrary, but they essentially represent the same thing (in its magnitude essence).
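
A compact version of the rescaling argument behind this thread (standard textbook reasoning, in my own notation): suppose some $(w', b')$ separates the data and its closest points satisfy $|w'\cdot x + b'| = m > 0$. Dividing both by m gives the same geometric plane, but now the closest points sit exactly on $w\cdot x + b = \pm 1$, so the $\pm 1$ convention costs nothing. The margin, however, is not the constant 2; it is the geometric distance between the two planes:

$$
(w, b) := \tfrac{1}{m}\,(w', b'),
\qquad
\text{margin} \;=\; \frac{|1 - (-1)|}{\lVert w\rVert} \;=\; \frac{2}{\lVert w\rVert},
$$

which changes with $\lVert w\rVert$, so maximizing the margin is equivalent to minimizing $\lVert w\rVert$ subject to $y_i(w\cdot x_i + b) \ge 1$.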

  • @vasoyarutvik2897
    @vasoyarutvik2897 10 months ago

    Girls :- Let me do makeup properly i have to take online class
    Le boys:- .............🤣
    btw very good lecture sir

  • @heetbhatt4511
    @heetbhatt4511 1 year ago

    Thank you sir

  • @vishvadeepmohanpandey129
    @vishvadeepmohanpandey129 2 years ago +1

    How can we replace minus c with positive b?

  • @momscookbook2222
    @momscookbook2222 3 months ago

    Thank you

  • @ankanmazumdar5000
    @ankanmazumdar5000 2 years ago +1

    detailed intuition & derivation

  • @MrKeats-bm9dh
    @MrKeats-bm9dh several months ago

    How did +b come in place of -c?

    • @vaibhaavtiwari8220
      @vaibhaavtiwari8220 26 days ago

      It's just a notational change to align with the standard SVM formulation.

    • @hemadevi7106
      @hemadevi7106 7 days ago

      But c is the distance from the origin to π, whereas +b is the intercept of the line.

  • @laveenbagai6327
    @laveenbagai6327 3 years ago +1

    You got a sub :)

  • @VKRealsta
    @VKRealsta 1 year ago

    Sir, your voice is very low at the start of this video; the rest was very helpful.

  • @ishahzaibkhan
    @ishahzaibkhan 2 months ago

    Even if thousands of Krish Naiks showed up, they couldn't beat even 2 seconds of the content you convey XD

  • @harshmankodiya9397
    @harshmankodiya9397 3 years ago +1

    The so-called real math explanation.