Support Vector Machines Part 2: The Polynomial Kernel (Part 2 of 3)

  • Published on 21 Dec 2024

Comments • 435

  • @statquest
    @statquest  2 ปีที่แล้ว +12

    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

    • @davidonwuteaka2642
      @davidonwuteaka2642 ปีที่แล้ว

      How do I get it from Nigeria.
      I'd love to.

    • @statquest
      @statquest  ปีที่แล้ว +1

      @@davidonwuteaka2642 Unfortunately I don't have distribution of physical (printed) copies in Nigeria, but you can get the PDF.

    • @davidonwuteaka2642
      @davidonwuteaka2642 ปีที่แล้ว

      Yes, I have been trying to but the site kept rejecting my card.
      Thanks for your reply.

    • @statquest
      @statquest  ปีที่แล้ว +1

      @@davidonwuteaka2642 Bummer! I'm sorry to hear that.

  • @RHONSON100
    @RHONSON100 3 ปีที่แล้ว +98

    Your videos should be mandatory tutorials for Data Science/ML courses at all universities. Students throughout the world would benefit from watching the best ML videos. Hats off to you, the great Josh Starmer!

    • @statquest
      @statquest  3 ปีที่แล้ว +6

      Wow, thanks!

    • @rameshmitawa2246
      @rameshmitawa2246 2 ปีที่แล้ว +11

      Not mandatory, but my prof recommends this channel after every slide/lecture.

    • @statquest
      @statquest  2 ปีที่แล้ว +5

      @@rameshmitawa2246 That's awesome!

    • @tinacole1450
      @tinacole1450 ปีที่แล้ว +1

      I believe it's because most instructors don't really teach it. They simply give information... Josh actually explains difficult concepts in a simple way.

  • @stanlukash33
    @stanlukash33 3 ปีที่แล้ว +227

    I will make it easy for you guys:
    3:38 - BAM
    4:49 - DOUBLE BAM
    5:54 - TRIPLE BAM

    • @statquest
      @statquest  3 ปีที่แล้ว +25

      Just the hits! BAM! :)

    • @MrPikkabo
      @MrPikkabo 3 ปีที่แล้ว +7

      Thanks I know statistics now

  • @madhuvarun2790
    @madhuvarun2790 3 ปีที่แล้ว +79

    Dude, You are amazing. The best tutorial on SVM. I have searched the entire Internet to understand but couldn't. Please continue to make videos.

    • @statquest
      @statquest  3 ปีที่แล้ว +9

      Thanks, will do!

  • @Snoozy_FTW
    @Snoozy_FTW ปีที่แล้ว +9

    Best machine learning playlist I have encountered on YouTube.
    The animations and your funny way of teaching make it easy to understand the concepts.
    The amount of work you put into creating these videos deserves great appreciation.
    I would definitely recommend going through the videos to anyone who is reading this comment.

    • @statquest
      @statquest  ปีที่แล้ว

      Glad you like them!

  • @itsfabiolous
    @itsfabiolous ปีที่แล้ว +4

    Bro you're just a blessing. Never stop with the dry humor. Lots of love for you!

    • @statquest
      @statquest  ปีที่แล้ว

      Thank you! Will do!

  • @jacobwalker6891
    @jacobwalker6891 6 หลายเดือนก่อน +2

    I have read and looked at most of the recommended books and videos on kernels and, whilst somewhat familiar with the math, never truly understood the principles.
    StatQuest actually makes complex topics simple; arguably one of the best, if not the best, teachers on YouTube, and definitely the best stats explanations.
    Thanks Josh much appreciated 👍

    • @statquest
      @statquest  6 หลายเดือนก่อน

      Thank you very much! :)

  • @marcoharfe9812
    @marcoharfe9812 4 ปีที่แล้ว +14

    I want to thank you so much for all your videos. I was lost in a forest of vectors, matrices and Greek letters when I heard about these topics in lecture, and I did not understand a thing. As I was practising for the exam, I discovered your videos, and now I actually understand what is happening. Really love the practical, example-driven approach!

    • @statquest
      @statquest  4 ปีที่แล้ว +2

      Awesome!!!! Good luck with your exam and let me know how it goes. :)

  • @priyangkumarpatel9317
    @priyangkumarpatel9317 4 ปีที่แล้ว +3

    This is one of the best explanations of support vector machines... If anyone is interested in why dot products are integral to the idea of SVM, please refer to Professor Winston's MIT lecture on SVM... It is another great explanation of SVM...

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      Thanks! :)

  • @jonathannoll3386
    @jonathannoll3386 5 ปีที่แล้ว +6

    My man. I'm so happy I have my presentation about SVM's after your uploads... Keep up the great work!

  • @606Add
    @606Add 4 ปีที่แล้ว +5

    Your videos are simply amazing! And the level of abstraction is right at the sweet spot! Thank you for the extremely thoughtful and precise illustrations!

    • @statquest
      @statquest  4 ปีที่แล้ว

      Thank you very much! :)

  • @MrZidane1128
    @MrZidane1128 4 ปีที่แล้ว +24

    First of all, thanks for your explanation. After plugging two data points, a and b, into the polynomial kernel function we get the value 16,002.25, and then you said we get a higher-dimensional relationship. Could you elaborate further on what "relationship" you were referring to, based on the value 16,002.25? Sorry, I was not quite sure about that.

    • @statquest
      @statquest  4 ปีที่แล้ว +9

      In some sense the "relationships" are similar to transforming the data to the higher dimension and calculating the distances between data points.

    • @vedgupta1686
      @vedgupta1686 2 ปีที่แล้ว +1

      @@statquest But the value 16002.25 alone is a 1-D data point. How do you suppose that helps us classify? Am I missing something?

    • @statquest
      @statquest  2 ปีที่แล้ว +5

      @@vedgupta1686 Think of that number as a loss value that is used as input for an iterative optimization algorithm like gradient descent.

    • @HeduAI
      @HeduAI 2 ปีที่แล้ว

      I thought the whole point of using the kernel trick was to save on the computation cost. If we are using an iterative algorithm anyway, how is that better than transforming the data?

    • @statquest
      @statquest  2 ปีที่แล้ว +1

      @@HeduAI Either way you would still have to use an iterative procedure. So that computation is fixed.
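
A quick arithmetic check of where 16,002.25 comes from, as a minimal Python sketch (assuming the two dosages a = 9 and b = 14 and the parameters r = 1/2, d = 2 used in the video); it shows that the kernel value equals the dot product you would get after explicitly transforming each dosage x into (x, x^2, 1/2):

    # hypothetical illustration; variable names are made up
    a, b = 9.0, 14.0                     # the two dosages
    r, d = 0.5, 2                        # polynomial kernel parameters
    kernel_value = (a * b + r) ** d      # (9*14 + 0.5)^2 = 16002.25

    phi_a = (a, a ** 2, 0.5)             # explicit transform of a
    phi_b = (b, b ** 2, 0.5)             # explicit transform of b
    dot_product = sum(x * y for x, y in zip(phi_a, phi_b))  # 126 + 15876 + 0.25 = 16002.25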

  • @kwok9298
    @kwok9298 2 ปีที่แล้ว +3

    I really appreciate how the way it is explained. Please keep on the good job!

    • @statquest
      @statquest  2 ปีที่แล้ว +1

      Thank you!

  • @ahming123
    @ahming123 5 ปีที่แล้ว +74

    What do you mean by high dimension relationship??

    • @huhuboss8274
      @huhuboss8274 4 ปีที่แล้ว +9

      like the distance but in higher dimensions

    • @Actanonverba01
      @Actanonverba01 4 ปีที่แล้ว +1

      A synonym for 'high dimension' is many features or variables. For 'relationship', think connection(s). So if we have a high-D relationship, we have a set of many variables that are connected by some idea or mathematical formula. Does that help?

    • @BrandonSLockey
      @BrandonSLockey 4 ปีที่แล้ว +2

      watch first video (Part I)

    • @leif1075
      @leif1075 3 ปีที่แล้ว

      @@Actanonverba01 That's what I thought, but that is irrelevant here because we only have one variable with two possible categories of values. But of course we can add more connections and variables, which I think is what you are alluding to.

    • @clapdrix72
      @clapdrix72 3 ปีที่แล้ว +1

      @@leif1075 It's not actually what he means and it's not irrelevant. High dimensional space means we take our original input feature space (in this case just X1) and transform it into a higher dimensional space by "making up" new dimensions that are functions of our original dimensions (X1), so that the data is linearly separable in that new space. The pairwise relationships (aka similarity) are the distances between the observations projected into that higher dimensional space (usually referred to as latent space). So it doesn't matter how many features you have in your original dataset nor how many outcome classes you have - those are irrelevant to the SVM algorithm mechanics; they only change the scale.
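
To make the idea of "making up" a new dimension concrete, here is a minimal sketch (the dosage values below are invented for illustration): each 1-D dosage x is mapped to the pair (x, x^2), and in that 2-D space a straight line can separate classes that were not separable on the original number line.

    import numpy as np

    dosage = np.array([1.0, 2.0, 8.0, 9.0, 14.0, 15.0])   # made-up 1-D observations
    X_high = np.column_stack([dosage, dosage ** 2])        # new 2-D coordinates: (dosage, dosage^2)
    # a support vector classifier fit in this 2-D space corresponds to a curved
    # decision rule back in the original 1-D space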

  • @amalboussere9270
    @amalboussere9270 5 ปีที่แล้ว +20

    thank you a lot you are such a big help in this harsh student world god bless you .

    • @statquest
      @statquest  5 ปีที่แล้ว +1

      I'm glad you like my videos! :)

    • @palashchandrakar1112
      @palashchandrakar1112 4 ปีที่แล้ว +1

      @@statquest we just don't only like them we love your videos XOXO

    • @leif1075
      @leif1075 3 ปีที่แล้ว

      @@statquest This doesn't show where on earth you derive that formula from. WHY do you multiply a times b and then add r?
      Why not multiply all three or add all three? See what I mean? I don't see how anyone could figure it out... not enough info here to derive it.

  • @chenghuang4724
    @chenghuang4724 ปีที่แล้ว +1

    Sir, this is the best video for explaining the Kernel!

    • @statquest
      @statquest  ปีที่แล้ว

      Glad you think so!

  • @deashehu2591
    @deashehu2591 4 ปีที่แล้ว +38

    I have grown to love your little songs. They sound like Phoebe's songs!!! I have a little question: what do you use for visualization?

    • @statquest
      @statquest  4 ปีที่แล้ว +12

      Thanks! I draw all the pictures in Keynote.

    • @gargidwivedi7700
      @gargidwivedi7700 4 ปีที่แล้ว +1

      That's exactly what I and my sister agreed just before we saw your comment! haha.

    • @statquest
      @statquest  4 ปีที่แล้ว +2

      @Leila Mohammadzadeh Google "svm lagrange dual" and you will see how SVM uses the dot products to find the classifier.

  • @johnjunhyukjung
    @johnjunhyukjung ปีที่แล้ว +1

    This is how concepts should be introduced to students.. makes so much more sense

  • @rrrprogram8667
    @rrrprogram8667 5 ปีที่แล้ว +7

    After a lonnnnggg waitttt..... MEGAA MEGAAA MEGAAAA BAMMM is back

    • @statquest
      @statquest  5 ปีที่แล้ว

      Ha! Thank you! :)

  • @trashantrathore4995
    @trashantrathore4995 2 ปีที่แล้ว +1

    Earlier I had an intuition of all the algos which was incomplete and could not be explained to others; concepts are getting cleared up now. Thanks StatQuest team, Josh Starmer, I will contribute as soon as I get a job in the DS field.

  • @flaviodefalcao
    @flaviodefalcao 4 ปีที่แล้ว +1

    It is awesome and satisfying to be able to learn the intuition with these videos and then read a textbook understanding everything. THANKS

    • @statquest
      @statquest  4 ปีที่แล้ว

      Awesome! I'm glad the videos are helpful! :)

    • @flaviodefalcao
      @flaviodefalcao 4 ปีที่แล้ว +2

      @@statquest BAM!!!

  • @hayskapoy
    @hayskapoy 5 ปีที่แล้ว +101

    Would love to see more math after seeing the big picture behind these algorithms 😄

  • @tymothylim6550
    @tymothylim6550 3 ปีที่แล้ว +1

    Thank you for this video! It was very helpful in terms of understanding the details of how the kernel function leads to certain equations that need to be solved to obtain the relevant Support Vector Classifier!

  • @evelillac9718
    @evelillac9718 4 ปีที่แล้ว +1

    You literally saved my homework with your videos

  • @billykristianto3818
    @billykristianto3818 11 หลายเดือนก่อน +1

    Thank you very much, the explanation is easier to understand compared to my class!

    • @statquest
      @statquest  11 หลายเดือนก่อน

      Glad it helped!

  • @technojos
    @technojos 3 ปีที่แล้ว +2

    Thanksss Josh Starmer. I am fascinated by your videos.
    Please make a video about how 16002.25 is used, bam?
    Moreover, I think you could make a video playlist about how machine learning algorithms are coded, double bamm.
    Keep going man, we love you triple bamm!!!

    • @statquest
      @statquest  3 ปีที่แล้ว +1

      Great suggestions!

    • @kevinarmbruster2724
      @kevinarmbruster2724 3 ปีที่แล้ว

      @@statquest How is the relationship of 16,002.25 to be interpreted?
      I understood that if we transfer everything to the higher dimension we can solve it, but I did not understand the part about relationships between the points and how they help.

    • @statquest
      @statquest  3 ปีที่แล้ว +2

      @@kevinarmbruster2724 We plug the relationships into an algorithm that is similar to gradient descent and it can use them to find the optimal classifier. However, the details are pretty complex and would require another video.

  • @muhtasirimran
    @muhtasirimran 3 ปีที่แล้ว +2

    Mr. Starmer almost unconsciously changing machine Learning's future 😀

  • @manasadevadas8685
    @manasadevadas8685 3 ปีที่แล้ว +3

    First of all, thank you so much for explaining with such amazing illustrations. One doubt: how can we actually use the relationships between points to find the support vector classifier?

    • @statquest
      @statquest  3 ปีที่แล้ว +4

      Unfortunately that's a difficult question to answer and I'd have to dedicate a whole video to it. However, the simple answer is that it uses a method like Gradient Descent to find the optimal values.

    • @manasadevadas8685
      @manasadevadas8685 3 ปีที่แล้ว +2

      @@statquest Thanks for the response! Hopefully later you'd dedicate a whole video to it :)

  • @commentor93
    @commentor93 2 ปีที่แล้ว

    I've understood more than I ever expected to understand in this topic all thanks to your videos.
    But now I've stumbled a bit: How do you solve a constant like the one in 5:50? Or what does solving mean in that context now that it isn't a formula? Could you please expand on that?

    • @statquest
      @statquest  2 ปีที่แล้ว

      Think of it as a loss value, and it is something we try to optimize with an iterative algorithm that is similar to Gradient Descent: th-cam.com/video/sDv4f4s2SB8/w-d-xo.html

  • @benardmwanjeya8371
    @benardmwanjeya8371 4 ปีที่แล้ว +5

    God bless you Josh STARmer

    • @statquest
      @statquest  4 ปีที่แล้ว

      Thank you very much! :)

  • @nightawaitsusall9607
    @nightawaitsusall9607 4 ปีที่แล้ว +2

    You my friend are a champion. Yes.

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      Thank you! :)

  • @sinarb2884
    @sinarb2884 3 ปีที่แล้ว

    I could be wrong, but I think there is a slight mistake in this video. The kernel function should be of the form (ab-1/2)^2. This is because the support vector classifier is essentially thresholding based on whether x>y or not. Let me know please if I am wrong. And, thanks for your cool videos.

    • @statquest
      @statquest  3 ปีที่แล้ว

      Most people define it the way I defined it in the video, (ab + r)^d. For more details, see: en.wikipedia.org/wiki/Polynomial_kernel and Page 352 of the Introduction to Statistical Learning in R.

  • @yulinliu850
    @yulinliu850 5 ปีที่แล้ว +1

    Awesome! Josh is back.

  • @Han-ve8uh
    @Han-ve8uh 3 ปีที่แล้ว

    1. Is there a relationship between the values of d and r in polynomial kernel, and the number of output dimensions?
    2. At 3:27 why is the 3rd term ignored, is this part of the kernel trick? Are the 3rd terms always the same no matter what d or r is used?
    3. It seems that the dot product exists only because d=2 which after expansion allows the expression to be expressed as a dot product, if d=3 then we cannot express as a dot product of 2 terms anymore?
    4. Does this whole video apply to other kernels too?

    • @statquest
      @statquest  3 ปีที่แล้ว

      1) d ends up being the number of dimensions.
      2) Regardless of the values for 'a' and 'b', the last dimension will always have the exact same value, 1/2. Thus, it will not help us establish how 'a' and 'b' are related.
      3) If d=3 (with r=1), then we get a^3b^3 + 3a^2b^2 + 3ab + 1 = the following dot product: (a^3, sqrt(3)a^2, sqrt(3)a, 1) dot (b^3, sqrt(3)b^2, sqrt(3)b, 1)
      4) This video provides the background for understanding how the RBF kernel works. For details on that, see: th-cam.com/video/Qc5IyLW_hns/w-d-xo.html

    • @Han-ve8uh
      @Han-ve8uh 3 ปีที่แล้ว

      @@statquest Thanks a lot, now I get the idea of how, for 3), you can always use square roots to split up the constants, then put a and b into the two terms of the dot product. Also, for 1) I see the number of terms of (x+y)^n is n+1, but we always throw away the last term since it's a constant that splits into equal constants for both points, so we end up having n dimensions.
      How did people invent these kernels? Did the kernel trick come later as a hack to overcome computation constraints, or did it come first, before a whole class of kernels was discovered to be possible?
      Also, why is there an obsession with learning straight lines through the data (no matter raw/dimension-raised)? Has this got to do with limitations of the optimization method (I think you mentioned in other comments it uses gradient descent)? Because I'm thinking if it could generate non-straight lines, then maybe there's no need to raise to higher dimensions?

    • @statquest
      @statquest  3 ปีที่แล้ว

      @@Han-ve8uh I don't know how people came up with the kernel trick. However, straight lines are usually much easier to optimize than curved ones. However, neural networks, which I describe here, fit curved lines to data: th-cam.com/video/CqOfi41LfDw/w-d-xo.html
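
For anyone who wants to verify the d=3 expansion quoted above, here is a small numeric check (a minimal sketch; the test values for a and b are arbitrary):

    import math

    a, b = 2.0, 3.0
    r, d = 1.0, 3
    kernel_value = (a * b + r) ** d                      # (ab + 1)^3

    phi = lambda x: (x ** 3, math.sqrt(3) * x ** 2, math.sqrt(3) * x, 1.0)
    dot_product = sum(u * v for u, v in zip(phi(a), phi(b)))
    # kernel_value and dot_product both equal 343.0, i.e. a^3b^3 + 3a^2b^2 + 3ab + 1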

  • @preeethan
    @preeethan 4 ปีที่แล้ว +3

    Amazing explanation :) We find the high-dimensional relationship between 2 points to be 16002.25.
    Practically, what do we do with this value? How do we find the Support Vector Classifier with this value?

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      It's quite complicated - way too complicated to be described in a comment.

    • @preeethan
      @preeethan 4 ปีที่แล้ว +1

      StatQuest with Josh Starmer
      Okay. I love all your videos, especially your intro songs! Great work, keep it going Josh :)

    • @sanjivgautam9063
      @sanjivgautam9063 4 ปีที่แล้ว

      I want this answer too!

    • @balasubramanian5232
      @balasubramanian5232 4 ปีที่แล้ว

      @@statquest I want answers for the question. It'll be helpful if you could share links to resources on this

    • @statquest
      @statquest  4 ปีที่แล้ว

      @@balasubramanian5232 Google "svm lagrange dual" and you will have lots and lots of resources.

  • @hassanjb83
    @hassanjb83 4 ปีที่แล้ว +1

    At 6:33 you mention that we need to determine the values of both r and d through cross validation. If we have one-dimensional data, then shouldn't d = 2 be enough?

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      Why do you say that?

    • @hemersontacon3168
      @hemersontacon3168 4 ปีที่แล้ว +2

      I think you got too attached to the example. Imagine the same example but with the two colors all mixed up. Then I think that d = 2 would not be enough to split things up!

    • @ccuny1
      @ccuny1 4 ปีที่แล้ว +1

      @@hemersontacon3168 That's an insightful comment that actually opened my eyes. Thank you.

    • @hemersontacon3168
      @hemersontacon3168 4 ปีที่แล้ว

      @@ccuny1 Glad to know and glad to help ^^

  • @tuongminhquoc
    @tuongminhquoc 5 ปีที่แล้ว +2

    First comment! I have turned on notification for your videos. I love all of your videos!

    • @statquest
      @statquest  5 ปีที่แล้ว +1

      Awesome! Thank you! :)

  • @harithagayathri7185
    @harithagayathri7185 5 ปีที่แล้ว +3

    Great explanation 👍 Thanks a ton Josh!! But I'm a bit confused here on how to calculate the appropriate 'r' coefficient for the equation. I understand that the 'd' value is calculated using cross validation.

    • @statquest
      @statquest  5 ปีที่แล้ว +1

      'r' is also determined by cross validation, but I am under the impression that it doesn't have as much impact as 'd'. It basically scales things by a constant, rather than adding extra dimensions.

    • @thememace
      @thememace 3 ปีที่แล้ว

      @@statquest What's the point of setting r anyway since it later gets completely ignored?🤔

    • @statquest
      @statquest  3 ปีที่แล้ว +1

      @@thememace I'm not sure

    • @rohanpatel702
      @rohanpatel702 2 ปีที่แล้ว

      @@thememace it doesn't get completely ignored. When r=1/2, the math works out such that the x-axis doesn't get scaled at all. But when r=1, the x-axis gets scaled by sqrt(2). Even though the third element of the vectors combined by dot product is a constant (and thus ignored), the choice of r still affects how the dot product evaluates because of how it changes the first element of each vector.
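
Writing out both expansions makes this point explicit (this is just algebra on the kernel (ab + r)^d with d = 2):

    (ab + 1/2)^2 = a^2b^2 +   ab + 1/4 = (a,         a^2, 1/2) dot (b,         b^2, 1/2)
    (ab + 1)^2   = a^2b^2 + 2*ab + 1   = (sqrt(2)*a, a^2, 1)   dot (sqrt(2)*b, b^2, 1)

So changing r from 1/2 to 1 rescales the first coordinate by sqrt(2) (and changes the constant), even though the constant last coordinate itself is ignored when placing the points.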

  • @zheyuanzhou3165
    @zheyuanzhou3165 4 ปีที่แล้ว +1

    super clear tut. Thank you very much! But as a non-English native speaker, I am a little confused, what is BAM trying to express?

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      th-cam.com/video/i4iUvjsGCMc/w-d-xo.html

    • @zheyuanzhou3165
      @zheyuanzhou3165 4 ปีที่แล้ว +1

      @@statquest
      A tut for BAM! cool lol

  • @L.-..
    @L.-.. 4 ปีที่แล้ว +1

    After we find the dot product, with that value how do we decide whether the new sample belongs to the positive class or the negative class?
    Please clarify, Josh.

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      It's a little too much to put into a comment. The purpose of the video was only to give insight into how the kernel works, not derive the math.

  • @axa3547
    @axa3547 4 ปีที่แล้ว

    Machine learning algorithms!!! Is it just me, or do others also have to learn these again and again to fill the gaps in knowledge?

  • @ronitganguly3318
    @ronitganguly3318 2 ปีที่แล้ว

    The high dimensional relationship you calculated at the end is a number which tells what exactly? How does it help to pseudo transform into higher dimensions?

    • @statquest
      @statquest  2 ปีที่แล้ว

      Are you familiar with Gradient Descent? th-cam.com/video/sDv4f4s2SB8/w-d-xo.html SVMs use a different algorithm, but the idea is similar, and you can think of the numbers, like 16002.25 as values that the algorithm is trying to optimize.

  • @MrWincenzo
    @MrWincenzo 4 ปีที่แล้ว +1

    Since the kernel requires calculating the dot product for each pair of points, suppose we have 10 points: when we do it for each point with respect to the others and itself, we should obtain 10 different dot products for each single point. Which one of those 10 dot products becomes the new "y" dimension of the point?

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      None of them end up being the new "y" dimension. The kernel trick works without having to make that transformation. We use the transformation to give an intuition of how the process works, but the kernel trick itself bypasses the transformation. This is the "kernel trick", and I mention it in the first video in the series on SVMs: th-cam.com/video/efR1C6CvhmE/w-d-xo.html

    • @MrWincenzo
      @MrWincenzo 4 ปีที่แล้ว +1

      @@statquest Yes, I misunderstood before, now I get it: plugging the values into the polynomial expression is equivalent to calculating the dot product in higher dimensions. And since the SVM only depends on those dot products among points, we have just "improved" the classification by mimicking the dot product in higher dimensions, even infinitely many, like with RBF.
      Still, thank you for all your efforts and your gentle replies to our questions. Regards.
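
To see what "all pairs" looks like in practice, here is a minimal sketch (the ten dosage values are invented): the kernel is evaluated for every pair of points, giving a 10 x 10 matrix of numbers, and it is this whole matrix, not any single column of it, that the optimization step works with.

    import numpy as np

    x = np.array([1, 2, 3, 5, 6, 8, 9, 11, 14, 15], dtype=float)  # made-up dosages
    r, d = 0.5, 2
    K = (np.outer(x, x) + r) ** d         # K[i, j] = (x_i * x_j + r)^d for every pair
    print(K.shape)                        # (10, 10)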

  • @beshosamir8978
    @beshosamir8978 2 ปีที่แล้ว

    Quick question: why is it useful to calculate the relationships between every two points, regardless of the dimension, and how can that be useful for calculating the decision boundary?

    • @statquest
      @statquest  2 ปีที่แล้ว +1

      SVM's are optimized using an iterative algorithm that is similar to Gradient Descent, and the relationship values are essentially the "loss" values and help move the SVC to the correct spot.

    • @beshosamir8978
      @beshosamir8978 2 ปีที่แล้ว +1

      @@statquest
      So how do we know which dimension is the best one I'm looking for, according to the relationships between every two points?

    • @statquest
      @statquest  2 ปีที่แล้ว

      @@beshosamir8978 www.cs.cmu.edu/~epxing/Class/10701-08s/recitation/svm.pdf

  • @marcelocoip7275
    @marcelocoip7275 2 ปีที่แล้ว

    Visually thinking about the last set of data: if you can draw a line to separate the data by squaring each observation onto the y-axis, then you can draw a line independently of the scale/ratio of the x-axis. So what I see is that the only thing adding "solving/math value" is increasing the order of the x-axis to fit a hyperplane (the d value). What does r contribute to arriving at a better solution?

    • @statquest
      @statquest  2 ปีที่แล้ว +1

      I don't think it adds much.

  • @temesgenaberaasfaw5076
    @temesgenaberaasfaw5076 4 ปีที่แล้ว +2

    best tutorial for SVM , YOU DID IT THANKS

    • @statquest
      @statquest  4 ปีที่แล้ว

      Thank you! :)

  • @dok3820
    @dok3820 3 ปีที่แล้ว +1

    Thank you Josh. Just..thank you

  • @NathanPhippsONeill
    @NathanPhippsONeill 5 ปีที่แล้ว +2

    Amazing vid! Thanks for helping me prepare for my Machine Learning exam 😁

    • @statquest
      @statquest  5 ปีที่แล้ว +1

      Good luck and let me know how it goes. :)

    • @NathanPhippsONeill
      @NathanPhippsONeill 5 ปีที่แล้ว +1

      @@statquest It went well for a difficult exam. BUT I had a lot to write about thanks to this channel. Appreciate it ❤️

    • @statquest
      @statquest  5 ปีที่แล้ว +1

      @@NathanPhippsONeill Hooray!!! That's awesome and congratulations. :)

  • @stoicism-101
    @stoicism-101 2 ปีที่แล้ว

    Dear Sir,
    Kernels are basically used for finding the relationship between two points using the formulae. How do we further find the Support vector classifier?

    • @statquest
      @statquest  2 ปีที่แล้ว +1

      The SVC is found using an iterative process that is a lot like Gradient Descent, and the output from the kernels is like the "loss" values.

  • @tinacole1450
    @tinacole1450 ปีที่แล้ว +2

    Does anyone laugh at how silly yet genius Josh is? Loved the robot.. I rewinded to do the robot.

    • @statquest
      @statquest  ปีที่แล้ว

      You are my favorite! Thank you so much! I'm glad you enjoy the silly sounds.

  • @DeepakSingh-fo2wm
    @DeepakSingh-fo2wm 4 ปีที่แล้ว +1

    I am still not clear on what happens after finding a relationship in a higher dimension, i.e. in the video, what happens after finding 16002.25?? Can you please add a short video on this if possible.

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      It would be a long video, but it's on the to-do list.

  • @marijatosic217
    @marijatosic217 4 ปีที่แล้ว +2

    Thank you for the video! And now, what does this number 16002.25 tell us? :D How will we know what the right dosage is?

    • @statquest
      @statquest  4 ปีที่แล้ว +2

      That's just an example of the kind of values that are used by the kernel trick to determine the optimal placement of the support vector classifier.

  • @chinzzz388
    @chinzzz388 4 ปีที่แล้ว

    When we calculate relationships between 2 data points, do we calculate relationships between all the points w.r.t all the other points? Ex: if we have 4 data points (1,2,3,4) do we calculate relationship between (1,2) and (3,4) OR do we calculate relationship between (1,2),(1,3),(1,4),(2,3)...etc

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      We calculate all of the relationships.

  • @zeynabmousavi1736
    @zeynabmousavi1736 4 ปีที่แล้ว

    How is overfitting evaluated in SVM? How do you check whether the output of an SVM is generalizable or not?

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      You compare the classifications made with the training dataset to classifications made with the testing dataset.

    • @zeynabmousavi1736
      @zeynabmousavi1736 4 ปีที่แล้ว

      @@statquest Thank you. I should have mentioned that I have a small data set, I take all data points as the training set, and I do 10-fold cross validation. I am concerned about overfitting.

  • @irondbag1041
    @irondbag1041 หลายเดือนก่อน

    Amazing video as always. The only thing I don't understand is that when you were talking about how the kernel function calculates the "relationships" between the observations in higher dimension without doing transformation. What exactly do you mean by relationships? In the one-dimensional case, the relationship is simply the distance between points, but in the high-dimensional cases, it was the dot product (instead of Euclidean distances), why is that?

    • @irondbag1041
      @irondbag1041 หลายเดือนก่อน +1

      Oh, just realized you mentioned that it was outside the scope of this video, my bad 😂

    • @statquest
      @statquest  หลายเดือนก่อน

      Yeah, one day I hope to do a whole video on how this works.

  • @aryamahima3
    @aryamahima3 2 ปีที่แล้ว

    @5:09, you said that we need to calculate the dot product between each pair of points. How do we use this dot product further? Could you please clarify this for me? You are the only person on the whole internet who can clear this up. :D

    • @statquest
      @statquest  2 ปีที่แล้ว +1

      We use it as input to an iterative optimization algorithm similar to gradient descent. For details on gradient descent, see: th-cam.com/video/sDv4f4s2SB8/w-d-xo.html

    • @aryamahima3
      @aryamahima3 2 ปีที่แล้ว +1

      @@statquest thank u so much ☺️

  • @TaylorSparks
    @TaylorSparks 2 ปีที่แล้ว +1

    bam. love it homie. keep it up

  • @geo1997jack
    @geo1997jack 3 ปีที่แล้ว

    I did not understand what that 16000 value means or how it helps us. Could you please clarify? Everything else was crystal clear :)

    • @statquest
      @statquest  3 ปีที่แล้ว +1

      It's used as a measure of the relationship between two points. Once we calculate the relationships between all of the points, they are used in a method similar to Gradient Descent to find the optimal classifier.

  • @MohitSharma-xy7xc
    @MohitSharma-xy7xc ปีที่แล้ว

    Guys, at 06:45 what do I do with the value of 16,002.25?? Also, what does a relationship between two points mean?

    • @statquest
      @statquest  ปีที่แล้ว

      The algorithm that finds the optimal Support Vector Classifier uses that number. The algorithm is iterative, like Gradient Descent. So, maybe think of that value as sort of like part of the derivative that we need to find a step size.

    • @MohitSharma-xy7xc
      @MohitSharma-xy7xc ปีที่แล้ว +1

      Thank you for the reply Josh, that cleared some doubts for me

  • @shivakiranreddy4654
    @shivakiranreddy4654 2 ปีที่แล้ว

    Hi Josh,
    I couldn't get how 16002.25 will help us in drawing the Support Vector Classifier.
    In the comments below you mentioned: "In some sense the 'relationships' are similar to transforming the data to the higher dimension and calculating the distances between data points."
    Even this explanation did not help. If 16002.25 is one of the 2-dimensional relationships that we need to solve for the support vector classifier, what is the other one? How do we get the classifier?

    • @statquest
      @statquest  2 ปีที่แล้ว

      Are you familiar with Gradient Descent? th-cam.com/video/sDv4f4s2SB8/w-d-xo.html SVMs use a different algorithm, but the idea is similar, and you can think of the numbers, like 16002.25 as values that the algorithm is trying to optimize.

  • @Kiddzzvideos
    @Kiddzzvideos 4 ปีที่แล้ว +1

    To find the values of r and d, can we use GridSearchCV as well?

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      Yes. GridSearchCV is just a way to do CV.
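
As a rough sketch of what that looks like with scikit-learn (where the polynomial kernel is written (gamma * a·b + coef0)^degree, so coef0 plays the role of r and degree plays the role of d; X_train and y_train are placeholders for your own data):

    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV

    param_grid = {"degree": [2, 3, 4],        # candidate values of d
                  "coef0": [0.0, 0.5, 1.0]}   # candidate values of r
    search = GridSearchCV(SVC(kernel="poly"), param_grid, cv=5)
    search.fit(X_train, y_train)              # cross validation over all combinations
    print(search.best_params_)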

  • @khanhtruong3254
    @khanhtruong3254 4 ปีที่แล้ว

    You mentioned "We need all pairs relationship in high dimensional space to find the Support Vector Classifier". But how? For example, we have 10 data points in 1-dimensional space (i.e. 10 scalars), each of them is either Red or Green. Suppose we use Polynomial Kernel with d=2. So we would end up with 10C2 pairs relationship. From these 10C2 numbers, how can we define the Support Vector Classifier?

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      Explaining exactly how the kernel trick works and is optimized would take a whole video on its own. This video was just intended to help understand what the polynomial kernel does.

    • @khanhtruong3254
      @khanhtruong3254 4 ปีที่แล้ว

      @@statquest Thank you very much. I was afraid that point was too obvious to explain. Your confirmation gives me back confidence :D. Big fan of ya.

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      @@khanhtruong3254 Cool. The process for converting the dot products into an optimal classifier is iterative and not a lot different from Gradient Descent, but it's different enough that I'd have to go into all kinds of details.

  • @sornamuhilan.s.p
    @sornamuhilan.s.p 4 ปีที่แล้ว +1

    Josh Starmer, you are a genius, sir!!

    • @statquest
      @statquest  4 ปีที่แล้ว

      Thank you! :)

  • @dimitrismarkopoulos3964
    @dimitrismarkopoulos3964 2 ปีที่แล้ว

    First of all, congratulations! Your videos are super explanatory! One question: does the equation of the polynomial kernel always have the same form?

    • @statquest
      @statquest  2 ปีที่แล้ว +1

      As far as I know. However, the variables might have different names.

  • @harshitamangal8861
    @harshitamangal8861 4 ปีที่แล้ว +3

    Hi Josh, the explanation is amazing. I had a question: you said that the equation (a*b + r)^d is used for finding the relationship between two points; how is this relationship then used to find the Support Vector Classifier?

    • @statquest
      @statquest  4 ปีที่แล้ว +2

      Unfortunately the details of how it is used would require a whole video and I can't cram it into a comment. However, making the video is on the to-do list.

  • @alternativepotato
    @alternativepotato 3 ปีที่แล้ว +1

    i love u my man you really are a life saver. Just because of that i am gonna buy a tshirt

    • @statquest
      @statquest  3 ปีที่แล้ว

      BAM! Thank you very much! :)

  • @abrahamjacob7360
    @abrahamjacob7360 4 ปีที่แล้ว

    Josh, this is a great video. One question on the polynomial kernel derivation. So the original problem was to find a classification point for drug usage that cures or doesn't cure the disease. When we increased the value of d to 2, you mentioned it introduced a second dimension. I understood how squaring the value helped to find a better marginal classifier line, but ideally there is no meaning to the y-axis here, right? Because the case still remains the same: we are just finding whether the drug usage had a positive or negative impact. We could still use the y-axis to determine its efficacy, but if we increase the value to 3, what would the z-axis represent here? Sorry if the question was confusing.

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      The new dimensions don't mean anything at all - they are just extra dimensions that allow us to curve and bend the data so that we can separate it. The more dimensions, the more we can curve and bend the data.

  • @digitalzoul57
    @digitalzoul57 3 ปีที่แล้ว

    Hi StatQuest. You said 'a' and 'b' are two different observations; does this mean that k(a, b) depends on the number of classes? For example, if I have 4 classes, does it mean k(a, b, c, d)?

    • @statquest
      @statquest  3 ปีที่แล้ว

      I'm not sure how this works with more than 2 classes. Usually when there are more than 2 classes, people create one classifier per class and do 1 vs all other classification. So each classifier is still only separating 2 classes.

  • @pratyanshvaibhav
    @pratyanshvaibhav ปีที่แล้ว

    Respected Josh sir, thank you for such an amazing explanation. Sir, please help me, I have a doubt: will we take the dot products for every pair of points, like the first red point with all the green points and so on, or will we take the first red point with the first green point and so on?

  • @thawinhart-rawung463
    @thawinhart-rawung463 2 ปีที่แล้ว +1

    Good job Josh

  • @shahbazsiddiqi74
    @shahbazsiddiqi74 5 ปีที่แล้ว +1

    waited too long... Thanks a ton

  • @sabbirakhand7120
    @sabbirakhand7120 4 ปีที่แล้ว

    After choosing the values of r and d by cross validation, we get the value 16002.25. But how do we use this value to determine the high-dimensional relationship??
    This video was really helpful for understanding the topic despite me being from a different background. Thanks.

    • @statquest
      @statquest  4 ปีที่แล้ว

      The actual method for finding the classifier would require a whole video on its own. It's like gradient descent, but with a few important differences.

    • @jaideepkukkadapu2600
      @jaideepkukkadapu2600 4 ปีที่แล้ว

      @@statquest Please make a video on that, it will be very helpful for us.

  • @suyashmishra8821
    @suyashmishra8821 2 ปีที่แล้ว

    Hello sir,
    In the above example it was clear that the new transformed axes were a, a^2, but the mechanism of how the classifier draws the line wasn't clear.
    Do we get the equation of that classification line from the kernel function, the dot product, or something related?

    • @statquest
      @statquest  2 ปีที่แล้ว

      The output of the kernel function (the dot-products) is fed into an iterative algorithm (similar to gradient descent) to find the optimal support vector classifier.

    • @RowoonSamshu
      @RowoonSamshu 7 หลายเดือนก่อน

      I don't understand why we need to calculate the dot products at all.
      I have a basic idea that the loss function for svm includes calculation of dot products between the observations but I don't understand the intuition behind it.
      i.e. what actually dot products (similarities) between observations do in finding the hyperplane that classifies the observations.
      And also they say we have to minimize |w| to get the optimal hyperplane but what is the geometrical intuition behind minimizing the |w|

  • @yancheeyang2918
    @yancheeyang2918 3 ปีที่แล้ว

    Question: the video ends at getting the relationship. I wonder what it means and how we can get the optimal hyperplane from here? Thanks!

    • @statquest
      @statquest  3 ปีที่แล้ว

      It's an iterative method that is like gradient descent.

  • @harishh.s4701
    @harishh.s4701 3 ปีที่แล้ว

    Hi,
    Thanks a lot for your content. It is very easy to understand and I appreciate your way of explaining things. I had one doubt: can you please explain how cross-validation helps to determine the optimal degree of the polynomial kernel used in SVMs?

    • @statquest
      @statquest  3 ปีที่แล้ว

      I do that in this video: th-cam.com/video/8A7L0GsBiLQ/w-d-xo.html

  • @berknoyan7594
    @berknoyan7594 5 ปีที่แล้ว +1

    Hi Josh, thanks for the video. You are helping me a lot. I have just one question: what do you mean by "high dimensional relationship"? Because it can be achieved by any 2 numbers whose multiplication result is 126, of which there are infinitely many. It's just a dot product of two 3-dimensional data points. Cross Validation uses the misclassification rate to select the best r and d as far as I know. Does CV use these numbers in any calculation?

    • @statquest
      @statquest  5 ปีที่แล้ว +1

      Cross Validation does not use these high-dimensional relationships. Instead, the algorithm that finds optimal fits, given constraints (like the number of misclassifications you will allow) uses them. Although the dot product seems like it would be too simple to use, it has a geometric interpretation related to how close the points are to each other. For more details, check out the Wikipedia article: en.wikipedia.org/wiki/Dot_product

  • @moisesdiaz9852
    @moisesdiaz9852 3 ปีที่แล้ว

    5:27 But what happens if your classes are not balanced?

    • @statquest
      @statquest  3 ปีที่แล้ว +1

      This is a general problem, and here are some ideas on how to deal with it: th-cam.com/video/iTxzRVLoTQ0/w-d-xo.html

    • @moisesdiaz9852
      @moisesdiaz9852 3 ปีที่แล้ว

      @@statquest Awesome thanks!

  • @hamidomar3618
    @hamidomar3618 2 ปีที่แล้ว

    Hey, great video, thanks!
    What happens after the transformation though? I mean, how does the final result, i.e. a scalar corresponding to the relationship between each pair of observations, help in identifying an optimally classifying hyperplane?

    • @statquest
      @statquest  2 ปีที่แล้ว

      The value is used in a way similar to how loss values are used in Gradient Descent. There is an iterative algorithm that uses the values to optimize the fit.

  • @adarshprusty8280
    @adarshprusty8280 หลายเดือนก่อน

    In this example, why r = 1/2 and d = 2, can you explain? I got what r and d are, but I did not get why they are 1/2 and 2.

    • @statquest
      @statquest  หลายเดือนก่อน

      Those are user defined parameters, which is why I also illustrate the effects of choosing alternative values at 3:41. Typically you use cross validation to find the best values to use.

  • @redaouazzani7120
    @redaouazzani7120 5 ปีที่แล้ว

    Great explanation ! But what are the math reasons to choose RBF Kernel or Polynomial Kernel ? It depends on what ?

    • @statquest
      @statquest  5 ปีที่แล้ว

      Usually people just start with the RBF kernel and see how well it performs. If it doesn't do well, they might try the polynomial kernel.

  • @yashmishra12
    @yashmishra12 4 ปีที่แล้ว

    So what do we do after we have the value for every green and red dot? Draw the hyperplane between the pair which had the maximum value (doesn't make a lot of sense though)?

    • @statquest
      @statquest  4 ปีที่แล้ว

      SVMs use an iterative procedure to find a hyperplane that has the largest margin given the number of allowed misclassifications.

    • @yashmishra12
      @yashmishra12 4 ปีที่แล้ว

      @@statquest No, suppose we have 16002.25 after applying kernel for a=9 and b=14. Then what? How is the SVC made with a value of 16,000 for values of 9 and 14...for this current case (I understand that a lot of SVC will be made and checked before deciding upon the final SVC)

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      @@yashmishra12 Unfortunately, the answer to your question is way out of the scope of the space provided for comments. However, for more information that have all of the details, check out: www.cs.cmu.edu/~epxing/Class/10701-08s/recitation/svm.pdf

    • @yashmishra12
      @yashmishra12 4 ปีที่แล้ว +1

      @@statquest Thanks

  • @eric752
    @eric752 2 ปีที่แล้ว

    One suggestion: if, at the beginning, all the topics were listed in a logical way, it would be even better. Big thanks for the videos, really appreciate it 🙏

    • @statquest
      @statquest  2 ปีที่แล้ว

      Thanks!

    • @eric752
      @eric752 2 ปีที่แล้ว +1

      @@statquest thank you

  • @hrdyam865
    @hrdyam865 4 ปีที่แล้ว +1

    Thanks for the videos 😊, Can we use SVM for multinomial classification?

    • @statquest
      @statquest  4 ปีที่แล้ว

      I believe you just create one SVM per classification, and each SVM compares one classification to all the others (i.e. a sample either has that classification or not).

  • @ruowentan2771
    @ruowentan2771 5 ปีที่แล้ว

    Sorry, why are the green dots below the red dots in 2D? If you compute dosage^2, the green dots should be above the first few red dots, right?

    • @statquest
      @statquest  5 ปีที่แล้ว

      What time in the video are you referring to?

  • @Beenum1515
    @Beenum1515 5 ปีที่แล้ว

    What I understood is that the function of the kernel is to transform the data into a higher dimension so that there exists a classifier in that dimension which separates those points. Right?
    If yes, then why not just square each value instead of feeding each pair into the kernel function?

    • @statquest
      @statquest  5 ปีที่แล้ว +1

      The kernel provides us with the high-dimensional relationships between points without actually doing the transformation. This is the "kernel trick" and it saves time and makes it possible to determine the relationships between points in infinite-dimensions (which is what the radial kernel does).

  • @repackgamers5191
    @repackgamers5191 ปีที่แล้ว

    Where did you get the values (9, 14) at 5:35?

    • @statquest
      @statquest  ปีที่แล้ว

      Those are the x-axis coordinates for those points and correspond to different dosages of a drug.

  • @newbie8051
    @newbie8051 ปีที่แล้ว

    I didn't actually get where these are used; I've read about the stuff you explained in the first video regarding SVMs.
    Any supporting text you'd recommend that I should read to grasp this better?
    Like, I understood the formula, what it is for, what the terms a, b, r, d are and all that, but I feel something is missing.

    • @statquest
      @statquest  ปีที่แล้ว

      The idea of SVMs is to move the data into a higher dimension so that we can separate it with a support vector classifier. The Polynomial Kernel is one way to move the data into a higher dimension. For more details, see: th-cam.com/video/efR1C6CvhmE/w-d-xo.html Specifically - keep watching until the end because we'll touch on how the polynomial kernel works in that setting.

    • @newbie8051
      @newbie8051 ปีที่แล้ว

      @@statquest Ooooo, so this (using the polynomial kernel) is a technique for finding out what dimension (parameter) is the best for which we can draw a hyperplane to separate different classes ?
      Thanks for replying sir :)

    • @statquest
      @statquest  ปีที่แล้ว

      @@newbie8051 Yep! That's the idea.

  • @geogeo14000
    @geogeo14000 3 ปีที่แล้ว

    Great video, thank you. But I don't understand why the kernel with the dot product is more computationally efficient than transforming the data into 2 dimensions, because in any case we have to square a and b, the observations (i.e. the data), and that is transforming the data into 2 dimensions, no? And then we do the dot product... So I'm missing something here but I can't see what. Thank you very much!

    • @statquest
      @statquest  3 ปีที่แล้ว +1

      I think in 2-d, it is not very efficient, but SVM can work in infinite dimensions, and I believe it is more efficient in that case.

    • @geogeo14000
      @geogeo14000 3 ปีที่แล้ว

      @@statquest Sorry, my question was not really clear. What is the data transformation exactly? Is it raising the data/feature to the square? If this is the case, why is the SVM more efficient when we still need to compute the observations/data squared, i.e. a² and b²? Again, sorry if there is a misunderstanding on my part, I'm new to maths and ML.

    • @statquest
      @statquest  3 ปีที่แล้ว +1

      @@geogeo14000 It's not that the transformation itself is inefficient, it's finding the optimal Support Vector Classifier. This is done using an iterative procedure that operates on just the dot products, rather than the distances in higher dimensions.

    • @geogeo14000
      @geogeo14000 3 ปีที่แล้ว +1

      @@statquest ok thank you very much.

  • @The_Mashrur
    @The_Mashrur 2 ปีที่แล้ว

    When you say relationships between observations, what exactly do you mean? You didn't really go over how such relationships allow you to find an SVC in the higher dimension?

    • @statquest
      @statquest  2 ปีที่แล้ว +2

      In the case of SVM, the relationship is a rather abstract metric of distance.

  • @yokeshnsr1976
    @yokeshnsr1976 5 ปีที่แล้ว

    How do you transform 2D data to 3D? How will you get the Z-coordinate?

    • @statquest
      @statquest  5 ปีที่แล้ว +1

      If you remember how we moved from 1-D to 2-D (at 2:54)...then if you start with 2-D data then 'a' and 'b' are 2-D vectors. Now set r=1/2 and d=2 and you will get the original 2-D vectors, plus the values in the 2-D vectors, squared. So this will move you from 2-D to 4-D. So I'm not sure you can go from 2-D to 3-D directly.

    • @yokeshnsr1976
      @yokeshnsr1976 5 ปีที่แล้ว +1

      @@statquest Okay, thanks for the fast reply!

  • @krzysztofkolmus6936
    @krzysztofkolmus6936 5 ปีที่แล้ว

    In a nutshell, why do you need to calculate dot product or does it require a separate video?

    • @statquest
      @statquest  5 ปีที่แล้ว

      That would be a whole different video.

  • @lonandon
    @lonandon 3 ปีที่แล้ว

    What does the result of the dot product mean when it represents the relationship of two dots?

    • @statquest
      @statquest  3 ปีที่แล้ว

      It's the input to an iterative algorithm, much like gradient descent, that can find the optimal classifier.

  • @abhishekanand5974
    @abhishekanand5974 3 ปีที่แล้ว

    What exactly is meant by relationships between observations?

    • @statquest
      @statquest  3 ปีที่แล้ว

      It's some metric of distance.

  • @keltonwang4834
    @keltonwang4834 4 ปีที่แล้ว

    One doubt: though you multiply a and b by sqrt(2), shouldn't you calculate a^2 and b^2 using the original a and b rather than sqrt(2)*a and sqrt(2)*b on the y-axis?

    • @statquest
      @statquest  4 ปีที่แล้ว +1

      Yes, and that is what is done in the video. We use the original a and b values to calculate the y-axis coordinates. However, the x-axis values are still sqrt(2)*a and sqrt(2)*b, so the values have coordinates (sqrt(2)*a, a^2).

    • @keltonwang4834
      @keltonwang4834 4 ปีที่แล้ว +1

      @@statquest Awesome!

  • @raktimnaskar2333
    @raktimnaskar2333 ปีที่แล้ว

    Can anyone explain to me how the dot products of the feature vectors can find the separating hyperplane?

    • @statquest
      @statquest  ปีที่แล้ว

      First, think of a dot product as a type of measure of similarity (the larger the absolute value, the more similar) and that similarity can be a proxy for closeness. Then those measures are plugged into an iterative algorithm, somewhat like gradient descent (see: th-cam.com/video/sDv4f4s2SB8/w-d-xo.html ), to find the optimal classifier.

  • @hamedbahramiyan
    @hamedbahramiyan 3 ปีที่แล้ว

    I didn't get how the kernel relationship value is used to transform the data. In the theory, the kernel relationship is a square matrix containing kernel values for all pairs of samples. You calculated one of these at the end, but how do we use them to transform the initial data? What is the function or algorithm?

    • @statquest
      @statquest  3 ปีที่แล้ว

      Explaining how the kernel values are used would require a bunch more videos. However, the quick and easy version is that it uses an iterative algorithm that is similar (but not the same) as gradient descent to find the optimal classifier.

    • @hamedbahramiyan
      @hamedbahramiyan 3 ปีที่แล้ว

      @@statquest Thanks a lot. You have shown a manual transformation in 3:30. My question is what are 'a' and 'b'? any pair of samples regardless of their labels? if so, how it works when a=x1 is used in different dot products (first with b=x2 and then with b=x4)? I hope I could clarify my point

    • @statquest
      @statquest  3 ปีที่แล้ว

      @@hamedbahramiyan If a=x1, then its 2-D coordinates will be a, a^2, regardless of what b is set to.

    • @hamedbahramiyan
      @hamedbahramiyan 3 ปีที่แล้ว +1

      @@statquest thank you

  • @gunamrit
    @gunamrit 10 หลายเดือนก่อน

    Hello, at 6:04 you said that why it is a dot product is beyond the scope of this video... Can you point me to materials that can help me find out why it's a dot product and not a cross product? Maybe a book will do.
    Thanks

    • @statquest
      @statquest  10 หลายเดือนก่อน

      Just google "support vector machine optimization"

  • @UjjwalGarg09
    @UjjwalGarg09 5 ปีที่แล้ว +1

    please make a video on this 5:58

    • @statquest
      @statquest  5 ปีที่แล้ว

      It's on the to-do list.

  • @madhuvarun2790
    @madhuvarun2790 3 ปีที่แล้ว

    could you please provide an example where 3d data is converted to 4d data using polynomial kernel? I am stuck

    • @statquest
      @statquest  3 ปีที่แล้ว

      It's very difficult to draw something in 4-dimensions.

    • @madhuvarun2790
      @madhuvarun2790 3 ปีที่แล้ว

      @@statquest okay. How about 2d to 3d?

  • @vianadnanferman9752
    @vianadnanferman9752 2 ปีที่แล้ว

    Thanks for the amazing video. Please, if I have 100 samples for training, each with 5 features, and I try to apply a degree-2 polynomial kernel, how is my data converted to a higher dimension? In other words, what do we mean by the two input vectors in the polynomial equation?

    • @statquest
      @statquest  2 ปีที่แล้ว +1

      In this case, the vectors for each point, a and b, contain values for all 5 features and the arithmetic shown in the video is applied to the values for all 5 features.

    • @vianadnanferman9752
      @vianadnanferman9752 2 ปีที่แล้ว

      @@statquest Thanks Dr Josh. Actually, my question is about how to choose the vectors themselves. As I understand from your video, I guessed that the relationship is taken between each point and all the others, so the resulting points are greater in number than before! And the feature dimensions are already enlarged! Am I right?

    • @statquest
      @statquest  2 ปีที่แล้ว

      @@vianadnanferman9752 Forgive me if I'm just repeating what you wrote, but we calculate all of the different relationships between all of the points. The process of calculating the relationships gives us the features in the higher dimensional space.
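
A minimal sketch of that last point (the feature values below are invented): when each observation has 5 features, a and b are simply 5-element vectors, and the kernel still boils down to one number per pair.

    import numpy as np

    a = np.array([1.0, 2.0, 0.5, 3.0, 1.5])   # one observation, 5 features
    b = np.array([0.5, 1.0, 2.0, 1.0, 0.0])   # another observation, 5 features
    r, d = 1.0, 2
    k_ab = (np.dot(a, b) + r) ** d             # the pair's higher-dimensional relationship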