Singular Value Decomposition : Data Science Basics

  • Published Nov 18, 2024

Comments • 73

  • @VR-fh4im
    @VR-fh4im 4 years ago +18

    I looked all over for SVD for 3 hours, and your 10-minute video explained it so nicely. Thanks.

    • @ritvikmath
      @ritvikmath 4 years ago +1

      no problem! happy to help

  • @JD-jl4yy
    @JD-jl4yy 3 years ago +14

    You have a gift for explaining things clearly! This is so much better than the 5 SVD videos I watched prior to this haha

    • @ritvikmath
      @ritvikmath 3 years ago +1

      Glad it was helpful!

  • @SupremeChickenx
    @SupremeChickenx 4 years ago +8

    bros who make youtube math tutorials are the real MVPs

  • @saidisha6199
    @saidisha6199 3 years ago +10

    Just an amazing explanation. I had a blurry understanding of SVD after taking a class, and your video made the concept absolutely clear for me. Thanks a lot.

    • @ritvikmath
      @ritvikmath 3 years ago +1

      Glad it helped!

  • @SLopez981
    @SLopez981 11 months ago

    Been watching your videos for months now. I very much enjoy how accessible your videos are for someone outside of data science. I generally like watching math videos from non-math educators because they strike a great balance in their explanations.
    One thing I really enjoy about your videos is that at the end you bring it back to your field and why this is useful in your world.
    The reduction in entries for storage or for further calculations makes the real-world application very tangible.

  • @EdouardCarvalho82
    @EdouardCarvalho82 4 years ago +10

    Absolutely love your videos! Just to clear up possible confusion for learners at 4:05: V^T V = I holds because of orthonormality, not merely independence, which is only a consequence (see the quick check after this thread). Great job!

    • @ritvikmath
      @ritvikmath 4 years ago +1

      Thanks for that!
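
A quick numerical check of the orthonormality point above, as a minimal numpy sketch (the random matrix is just an illustration, not the video's example):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 3))

# The V returned by SVD has orthonormal rows/columns, so V^T V = I.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
print(np.allclose(Vt @ Vt.T, np.eye(3)))   # True: orthonormal

# Merely independent (non-orthonormal) columns do NOT satisfy this.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                 # independent columns
print(np.allclose(A.T @ A, np.eye(2)))     # False
```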

  • @subandhu39
    @subandhu39 2 years ago +1

    I think this video is amazing. I have been wanting to watch this channel's videos for the past 2 years but never could, because I lacked the basic knowledge to gain from the explanations here. I was taught this concept very poorly in class, and I immediately knew I could finally come here. The way this ties in with PCA (if I am correct) and the ease with which the mathematical kinks were explained were phenomenal. Glad to finally benefit from this channel. Thanks a ton.

  • @scorpio19771111
    @scorpio19771111 2 years ago

    Thank you. Short video that packs all the right punches.

  • @TrangTran-dw2vw
    @TrangTran-dw2vw 3 years ago +4

    Once you learn PCA and revisit this video, everything really makes sense!!

  • @chunchen3450
    @chunchen3450 4 years ago +3

    I really like your videos! 👍 Methods are explained very clearly and concisely, and explaining their applications at the end also helps a lot in remembering what each method really does. The length of the video is also perfect! Thanks, and I hope to see more videos from you.

  • @haiderali-wr4mu
    @haiderali-wr4mu 1 year ago

    You explained the most important things about SVD. Thank you.

  • @amritpalsingh6440
    @amritpalsingh6440 3 years ago +1

    I was struggling to understand the concept in the class and this video made it very clear for me. Thank you so much. Keep them coming :)

  • @AkshayRakate
    @AkshayRakate 2 years ago

    Thanks so much for clarifying the concepts. Now I can connect the dots on why we use SVD in recommender systems. 👍

  • @nujelnigsns5376
    @nujelnigsns5376 3 years ago

    I was really struggling with linear algebra; your videos are really a saviour.

  • @hameddadgour
    @hameddadgour 2 years ago

    Wow, now I have a real appreciation for SVD :)

  • @matthewchunk3689
    @matthewchunk3689 4 years ago +1

    Very relevant subject right now. Thanks

  • @bittukumar-rv6rx
    @bittukumar-rv6rx several months ago

    Thanks man for posting! Loved the explanation!

  • @rugahun
    @rugahun 2 years ago

    I'm glad about the note at 4:06; I freaked out when that was said. GREAT VIDEO!!

  • @msrizal2159
    @msrizal2159 4 years ago +1

    Thanks Ritvik, I'm a PhD candidate from Malaysia. Your videos are helping me a lot.

    • @ritvikmath
      @ritvikmath 4 years ago

      It's my pleasure!

  • @hamade7997
    @hamade7997 3 years ago +2

    I like this explanation too, thank you. Wish I had discovered you earlier during my data analysis course, but oh well :P

  • @emptyjns
    @emptyjns 2 years ago +3

    The columns of M, before SVD, could mean features. Do the columns of U and V (the left and right singular vectors) carry any physical meaning? The video keeps two singular values. How many do people usually keep?
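
On the "how many do people keep" question: a common heuristic (a standard rule of thumb, not something stated in the video) is to keep the smallest number of singular values whose squared sum captures most of the total energy. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(100, 10))

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Smallest k capturing 90% of the energy (sum of squared singular values).
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.90)) + 1

# Rank-k approximation built from the top k singular triplets.
M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(k, np.linalg.norm(M - M_k) / np.linalg.norm(M))  # k and relative error
```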

  • @protosstassadar20
    @protosstassadar20 8 days ago

    I think there is an error in the decomposition: the sigma matrix should be the same shape as M, and U and V should be square matrices because they are orthogonal matrices. For example, for U, the columns (or the rows) should form an orthonormal basis, and it should be of exactly that dimension because, for every matrix, the row space and column space both have dimension r, the rank of the matrix, as the fundamental theorem of linear algebra states. Or, for example, U*U^T = U^T*U = Id_m.
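
Both conventions the comment contrasts do exist: the "full" SVD has square orthogonal U and V with a rectangular Σ shaped like M, while the reduced ("thin") form, which the video appears to draw, trims them. A small numpy sketch of the shapes (the 5×3 matrix is an arbitrary example):

```python
import numpy as np

M = np.random.default_rng(2).normal(size=(5, 3))   # m=5, n=3

# Full SVD: U is m x m, Vt is n x n, and U U^T = U^T U = I_m.
U, s, Vt = np.linalg.svd(M, full_matrices=True)
print(U.shape, s.shape, Vt.shape)                  # (5, 5) (3,) (3, 3)
print(np.allclose(U @ U.T, np.eye(5)))             # True

# Reduced ("thin") SVD: U is m x n, so U^T U = I_n but U U^T != I_m.
U_r, s_r, Vt_r = np.linalg.svd(M, full_matrices=False)
print(U_r.shape, Vt_r.shape)                       # (5, 3) (3, 3)
```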

  • @Andynath100
    @Andynath100 4 years ago +1

    Thanks for the regular quality content!!

  • @Mertyy3
    @Mertyy3 3 years ago +1

    Unbelievable that it has only 9k views... The video is great!

    • @ritvikmath
      @ritvikmath 3 years ago

      Glad you liked it!

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 4 years ago

    Your explanation is awesome

  • @Fat_Cat_Fly
    @Fat_Cat_Fly 4 years ago +1

    Super well explained.

  • @IrfanPeer
    @IrfanPeer 2 years ago

    marker flip on point

  • @iWillieDR
    @iWillieDR 3 years ago

    Great vid man, keep up the good work!

  • @juanguang5633
    @juanguang5633 1 year ago +1

    Can we say that using SVD we are extracting the significant features?

  • @lingtan6742
    @lingtan6742 2 years ago

    Super clear! Thank you so much!

  • @srs.shashank
    @srs.shashank 10 months ago

    Thanks! The applications I can think of right away are PCA and matrix factorization. What could be other possible applications?

  • @sarvagyagupta1744
    @sarvagyagupta1744 1 year ago

    This is really, really informative. Just one question: what are the sigmas? Are they eigenvalues from the SVD or something else? How did you get 2 and 3 in your example?
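
On the sigma question: the singular values are the square roots of the eigenvalues of M^T M (equivalently of M M^T), which is presumably where values like 2 and 3 come from in a worked example. A quick numpy check on an arbitrary matrix:

```python
import numpy as np

M = np.random.default_rng(3).normal(size=(4, 3))

s = np.linalg.svd(M, compute_uv=False)        # singular values, descending
eig = np.linalg.eigvalsh(M.T @ M)[::-1]       # eigenvalues of M^T M, descending
eig = np.clip(eig, 0, None)                   # guard against tiny negative round-off

print(np.allclose(s, np.sqrt(eig)))           # True
```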

  • @Josh-di2ig
    @Josh-di2ig 2 years ago

    Thanks for a great video. Do you also have a video on how to find those lambda values?

  • @Actanonverba01
    @Actanonverba01 4 years ago +1

    I usually hear SVD used synonymously with PCA. The way you described it, SVD is like a compression of the data, but how is that different from PCA?

  • @qiushiyann
    @qiushiyann 4 years ago +1

    This is really helpful

  • @kancherlapruthvi
    @kancherlapruthvi 3 years ago

    The explanations are good, but for linear algebra the best videos are from Prof. Gilbert Strang.

  • @mohamedhossam1717
    @mohamedhossam1717 3 years ago

    I don't understand how to get the independent vectors of a matrix to find its rank, or what is meant by an independent vector.

  • @sanjeetwalia5077
    @sanjeetwalia5077 4 years ago +1

    Hi Ritvik, thanks for the very explanatory video. Really very helpful for understanding. However, when you say that you achieved a 75% computation reduction in this case, was it really because we assumed sigma(3) onwards to be approximately equal to zero? Does this assumption stray from reality, or is this how it always happens? Eager to hear your thoughts. Happy to learn from this video.

    • @sanjeetwalia5077
      @sanjeetwalia5077 4 years ago

      Also, it would be great if you could do a Moore-Penrose pseudoinverse video as well. TIA

  • @kurtji8170
    @kurtji8170 1 year ago

    Hi Ritvik, what if the data is well constructed and there are 10 significant non-zero singular values? What can we do about such data?

  • @saraaltamirano
    @saraaltamirano 4 years ago +16

    I am a big fan of your videos, but I think I liked the old format better, where you do the math step-by-step and write it on the whiteboard :/

    • @ritvikmath
      @ritvikmath 4 years ago +10

      Thanks for the comment! I was debating whether to go back to the marker and paper style where I would write more stuff in real time. This suggestion is very helpful to me.

    • @saraaltamirano
      @saraaltamirano 4 years ago

      @@ritvikmath thanks for the reply! I am especially grateful for your PCA math video since I am currently doing research with a functional data analysis algorithm that uses multivariate functional PCA and I've looked EVERYWHERE for an easy explanation. Your PCA video (and the required videos) is hands down the best explanation out there. I am forever grateful :-)

    • @teegnas
      @teegnas 3 years ago

      @@saraaltamirano I initially had the same PoV, but after consuming the whiteboard-style content for a while I've gotten used to it, and recently he has started moving away from the board at the end of the video so that we can pause and ponder it.

    • @kristofmeszaros5505
      @kristofmeszaros5505 3 years ago +2

      I greatly prefer the current style: no time spent writing, so he can concentrate more on explaining.

  • @yingguo3683
    @yingguo3683 3 months ago

    All the u's are linearly independent of each other, which means that if we multiply the matrix U with its transpose, we get the identity matrix. I don't get why linear independence leads to orthonormality?

    • @yingguo3683
      @yingguo3683 3 months ago

      Forget it, I just saw the *note*.

  • @abhyuditgupta7652
    @abhyuditgupta7652 2 years ago

    Thank you so much!

    • @abhyuditgupta7652
      @abhyuditgupta7652 2 years ago

      I have always thought about such applications of matrices but never worked out how. The beauty of maths.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    But don't orthonormal matrices have to be square?

  • @kickfloeb
    @kickfloeb 3 years ago

    I feel like such a baby because I laughed every time you said u p and sigma p. Anyway, great video as always :).

  • @_kage_
    @_kage_ 2 months ago

    How do we get U and V?

  • @robertleo3561
    @robertleo3561 3 years ago +1

    Wait, what? At 4:06 you said that matrices with full rank always have their transposes as inverses?

    • @ritvikmath
      @ritvikmath 3 years ago

      Almost; it's a small but important distinction.
      A full-rank matrix has columns that are linearly independent of each other.
      An orthogonal matrix (like the ones in this video) is full rank but also satisfies the property that its rows and columns are orthonormal: any pair of different rows/columns has dot product 0, and each row/column has unit length.
      So for an orthogonal matrix, like you said, its transpose is its inverse. But that's not generally true for any full-rank matrix. Looking back, I did say the wrong thing, and I'll go correct it in the comments. Thanks for pointing it out!
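
A tiny numpy illustration of this distinction, using an arbitrary full-rank matrix and a rotation as the orthogonal example:

```python
import numpy as np

# Full rank but not orthogonal: the transpose is NOT the inverse.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
print(np.allclose(A.T, np.linalg.inv(A)))      # False

# Orthogonal (a rotation): the transpose IS the inverse.
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(Q.T, np.linalg.inv(Q)))      # True
```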

  • @mohammadmuneer6463
    @mohammadmuneer6463 3 years ago

    Awesome!!

  • @yulinliu850
    @yulinliu850 4 years ago

    Awesome 👍

  • @erlint
    @erlint 1 year ago

    Isn't this the thin form of SVD? And aren't you using the numerical rank in the practical example? Because A = U [Σ; 0] V^T, but since Σ is n×n and the 0 block is (m−n)×n, we have U = [U_1 U_2], where U_1 is m×n and U_2 is m×(m−n), and thereby A = U_1 Σ V^T. Also, the columns of U_2 will be in the null space of A^T (?). And you skip to the rank-truncated matrix instead of explaining how u_{r+1}, ..., u_m form a basis for N(A^T) and v_{r+1}, ..., v_n a basis for N(A).
    Also, I'm still unsure how the eigenvectors of A^T A and A A^T tell you the most important information in A. Are we projecting the data onto the eigenvectors like in PCA?
    The eigendecomposition and SVD videos are some of the most compact and understandable videos I have found on these topics; they made the link between the change of basis to the eigenbasis, calculating the linear transformation, and changing back to the original basis much clearer to me. Thanks.
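
On the last question: yes, that is the usual SVD-PCA link. If the data matrix is column-centered, the columns of V (eigenvectors of M^T M) are the principal directions, and projecting the data onto them gives the PCA scores U Σ. A hedged numpy sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
M = X - X.mean(axis=0)                  # column-centered data matrix

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Projecting the data onto V's columns gives the PCA scores,
# which equal U scaled by the singular values: M V = U Σ.
scores = M @ Vt.T
print(np.allclose(scores, U * s))       # True
```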

  • @Pukimaxim
    @Pukimaxim 4 years ago

    Hi Ritvik, I might have missed it in your video, but how do you get sigma?

  • @SoumyajitGanguly_ALKZ
    @SoumyajitGanguly_ALKZ 4 years ago +1

    This is a ~75% reduction, from 1000 entries down to 222.

    • @ritvikmath
      @ritvikmath 4 years ago

      True! Thanks for pointing that out. I think I meant that now you only need around 25% of the storage.
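
For anyone checking the arithmetic: the numbers are consistent with a 100×10 matrix (an assumption, but up to transposing it is the only shape giving both 1000 entries and 222 after a rank-2 truncation). A short sketch of the count:

```python
# Rank-k truncated SVD of an m x n matrix stores
# U_k (m*k entries) + sigma_k (k entries) + V_k (n*k entries).
m, n, k = 100, 10, 2                           # assumed shape of the video's example
full = m * n                                   # 1000 entries in M itself
truncated = m * k + k + n * k                  # 200 + 2 + 20 = 222 entries
print(full, truncated, 1 - truncated / full)   # 1000 222 0.778 -> roughly 75-78% saved
```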

  • @singnsoul6443
    @singnsoul6443 1 year ago

    tysm

  • @zigzagarmchair2367
    @zigzagarmchair2367 8 months ago

    OMG again!!!

  • @nine9nineteen193
    @nine9nineteen193 2 months ago

    bro, open your head, put the information in, and close it

  • @pixieofhugs
    @pixieofhugs 1 year ago

    Thank you! You have been a life saver 🛟

  • @xinyuan6649
    @xinyuan6649 2 years ago

    The best! This is a 101 on how to have fun with math in DS 🫰