ME 565 Lecture 27: SVD Part 1

Comments • 24

  • @muhammadsohaib681
    @muhammadsohaib681 4 years ago +6

    Dear Professor
    Many thanks for sharing this Lecture.
    Your teaching style is amazing.
    👍👍👍

  • @mohammadaminmousavi5011
    @mohammadaminmousavi5011 4 years ago +12

    You are a really amazing professor.
    Thanks a lot for sharing this content with students all around the world.
    Thanks again :)

  • @bibekdhungana2182
    @bibekdhungana2182 3 years ago

    Thank you for such an amazing lecture. Never thought that SVD could be this intuitive.

  • @charlielu05
    @charlielu05 3 years ago +1

    Being able to break down a difficult topic and explain it in an intuitive manner is an art that Professor Brunton has in spades. Outstanding lecture as usual.

  • @MrStudent1978
    @MrStudent1978 3 years ago +1

    Great lecture!! 👏👏👏

  • @poiuwnwang7109
    @poiuwnwang7109 3 years ago +1

    Super lecture!

  • @geogeo14000
    @geogeo14000 3 years ago +1

    One of the most insightful and interesting videos I've seen on this subject, thanks!

  • @ioannisgkan8930
    @ioannisgkan8930 1 year ago

    That’s a beautiful explanation of SVD sir

  • @knvrmnd
    @knvrmnd 3 years ago

    Thank you so much for sharing this sir

  • @heejuneAhn
    @heejuneAhn 3 years ago

    He interprets the v vectors of V as the primary modes for time trends (since they lie along the horizontal direction of the X matrix). Then the u vectors of U are the primary modes for companies. It is quite insightful. Then what is the s in S? And can the u's in U and the v's in V be independent or not?
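
A minimal numpy sketch of this reading of U, S, and V (an illustrative example; the company-by-time framing and the matrix sizes are assumed from the comment, not taken from the lecture's data):

```python
import numpy as np

# Hypothetical data matrix X: each row is one company, each column a time snapshot.
rng = np.random.default_rng(0)
n_companies, n_times = 30, 500
X = rng.standard_normal((n_companies, n_times))

U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Columns of U are "company" modes; rows of Vt (columns of V) are time-trend modes.
# S holds the singular values: non-negative weights that say how strongly each
# paired (u_i, v_i) mode contributes to X. The u's are mutually orthonormal, and
# so are the v's, so the modes are orthogonal rather than arbitrary.
print(U.shape, S.shape, Vt.shape)        # (30, 30) (30,) (30, 500)
print(np.allclose(U.T @ U, np.eye(30)))  # True: orthonormal company modes
```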

  • @marwanelghitany8875
    @marwanelghitany8875 4 years ago

    ♥♥♥

  • @sohanaryal
    @sohanaryal 2 years ago

    thank you

  • @nahuel3256
    @nahuel3256 4 years ago +1

    28:00 I have a question.
    How can you disregard the other matrices by arguing that σ_4 > σ_5 > ... > σ_n?
    What happens if u_k >> ... >> u_3 >> u_2 >> u_1 (for σ_k not equal to 0)
    and v_p* >> ... >> v_3* >> v_2* >> v_1* (for σ_p not equal to 0)?
    Wouldn't the matrices with higher-order indexes be greater than the first ones, regardless of σ_i?

    • @nahuel3256
      @nahuel3256 4 years ago

      OK, I was convinced when the professor plotted the singular values σ.
      But is there a formal explanation for this?

    • @chensong250
      @chensong250 4 years ago +1

      There is no guarantee that the singular values will come out in descending order, so the SVD is only unique up to swapping the order of the singular values (together with the corresponding columns of U and V). So what people do is find the decomposition first and then reorder everything in descending order. In this way the SVD is made unique.
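
A quick numpy check of this point (an illustrative sketch; the random matrix below is only a stand-in for the lecture's X):

```python
import numpy as np

# A random matrix standing in for the lecture's data matrix X.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))

# numpy already returns the singular values sorted in descending order.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
print(np.all(np.diff(S) <= 0))           # True: descending

# Keeping the first r terms gives the best rank-r approximation, and its error
# (in the 2-norm) equals the first discarded singular value (Eckart-Young).
# That is why terms with small sigma_i can be disregarded.
r = 4
X_r = U[:, :r] * S[:r] @ Vt[:r, :]
print(np.linalg.norm(X - X_r, 2), S[r])  # the two numbers agree
```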

  • @asdfasdfuhf
    @asdfasdfuhf 3 years ago +1

    16:50 What do you mean by "map a pair of vectors through a unitary matrix"? How do you map vectors through a unitary matrix?

    • @Eigensteve
      @Eigensteve 3 years ago +6

      Sorry for the jargon. "Mapping" a vector through a matrix just means multiplying the vector by the matrix to get another vector.
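
In numpy terms, that mapping is just a matrix-vector product; a small illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# The U (and V) from an SVD are unitary (orthogonal, since A is real).
U, S, Vt = np.linalg.svd(A)

x = np.array([1.0, 2.0, 3.0])
y = U @ x   # "mapping x through U" just means this matrix-vector product

# Unitary matrices rotate/reflect but never stretch, so lengths are preserved.
print(np.linalg.norm(x), np.linalg.norm(y))  # same value
```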

  • @akinakinlolu2519
    @akinakinlolu2519 1 year ago

    Hi Prof. Brunton, I am working on a case study of a Desalter Process and have collected a lot of data (temperature, pressure, concentration of species, residence time, velocity of fluid flow, cross-sectional area, etc.) at different time snapshots, and I am incorporating SVD and PCA. My question is: should I lump the data for all the different parameters (temperature, pressure, concentration of species, residence time) into one big matrix, or separate them and then use covariance??? Thanks (03/09/2022)

    • @TheSwapnilagarwal
      @TheSwapnilagarwal 7 months ago

      Intuitively, if you are able to transform this one large matrix into a lower-dimensional matrix that keeps the important features, it will reduce computation time. That is what PCA will help you do. I am less acquainted with covariance, so I can't comment on that.
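
One possible way to set this up, sketched in numpy (the variable names and sizes here are hypothetical, not taken from the questioner's dataset):

```python
import numpy as np

# Hypothetical measurements: each row is one variable (temperature, pressure,
# concentration, ...), each column one time snapshot.
rng = np.random.default_rng(2)
n_vars, n_snapshots = 6, 200
X = rng.standard_normal((n_vars, n_snapshots))

# The variables have different physical units, so standardize each row before
# lumping them into one matrix; otherwise the largest-magnitude variable
# dominates the principal components.
Xs = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# PCA via the SVD of the standardized matrix.
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = S**2 / np.sum(S**2)
print(explained)   # fraction of variance captured by each mode
```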

  • @mattbell888
    @mattbell888 3 years ago

    Got a bit lost around 22 minutes and had to refresh myself on inner and outer products, matrix ranks, and linear dependence. Going to write this comment as I watch. What significance does the fact that the matrices in your sum are rank 1 have? That's just a property of outer products, no? Does U and V* being unitary also make them orthogonal, so that each σ u v* term in your sum is linearly independent? Is that significant in making the dominance of u1 or v1 more important?
    Speaking of which, inner products are like a measure of linear dependence or correlation, right?
    Man I also need to brush up on eigenvectors and eigenvalues too, it's been a while since college.
    Lastly, to check my understanding, the 'SV' in SVD is the diagonal of the sigma matrix, right?
    Checking out your book now for fun. I haven't been able to do much of this since I left college, but data analysis and data science were fun, interesting, and seemed powerful, and I might be able to apply them to my job now. Really cool stuff; glad this is out there to help people learn.
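
On the rank-1 question above: each term σ_i u_i v_i* is an outer product and therefore has rank 1, and because U and V are unitary the u's (and the v's) are mutually orthonormal. A small numpy check, as an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 4))
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# X is the sum of the rank-1 matrices sigma_i * u_i v_i*.
X_sum = sum(S[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(S)))
print(np.allclose(X, X_sum))                               # True

# Each outer product u_i v_i* has rank 1 ...
print(np.linalg.matrix_rank(np.outer(U[:, 0], Vt[0, :])))  # 1

# ... and because U and V are unitary, the u's (and v's) are orthonormal.
print(np.allclose(U.T @ U, np.eye(U.shape[1])))            # True
```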

  • @sansha2687
    @sansha2687 4 years ago +2

    30:35

  • @heejuneAhn
    @heejuneAhn 3 years ago

    Why not derive X = U S V* at all? We also need the derivation of X*X V = V S^2 and X X* U = U S^2.
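
For the derivation asked about here: since U*U = I and V*V = I, X*X = V S U* U S V* = V S^2 V*, hence X*X V = V S^2; likewise X X* = U S^2 U*, so X X* U = U S^2. A quick numerical check in numpy (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((6, 4))
U, S, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt.T

# From X = U S V*:  X*X = V S^2 V*  and  X X* = U S^2 U*, so the columns of V
# are eigenvectors of X*X and the columns of U are eigenvectors of X X*, with
# eigenvalues S^2 (the squared singular values).
print(np.allclose(X.T @ X @ V, V * S**2))  # X*X V = V S^2
print(np.allclose(X @ X.T @ U, U * S**2))  # X X* U = U S^2
```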