Go to LEM.MA/LA for videos, exercises, and to ask us questions directly.
This was so beautiful! I had heard about the SVD but thought it was something advanced. Now after going through the polar decomposition, when you showed it was just one step beyond, I was like 'So this is it!'
You're just amazing, Sir! It was a mind-blowing explanation!
hello and THANK YOU. You mentioned that this is possible for all square matrices, but in an earlier video you said that the QS decomposition works even for rectangular matrices. The SVD is "based" on the QS, so shouldn't it also work for rectangular matrices?
anton
I really love this channel and how you explain things, and you're funny too haha. The course about tensor calculus, I just love it! Thank you so much.
Glad to hear it! Especially the part about being funny. I get so angry when that's not acknowledged.
Could you explain why the leading columns of the unitary matrix in the SVD represent the dominant modes (coordinates) of a linear transformation? This important result is used without a fundamental explanation in many practical engineering techniques such as Principal Component Analysis (PCA), Proper Orthogonal Decomposition (POD), and Dynamic Mode Decomposition (DMD). It would be interesting to see the reasons behind it. Thanks.
I figured it out, so never mind! Lol
@MinhVu-fo6hd what is it, bro? I have the same doubt.
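For anyone else with the same doubt, here's a small NumPy sketch (my own toy data, nothing from the video) of why the leading columns of U get called the dominant modes: keeping only the first k singular triplets gives the best rank-k reconstruction of the data matrix, which is exactly what PCA/POD/DMD exploit.

```python
import numpy as np

rng = np.random.default_rng(2)
# Build a nearly rank-2 data matrix: 50 samples, 20 features, plus small noise.
low_rank = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 20))
A = low_rank + 0.01 * rng.standard_normal((50, 20))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.round(s[:4], 3))          # the first two singular values dominate

k = 2                              # keep only the leading modes
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))   # tiny relative error
```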
When I first read the polar decomposition theorem in my textbook, my first thought was: "Is this really true? If you can write any matrix A as Q and S (let's use the same letters as in these videos), you can also write S as XDX^T and get A = QXDX^T, which is just one rotation matrix multiplied by a diagonal matrix, multiplied by another rotation matrix. That can't be right. We can't express every linear transformation as rotating and reflecting, then scaling, then again rotating and reflecting, right?" So my intuition was wrong then; this is apparently possible for every matrix, if I understood the videos correctly.
Almost - the exception is the "defective" case
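If it helps, here's a quick NumPy check of exactly that chain (A is just an arbitrary example matrix, nothing from the videos): build S as the symmetric square root of A^T A, take Q = A S^-1, and then regroup A = Q X sqrt(D) X^T as orthogonal · diagonal · orthogonal.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # arbitrary invertible example

# Polar decomposition by hand: A^T A = X D X^T, and S = X sqrt(D) X^T.
evals, X = np.linalg.eigh(A.T @ A)
S = X @ np.diag(np.sqrt(evals)) @ X.T
Q = A @ np.linalg.inv(S)             # A = Q S with Q orthogonal

# One step beyond: A = Q X sqrt(D) X^T = (Q X) sqrt(D) X^T,
# i.e. rotation/reflection · axis scaling · rotation/reflection (the SVD).
U = Q @ X
print(np.allclose(A, U @ np.diag(np.sqrt(evals)) @ X.T))                  # True
print(np.allclose(Q.T @ Q, np.eye(2)), np.allclose(U.T @ U, np.eye(2)))  # True True
```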
Sir, does the rotation performed by the X inverse matrix align the operand with the coordinate axes (meaning [1,0,0],[0,1,0],[0,0,1]), or with the eigenvectors of the S matrix, since the eigenvectors of a symmetric matrix are perpendicular? Please explain.
I watched through this series from the Eigenvalue decomposition up to here, and I have two main questions that I don't recall being answered:
1: What if the symmetric matrix S isn't invertible? In this case, you cannot define Q to be AS^-1.
2: In the polar decomposition A = QS, what guarantees that S has positive eigenvalues? A^T A isn't guaranteed to have positive eigenvalues, is it?
I've been looking for the answer to your questions. If you've found one, please share.
To the second one, I believe he said that it would be answered in part 4 of the linear algebra series using inner products.
Hey Arbitrary Renaissance.
1. Good point. It's not necessarily invertible. This case would require more care.
2. S is the "square root" of a positive-semidefinite matrix. So it is positive-semidefinite in its own right.
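A small NumPy sketch of point 2 (random A, just for illustration): A^T A is positive-semidefinite because x^T A^T A x = |Ax|^2 >= 0, and building S by taking square roots of its eigenvalues keeps everything nonnegative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # random example matrix

# A^T A is symmetric positive-semidefinite: x^T (A^T A) x = |Ax|^2 >= 0.
evals, X = np.linalg.eigh(A.T @ A)
print(np.all(evals >= -1e-12))       # True: no negative eigenvalues

# S = sqrt(A^T A): same eigenvectors, square-rooted eigenvalues,
# so S is positive-semidefinite in its own right.
S = X @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ X.T
print(np.all(np.linalg.eigvalsh(S) >= -1e-12))   # True
```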
If what S does is rotating, scaling and rotating back, why not just scale in the first place?
It's a very specific scaling: along the coordinate axes.
1. I can't see other possibilities.
2. From what I know, scaling a vector means multiplying the vector by a constant.
3. So it really doesn't make sense to me to scale along the coordinate axes, or along anything else. Shouldn't scaling be along the direction of the vector itself?
4. I do understand what rotating the vector to align it with the coordinate axes means.
5. What am I missing here, Professor?
We do not scale the vectors. We scale the coordinate axes. The vectors do not generally line up with the coordinate axes.
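Here's a tiny NumPy illustration of that point (S below is just an example symmetric matrix I made up): applying S is the same as rotating the eigenvectors onto the coordinate axes, scaling each axis by its eigenvalue, and rotating back, and the result is generally not a multiple of the original vector.

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # example symmetric matrix
evals, X = np.linalg.eigh(S)          # S = X D X^T
D = np.diag(evals)

v = np.array([1.0, 0.0])              # not an eigenvector of S

step1 = X.T @ v                       # rotate: eigenvectors go to the coordinate axes
step2 = D @ step1                     # scale each coordinate axis by its eigenvalue
step3 = X @ step2                     # rotate back

print(np.allclose(step3, S @ v))      # True: same as applying S directly
print(S @ v)                          # [2. 1.] -- not a multiple of v = [1, 0]
```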
Outstanding!!!!!
What are Q and S in A=QS?
He talked about it in an earlier video: Q is an orthogonal matrix and S is a symmetric matrix. He proved there that any matrix can be written as a product QS.
Man, I want to know more about the SVD. Does anyone know a YouTuber who does similarly excellent videos?
Gil Strang!
I came across lectures by Steven Brunton here on YouTube. He has a whole series of lectures just on the SVD.
This playlist has helped immensely in decrypting the math behind the eigenface tutorial: www.pages.drexel.edu/~sis26/Eigenface%20Tutorial.htm
I ran into something I didn't quite understand from that tutorial, though. When A is rectangular and S1 is larger than S2 for
A^TA = S1
AA^T = S2
What's happening to the eigenspaces of S1 in relation to S2?
Since S1 and S2 have the same eigenvalues and related eigenvectors, are some eigenvalues and eigenvectors of the larger product S1 repeated?
Conceptually, are the eigenspaces of S1 being projected onto smaller eigenspaces of S2 when you do AA^T instead of A^TA?
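Not the tutorial's data, but here's a quick NumPy experiment with a random rectangular A that shows the relationship: the two products share the same nonzero eigenvalues, and the larger product just picks up extra zero eigenvalues (its extra eigenvectors lie outside the column space of A); nothing gets repeated.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))       # wide, so S1 = A^T A (5x5) is larger than S2 = A A^T (3x3)

S1 = A.T @ A
S2 = A @ A.T

e1 = np.sort(np.linalg.eigvalsh(S1))[::-1]
e2 = np.sort(np.linalg.eigvalsh(S2))[::-1]
print(np.round(e1, 6))                # 3 nonzero eigenvalues, then 2 (numerical) zeros
print(np.round(e2, 6))                # the same 3 nonzero eigenvalues
```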
What happened to MathTheBeautiful? I love that name. Lemma sounds so generic.
they think you'll be able that the only way to get the best way to get the best way to the point of view of the year and the other hand, the only thing is that the only way to get the best way to the new York city of London and the kids are doing well. I am a beautiful day. I am a beautiful day. I am a beautiful day. I am a very good. I am a beautiful day.
+나나 WOH What prompted you to write this masterpiece? :)
It would make much more sense if you aligned each step of your explanation with an example problem. This would give viewers a quick understanding and let them follow your further explanations. What you are teaching is pretty easy to understand, but not with your style of teaching.
Thank you,
Thank you for your feedback. You can find exercises at lem.ma/LA1