This is exactly what I was looking for. Whenever I tried to research the transpose, I always got confused by all this talk of dual spaces and whatnot; this is the clearest explanation yet.
Thanks!
@samlevey3263 No, thank you!
The easiest-looking matrix operation has the most difficult geometric intuition. A very good explanation indeed! Thank you
This is a really clear and good explanation.
The introduction of the transpose in basic courses is deceptively easy. Even a child could understand the definition of switching rows and columns.
This hides the fact that the transpose is actually one of the deepest matrix operations, mathematically, that you encounter in your first courses. You just don't typically realize it.
Bro casually explained SVD in 20 seconds better than most can do while covering a different topic.
Thanks!!
I remember asking my linear algebra teacher this exact thing and he just looked at me weird and said, "you just change rows and column". I stopped asking him stuff after that.
same with my professor. I'm so glad I came across this and 3b1b's videos; they make me realize just how beautiful all of this is
This is the most underrated video ever! It should have the same number of views as 3Blue1Brown has
Such an amazing video. I'm shocked that you only have 924 subscribers. You explained this so well, and so elegantly, it really is truly amazing. Linear algebra is so beautiful. Thank you.
Thanks!
From my heart, I would like to say thank you, with high appreciation for all of your videos. I believe your videos are helping a lot of students who are struggling with LA. 😊😊😊
It is good that you talked about the inverse transpose matrix and used the SVD to show it. And I think there is a more visually intuitive way to show the geometric relationship between them. Say, if we have a full-rank 3×3 matrix {a1 | a2 | a3} (a1, a2, a3 are vectors) and its inverse transpose matrix {a1' | a2' | a3'}, we can find that:
a1' is perpendicular to the plane of span {a2,a3} while the dot product of a1 and a1' is 1
a2' is perpendicular to the plane of span {a1,a3} while the dot product of a2 and a2' is 1
a3' is perpendicular to the plane of span {a1,a2} while the dot product of a3 and a3' is 1
In fact, in crystallography, if a1, a2, a3 are the basis of some crystal's lattice, then a1', a2', a3' happen to be the basis of its reciprocal lattice. However, in crystallography textbooks, the inverse-transpose relationship is hardly mentioned; rather, a1', a2', a3' are defined as:
a1' = (a2 × a3) / det {a1 | a2 | a3}
a2' = (a3 × a1) / det {a1 | a2 | a3}
a3' = (a1 × a2) / det {a1 | a2 | a3}
See, from such a definition we can easily get the perpendicularity properties I have mentioned, but it is hard to notice the inverse-transpose nature of the matrix {a1' | a2' | a3'}. Could you make a video to visualize the connection between the reciprocal lattice, the inverse transpose matrix, and the intuition that each vector of the inverse transpose is perpendicular to all but its corresponding initial vector? That would really help a lot.
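(A quick numpy sketch of this claim; the basis vectors are arbitrary example values, not from any real crystal. It checks both the biorthogonality properties above and that the cross-product definition agrees with the inverse transpose.)

```python
import numpy as np

# Arbitrary full-rank lattice basis, stored as columns a1 | a2 | a3 (example values).
a1 = np.array([1.0, 0.0, 0.1])
a2 = np.array([0.2, 1.5, 0.0])
a3 = np.array([0.0, 0.3, 2.0])
A = np.column_stack([a1, a2, a3])

# Columns of the inverse transpose are the reciprocal basis a1', a2', a3'.
A_it = np.linalg.inv(A).T
print(np.allclose(A.T @ A_it, np.eye(3)))   # True: ai . aj' = 1 if i == j, else 0

# The crystallography definition gives the same vectors:
d = np.linalg.det(A)
recip = np.column_stack([np.cross(a2, a3), np.cross(a3, a1), np.cross(a1, a2)]) / d
print(np.allclose(recip, A_it))             # True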
My first linear algebra course was very abstract, and while I loved it for that, the transpose was always so opaque. The ⟨Av, w⟩ = ⟨v, Aᵀw⟩ definition is so ungodly complicated, and while we could use it algebraically, we never understood what it meant.
Having this video back then would have saved me so much headache. At first I was a little confused why you were talking so much about preserving dot products, but when you introduced v̄ → v and I saw where the formula was headed it absolutely blew my mind. I also totally agree with your conventions - marking x and v as different, like with that bar, would have saved me so many headaches too. I also saw some places mention you needed *two* scalar products for the transpose, which always confused me - but that notation makes it so clear why: one is a scalar product in the input space, and one's in the output space.
TL;DR awesome video and now I'm gonna rewatch it 3 times to make sure I didn't miss anything 😊
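(For anyone else who bounced off that definition: here is a minimal numpy check of ⟨Av, w⟩ = ⟨v, Aᵀw⟩, using a non-square A so the "two scalar products, one per space" point is visible. The matrix and vectors are random placeholders, not anything from the video.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))   # maps R^3 -> R^2, so the two spaces really differ
v = rng.normal(size=3)        # lives in the input space R^3
w = rng.normal(size=2)        # lives in the output space R^2

# <Av, w> is a dot product in the output space; <v, A^T w> is one in the input space.
print(np.isclose(np.dot(A @ v, w), np.dot(v, A.T @ w)))  # True
```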
Oh, and I love how clear this makes the spectral theorem too! When you mentioned the case where r1 = r2^-1, I literally thought to myself “oh, I bet that means the eigenvectors are nice, since it’s rotated axis aligned scaling” before even realizing that simplifying the formula would make it obvious that the matrix is symmetric. So cool!
Thanks for the kind words, and I'm glad it helped! I agree, I spent a long time staring at that equation before it clicked.
I realised at 11:00 that this falls out really nicely in GA (geometric algebra).
Applying a linear transformation that is an orthogonal transformation is the same as applying a rotor.
Applying a rotor looks like:
a' = RaR~ if R is a unit versor, where a is the original vector, a' is the transformed vector, and ~ represents the reversion operation.
If R isn't a unit versor, then
a' = RaR^-1, as this expands to
a' = (RaR~)/(RR~)
It seems like the transpose is analogous to reversion, which is pretty cool.
It’s interesting there doesn’t seem to be a spot in the formula for sigma - I guess that makes sense because GA can’t really do axis-specific stretching with the basic operations?
@minerharry Kind of true, shearing doesn't really happen
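(A plain-numpy sketch of the analogy in 3D, using quaternions as the rotors of 3D GA, so the rotor sandwich is quaternion multiplication and reversion is conjugation. The helper names, axis, and angle are all made up for illustration.)

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def reverse(q):
    """Reversion ~ of a rotor; for quaternions this is conjugation."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

theta = 0.7
axis = np.array([0.0, 0.0, 1.0])                                # rotate about z
R = np.concatenate([[np.cos(theta/2)], np.sin(theta/2)*axis])   # unit rotor

a = np.array([1.0, 2.0, 0.5])
a_quat = np.concatenate([[0.0], a])               # embed vector as pure quaternion
a_rot = qmul(qmul(R, a_quat), reverse(R))[1:]     # sandwich a' = R a R~

# The same rotation as an orthogonal matrix:
c, s = np.cos(theta), np.sin(theta)
Q = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
print(np.allclose(a_rot, Q @ a))                  # True

# Reversion plays the role of the transpose: R~ undoes R, just like Q.T = Q^-1.
print(np.allclose(qmul(qmul(reverse(R), a_quat), R)[1:], Q.T @ a))  # True
```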
This is amazing! When you discussed how the dot product of x and v is retained by applying different (but closely related) matrix transformations at 15:11, a nice way to look at it also is through the formula for the dot product, x·v = ||x|| ||v|| cos θ. Since x and v are essentially transformed by the same orthogonal matrices from the decomposition, θ is unchanged. But scaling x using Sigma and scaling v using Sigma inverse means ||x|| is scaled "up" exactly as much as ||v|| is scaled "down", and so the dot product is retained.
Regardless, great visual intuition! I'm loving the current YT community that teaches mathematics.
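(A small numpy check of that picture, assuming an invertible A; the matrix and vectors are random placeholders. The last line verifies the SVD view: the inverse transpose keeps the same rotations and just inverts the stretch.)

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))
while abs(np.linalg.det(A)) < 0.1:   # ensure A is comfortably invertible
    A = rng.normal(size=(2, 2))

x = rng.normal(size=2)
v = rng.normal(size=2)

x_bar = A @ x                        # push x through A
v_bar = np.linalg.inv(A).T @ v       # push v through the inverse transpose
print(np.isclose(np.dot(x, v), np.dot(x_bar, v_bar)))  # True: dot product preserved

# SVD view: A = U @ diag(s) @ Vt. Both vectors see the same rotations U and Vt,
# but x is stretched by s exactly where v is shrunk by 1/s, so x . v survives.
U, s, Vt = np.linalg.svd(A)
print(np.allclose(np.linalg.inv(A).T, U @ np.diag(1/s) @ Vt))  # True
```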
Simply amazing, it is by far the best video I've found about this topic.
Bro you're 2nd 3B1B! Keep going✨🔥
This is an excellent video. My linear algebra studies took me through the "Linear Algebra Done Right" approach, and so we didn't use many matrices, which had its pros and cons. It is funny to me that this kind of study can often leave students without a solid grasp of what is really going on unless they put in the extra effort. For me, it wasn't until I did a course on Fourier Analysis (which was really just a functional analysis course in disguise imo) that I really had to understand a lot of this stuff algebraically. While the geometric understanding is helpful, it is funny how much it doesn't matter later on. Math is weird.
I guess it does matter. For me, a geometric intuition helps me understand what exactly is going on and eventually over time, the visual aspects become so engrained in memory and automated that working with more abstract algebraic terms is more convenient. The visual intuition becomes a subconscious way to process the information even though we use algebraic representations consciously.
@physiologic187 Yeah, that's a fair take
@physiologic187 I think you're right!
Just so you know I have been searching for this for years! Also the music is very nice!
As a graduate student in physics, this was very helpful in grounding the definition of unitary transformations, thanks so much and beautiful video!
I'd like to see what the visual intuition would be on unitary transformations as orthogonal matrices can be generalized to the complex numbers through unitary matrices.
Interestingly, this "sort of inverse" behavior (the fact that the transpose is an involution for finite-dimensional vector spaces), where "adding, subtracting or multiplying an element and its dual (= transpose) respectively gives a sort of symmetry, antisymmetry or symmetric square", finds other analogies throughout mathematics.
By this I mean, the sum A + Aᵀ = 2 sym(A) is symmetric, the difference A - Aᵀ = 2 skew(A) is antisymmetric, and the product AAᵀ is symmetric, and in fact a "sort of squaring" of A, called the Gram matrix of A (and which is very important in the SVD).
One simple analogical case is the complex conjugate (and you have a similar analogy with your four A, Aᵀ, A⁻¹ and (A⁻¹)ᵀ, with respectively z, z̄, z⁻¹ and z̄⁻¹ , neatly making an axis-aligned rectangle on a circle of radius |z|). The sum z + z̄ = 2 Re(z) is real, the difference z - z̄ = 2 Im(z) is imaginary, and the product zz̄ = |z|² is real, and in fact a "sort of squaring" of z, called the quadratic norm of z.
Of course, this also applies to conjugate transposes (of which the above case is just a 1×1 example), etc. There's also things to say about any integral operator of the form ⟨f, __⟩ = ∫ f(x) __ dx (which is pretty much the transpose of the function f seen as a vector, as it acts as a covector (linear form) over function spaces (which are just infinite-dimensional vector spaces)), and you similarly have interesting duality properties of some linear operators, etc., but that's worth a video of its own, surely.
Great video in any case, beautiful work. Thanks a lot!
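(A tiny numpy sketch of that analogy, with an arbitrary random A: the symmetric/antisymmetric split, the Gram matrix as the "sort of squaring", and the 1×1 complex-conjugate case.)

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))

sym, skew = (A + A.T) / 2, (A - A.T) / 2
print(np.allclose(sym, sym.T), np.allclose(skew, -skew.T))  # True True
print(np.allclose(A, sym + skew))        # True: A splits into the two parts

gram = A @ A.T                           # the "sort of squaring": Gram matrix
print(np.allclose(gram, gram.T))         # True: symmetric
print(np.all(np.linalg.eigvalsh(gram) >= -1e-12))  # True: eigenvalues >= 0

# The 1x1 analogy with complex numbers:
z = 2 + 1j
print((z + z.conjugate()).imag == 0)     # True: z + z-bar is real (2 Re z)
print((z * z.conjugate()).imag == 0)     # True: z z-bar is real (|z|^2)
```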
Wow, finally someone that explained it well! Magnificent!
NEW MANIM YOUTUBER ❤❤
Just had my exam about computer vision, which relies in many cases on SVD and all sorts of transformation matrices. This video brings much clarity, if only I saw this before my exam 😅
This was a great video! I thought the background music was distracting though.
Thanks for the feedback :)
This is such a great video, it really helped give me an intuition of what transposes and SVDs are. If only I had watched this video before I took my linear algebra final 2 days ago...
😅
Thanks @arbodox. Your comment is valuable, coming from someone fresh out of a Linear Algebra final exam, for pointing out the key areas of focus in this video 👍.
Didn't quite get it; I'll be referring back to this video a bunch of times in the future throughout my engineering degree. I don't even have to know this yet, but I find it extremely helpful to understand concepts instead of just memorizing the exercises that one must solve in order to pass the test.
Thank you for putting this knowledge online
I came here from MIT ocw and this video is too good!!! Thank you.
So if I understand correctly, is a transpose sort of like an "inversion" for the reflection of Ax over the line y=x, which brings you back to x?
Really amazing video! All concepts were clearly explained with enough math and geometrical support, amazing.
Thanks!
Great video, thank you for posting. Just thought the background music was a tad too loud, though.
My man, you popped up in my recommendations for both of my accounts and I am glad
What an amazing explanation ! Please keep posting more such quality content. Hats off !!
I have thoroughly watched the video one time and I will rewatch it when I have free time, 'cause some of the concepts I don't entirely understand 😊 Good video, appreciate your work
I often find math videos hard to follow, but I really like this one! My question is: why is preserving the dot product in an orthogonal transformation special? Are there any more interesting properties we can get out of it?
Here's some more reading on orthogonal matrices. Reflections and rotations are pretty useful for all sorts of things :)
Most matrices do not preserve the dot product, that’s what makes it special.
Another special property that arises from preserving dot product:
Orthogonal matrices are linear transformations that simply rotate/reflect a vector and don't change the length of it (orthogonal matrices are norm-preserving). As an example, suppose v = (1, 1) and we apply the orthogonal matrix A that rotates 90deg counterclockwise (what is this 2x2 orthogonal matrix A?). Then Av = (-1, 1) as expected. Notice how v and Av have the same norm.
Now notice that the norm of a vector v is sqrt(v dot v). Thus the fact that orthogonal transformations preserve the dot product leads to our interesting property that orthogonal transformations are norm-preserving.
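(Making that concrete in numpy, spelling out the 90-degree rotation matrix that answers the parenthetical question:)

```python
import numpy as np

# The 90-degree counterclockwise rotation is orthogonal: its columns are orthonormal.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.allclose(A.T @ A, np.eye(2)))            # True: A^T A = I, so A^T = A^-1

v = np.array([1.0, 1.0])
print(A @ v)                                      # [-1.  1.], rotated
print(np.linalg.norm(v), np.linalg.norm(A @ v))   # both sqrt(2): same length

# Norm preservation follows from dot-product preservation, since |v| = sqrt(v . v).
print(np.isclose(np.dot(A @ v, A @ v), np.dot(v, v)))  # True
```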
@@samlevey3263 _"Here's some more reading on orthogonal matrices."_ Does that mean you intended to include a link with your post?
@godfreypigott Whoops, I meant to link to the Wikipedia page :) en.wikipedia.org/wiki/Orthogonal_matrix
Thanks man, I discovered retrosynthwave!! Have an ice cream from my side!
I like your video, as I often get confused while watching other linear algebra videos. The recap part in the beginning is very neat.
Truly a superb explanation!!
This video is so beautiful !
Is there a real-life situation where we want to find the M transformation that preserves the dot product? ... Super neat exploration and I'm trying to understand better. Thank you!
Can you explain what a cofactor is, geometrically? ❤
Axler explored this from a purely algebraic point of view 👍🏻
Thank you for the video. Very clear explanation of transpose and SVD. I am motivated by applications… The SVD is so insanely powerful! Could you make a video that illustrates how the unitary rotation/scale/rotation of the SVD solve a problem? That would be so helpful! Thank you for sharing.
was doing some graphics programming and this clears things up. thank you!
I wish you had shown what A transpose does to the output of A. I get that it isn't clean, but it's also what we came here for.
thanks man, really love that you share the source code as well!
Amazing video!! Thanks
I don't really get the premise of the explanation. Wouldn't there be infinitely many new vectors v bar that satisfy
x dot v = x bar dot v bar
with x bar = A x
and thus infinitely many matrices to get us there?
What is so special about the transpose of the inverse of A then?
It does of course satisfy the equation, but so could infinitely many other matrices, no?
For a given fixed x that is true, but if you want to use the same matrices for any arbitrary x and v, then you have to use A and the inverse transpose of A.
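(A concrete numpy illustration of this reply, with made-up numbers: for one fixed x you can perturb v bar by anything orthogonal to x bar and still satisfy the equation, but that perturbed choice fails for a different x, while the inverse transpose keeps working.)

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
x = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
x_bar = A @ x                                      # (4, 2)

v_bar = np.linalg.inv(A).T @ v                     # the inverse-transpose choice
cheat = v_bar + np.array([-x_bar[1], x_bar[0]])    # plus something orthogonal to x_bar

# Both choices satisfy the equation for THIS particular x ...
print(np.isclose(np.dot(x_bar, v_bar), np.dot(x, v)))    # True
print(np.isclose(np.dot(x_bar, cheat), np.dot(x, v)))    # True

# ... but only the inverse transpose keeps working when x changes.
x2 = np.array([0.0, 1.0])
print(np.isclose(np.dot(A @ x2, v_bar), np.dot(x2, v)))  # True
print(np.isclose(np.dot(A @ x2, cheat), np.dot(x2, v)))  # False
```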
@samlevey3263 Ok, that makes sense. Thanks for the response!
I am going to watch this again ❤
Excellent exposition!!!
@15:00 ... I like the 3D ring/cascade illustration that suddenly appears. I have to rewatch it to see where it came from, but kudos in advance.
Simply beautiful. Thank you so much!
Please continue creating videos
You are going to blow up in millions very quickly...mark my words!!
Commenting here to get at least thousands of likes from a million views 😉😁
hi there, fellow math educator :) this is my first time watching your channel. great visuals and explanations! my only criticism is that the music was a bit loud and distracting. at the very least i'd say reduce the music volume (or do away with it), and if you keep music, i'd choose music with much lower tempo. the choice for this video felt a bit too 'fast' personally. but otherwise, great content! i'm looking forward to future videos. best of luck :)
If I had studied more and thought more like in this video instead of just memorizing formulas back in college, I would have become....
About the same me, but slightly more intellectually superior.
Impressive!
Thanks!
This was very well done!
An excellent presentation. It must have taken a lot of work to put together. ⭐️
Is the choice of x and v as names for the vectors, instead of u and v, a deliberate choice so that students don't mess up because u and v are written very similarly?
If yes, this is another proof of the care you put into the video, which is very good
underrated af
11:48 Thank you.
Well done, many thanks!
I like this video
Great video❤😊
I have no words❤❤
Beautiful 😻
Excellent video.
The algebraic operation still seems a bit mysterious. I would imagine there's more to a transpose hidden in its algebraic operation?
What do you have in mind?
Man you f****ing rock!!! 😮 The best video on the subject! And I have watched many…. Top-notch explanation and video
Great video.
Thank you.
Bro I remember last month doing a rabbit hole on this thing
Yeah, that's what happened to me too, and I figured I'd report back 😅
Great video
Awesome video
Thanks!
5:02 vᵀw is a 1×1 matrix and the dot product is a number, so these don't seem to be the same thing
A 1×1 matrix is considered the same as its single element.
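(A two-line numpy illustration of that identification, with placeholder vectors:)

```python
import numpy as np

v = np.array([[1.0], [2.0]])             # column vector, shape (2, 1)
w = np.array([[3.0], [4.0]])
print(v.T @ w)                           # [[11.]] : a 1x1 matrix
print(np.dot(v.ravel(), w.ravel()))      # 11.0   : the dot product, a plain number
# Identifying the 1x1 matrix with its single entry makes v^T w "the" dot product.
```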
Very good
Is the manim code for this video available?
Thanks for the question, I've just uploaded it here: github.com/slevey087/transpose-video
@samlevey3263 Thanks!
Great! New subscriber!
+1 sub
The constant background music is unnecessary and distracting. Great video though. Thanks for sharing.
Wowwwwwwwww ❤❤❤❤❤❤
Plague
Perfect explanation doesn't exi...
Having a hard time with the background music. If you are not sure whether the music is too loud or not, you should remove it. The content is nice though. Hope you do better next time.
Exhausting
Okay, I really want to thank you for this content. It's incredible. I saw it in class and was a little lost as to the true meaning of these formulas. This video was perfect for that. Keep going 🫶🏻
Great video