Check out ProPrep with a 30-day free trial to see how it can help you to improve your performance in STEM-based subjects: www.proprep.uk/info/TOM-Crawford
I just realized I had Linear Algebra 17 years ago. In 3 more years, it will be the midpoint of my life. I'm starting to feel old.
I subscribed to your channel a couple months ago, but have not watched a single video. This video showed up on my home page.
This is the best presentation and proof of the spectral theorem I have seen. Beautiful logic and clarity of thought. Thank you.
Love how clear your explanations are; ProPrep seems super worth getting too 😋
I just went over this before Thanksgiving, thank you for clarifying it.
Very clear explanation!
Would it be possible to have a video explaining the proof for Cochran's theorem? Thank you!
Thank you, that was really interesting. I've come back to my first-year university linear algebra course :')
Literally just started doing this 5 minutes ago. Thank you
That theorem blew my mind when I was in college...
Do I understand correctly that _v'_ is the component-wise conjugate, i.e. _v = (a + bi, c + di) => v' = (a - bi, c - di)?_ If so, is the inner product of v with its conjugate v', i.e. _v^T * v',_ really equal to the inner product of v with itself, i.e. _v^T * v,_ as shown at ~10:55?
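If it helps, a quick worked example (mine, not from the video) with v = (i, 1):

\[
v^T v = i \cdot i + 1 \cdot 1 = 0, \qquad v^T \bar{v} = i \cdot (-i) + 1 \cdot 1 = 2 = |i|^2 + |1|^2 .
\]

So v^T * v and v^T * v' are not equal in general: v^T * v can even vanish for a nonzero v, whereas v^T * v' is the sum of the squared moduli of the entries, hence real and strictly positive for v ≠ 0, which is the property the step at ~10:55 relies on.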
Thank you!
I guess the base case for the induction here is A = [a], a 1×1 (automatically symmetric) matrix whose single entry is any real number a. Then R = [1] with R^-1 = R^T = [1], so that R^T A R = [1][a][1] = [a], which is diagonal, as all 1×1 matrices are.
Nice video sir Tom!
Thanks a lot for this informative and useful video 🙏🏻❤️
Nice, but what kind of symmetry does the matrix have? Rotational symmetry? A centre of symmetry? An axis of symmetry? Any of the above?
A matrix is said to be symmetric if it's equal to its transpose.
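Concretely, the "symmetry" is reflection in the main diagonal, i.e. A_{i,j} = A_{j,i} for all i, j. A small example:

\[
A = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} = A^T .
\]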
In case anyone wants to know why the first statement of part (II) is equivalent to saying there is an orthogonal matrix R such that R^-1 A R is diagonal, the intuition can be found in 3b1b's video about eigenvectors, starting from here: th-cam.com/video/PFDu9oVAE-g/w-d-xo.html
Thanks a lot for this really clear proof Tom - there's loads of examples online but thanks for actually walking us through it :)
Gorgeous; thank you :)
@TomRocksMaths We assumed that, because the matrices are symmetric, we will get exactly n mutually orthogonal eigenvectors, which we then make orthonormal and continue with your proof. But why is that assumption always true?
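A standard argument (not necessarily word-for-word the one in the video) covers the distinct-eigenvalue case: if Au = λu and Av = μv with λ ≠ μ, then symmetry of A gives

\[
\lambda\, u^T v = (Au)^T v = u^T A^T v = u^T (Av) = \mu\, u^T v \quad\Rightarrow\quad (\lambda - \mu)\, u^T v = 0 \quad\Rightarrow\quad u^T v = 0 .
\]

For repeated eigenvalues one picks an orthonormal basis of each eigenspace (e.g. via Gram-Schmidt), and the induction in the video is exactly what guarantees a full set of n eigenvectors exists at all.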
Lots of thanks from India sir 😅
This is extraordinary
Just one question: how do we prove that A actually has an eigenvalue at all? Does it come directly from its symmetry?
What exactly is "v bar"? Is it the 'conjugate' of vector v? I'm confused
He's got good chalk writing
What does part (II) of the theorem say if the field R is changed to C or some other field? Is the proof any different if it is done on linear maps between arbitrary inner-product spaces instead of Euclidean spaces? What does the theorem say if the dimension is infinite?
The theorem also works over C, but you need to change from symmetric matrices to Hermitian matrices (i.e. matrices that equal their *conjugate* transpose). The proof works in the same way for arbitrary inner product spaces. If the dimension is infinite, one essentially gets into functional analysis and there are various spectral theorems - e.g. the same statement as the basic spectral theorem holds for compact self-adjoint operators on a (real or complex) Hilbert space. Generalising beyond that, in order for the statement to remain true, you also have to generalise your notion of eigenvectors, and this rapidly gets rather complicated.
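For anyone who wants to see the Hermitian version in action, here's a minimal numerical sketch (my own illustration, not from the video), using NumPy's eigh, which is designed for Hermitian/real-symmetric matrices:

```python
import numpy as np

# Build a random Hermitian matrix: B + B^H is always Hermitian.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = B + B.conj().T

# eigh returns real eigenvalues and orthonormal eigenvectors.
eigenvalues, R = np.linalg.eigh(A)

# R is unitary (the complex analogue of orthogonal)...
print(np.allclose(R.conj().T @ R, np.eye(4)))                  # True

# ...and R^H A R is the diagonal matrix of (real) eigenvalues.
print(np.allclose(R.conj().T @ A @ R, np.diag(eigenvalues)))   # True
```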
Isn't the proof by induction a bit of overkill here? ;-) Just considering e_i instead of e_1 and deducing that A_{i,i} = 1 and A_{i,j} = 0 for i ≠ j does the trick, no?
We only know that v1 is an eigenvector. So the other columns don’t necessarily reduce to be diagonal.
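To spell that out (my sketch of the step, as I understand the video's argument): extending v_1 to an orthonormal basis and taking those vectors as the columns of R_1 only produces a block form,

\[
R_1^T A R_1 = \begin{pmatrix} \lambda_1 & 0 \\ 0 & B \end{pmatrix},
\]

where B is an (n-1) × (n-1) symmetric matrix that need not be diagonal; the induction hypothesis is then applied to B.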
@TomRocksMaths Ha, forgot about that fact. Thanks!
Hi Tommy 😁
You are an example of "don't judge a book by its cover."
Nice
This is cool and all but never forget that this is the same guy who forgot his circle theorems :)
Jk man you are awesome!
You haven’t shown there is at least one real eigenvalue for A.
It should follow easily from the fundamental theorem of algebra. Great video nonetheless.
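Sketching that standard argument: by the fundamental theorem of algebra, det(A - λI) = 0 has a root λ in C, with a nonzero (possibly complex) eigenvector v. Symmetry of A then forces λ to be real:

\[
\lambda\, \bar{v}^T v = \bar{v}^T A v = (A \bar{v})^T v = \bar{\lambda}\, \bar{v}^T v \quad\Rightarrow\quad \lambda = \bar{\lambda},
\]

since \bar{v}^T v > 0 for v ≠ 0.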
Uh easy
He doesn't even look like a mathematician, cuz when I saw him the first time, I thought he was some kind of musician