Go to LEM.MA/LA for videos, exercises, and to ask us questions directly.
Fantastic short lecture. Your blackboard is SO “clean”. Thanks. Reading Axler.
Thank you!
From 2:00 to 4:28, why did you dub over yourself?
because his mic broke silly boy
For people who were confused about how he computed [0 0 -1] as the generalized eigenvector:
First, apply Gauss-Jordan elimination to the augmented matrix
3 -2 -1 | 1
3 -2 -1 | 1
2 -1 -1 | 1
We get
1 0 -1 | 1
0 1 -1 | 1
0 0 0 | 0
Put them into equations:
x - z = 1
y - z = 1
0 = 0
Or
x = 1 + z
y = 1 + z
Notice that z is a free variable. We could choose any value for it, so for simplicity let z = -1, which gives x = 0, y = 0. Hence the generalized eigenvector is v = [0 0 -1], which will be used in the next computation.
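For anyone who wants to sanity-check the arithmetic numerically, here is a minimal NumPy sketch (the matrix A - 3I and the eigenvector [1 1 1] are the ones from the video; the variable names are just for illustration):

```python
import numpy as np

# (A - 3I) from the video and the eigenvector v1 = (1, 1, 1)
B = np.array([[3., -2., -1.],
              [3., -2., -1.],
              [2., -1., -1.]])
v1 = np.array([1., 1., 1.])

# the particular solution chosen above (z = -1)
v2 = np.array([0., 0., -1.])
print(B @ v2)               # -> [1. 1. 1.], so (A - 3I) v2 = v1 holds

# solutions differ by multiples of v1; e.g. v2 + 2*v1 also works
print(B @ (v2 + 2 * v1))    # -> [1. 1. 1.] as well
```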
Thank you for adding these details. I hesitate to jump in because your comment is actually so helpful and it was so nice of you to take your time to write such a detailed description... but there is a better way to approach linear systems, which I describe in the early part of the course. You should never ever convert matrices into equations in order to solve a linear system.
@MathTheBeautiful I wasn't expecting such a swift response. Thank you for your invaluable comment. It would be nice to add a link to the playlist for this course. Cheers
why did u dub over urself
Thank you, professor. I was really confused about this part; this video helped so much.
Thanks so much man, I was confused about how to find a specific generalized eigenvector and you really helped. Great videos.
In this case, as there was only one eigenvector, we used that one to derive the generalized eigenvectors. What happens for a 3x3 matrix which has two eigenvalues, one with multiplicity 2 and the other with multiplicity 1? Which of the two eigenvectors would we use to derive the other generalized eigenvector?
One of the eigenvectors is associated with the simple eigenvalue and has nothing to do with the other eigenvalue. The eigenvalue with multiplicity 2 has either one or two linearly independent eigenvectors. If it has two, you are done. If it has only one, the only choice for starting the Jordan chain is that single eigenvector. So there is nothing to choose between here.
If we have a 3x3 matrix with only one eigenvalue, of algebraic multiplicity 3 and geometric multiplicity 2, which eigenvector in the 2-dimensional eigenspace would we use to solve for the generalized eigenvector?
@dw61w You try both; at most one of them will form a Jordan chain on its own. It is also possible that neither (alone) will form a Jordan chain, in which case you will need to try a linear combination of the two. I posted this on another video concerning the matrix [1 1/2 1/2; 0 1/2 -1/2; 0 1/2 3/2] (in MATLAB notation). If your original eigenvectors do not allow you to form a Jordan chain, some linear combination of them will. E.g., in this example I find two vectors in the null space of (A - lambda*I) to be (1, 0, 0) and (0, -1, 1). Using row reduction it is easy to determine that a consistent set of equations results only if we choose a scalar multiple of the sum of these two eigenvectors as the start of our Jordan chain. So choose v1 = (1, -1, 1) as the first eigenvector and then solve (A - lambda*I)*v2 = v1. This gives v2 = (0, 2, 0). v3 can then be taken to be (1, 0, 0), which was found initially.
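If it helps, here is a small NumPy check of that example (the matrix and the vectors v1 = (1, -1, 1), v2 = (0, 2, 0), v3 = (1, 0, 0) are the ones quoted above; this is only a verification sketch):

```python
import numpy as np

A = np.array([[1.0, 0.5,  0.5],
              [0.0, 0.5, -0.5],
              [0.0, 0.5,  1.5]])
B = A - 1.0 * np.eye(3)        # lambda = 1, algebraic multiplicity 3

v1 = np.array([1., -1., 1.])   # sum of the two raw eigenvectors
v2 = np.array([0.,  2., 0.])   # generalized eigenvector: B v2 = v1
v3 = np.array([1.,  0., 0.])   # the remaining ordinary eigenvector

print(B @ v1)                  # -> zero vector: v1 is an eigenvector
print(B @ v2)                  # -> equals v1, so (v1, v2) is a Jordan chain
print(B @ v3)                  # -> zero vector: v3 is an eigenvector too

# change of basis to (v1, v2, v3) gives the Jordan form: a 2-block and a 1-block
P = np.column_stack([v1, v2, v3])
print(np.allclose(np.linalg.inv(P) @ A @ P,
                  [[1, 1, 0], [0, 1, 0], [0, 0, 1]]))   # -> True
```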
Very old video yet still as phenomenal. Very easy to understand explanation!
Thank you! This is my favorite kind of comment.
Hello Sir. I haven't been watching your entire series, so perhaps this is why I don't recognize what you're referring to. How is the first generalized vector [0 0 -1]^T found in the column space of the matrix A? At least right now I don't see it. See the 5:46 mark. -- OK, I see it now.
Thanks for the effort you put in. :)
Thanks! Great job figuring it out!
Phenomenal explanation.
Can someone explain to me what "matrix pencils" have to do with the generalized eigenvalue problem?
I was wondering, what is the physical meaning of the generalized eigenvector?
I don't know! I only know its algebraic meaning.
Not sure about physical meaning, but geometrically you are shearing the space in the direction of eigenvectors (or other generalized eigenvectors).
The pace drives me crazy
To find the generalised vector of rank 3 in this example, would it be possible to simply derive it from the cross product of the eigenvector and the generalised eigenvector of rank 2?
you are AWESOME, SIR !
How do I solve for [0 ; 0 ; -1]?
Call the matrix A. You need to solve
(A - 3I) x = [1, 1, 1]   (the eigenvector).
Gauss-Jordan elimination on the augmented matrix gives
|1 0 -1 | 1|
|0 1 -1 | 1|
|0 0  0 | 0|
So the general solution is
[1 1 0] + z[1 1 1]
Set z = -1:
[1 1 0] - [1 1 1]
= [0 0 -1]
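For anyone following along, the row reduction itself can be reproduced with a short SymPy sketch (the augmented matrix is the one shown above; purely illustrative):

```python
from sympy import Matrix

# augmented matrix [A - 3I | v1] from the comment above
M = Matrix([[3, -2, -1, 1],
            [3, -2, -1, 1],
            [2, -1, -1, 1]])

rref, pivots = M.rref()
print(rref)
# reduced row echelon form: [1 0 -1 | 1; 0 1 -1 | 1; 0 0 0 | 0]
# i.e. x = 1 + z, y = 1 + z; z = -1 gives (0, 0, -1), z = 0 gives (1, 1, 0)
```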
Could you please help explain why the null space is contained in the column space? Thanks!
It might seem illogical at first, but what the prof skipped over is that the vector [1,1,1] in the null space lives in the un-transformed space (the x's), while the column space lives in the transformed space (the y's, where y = Ax). That distinction can get hazy for a square matrix A, since both spaces have the same dimension (3).
Great teaching by the man, btw. Appreciated.
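To see concretely that the eigenvector really does sit in the column space of A - 3I here, a quick NumPy check (a sketch, using the matrix from the video):

```python
import numpy as np

B = np.array([[3., -2., -1.],
              [3., -2., -1.],
              [2., -1., -1.]])     # A - 3I from the video
v1 = np.array([1., 1., 1.])        # spans the null space of B

# lstsq returns some x minimizing ||B x - v1||; a zero residual means
# v1 lies in the column space of B
x, _, rank, _ = np.linalg.lstsq(B, v1, rcond=None)
print(rank)                        # -> 2
print(np.allclose(B @ x, v1))      # -> True: v1 is in the column space of B
```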
What if you want to go to [1;1;1] for the second generalized eigenvector? I mean, the video targets [0;0;-1], but if we aim at [1;1;1] again we may find [1;1;0], which is not the same as [0;0;-1]. Can we take this vector as the second generalized eigenvector?
I don't understand why the eigenvector is in the column space. And what is the proof that this algorithm will always work?
The proof is highly technical. It can be found in Gelfand's book on Linear Algebra.
Sir,
The eigenvectors of the defective matrix do not form a basis. So if (as shown in this lecture) we have one eigenvector, we can always arbitrarily choose two other vectors which are linearly independent of it and form a right-handed system (using dot and cross products). What is the need to go through this procedure if all we wanted was a basis with the originally found eigenvector as one of the basis vectors? I am not getting the use/application of these specially found generalized eigenvectors. Please clarify.
@Vineet Mukim Generalized eigenvectors can be used to obtain the Jordan normal form of the defective matrix. This is useful for computing matrix functions such as the exponential of a matrix. A defective matrix is not diagonalizable, but it can still be represented in the form M*J*inv(M) using generalized eigenvectors, where J is in Jordan normal form.
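As a concrete illustration of that point, here is a small NumPy/SciPy sketch using the matrix A from this video and the matrix P of generalized eigenvectors quoted in another comment further down this thread. It uses the standard formula exp(3I + N) = e^3 (I + N + N^2/2) for a single 3x3 Jordan block; the code is only an illustration:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[6., -2., -1.],
              [3.,  1., -1.],
              [2., -1.,  2.]])

# columns: the eigenvector (1,1,1) and the two generalized eigenvectors
P = np.array([[1.,  0.,  0.],
              [1.,  0., -1.],
              [1., -1.,  2.]])
J = np.array([[3., 1., 0.],
              [0., 3., 1.],
              [0., 0., 3.]])       # Jordan form: 3I + nilpotent N

N = J - 3. * np.eye(3)
expJ = np.exp(3.) * (np.eye(3) + N + N @ N / 2.)   # exp of the Jordan block

# A = P J inv(P), so exp(A) = P exp(J) inv(P)
print(np.allclose(P @ expJ @ np.linalg.inv(P), expm(A)))   # -> True
```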
Thanks for the intuition
Glad you found it helpful!
Thank you, I love these videos!
Thank you so much
How do you know that the null space of the matrix (A-3I) is one-dimensional? You stated it directly without giving any argument that leads to this conclusion.
E.g., the first two columns of (A-3I) are linearly independent, so its rank is at least 2; since the matrix is singular, the rank is exactly 2.
Rank plus the dimension of the null space = the size of the matrix (rank-nullity). With rank 2 here, the null space is one-dimensional.
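A one-line numerical check of that rank, if anyone wants it (just a sketch with the matrix from the video):

```python
import numpy as np

B = np.array([[3., -2., -1.],
              [3., -2., -1.],
              [2., -1., -1.]])      # A - 3I

print(np.linalg.matrix_rank(B))     # -> 2, so the null space has dimension 3 - 2 = 1
```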
dude, thank you SOOO much! I'm currently taking Adv Lin Alg II for the summer (fast paced) and I had no idea how they found that other e.vector (gen. e.vector) Liking this and sharing it to my peeps! haha :D
Sir, is it possible to determine the Jordan form of the 3x3 matrix given here by using the matrix P (where P = [v1, v2, v3], v1 is the eigenvector, and v2 and v3 are the generalized eigenvectors found in this video) via J = inv(P)*A*P?
If the answer is yes: I did it and got the following matrix,
[0 1 0
0 0 1
0 0 0]
Question is, why is that? Aren't we supposed to get the following matrix instead?
[3 1 0
0 3 1
0 0 3]
Thanks in advance.
I know you've probably solved this problem by now, but I figured I'd throw in the solution for anybody else who stumbles across this.
Let P be the matrix of generalised eigenvectors: P =
[1 0 0
1 0 -1
1 -1 2]
let A be the original matrix: A =
[6 -2 -1
3 1 -1
2 -1 2]
and let the matrix from the eigenvalue equation, A - 3I, be the matrix D =
[3 -2 -1
3 -2 -1
2 -1 -1]
Your question was "shouldn't Pinv * A * P give
[3 1 0
0 3 1
0 0 3]?"
The answer is yes, and it does. What you've calculated was simply Pinv * D * P, which gives
[0 1 0
0 0 1
0 0 0]
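For completeness, a quick NumPy check of both products (using the A, P, and D written out above; purely illustrative):

```python
import numpy as np

P = np.array([[1.,  0.,  0.],
              [1.,  0., -1.],
              [1., -1.,  2.]])
A = np.array([[6., -2., -1.],
              [3.,  1., -1.],
              [2., -1.,  2.]])
D = A - 3. * np.eye(3)

Pinv = np.linalg.inv(P)
print(np.allclose(Pinv @ A @ P, [[3, 1, 0], [0, 3, 1], [0, 0, 3]]))   # -> True
print(np.allclose(Pinv @ D @ P, [[0, 1, 0], [0, 0, 1], [0, 0, 0]]))   # -> True
```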
Are you Paul Scheer’s smarter brother?
No clue how you solved for [0; 0; -1]. I don't see any mathematical way of doing that - just "poof" and you have the answer.
rref(A) = [1 0 -1; 0 1 -1; 0 0 0 ], if X1=X2=0 then X3 = -1 or X1=X2=1 then X3 = 0. Thus I get X = [1 ; 1 ; 0] OR X = [0 ; 0 ; -1]. How can you be sure that X = [0 ; 0 ; -1] and not [1 ; 1 ; 0]?
The third column is (-1) * (Right-Hand-Side of the Equation). So a solution is [ 0, 0, -1 ].
I can see how this was a frustrating moment. There are earlier videos in this series that explain this point.
By inspection isn't really an answer dude.
Solve
3x1 - 2x2 - x3 = 1
2x1 - x2 - x3 = 1
which gives x1 = x2 and x1 - x3 = 1. So:
if x1 = x2 = 1 then x3 = 0,
or x1 = x2 = 0 then x3 = -1,
or x1 = x2 = 2 then x3 = 1,
...
So I'm seeing an infinite number of solutions. You arbitrarily picked one of them?
Exactly. The matrix in all these cases is singular, with null space spanned by [1,1,1]. So in all cases there are infinitely many solutions (any two of which differ by a multiple of [1,1,1]). According to the algorithm, you can choose any solution. The same is the case when determining conventional eigenvectors.
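A quick numerical illustration of that point (the three particular solutions are the ones listed in the comment above):

```python
import numpy as np

B = np.array([[3., -2., -1.],
              [3., -2., -1.],
              [2., -1., -1.]])    # A - 3I, singular
v1 = np.array([1., 1., 1.])       # spans its null space

# three different particular solutions of B x = v1
for x in ([1., 1., 0.], [0., 0., -1.], [2., 2., 1.]):
    print(B @ np.array(x))        # each prints [1. 1. 1.]
# any two of them differ by a multiple of (1, 1, 1)
```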
Okay thank you, you probably mentioned infinite # of solutions and I missed it.
Good stuff
Thank you!
thank you
you absolute LEGEND!!!
remix
wouw ty