For those who had to google what null space is (like me), here's a quick refresher:
It is defined as the set of all vectors x that satisfy the equation Ax = 0, where A is a given matrix.
Here are some key points about the null space:
- The null space contains all solutions to the homogeneous system of linear equations represented by Ax = 0.
- It forms a vector space, meaning it is closed under both addition and scalar multiplication.
- The null space of a matrix A is a subspace of R^n, where n is the number of columns in A.
- If the only solution to Ax = 0 is x = 0, the null space consists of the zero vector alone. This subspace, {0}, is called the trivial subspace.
- The null space can provide insights into the properties of the matrix and the system of equations it represents.
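If you want to poke at this concretely, here is a minimal Python sketch (assuming NumPy and SciPy are available; the matrix is just an arbitrary singular example, not the one from the video):

```python
# Minimal sketch: the null space of a (deliberately singular) example matrix.
# scipy.linalg.null_space returns an orthonormal basis for {x : Ax = 0}.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # second row = 2 * first row, so A is singular
              [1.0, 1.0, 1.0]])

N = null_space(A)                 # columns form a basis of the null space
print(N.shape)                    # (3, 1): rank 2, so the null space is 1-dimensional
print(np.allclose(A @ N, 0))      # True: every basis vector solves Ax = 0
```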
No professor at my university was able to properly explain how to determine the eigenvectors. They just computed the end result and never explained how they came up with it. Thank you very much, you are a genius.
You are at the wrong university.
10:32 "Free real estate"
Awesome video btw!!
he quoted a meme from the future :)
Dear Khan,
You da real MVP.
Great explanation. Now that I have got the theory down, I will somehow need to figure out how to translate all that into Python code 😄.
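For what it's worth, the short NumPy sketch below is a reasonable starting point (a rough illustration only; the matrix is an arbitrary example, not the one from the video):

```python
# Rough sketch of "eigenvalues and eigenvectors in Python" with NumPy.
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 3.0]])    # arbitrary example matrix

eigenvalues, eigenvectors = np.linalg.eig(A)   # eigenvectors are the COLUMNS
for lam, v in zip(eigenvalues, eigenvectors.T):
    # Verify the defining property A v = lambda v for each pair.
    print(lam, np.allclose(A @ v, lam * v))
```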
Wow, thanks! I'm from the University of Cape Town. I had a problem with reducing... now I'm mastering this! You're the real hero!
I guess I am kinda off topic but do anybody know a good place to stream new series online?
@Kace Cannon thanks, I went there and it seems like a nice service :) I appreciate it!!
@Kingsley Milan No problem =)
Thank you, my lecturer sucks. You made something he made complicated easy again.
The real estate part really helped me out!
Thanks to you I'm going to be able to pass my class.... Thank you soooooo much ;)
Thank you - you pushed my Math AND English skills through the roof - funny that the German word "Eigenvector" became a "special" word (it could have just been translated to "own vector") =)
Thank you so much!!! This helped me on a problem I was stuck on forever!
Should have put emphasis on v3 being the free variable (its column does not contain a leading 1), which is why you chose v3 = t. Other than that, very clear explanation!
Excellent, bad explanation at college, thank you so much for your video!
I came from precalc, listened to the first minute, and barfed
THANK GOD FOR KHAN ACADEMY
YOU ARE THE BEST!!! :D You just cleared all the questions I sent to my professor 3 hours ago in 30 minutes ahah!!!.. YOU ARE THE BEST :D
Thank you very much............ now I feel so good about the eigenvectors.... although I watched your video just a day before my EXAM :-)
I should have come here earlier. So many tutorials avoided taking a 3x3 matrix or explaining in detail what's happening, as if it is a big deal to work beyond a 2x2 matrix. Thanks a lot!
I am sad to say that once again it is proven that the internet is full of bad-quality work (tutorials)!
Could you possibly do a video on why I am hearing this terminology in my Differential Equations class?
I'll have an exam this morning and you ARE a lot of help. Thank you veeeery much!
Did u pass though?
Oh wow, was stressing about the last step in finding the Eigenvalues but this made it incredibly clear, thanks a lot :)
Sal I'd really enjoy it if the example you made wasn't of nullity 2, as a full matrix probably would've helped me more.
u just pulled so many knots in my brain
If I had a nickel for every lab this guy has helped me with, I'd have 2 nickels.
His explanations are pretty clear, though he's a little disorganized. Very good overall!!
Because elementary row operations (other than adding a multiple of one row to another) change the value of the determinant, so you'd have to keep track of and "undo" them anyway; might as well only do it once.
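A quick numeric check of that, as a sketch in Python/NumPy on an arbitrary 2x2 example:

```python
# Small check of how elementary row operations affect the determinant
# (illustrative matrix, chosen arbitrarily).
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
print(np.linalg.det(A))            # ~ -1.0

B = A.copy()
B[0] *= 4                          # scaling a row scales the determinant
print(np.linalg.det(B))            # ~ -4.0

C = A[[1, 0]]                      # swapping rows flips the sign
print(np.linalg.det(C))            # ~ 1.0

D = A.copy()
D[1] -= 3 * D[0]                   # adding a multiple of a row leaves det unchanged
print(np.linalg.det(D))            # ~ -1.0
```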
much better than my books! thanks a lot
You, my respected mate, are an absolute legend..! SAVIOUR
GREAT!!! Really clear and helpful!!!!!!
Thank you! Very clear and comprehensible.
thanks god..!! you are great!!!
It'd be cool if I had a professor who knew any of this stuff.
All respect to your effort, man.... I wish all the world were like you :)
you saved me on my final last spring.
my exam is good now !!
thank you very much
Thanks Sal!
@unkown1414
totally agree
must be tablets man, he's too precise
Thank you, this cleared up the picture for me :)
@khanacademy
I'm looking at my book now; shouldn't the eigenvalue solutions be derived from the equation det(A - [lambda]I) = 0? @1:50, I can see the equation from which the eigenvalues are derived as ([lambda]I - A)v = 0, which is the reverse. The book says to "find the null space of the matrix A - [lambda]I. This is the eigenspace E_lambda, the nonzero vectors of which are the eigenvectors of A..." The book is "Linear Algebra: A Modern Introduction", 3rd ed., Poole, p. 303.
thank you
Jazak Allah - may God reward you.
@ashk0n Also, eigenmatrices have many applications to number theory, i.e., if the dominant singular value of a matrix P is greater than the dimension of any other matrix, then the supremum of P times Q is always equal to the eigenvalues of something.
@MartinRyleOShea
if the det(A - lambda*identity) = 0 then lambda is an eigenvalue of A.
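To see that on a concrete example (a sketch using SymPy; the 2x2 matrix below is chosen arbitrarily):

```python
# Sketch: the roots of det(A - lambda*I) = 0 are exactly the eigenvalues of A.
import sympy as sp

lam = sp.symbols('lam')
A = sp.Matrix([[4, 1],
               [2, 3]])            # arbitrary example matrix

char_poly = (A - lam * sp.eye(2)).det()
print(sp.factor(char_poly))        # (lam - 2)*(lam - 5)
print(sp.solve(char_poly, lam))    # [2, 5]
print(A.eigenvals())               # {2: 1, 5: 1} -- the same values
```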
When finding the eigenvectors, do we really have to do Gaussian elimination and reduce one of the rows to all 0's? Because sometimes I have different results than the book provides.
Hallelujah! PRAISE THE LORD!
I LOVE METH!!!!!.....i mean MATH!!!!
thank you! my teacher aint got nothin on you
I thought for every NxN matrix you have a characteristic polynomial of degree N, with N eigenvalues that correspond with the same N number of eigenvectors. So wouldn't you need 3 eigenvalues that have 3 eigenvectors each for this example?
Isn't it |A - (lambda)(I)| -> [determinant of {A minus (lambda x Identity matrix)}]?
YES
This is a strange method for solving for the null space. It looks like you're arbitrarily picking either v1, v2, or v3 to be equal to t. You should specify that v3 is chosen because it is the free variable.
+DanO Yes, even I feel this method is strange. I checked some 3 textbooks and numerous pages on the internet and couldn't find anything similar to this. But it really works. When I created a modal matrix M using these eigenvectors and then diagonalised A using M^(-1)AM, I actually obtained a diagonal matrix. (My original objective was to diagonalise a matrix, but I didn't know how to obtain M for repeated eigenvalues, so I watched this video.) And this is the easiest method to obtain eigenvectors for repeated eigenvalues.
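For anyone wanting to reproduce that check, here is a short SymPy sketch (the matrix is a made-up example with a repeated eigenvalue, not the one from the video):

```python
# Sketch of the M^(-1) A M check described above, using SymPy's diagonalize().
import sympy as sp

A = sp.Matrix([[2, 0, 0],
               [0, 3, 1],
               [0, 1, 3]])         # eigenvalues 2, 2, 4 (2 is repeated)

M, D = A.diagonalize()             # columns of M are eigenvectors of A
print(D)                           # diagonal matrix of the eigenvalues
print(M.inv() * A * M == D)        # True: M^(-1) A M is diagonal
```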
where can I find electricity and magnetism videos which would explain everything just like this.
@ Khan Academy.
Is E-3 perpendicular to E3? Both spanning vectors of E-3 are perpendicular to each other, but E3 is not perpendicular to both. This is my thinking; please explain it to me.
What happens if, when you row reduce your matrix, you get a zero column? How can you find the eigenvectors?
What happens when the reduced row echelon form of a 3 x 3 is
hi can i ask if it is necessary to reduce the matrix?
yes
Is it because we have free variables that we don't need to normalize it? Thank you
Thanks for your useful videos. But could you please get a new microphone? The noise sometimes makes it hard to follow the video all the way.
This was made almost 7 years ago, I'm pretty sure he got a new mic since then.
Also, why have you overcomplicated the eigenvector for eigenvalue = 3? What's wrong with (1,1,1)?
You can't just choose any eigenvalues; in the previous video he found them: th-cam.com/video/11dNghWC4HI/w-d-xo.html
The eigenspace would be the same if you were to keep (1,1,1) and (0,1,1); just calculate it.
The real superman!
So eigenvectors and the eigenspace are the same thing?
Basically there are infinitely many eigenvectors; the eigenspace is the collection of all those eigenvectors (together with the zero vector).
this guy is definitely jesus. i mean, his voice doesn't sound exactly like what you'd expect it to, but still, he must be jesus. he has come back to help us with maths!
I don't understand why you use row reduction when it really isn't necessary; the eigenvectors are obvious just from looking at A - lambda x identity.
totally saving my ass for my exam tomorrow.
Can you explain why it is (lambda I - A)v = 0 instead of (A - lambda I)v = 0?
It could be either (A - lambda*I)v=0 or (lambda*I - A)v=0 . The two are the same, just differing by a multiple of (-1). Because (-1) is a constant, it can multiply into the parentheses and flip the expression inside, leaving the equation unchanged.
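A quick numeric sanity check of that sign flip (a NumPy sketch; the matrix and its eigenpair are an arbitrary example, not from the video):

```python
# Both (A - lam*I)v = 0 and (lam*I - A)v = 0 are satisfied by the same vectors.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # arbitrary example matrix
lam = 5.0                           # one of its eigenvalues
v = np.array([1.0, 1.0])            # a corresponding eigenvector

print(np.allclose((A - lam * np.eye(2)) @ v, 0))   # True
print(np.allclose((lam * np.eye(2) - A) @ v, 0))   # True: same null space
```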
8:07 what in the fuq. Bruh. LMAOOO
You choose v3 = t out of free choice! But if I choose v2 = t, my vector will be completely different. Or can the "t" adjust the vector?
Does it even make a difference? This is the only thing stopping me from understanding this subject in math! I understand how to work with it, but I don't understand the outcome!!
Um, I don't think you got this right. An eigenvector is not a basis of a subspace. It is a collection of eigenvalues that are spread out from each other. For example, if the eigenvalues for a matrix A are 1 and 3, then the eigenspace is 3+1 = 4.
The same is true for complex eigenvalues and their corresponding eigenspaces.
How do I find the eigenvector if, when I reduce to find the null space, I get the matrix [100, 010, 001] instead of [100, 010, 000]?
***** The null space is composed of only the zero vector, because the rows of the matrix are linearly independent. This means that there is no eigenvector, because the eigenspace has dimension 0. Or actually... maybe it means the eigenvector is [0,0,0]. Anyone know?
By definition an eigenvector is a nonzero vector. If you allowed the zero vector to be one, then every matrix would have an unlimited number of eigenvalues, because the zero vector is always mapped (at least under linear transformations) to the zero vector, and the latter multiplied by any number is the zero vector again. It's like excluding the zero vector from a basis: it is L.I. from all the other vectors, but it brings no new, or even any, information to the basis.
he's a teacher. he has like 6 degrees, just look him up on wikipedia
Lets just change colours for fun :D
I love you.
the gods have answered...
Is it just me or is there an actual mistake in the calculations of the rows for the second eigenvector?
The second row:
-2-(-2)= 0
-5-(-2)= -3
1-(-2)= 3, and not -3
similarly in the third row:
-2-(-2)= 0
1-(-2)= 3, and not -3
-5-(-2)= -3, and not 3
I love you
eigenkosommak
v1+v3=0
v2=0
I have a matrix A = {{7,-5,0},{-5,7,0},{0,0,-6}}
I have found the Eigenvalues, 2,12,-6 but I'm only getting one Eigenvector, (0,0,1)..
Can someone please help?
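In case it helps, here is a quick SymPy check of that exact matrix; it should give three independent eigenvectors, one per eigenvalue, not just (0,0,1):

```python
# Checking the matrix from the comment above: each eigenvalue gets its own eigenvector.
import sympy as sp

A = sp.Matrix([[ 7, -5,  0],
               [-5,  7,  0],
               [ 0,  0, -6]])

for eigenvalue, multiplicity, vectors in A.eigenvects():
    print(eigenvalue, [list(v) for v in vectors])
# Expected (up to scaling and ordering):
#   2  -> [1, 1, 0]      12 -> [-1, 1, 0]      -6 -> [0, 0, 1]
```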
This explanation is not generalizable. Let's say R1 has a 1 for X_3. What do you do then? I'm just assuming, which means I get it wrong on the homework and test, and it takes me longer to do my homework. You need to explain the edge cases better. Thanks.
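One way around the guessing (a hedged suggestion, not the video's method): let row reduction tell you which columns are pivots; everything else is free, and SymPy's nullspace() parameterizes it for you. The matrix below is illustrative only:

```python
# If row 1 has its leading 1 in the x_3 column, then x_3 is a pivot (forced to 0 here)
# and the OTHER variables are free. rref() reports the pivot columns directly.
import sympy as sp

B = sp.Matrix([[0, 0, 1],
               [0, 0, 0],
               [0, 0, 0]])          # stands in for (A - lam*I) after reduction

reduced, pivot_cols = B.rref()
print(pivot_cols)                   # (2,): the x_3 column is the pivot; x_1, x_2 are free
print(B.nullspace())                # basis vectors [1, 0, 0] and [0, 1, 0]
```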
I love you
"V2 is equal to... I'm just gonna put some random number"
random number: *A*
Didn't understand.
LOL
Not as good as your other videos in the same area.
Is this all the same guy? He teaches the Org Chem too. Is this guy just a professor by hobby?
Thank You