This video concludes my linear algebra series. You can watch the whole series here: th-cam.com/play/PLug5ZIRrShJHNCfEiX6l5CKbljWayGEcs.html
Next series coming up: differential equations!
Can you do more numerical methods, like QR and Householder vectors, and methods to reach p(A) factors like Faddeev, Le Verrier, and Krylov? Thank you very much for your simple explanation.
Whoa!! This theorem is very cool! It’s almost like a simple method to find the eigenvalues.
Hi blackpenredpen I have made a new video
@@chirayu_jain Oh yea, I saw! Very nice!
Find is a strong word. More like refine, or corral.
Dankeschön from a struggling German student who now has some understanding of this :)
Very good and fast explanation!
Thanks for the help :) I came across this theorem while reading Spectral Graph Theory and never learnt this in normal linear algebra courses. You explained it really well.
When I got the notification of this video almost 4 months ago, I didn't know anything about linear algebra, but now I have learned it fully, so I understood everything. I can say the video was great 👍👍👍
You are a saviour! Our professor is only uploading the scripts due to the corona crisis; there is no classroom teaching. It's a bit time-consuming and difficult to understand such concepts on our own. God bless you, brother!
THANKS BRO........... MAY GOD KEEP ON BLESSING YOU AND YOUR FAMILY
A well communicated easy explanation. Hard to come by these days! Thank you.
Damn, that's an awesome way to estimate eigenvalues. The triangle inequality never ceases to surprise me. You teach really well. Keep it up.
See, now this is quality information. Thank YOU!!! Seriously, thank YOU!!! And NO, my matrix/LA class does not cover anything useful. You should come to my EE department and slap every prof.
Thanks, man!! This may very well have saved my numerical math exam.
This was a very helpful video; you explained this way better than my professor with a PhD. You are destined for success, keep up the good work 👍
Great vid! I'm not a native speaker, but thanks to your clear pronunciation and good explanation I was able to understand everything! Thx for that.
Excellent explanation! You have the skills of a good teacher! Thanks!
Perfectly explained and proved!! Thank you so much for the video
Thanks! I needed it for my numerical computing class.
That's cool
So by restricting the value of lambda we can choose a good alpha for the shifted inverse method.
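That idea can be sketched in plain Python. The matrix and the shift below are my own toy example (not from the video), restricted to the 2×2 case so the linear solve is just Cramer's rule; a real implementation would use numpy/scipy and guard against a shift that lands exactly on an eigenvalue.

```python
# Sketch: use a Gershgorin disk to pick the shift alpha for shifted
# inverse iteration. 2x2 case only; matrix and shift are hypothetical.

def gershgorin_disks(A):
    """(center, radius) per row: center a_ii, radius = off-diagonal row sum."""
    n = len(A)
    return [(A[i][i], sum(abs(A[i][j]) for j in range(n) if j != i))
            for i in range(n)]

def solve2(M, b):
    """Solve the 2x2 system M x = b by Cramer's rule."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(b[0]*M[1][1] - M[0][1]*b[1]) / det,
            (M[0][0]*b[1] - b[0]*M[1][0]) / det]

def shifted_inverse_iteration(A, alpha, iters=60):
    """Approximate the eigenvalue of A closest to the shift alpha."""
    M = [[A[0][0] - alpha, A[0][1]],
         [A[1][0], A[1][1] - alpha]]
    v = [1.0, 1.0]
    for _ in range(iters):
        w = solve2(M, v)                   # w = (A - alpha*I)^-1 v
        m = max(abs(x) for x in w)
        v = [x / m for x in w]             # normalize by the largest entry
    w = solve2(M, v)
    # mu approximates the dominant eigenvalue of (A - alpha*I)^-1.
    mu = sum(a*b for a, b in zip(w, v)) / sum(x*x for x in v)
    return alpha + 1.0 / mu                # undo the shift-and-invert

A = [[10.0, -4.0], [-1.0, 10.0]]           # eigenvalues are 8 and 12
disks = gershgorin_disks(A)                # [(10.0, 4.0), (10.0, 1.0)]
alpha = 11.0                               # a trial shift inside the first disk
lam = shifted_inverse_iteration(A, alpha)  # converges to 12, the nearest eigenvalue
```

The Gershgorin disks tell us every eigenvalue lies within radius 4 of 10, so any shift in that range is a sensible starting guess.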
Brilliant and clear explanation. Thank You for this.
Very well explained! Many thanks.
Thanks a lot from Spain!
I love this theorem
Thank you, very clear explanation and interesting theorem
Best regards from your colleagues in Germany! Very helpful video! Thanks for sharing it with us all. I learned a lot and now I do understand what the theory is based on. Keep going! Amazing videos. :-)
Amazing explanation, thank you!
Wow. This is such a great explanation, wow!!! Thanks so much! If your plan is to become a professor or teacher, I cross my fingers for you :)
thanks for the simple and clear video! this is very helpful.
At 7:55, you claim that 2 is an eigenvalue. If I'm not mistaken, all the eigenvalues belong to the union of the Gershgorin discs, so your claim holds only if that disk is disjoint from the others. Of course here it is the case, but there's no way to know this up front ;)
Where can I find the generalized version of Gershgorin's theorem?
How do we isolate the eigenvalues if two disks intersect each other?
That's pretty neat!
Thank you, it was very useful. Please make the sound much louder.
This helped me so much. I have a stupid question though: from what I'm understanding, which row i actually is isn't really important in proving this theorem, correct?
I explain my choice of i at 5:55.
Concerning the stronger version of the theorem: what if an n by n matrix has fewer than n eigenvalues?
My guess is that you would then look at multiplicities. For example, an eigenvalue occurring with multiplicity 2 would count as 2 eigenvalues.
Yeah, really fluent and easy to follow. Just one question: could you explain why we substitute the vector v by its element v(i) in the second step, while the other part of the equation stays as a sum?
In that step, we are looking at the i'th component of the vector. On the right side, the i'th component is just v_i by definition. On the left side, since there is matrix multiplication, that sum is the result of matrix multiplication to get the i'th component of the product.
@@MuPrimeMath Yeah right, I just missed that i is fixed on the left side. Thanks)
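The step discussed above can be checked numerically. This is a hypothetical 3×3 example of my own in plain Python: the sum on the left side is exactly the i-th entry of the matrix-vector product, while on the right side the i-th entry is just v[i].

```python
# (A v)_i is the dot product of row i of A with v: sum over c of A[i][c] * v[c].
A = [[4, 1, 0],
     [1, 5, 2],
     [0, 2, 6]]
v = [1, 2, 3]

i = 1
component = sum(A[i][c] * v[c] for c in range(3))               # 1 + 10 + 6 = 17
Av = [sum(A[r][c] * v[c] for c in range(3)) for r in range(3)]  # full product

assert component == Av[i] == 17   # the row-i sum really is the i-th entry of A v
```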
The transposed matrix has the same eigenvalues, so you can also look at the columns, can't you? In your example, you can approximate the 3rd one within a circle of radius 0.2 around -20?
I'm not sure though.
That's correct!
What a great video!!! Thank you! Could you please say what the extension of the theorem you mentioned is? What is the name of this corollary?
EDIT: ah wiki has a nice explanation, found it
Another great video with a great explanation. Are you an instructor somewhere? Are you working on a PhD?
I'm currently a freshman at Caltech!
@@MuPrimeMath Caltech is a great school; I'm currently working on my master's in mathematics. What's your major?
Very helpful.. thank you, Sir
Could you please prove the claim at 09:21 (2nd Gershgorin theorem)?
bravo! very well manuel
Great , thank you
Thank you a lot. I subscribed
So because we are choosing v_i to be the largest value in v, doesn't that mean we can only use this for the i-th column of the matrix? This is where this theorem confused me.
In other words, if I choose the i-th column, then it is assuming that the i-th value of the corresponding eigenvector is the largest value of the eigenvector?
See 8:56
@@MuPrimeMath Doesn't this imply that for a given n×n matrix, each of the corresponding eigenvectors has its absolute largest value at a different index?
I am just thinking about whether that makes any sense, whether it is helpful in any way, or whether I am wrong :D Thanks for answering so quickly.
Regards
If two eigenvectors have the largest value at the same index, then the Gershgorin disks for each eigenvalue will overlap.
For example, take the matrix [[10,-4],[-1,10]]. The eigenvectors are [2,1] and [-2,1], so both have the first entry as the largest value.
The two Gershgorin disks are both centered at 10. The second disk (radius 1) has no eigenvalues in it, which makes sense because the second entry of the eigenvectors is never the largest. The first disk (radius 4) has both eigenvalues in it. This is allowed because the disks intersect, so we only need the two eigenvalues to be in the union of the two disks.
@@MuPrimeMath So from my understanding, because of the derivation, the 2nd circle makes no sense, since both of the eigenvectors have the first value as their largest; it just happens to be where the 1st circle is centered. Meaning if we knew the eigenvectors, we could choose to ignore the 2nd circle, but in reality of course we wouldn't know them, so in most cases we wouldn't ignore it. Am I correct? Sorry, I think I need to stop here and move on.
That is correct
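For anyone who wants to verify the numbers in the example above, here is a quick check in plain Python, with the eigenvalues computed from the 2×2 characteristic polynomial:

```python
import math

A = [[10.0, -4.0], [-1.0, 10.0]]

# Eigenvalues of a 2x2 matrix: roots of lambda^2 - tr*lambda + det = 0.
tr = A[0][0] + A[1][1]                       # 20
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]      # 96
root = math.sqrt(tr*tr - 4*det)              # sqrt(16) = 4
eigs = [(tr + root) / 2, (tr - root) / 2]    # [12.0, 8.0]

# Gershgorin disks: (center a_ii, radius = off-diagonal row sum).
disks = [(A[i][i], sum(abs(A[i][j]) for j in range(2) if j != i))
         for i in range(2)]                  # [(10.0, 4.0), (10.0, 1.0)]

# Both eigenvalues lie in the radius-4 disk...
assert all(any(abs(lam - c) <= r for c, r in disks) for lam in eigs)
# ...and the radius-1 disk contains neither, which is allowed because the disks overlap.
assert all(abs(lam - 10.0) > 1.0 for lam in eigs)
```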
You call them discs, but doesn't the equation you wrote describe a rotated square? That shape is included in a disc but is more restrictive.
Since the complex absolute value is defined as |a+bi| = sqrt(a^2 + b^2), the equation describes a disk.
At 10:33, why did you get rid of Vc/Vi? I think it would help us restrict the value of lambda, since it's less than 1.
I got rid of the Vi because we don't actually know the value of i. i is defined as the index of the entry in the eigenvector that has the biggest magnitude, but we don't know the eigenvectors yet, so we can't identify i. We just use the fact that Vi is the biggest entry by definition to get rid of the Vc/Vi term!
Thanks =)
Thanks! 🙂🙂🙂
Awesome video
Amazing
Thanks
nice
Thanks bro
Fuck when i first saw you coming from the left i thought it was a bloodied shirt.
Thanks, sir
I'm sure it's a brilliant explanation, but all I understood was Avicii.
Guess that's on me then
so cool
Cool
GOATED
👍
Your proof is not correct, because restricting the index i at the beginning of the proof results in a final inequality valid only for that one case, only for one diagonal element. But the theorem relates to all diagonal elements.
For a fixed eigenvector, the final inequality at the bottom of the board at 7:01 is not true for all choices of index i. This is easy to see because the Gershgorin disks can be pairwise disjoint, and in that case the eigenvalue cannot be in all disks at once. For each eigenvector v, the theorem guarantees that its eigenvalue is contained in the disk centered at the diagonal element of A with index equal to the index of the largest-magnitude entry of v. I explain the reason for the largest-magnitude assumption at 5:55. You may be thinking of a theorem other than the one I proved in the video.
@@MuPrimeMath Then what guarantees that the index i of the largest-magnitude entry of the eigenvector corresponds to its eigenvalue being located in the disk centered at the diagonal element a_ii with the same index? And what guarantees that if we choose the largest-magnitude indexes of all the eigenvectors, we cover all the diagonal indexes of the matrix, so that no index repeats and no index is left out?
But this doesn't seem to work with any i... it only works for the i corresponding to the maximum-magnitude entry of the vector.
So we cannot apply this to all 3 components of the diagonal, but only to the one in the row whose number matches the index where the component of v is largest in absolute value?
Note that i is the index of the largest-magnitude entry of the eigenvector v, not the largest entry in the matrix. We can have eigenvectors with different largest-magnitude entries, and the theorem shows that they are each contained in the disk corresponding to their largest-magnitude index.
For a justification of why we expect an eigenvector in each disk (i.e. why we can apply the theorem to each component of the diagonal), see 8:56.
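The pairwise-disjoint case mentioned earlier in this thread is easy to see numerically. Here is a hypothetical example of my own in plain Python: when the disks don't touch, each disk contains exactly one eigenvalue.

```python
import math

# A matrix whose two Gershgorin disks, centered at 1 and 5 with radius 0.1,
# are disjoint, so each disk must contain exactly one eigenvalue.
A = [[1.0, 0.1],
     [0.1, 5.0]]

# Eigenvalues from the 2x2 characteristic polynomial.
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
root = math.sqrt(tr*tr - 4*det)
lo, hi = (tr - root) / 2, (tr + root) / 2   # ~0.9975 and ~5.0025

# Each eigenvalue sits in its own disk and not in the other one.
assert abs(lo - 1.0) <= 0.1 and abs(lo - 5.0) > 0.1
assert abs(hi - 5.0) <= 0.1 and abs(hi - 1.0) > 0.1
```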
Nobody can understand the theorem, not even a brilliant student, unless you solve at least 1 or 2 similar examples. I hope you will make clear in practice what the theorem actually states. The weakness of the lecture was not stating and drawing the Gershgorin circles and plotting the eigenvalues in a figure. If that had been done, it would have been a first-class lecture and would have drawn students' attention.
Thank you
SO cool