“Or as I like to call it, Linear Algebra” 😂😂😂
And "Advanced" Calculus, or as MP likes to call it, Calculus.
You already know his exams are fucking hard just by this statement
I heard a math professor once say: "You can never get enough linear algebra."
This problem is also a great example of the power of using invariant subspaces. Here is an alternate proof that uses invariant subspaces:
Let E be the eigenspace of A for some eigenvalue lambda. You showed that v \in E => Bv \in E, i.e. E is B-invariant. But then B can be thought of as an operator acting on the vector space E (i.e. B restricted to E is a totally valid operator). Since we are working over the complex numbers, this operator must have an eigenvector, which lies in E. But every nonzero vector in E is an eigenvector of A, so that vector is an eigenvector of both! (The polynomial approach used in the video is essentially the proof that every linear operator has an eigenvector.) I believe with a bit more thought, this could tell you something about how the dimensions of the eigenspaces of A and B must be compatible. (For example, if E is k-dimensional, then the eigenvalues of B restricted to E must have algebraic multiplicities that sum to k.)
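A rough numerical sketch of this invariant-subspace argument, assuming numpy (the function name, tolerance, and SVD-based eigenspace computation are my own illustrative choices, not anything from the video):

```python
import numpy as np

def common_eigenvector(A, B, tol=1e-8):
    """Sketch: find a common eigenvector of commuting A, B by restricting B
    to an eigenspace E of A. Illustrative only, not numerically robust."""
    n = A.shape[0]
    # Pick an eigenvalue lam of A and an orthonormal basis Q of E = null(A - lam*I).
    lam = np.linalg.eigvals(A)[0]
    _, s, Vh = np.linalg.svd(A - lam * np.eye(n))
    Q = Vh.conj().T[:, s <= tol * max(s.max(), 1.0)]   # columns spanning E
    # AB = BA means B maps E into E, so B restricts to the small matrix B_E.
    B_E = Q.conj().T @ B @ Q
    # Over C, B_E has an eigenvector; lift it back to C^n.
    _, W = np.linalg.eig(B_E)
    return Q @ W[:, 0]   # eigenvector of both A and B (up to rounding)
```

Trying it on, say, A and B = A @ A + 3*A + np.eye(n) (which certainly commute) should return a vector w with Aw and Bw both parallel to w.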
A good linear algebra video is always appreciated. Thank you and I look forward to your next video.
A lot of people in the comments seem to be confusing this result with the (somewhat famous in quantum mechanics) result that *hermitian* linear transformations are simultaneously *unitarily* diagonalizable iff they commute. By contrast, this video gives a different result, which says that *any* pair of commuting linear transformations shares at least one eigenvector (but, importantly, may still fail to be simultaneously diagonalizable. Indeed, neither of the matrices need be diagonalizable in the first place). No assumption on the linear transformations is required other than that the field is algebraically closed and the vector space is finite dimensional.
But by the stronger result, any two matrices that commute _never_ fail to be simultaneously diagonalizable, right? Am I missing something?
@@APaleDot What if the matrices aren't diagonalizable in the first place?
@@chuckaway6580 Well if "two matrices are simultaneously diagonalizable if and _only_ if they commute" is true, then two matrices which commute _must_ be simultaneously diagonalizable, and therefore they must also both be individually diagonalizable. If they are not individually diagonalizable, then they won't ever commute, apparently.
@@APaleDot Yes, but the statement "two matrices are simultaneously diagonalizable iff they commute" isn't true. What is true, as Michael shows, is that they share at least one eigenvector. But we can't say more than that without some additional hypotheses on the matrices (such as being hermitian).
The topic has applications in quantum mechanics.
For example, the operators for the square of the orbital angular momentum and for its projection onto any axis have identical eigenfunctions (up to a factor depending on the other variables). That means there exist quantum states in which both quantities simultaneously take definite values, determined by the orbital and magnetic quantum numbers.
Yes, my physics mate told me about this application, and I've added this new theorem to the list.
I love your videos of olympiad and similar problems! Great work, all the way to explaining basic mathematical facts. Here I think you made more work for yourself than necessary: the vectors p(B)v, for arbitrary polynomials p, form a linear subspace V of C^n, which is spanned by v, Bv, B^2v, etc., and thus is nonzero. At the same time it consists entirely of lambda-eigenvectors of A, and it is stable under B, i.e. B maps V to V. The restriction of B to this subspace, say of dimension k = dim V, is equivalent to a kxk matrix; denote the restriction of B to V by B'. Thus B' has at least one eigenvector w in V (something you took for granted for A, already using the fact that we're over an algebraically closed field): Bw = B'w = mu.w, and, as noted before, Aw = lambda.w.
What is the restriction of B to V? I think this requires significantly more work for change of basis.
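To make that restriction concrete, here is a rough numpy sketch of the parent comment's construction (the rank tolerance and helper name are my own choices): take an orthonormal basis Q of V = span{v, Bv, B^2 v, ...}; the restriction B' is then just the k×k matrix Q^H B Q, so no elaborate change of basis is needed.

```python
import numpy as np

def common_eigenvector_via_krylov(A, B, tol=1e-8):
    """Sketch of the construction above: restrict B to V = span{v, Bv, B^2 v, ...}."""
    n = A.shape[0]
    # Start from any eigenvector v of A (with eigenvalue lam).
    _, eigvecs = np.linalg.eig(A)
    v = eigvecs[:, 0]
    # v, Bv, ..., B^(n-1)v span V; once a power becomes dependent, all later ones are too.
    K = np.column_stack([np.linalg.matrix_power(B, j) @ v for j in range(n)])
    k = np.linalg.matrix_rank(K, tol)
    Q = np.linalg.qr(K)[0][:, :k]       # orthonormal basis of V, dim V = k
    B_prime = Q.conj().T @ B @ Q        # the restriction B' as a k x k matrix
    _, W = np.linalg.eig(B_prime)
    return Q @ W[:, 0]                  # w with Bw = mu*w, and Aw = lam*w since w lies in V
```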
Another great video. Today, for some reason, I finally thought, "gotta be a climber." Looked you up, and sure enough, a really good one!
NOT THE RIGHT WAY TO DO IT !!!
A has at least one eigenvalue (we're in C); let's call it m, with x as one of its eigenvectors.
ABx = BAx = mBx, so S, the eigenspace of A for m, is stable under B, so B is an endomorphism of S.
=> Because we're in C, just like at the start, there are r, and y in S (that's important), such that By = ry; but y is in S, so by definition:
Ay = my
y is what you want
13:10 you can prove that each term of the factorization commutes with the others and put the factor that gives zero (the one with mu_m) as the first one that multiplies v, so v would be an eigenvector of B. This would prove that each eigenvector of A is an eigenvector of B and vice versa.
This is a nice solution. Still there is a bit of discussion that was omitted for the case that Bv is actually 0. Of course one could pick a different v.
In fact, if there exists some eigenvector of A such that Bv=0, the problem is much easier. Namely, in this case, due to commutativity, BAv=0.
If Av=0 as well (namely the eigenvalue of A associated to v was 0), then we have that Av=Bv=0 so v is the common eigenvector corresponding to the 0 eigenvalue for both matrices.
If Av ≠ 0, then, since v was chosen as an eigenvector of A, let the associated eigenvalue be a ≠ 0, i.e., Av = av. But then BAv = 0 implies B(av) = 0, so Bv = 0 and thus v is an eigenvector of B as well (with eigenvalue 0).
If Bv is 0, then v is an eigenvector of B with eigenvalue 0, so v is a shared eigenvector of A and B (Bv = 0*v).
@@skvortsovalexey Oh, yes that's even more straightforward then what i said.
7:16 shirt transformation. But is it linear?
If E is an eigenspace of A, then BE is contained in E. So B restricts to E. Call this restriction B*. B* has an eigenvector because C is algebraically closed, and that is a common eigenvector of B and A. Am I missing something?
That's how I solved it. Seems quick and efficient :)
And thus the center of GL(n, C) is the subgroup of scalar matrices.
At 12:30, why must one of the matrices be 0? There can exist nonzero matrices that multiply to zero.
This confused me as well a little bit, but it's not actually about one of the matrices being zero, it's about the *product* being zero at that point.
Since the whole expression is zero, either the left-most matrix takes a non-zero vector to zero, or the vector was already zero by that point, so we can ask the same question of the second matrix, and so on. At some point we must have a non-zero vector being taken to zero by one of the (B - µI) factors.
I hope that helps!
If A is the identity matrix, its eigenvectors are the unit base vectors (1,0,0,0,...), (0,1,0,0,...),...
Does it mean any matrix will have at least a unit base vector as eigenvector?
Not quite, but good question. Remember that if two eigenvectors have the same eigenvalue, then you can take any linear combination of those eigenvectors to get another eigenvector. For the identity matrix, with the unit basis vectors as eigenvectors, all the eigenvalues equal 1. Which means you can take any linear combination to get another eigenvector, i.e. literally every vector is an eigenvector of the identity matrix, as you'd expect given it's the identity! So the eigenspace for I_n is just the whole space, C^n. So naturally every matrix will share eigenvectors with the identity, given that any eigenvector must be in C^n. Those shared eigenvectors just won't necessarily be the unit basis vectors themselves.
Not first but still earlier than Good Place to Stop
Also earlier than Good Place to Stop
Also earlier than good place to stop
Good Place to Stop is really late today. :/
STILL earlier than Good Place to Stop 😆
Thank you for this lecture!
But then A - λI, B - μI (λ, μ the corresponding eigenvalues), operating on a complement of span v (v being the common eigenvector), also commute and hence also share at least one eigenvector, and so on ... so in this finite case, shouldn't they share a common diagonalization basis? What happens in infinite dimensions?
EDIT: this works if and only if A, B are diagonalizable (otherwise, in the finite case, at some point no complement of span v is invariant under A, B). E.g. the shift operator has only one eigenvector (up to scalar multiples).
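A tiny numerical illustration of that last point, using a 3×3 nilpotent "shift" (a Jordan block, my own example): its only eigenvalue is 0 and its eigenspace is one-dimensional, so there is no invariant complement of span(e1) to recurse on.

```python
import numpy as np

# The finite "shift": sends e1 -> 0, e2 -> e1, e3 -> e2.
N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

print(np.linalg.eigvals(N))              # all eigenvalues are 0
print(3 - np.linalg.matrix_rank(N))      # eigenspace dimension = 1 (spanned by e1)
```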
Thumbnail equation is very easy to solve. It’s a pop group.
This is a very important result in quantum mechanics
So cool to me how physics and math go hand in hand almost everywhere.
@@HershO. Well, that's because the most efficient way of modeling our reality is through maths.
Well, commuting normal operators also share the same eigenvectors (they're simultaneously diagonalizable).
Nice result! And nice costume change between Act 2 and Act 3. 😂
This may be a dumb question, but is there an analogous thing for infinite dimensional vector spaces like function spaces or such? That is, what conditions would be needed to be fulfilled so that it'd apply to those? This proof obviously isn't going to work in those cases, so..
It is a bit strange, as a German, when words beginning with "eigen" are also used in the English language, because "eigen" is a German word and it wasn't translated. In most cases it would be translated as "own". For example, "eigenvector" in English is "Eigenvektor" in German, with the same spelling, but the German "Eigentum" is "ownership".
There was a time that Germany dominated the math world. There are many things that we still use. For instance, the notation Z for the set of integers.
But then something happened…
French uses the word "propre" for eigen which means own.
We use "auto" in Italian (autovettori, autovalori); "autovectors" could probably work in English, or "selfvectors".
Egenvektor in swedish
@@arrotino9501 A small historical curiosity: the Italian terms have almost always tracked the changes in the English terminology on this subject. In order, the terms used were: latent roots or values, proper or characteristic roots, and, finally, autovalori.
Does the converse hold true? I noticed that there was no assumption on lambda, so we could immediately strengthen this to: corresponding to each eigenvalue of A there is a shared eigenvector. In this case would the converse hold true? And if still no, is there any known further strengthening of the proposition in the video that makes it a double implication?
More linear algebra problems!!!
Linear algebra is so abstract that it can be hard to follow. Introducing nondescriptive constants multiplied by some variables just to transform them into other nondescriptive constants multiplied by a different set of variables, just to prove that a redefinition of those variables equals some function that, rewritten with yet another set of variables, proves that they share this feature.... got it lol. This coming from someone who understands how differential equations and Fourier transforms work.
Isn't your factorization of p(z) predicated upon a_n being nonzero, so that it actually has n roots? What happens if a_n = 0?
Good point, he should've put a polynomial of degree
I just had the same problem :D
What happens is Godzilla eats the square root of 2.
No problem: if you can get v, Bv, B^2v, etc. to be linearly dependent before n, just use that a_k instead of a_n. If you really need the n+1 vectors, it's because a_n is nonzero (otherwise you would have had a zero combination earlier). I speak Spanish, so sorry if something is badly written.
Since an eigenspace of A is B-invariant you can simply restrict the domain of B to an eigenspace of A and find an eigenvector of B there!?
You have to distinguish the case when Bv = 0.
If Bv = 0, you have Bv = 0*v, so v is the shared eigenvector of A and B.
So coool thank you
One of my physics mates told this to me.
If AB*BA = BA*AB, then A+B = B + A and AB-BA=0.
My fields medal awaits
How do we know lambda and mu are distinct?
Btw I don't see the annotation of the calculus analysis video popping up at the end of the video
That's why he always says "it should be on the screen..." and not "it is on the screen..."
@@yoav613 you're right he didn't say anything wrong
[(A+B)/2]²-[(A-B)/2]²= AB=BA
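For anyone checking this identity, expanding both squares gives (a one-line verification, nothing beyond the comment's claim):

```latex
\left(\tfrac{A+B}{2}\right)^{2}-\left(\tfrac{A-B}{2}\right)^{2}
  =\tfrac{1}{4}\bigl[(A^{2}+AB+BA+B^{2})-(A^{2}-AB-BA+B^{2})\bigr]
  =\tfrac{AB+BA}{2},
```

which equals AB (and BA) exactly when AB = BA.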
Why is linear algebra the most important math class?
It’s definitely the most important for computer science and quantum mechanics applications, but I’m not sure if that’s Michael’s reason for saying that
I've also seen applications in mechanical eng., chemical eng., electrical eng., (pretty much all engineering fields, really). I think it's so widely applicable because most tractable problems are just systems of linear equations, and if you have an intractable problem, you very often can get away with discretizing it and solving the linear approximation on a computer.
@@leif_p yeah exactly
Is the converse true, ie matrices that share an eigenvalue commute?
No, and it's pretty easy to construct a counter-example. Consider two transformations in 3D space: the matrix A rotates vectors around the x-axis, the matrix B scales vectors along the y-axis. Both of these transformations leave all vectors lying along the x-axis unchanged, therefore Ax = λx and Bx = λx (and λ=1). But it's pretty clear to see that rotating and then scaling will give a different result than scaling then rotating.
For instance, if you have a vector along the y-axis and rotate it around the x-axis 90°, it now lies on the z-axis and the scaling doesn't affect its length at all. But if you scale first, its length is different.
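A quick numerical check of this rotation/scaling counterexample (the 90° angle and the scale factor 3 are arbitrary choices for illustration):

```python
import numpy as np

theta = np.pi / 2                                  # rotate 90 degrees about the x-axis
A = np.array([[1, 0,              0            ],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta),  np.cos(theta)]])
B = np.diag([1.0, 3.0, 1.0])                       # scale by 3 along the y-axis

x = np.array([1.0, 0.0, 0.0])
print(A @ x, B @ x)                  # both return [1, 0, 0]: shared eigenvector, eigenvalue 1
print(np.allclose(A @ B, B @ A))     # False: the transformations do not commute
```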
No. Not even if they share an eigenvector, even with the same eigenvalue: counter-example is (1 2; 0 -1) and (1 -2; 0 -1). Those both have (1;0) as eigenvector for eigenvalue 1, but they don't commute.
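And a quick check of this 2x2 pair, just plugging the matrices into numpy:

```python
import numpy as np

A = np.array([[1,  2],
              [0, -1]])
B = np.array([[1, -2],
              [0, -1]])
v = np.array([1, 0])

print(A @ v, B @ v)    # both give [1, 0]: shared eigenvector with eigenvalue 1
print(A @ B)           # [[1, -4], [0, 1]]
print(B @ A)           # [[1,  4], [0, 1]]  -> AB != BA
```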
Wouldn't this reasoning apply to all eigenvectors? Why does the proposition claims only "at least one"?
You don’t know that the procedure doesn’t find the same w for other eigenvectors
@@immakiku but it finds different vectors for different lambdas. So there are at least as many different vectors as there are distinct eigenvalues of the matrix A (or B).
@@skvortsovalexey And why should one suppose that there are different lambdas? A could have only one eigenvalue lambda (with algebraic multiplicity n).
@@bjornfeuerbacher5514 We should not suppose. But if different lambdas (of A or B) exist, then...
@@skvortsovalexey (edit - yes, I think it's true that A and B share all eigenvectors), but at least this proof still doesn't seem sufficient. You can have a second set of different eigenvalues and eigenvectors, but how do you know from this proof that the w you find isn't just a rescaled version of the w you found earlier?
12:18 what if (B-m_n * I)v is 0? Then there's no cutoff point. This case needs to be discussed.
Then q(B) = I and the shared eigenvector is the original vector v.
Don't these two matrices share all of their eigenvectors (if they are diagonalizable)?
Yes, but that is a fundamentally different result with stronger hypotheses and thus a stronger conclusion.
@@chuckaway6580 What about generalized eigenvectors, what about Jordan form? Would they be simultaneously convertible to Jordan form?
I don't even know what an eigenvector is 😔
Cool! It reminds me of Krylov subspaces, something I have not heard about in 20-plus years :-)
Completely off topic, but I had a math exam this afternoon, had not looked at any L.A. on ytb for months, and what do you know, it shows this in my recommendations. Like wow, the odds!
What may be true is that I am on a Google account, and although I don't have any direct contacts with people in my class, I think Google linked phone numbers to Instagram and WhatsApp or smth, where some people in my class have perhaps looked up such a video after the test, and then it decided to suggest it.
Anyways, this is very freaky, I refuse to believe it's a coincidence. Greetings from Switzerland anyways.
Like legit a whole problem on
[A,B] = AB - BA
if Av = λv, then ABv = λBv (stumbled across this)
this is nice
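For reference, the observation above (ABv = λBv) is a one-line consequence of commutativity, and it is the key step used in the video:

```latex
A(Bv) = (AB)v = (BA)v = B(Av) = B(\lambda v) = \lambda\,(Bv),
```

so Bv is either 0 or again a λ-eigenvector of A.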
One point: in producing p(z) as a product, you assumed that a = a_n was non-zero, which is not necessarily true.
Michael, you should state more explicitly (at this low level) that "each matrix has at least one eigenvector" th-cam.com/video/0elP0J91gew/w-d-xo.html
People who don't know that already would perhaps not get that message from what you say in the video...
And in the reals..? 🤔
Nice
nice
If AB=BA, then BA=AB
Your logic is commutative. Non-commutative logic has curvature caused by a hidden '0' or '1' of higher-dimension logic. This is my own opinion as a layman (programmer).
'0' is externity, '1' is internity.
For example, "if a=b then b:=a+1". In this line, "b:=a+1" causes this curvature.
In our case the higher-dimension logic is the computer. Its "0" is the processor (energy), its "1" is memory (space). "b:=a+1" goes from "0" to "1" and sees a curved "b", because while going through "0", "b" was incremented. "a=b" goes from "1" to "0" and sees "b" not incremented.
The curvature is positive because it is caused by "0".
Is this just an excuse to celebrate ABBA?
(any younger folk might need to ask a boomer about who ABBA were)
linear algebra - where Mewtwo is just a root of a polynomial
First
first
15:05 we do not have Aw = lambda w because f(z) is not q(z). The proof is obviously correct except for that last transition which is weird and probably incorrect.
It says "for all f(z)" i.e. f(z) is any generic polynomial, remember from the proof of the earlier lemma. q(z) is an example of an f(z) and therefore fits the rule and is correct.
You changed shirts mid video lol