The problems for this video can be found in this link: github.com/LetsSolveMathProblems/Navigating-Linear-Algebra/blob/main/Episode%208.pdf. Please feel free to post solutions to these problems in the comment section, as well as peer-review other people's proofs. :)
I am glad you are back.
Thank you for this :)
Very helpful.
For problem 1:
We do, in sequence, the operations: scale the first row by 1/2, add -5 times the first row to the second row, add the second row to the first row, and finally multiply the second row by -1/2. This gives the matrix [[-2,5/4],[1,1/2]], which can be checked to be correct!
For problem 1, this is almost correct: You should have -1/2, not 1/2. Also, you may wish to specify that you are writing "columns first" (i.e., [-2, 5/4] is the first column, not the first row, as is usually done).
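For what it's worth, the sequence of row operations described above can be replayed numerically. A sketch with NumPy, reading matrices row by row (not columns-first), so the corrected entry -1/2 appears in position (2,2):

```python
import numpy as np

A = np.array([[2.0, 4.0],
              [5.0, 8.0]])
M = np.eye(2)  # apply the same row operations to I to accumulate A^{-1}

for X in (A, M):
    X[0] *= 1/2          # scale first row by 1/2
    X[1] += -5 * X[0]    # add -5 times first row to second row
    X[0] += X[1]         # add second row to first row
    X[1] *= -1/2         # multiply second row by -1/2

print(A)  # reduced to the identity
print(M)  # A^{-1}: [[-2, 1], [5/4, -1/2]]
```

The same four operations that take A to I take I to A⁻¹, which is exactly the method from the episode.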
For problem 2:
A can be written as RE(A) + iIM(A) for the real and imaginary parts respectively. Suppose a real inverse exists. Then A⁻¹A = A⁻¹RE(A) + iA⁻¹IM(A) = I, and since A⁻¹, RE(A), and IM(A) are all real, the imaginary part must vanish, so we must have A⁻¹IM(A) = 0. But rank A⁻¹ = n and rank IM(A) > 0, and multiplying by an invertible matrix preserves rank, hence rank(A⁻¹IM(A)) = rank(IM(A)) > 0. So A⁻¹IM(A) ≠ 0, a contradiction.
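A quick numerical illustration of the conclusion (the specific matrix is my own pick, not from the problem): whenever IM(A) is nonzero, the inverse comes out non-real too.

```python
import numpy as np

# An example matrix with nonzero imaginary part (chosen for illustration)
A = np.array([[1 + 1j, 0],
              [0,      2]])
A_inv = np.linalg.inv(A)

# As the rank argument predicts, the inverse cannot be real:
print(np.imag(A_inv))  # nonzero imaginary part
```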
Is this helpful for JEE?
For problem 5:
The nth power of a diagonal matrix with diagonal elements a, b, c, ... has diagonal elements aⁿ, bⁿ, cⁿ, ... and zeros everywhere else. This makes computation nice because instead of multiplying B n times you get Aⁿ for free and just have to multiply by S and S⁻¹ (which is also made a lot easier: AS is just multiplying each row of S by the corresponding diagonal entry of A).
For problem 5, this is correct.
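Both observations above can be checked in a couple of lines (a sketch with NumPy; the values of a, b, n and the matrix S are arbitrary choices):

```python
import numpy as np

a, b, n = 2.0, 3.0, 5
A = np.diag([a, b])

# n-th power of a diagonal matrix = elementwise powers on the diagonal
assert np.allclose(np.linalg.matrix_power(A, n), np.diag([a**n, b**n]))

# Left-multiplying by A scales each row of S by the matching diagonal entry
S = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(A @ S)  # row 1 scaled by a, row 2 scaled by b
```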
For problem 6:
If it's rank 1 and nilpotent then it must have the form [[0,0],[a,0]] or [[0,b],[0,0]]. We just need to show that there exists a transform S that takes us between these forms in each direction. To this end, a long computation shows that the transform [[0,1],[a/b,0]] works when going from [[0,0],[a,0]] to [[0,b],[0,0]], the transform [[1,0],[0,a/b]] works when going from [[0,0],[a,0]] to [[0,0],[b,0]], and the transform [[1,0],[0,b/a]] works when going from [[0,a],[0,0]] to [[0,b],[0,0]]. These should be all the cases.
For problem 6, I did not explicitly check your work, but this is likely correct. :)
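The first of those transforms can be spot-checked numerically for specific values of a and b (a sketch, reading matrices row by row; a = 3, b = 5 are arbitrary choices, so this verifies one instance, not the general formula):

```python
import numpy as np

a, b = 3.0, 5.0
N1 = np.array([[0, 0],
               [a, 0]])
N2 = np.array([[0, b],
               [0, 0]])
S = np.array([[0,     1],
              [a / b, 0]])

# If the two rank-1 nilpotent forms are similar via S, then S^{-1} N1 S = N2
B = np.linalg.inv(S) @ N1 @ S
print(B)  # equals N2
```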
Pb5:
This is too helpful to calculate powers of B if B is similar to a diagnoal matrix A since we can write B^m = SA^mS^-1 where S is invertible and powers of a diagnoal matrix are the easiest to compute : if A = diag(a1,...,an) then A^m = diag(a1^m,..,an^m) (this can be proven by induction if needed) then we will need to multiply it by S^-1 to the left and S to the right to end up with B^m.
For problem 5, this is correct.
For problem 3:
(a) the elementary matrix [[1/2,0],[0,1]] has inverse [[2,0],[0,1]], but the two have different traces (3/2 and 3), so they are not similar.
(b) the matrix [[1,0],[0,0]] works.
For problem 3, this is correct.
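Both parts are easy to confirm numerically (a sketch; the trace comparison in (a) uses the fact that similar matrices share a trace):

```python
import numpy as np

# (a) similar matrices have equal trace; these two don't
E = np.array([[0.5, 0],
              [0,   1]])
E_inv = np.linalg.inv(E)
print(np.trace(E), np.trace(E_inv))  # 1.5 vs 3.0, so E is not similar to E^{-1}

# (b) [[1,0],[0,0]] is neither nilpotent nor invertible
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(np.linalg.matrix_power(P, 10))  # still P: powers never reach zero
print(np.linalg.det(P))               # 0, so P is not invertible
```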
Pb3:
(a)
We can guarantee that a 2x2 invertible matrix A is not similar to its inverse A^-1 if we choose |det(A)| ≠ 1: if A is similar to A^-1, then det(A) = det(A^-1), but we also know det(A^-1) = 1/det(A), so det²(A) = 1, i.e. |det(A)| = 1.
We can take A from problem 1, since det(A) = -4 (not +/- 1), or we can take other simpler examples such as diagonal matrices: diag(2,1), diag(5,1/4), ..., diag(a,b) where |ab| ≠ 1.
(b)
Yes, absolutely: we can take any nonzero projection p from R² to R² and take its matrix with respect to the usual basis, for example A = diag(1,0) (e1|--->e1 and e2|--->0) or diag(0,1) (e1|--->0 and e2|--->e2).
Since p ≠ 0 and p is a projection, p² = p, so p^n = id (n=0) or p (n>=1); hence its matrix A can't be nilpotent, and it can't be invertible either, since the only invertible projection is the identity, and for our examples we can see as well that det(A)=0.
For problem 3, this is correct.
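The determinant identity driving part (a) can be checked on one of the suggested examples, diag(2,1) (a sketch; any |det| ≠ 1 example works the same way):

```python
import numpy as np

# det(A^{-1}) = 1/det(A): if A were similar to A^{-1}, det(A) = det(A^{-1})
# would force det(A)^2 = 1. Checking with diag(2, 1), where |det| = 2 != 1:
A = np.diag([2.0, 1.0])
d, d_inv = np.linalg.det(A), np.linalg.det(np.linalg.inv(A))
print(d, d_inv)  # 2.0 and 0.5: unequal, so A cannot be similar to A^{-1}
```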
For problem 2:
(a) We can write this matrix as A = RE(A) + iIM(A), where we're considering the real and imaginary components respectively. Then if we suppose a real inverse exists we have A⁻¹A = A⁻¹RE(A) + iA⁻¹IM(A) = I. This mandates that A⁻¹IM(A) = 0, which is impossible (for real matrices), as rank A⁻¹ = n while rank IM(A) > 0, so rank(A⁻¹IM(A)) > 0, hence A⁻¹IM(A) ≠ 0.
(b) Consider S= [[i,1],[0,1]] with inverse S⁻¹= [[-i,1],[0,1]] and A=[[1,1],[0,1]] then B=S⁻¹AS =[[1,0],[-i,1]] so it is possible!
For (a), this is correct. For (b), given that you are writing "columns first", I think your S^{-1} should be [[-i,i],[0,1]]; given this, I think your (b) still works! :)
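Reading the matrices row by row (an assumption on my part) and using the corrected S⁻¹, the computation in (b) can be replayed as a quick check:

```python
import numpy as np

S = np.array([[1j, 1],
              [0,  1]])
A = np.array([[1, 1],
              [0, 1]])

B = np.linalg.inv(S) @ A @ S
print(np.linalg.inv(S))  # [[-1j, 1j], [0, 1]], matching the corrected inverse
print(B)                 # [[1, -1j], [0, 1]]: a real A similar to a non-real B
```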
Pb 1:
Let's apply to I the same elementary operations that lead from A to I, and we will end up with A^-1.
A = [ 2 , 4 ]    I = [ 1 , 0 ]
    [ 5 , 8 ]        [ 0 , 1 ]
R2
For problem 1, this is correct.
At this point, do you have a clearer idea of how many videos in the series are left before you plan to stop?
Yes! I think we will have two more episodes (one on eigenvectors and the other on generalized eigenvectors). Even though many important topics (like inner products) will be left out, I hope these 10 episodes are insightful enough for viewers who are new to abstract linear algebra.
Please speak in your normal accent; it sounds too made-up, and the video's quality degrades solely because of the fake accent.