This is (probably) not an easy episode to digest, so if you have any questions, please comment them here. Also, the problems for this video can be found in this link: github.com/LetsSolveMathProblems/Navigating-Linear-Algebra/blob/main/Episode%206.pdf. Feel free to post solutions to these problems in the comment section, as well as peer-review other people's proofs.
Just want to say that this is the most comprehensive and best-formatted video on YouTube for direct sums and quotient spaces, something my book devotes 3 small pages to. Not that I understand it all yet, but I will be watching this video probably a couple more times and DEFINITELY going through the problems and probably your other videos in this series.
Thank you for the compliment! I will be keeping an eye out for new comments on this linear algebra series for the next month-ish, so feel free to comment your proofs for the problems here. As time allows, I can check and critique some of them.
For 3, just note that dim A/B = dim A - dim B, and further observe that two vector spaces of equal dimension over the same field are isomorphic (the linear map sending the basis vectors of one to the basis vectors of the other is an isomorphism). So, observe that dim (V/U)/(W/U) = dim V - dim U - dim W + dim U = dim V - dim W = dim (V/W), so the two vector spaces are isomorphic.
This is correct, but only when both A and B are finite-dimensional! I encourage you to try to come up with a proof for the infinite-dimensional case.
Gonna need time to process all of this lol
Pb 4:
Not necessarily; let's look at this example: take V = R[X], the space of polynomials, and W the space of polynomials with zero constant term. It's easy to check that W is a subspace of V, and V/W is the space of constant polynomials = R, which is finite-dimensional (dimension 1), but we know that dim V = +oo. As mentioned at 15:00 , in the counterexample W should be infinite-dimensional, because dim(V/W) = dim V - dim W; so if we want dim(V/W) < +oo but dim V = +oo, then dim W must be +oo.
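To make this concrete, here's a small sketch of my own (not from the thread) that models polynomials as coefficient lists, taking W to be the polynomials with zero constant term:

```python
# Sketch (my own illustration): model the counterexample V = R[x],
# W = {polynomials with zero constant term}. The quotient map
# V -> V/W ~ R just reads off the constant term, since p ~ q iff p - q is in W.

def quotient_map(p):
    """p is a coefficient list [a0, a1, a2, ...]; return its class in V/W ~ R."""
    return p[0] if p else 0.0

# p and q land in the same class whenever p - q has zero constant term:
p = [5.0, 1.0, 2.0]          # 5 + x + 2x^2
q = [5.0, 7.0, -3.0, 4.0]    # 5 + 7x - 3x^2 + 4x^3
assert quotient_map(p) == quotient_map(q)   # same coset in V/W

# So V/W is 1-dimensional even though V is infinite-dimensional.
```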
For Problem 4, this is correct. However, you can't quite use the argument at 15:00 to deduce W must be infinite-dimensional because that argument assumed that V is finite-dimensional.
@@LetsSolveMathProblems Yes indeed, but I mentioned it just as an intuition for choosing a proper counterexample.
Are you planning on covering tensor products of vector spaces?
Probably not in this series (other than in one problem for this episode), although I may make a separate video treating tensor product space in the future.
For problem 2: Because any vector space V and its dual space V* always have an isomorphism, we see that dim V = dim V*. So to show ϕ is an isomorphism, all we have to do is show that dim ker ϕ = 0, as the rank-nullity theorem would then show that dim im ϕ = dim V; hence ϕ will be both injective and surjective, and so an isomorphism (given also that it's linear!).
To show this, just note that if v ≠ 0, then there exists w such that w·v ≠ 0; hence ϕ(v) is not the zero map. Let w be one of (1,0) or (0,1) and we are done! (Update: you'd also need to show it's a linear map. To see this, just notice that for x, y in R², ax+y maps to the function (w → a(w·x)+w·y), which under function addition/scalar multiplication is the same as the sum of maps a·(w → w·x) + (w → w·y).)
For problem 2, this is correct. However, I note that it's not true that "any vector space V and its dual space V* always have an isomorphism". This works only when dim V < \infty, which is not a problem here because V = R^2.
Around 9:10 you use the fact that dim V = dim (V*)* to prove that the two vector spaces are isomorphic. How do you know that their dimensions are equal? And if you know that dim V = dim (V*)*, then don't you get isomorphism for free? Two vector spaces of equal finite dimension over the same field are of course isomorphic by mapping the basis vectors of one space to the basis vectors of the other. Was the purpose of this exercise just to illustrate what a natural isomorphism is?
Since dim V < \infty in the video, the dimensions of V and (V*)* are equal essentially because V and V* are isomorphic (shown at 5:05 in the video). In particular, by exactly the same reasoning, V* and (V*)* are isomorphic, so we have the result since isomorphism of vector spaces is transitive. As you stated, dim V = dim (V*)* directly implies the existence of an isomorphism, and the point of the example was to illustrate the notion of a *natural* isomorphism.
Question: what does a function, call it phi, that sends an element of V to an element of V** (the double dual) look like explicitly?
I intuitively think of phi as a composition of matrices. Say V = R^2 and F = R; then the best I can come up with is to view phi: V --> V**
as a 1x2 matrix multiplied by a 2x2 matrix. (The 2x2 matrix came to mind because Hom(V,F) in my case would be a 1x2 matrix that takes in an element of V and spits out a number. So I thought Hom(Hom(V,F),F) in my case is a function that takes in a function that spits out a number and sends that function to a number. The only thing I came up with that takes a 1x2 matrix as input, or at least can be composed with one, is a 2x1 matrix, ending up with a 2x2 matrix; and to make sure I end up with a number, I then compose again with a 1x2 matrix.) The result should take an element of V as input and output an element of R. But I'm not sure if that makes sense, or whether that's even the way to think of dual spaces.
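For what it's worth, here's one way to make the canonical map V → V** concrete in code — a sketch of my own with V = R² and F = R, representing functionals as plain functions rather than matrices:

```python
import numpy as np

# My own sketch for V = R^2, F = R. An element of V* is "dot with a fixed
# row vector"; an element of V** takes such a functional and returns a number.
# The canonical map phi: V -> V** sends v to "evaluate the functional at v" --
# no chain of matrix multiplications is needed.

def phi(v):
    """Canonical embedding: v |-> (f |-> f(v))."""
    return lambda f: f(v)

# A sample functional f in V*, written as dot with the row vector (3, -1):
f = lambda v: float(np.dot(np.array([3.0, -1.0]), v))

v = np.array([2.0, 5.0])
assert phi(v)(f) == f(v)                 # by construction, phi(v) evaluates f at v
assert phi(v)(f) == 3.0*2.0 - 1.0*5.0    # = 1.0
```

In matrix terms: a functional is a 1x2 row vector, and phi(v) acts on it by multiplying that row by the 2x1 column v, so phi(v) can be identified with the column vector v itself; no extra 2x2 matrix is needed.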
For problem 5: For the first part, I think the natural candidate for π is the map that takes v in V to v mod W. Then, if ϕ from V to U exists, the unique ψ is just the map that takes v mod W to ϕv. This is well-defined because W⊆kerϕ, so it doesn't matter that we send all of W to zero.
For the second part, if U' satisfies the universal property then there exists a unique ψ' from U' to V/W such that ψ'○π'=π. But we also know that ψ○π=π' hence ψ'○ψ○π=π (also ψ○ψ'○π'=π'). So now we just need to show ψ○ψ'=ψ'○ψ=id. This is done the same way as in the video, by just showing ψ'○ψ is a valid unique map when considering two copies of V/W and similarly with ψ○ψ' and U'. This then shows they must be equal to id. Hence V/W and U' must be isomorphic.
For problem 5, this is correct. For the sake of rigor, for part (a), you may want to add more explanation on why exactly there is the unique linear map \psi.
After many rewatches:
For problem 1: We just need to show that dimV⊕W=dimV+dimW. To see this, suppose V is n dimensional with basis v1,...,vn and that W is m dimensional with basis w1,...,wm. I claim that (v1,0),...,(vn,0),(0,w1),...,(0,wm) is a basis of V⊕W.
Any vector in V⊕W can be written as (a1v1+...+anvn, b1w1+...+bmwm), hence linear combinations of the vectors above span the space. These vectors are also linearly independent, for if a1(v1,0)+...+an(vn,0)+b1(0,w1)+...+bm(0,wm) = (0,0) = (a1v1+...+anvn, b1w1+...+bmwm), then a1v1+...+anvn = 0 and b1w1+...+bmwm = 0, which means a1=...=an=b1=...=bm=0. Hence it is a basis, thus dimV⊕W = n+m = dimV+dimW.
This then shows that we will not be able to find an isomorphism between V and V⊕W (as long as dim W > 0, since the dimensions differ).
Idk if this is allowed, but: if they are infinite-dimensional, then just consider the vector space with countably infinite basis (a1, a2, a3, ...), call it V. Then V and V⊕V are isomorphic, as they have the same dimension (which I think is cardinality??), and the map is surjective (so injective? edit: or not? maybe rank-nullity doesn't work in infinite dimensions lol). I guess the map would look like: if v in V and v = (a1, b1, a2, b2, ...), then map it to ((a1, a2, ...), (b1, b2, ...)).
For the case with a finite-dimensional V, your answer is correct. For the other case with an infinite-dimensional V, your answer is almost correct. You may want to clarify that you're considering a space V with a "countably infinite" basis. Moreover, I think your map is a little unclear: I think you want to say "if v in V and v's coefficients with respect to the basis (a1, a2, ...) are (b1, c1, b2, c2, ...), then map v to (v', v''), where the coefficients of v' and v'' with respect to the aforementioned basis are (b1, b2, ...) and (c1, c2, ...)".
I also add that you *cannot* deduce injectivity from surjectivity for a linear map in Hom(V, V) for an infinite-dimensional V. The reason that your example works is because your V and V \oplus V indeed have the same dimension, which is the cardinality of their basis as you guessed correctly.
For the infinite-dimensional case, there actually is a pretty simple example that doesn't require any knowledge of a basis: You can consider V = R[x] and W = R with the map that acts like (3+4x+5x^2, 2) |-> 2+3x+4x^2+5x^3.
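As a quick sanity check (my own sketch, not part of the original reply), the map above is just "prepend the constant" on coefficient lists, and it is visibly invertible:

```python
# Sketch of the isomorphism R[x] (+) R ~ R[x], with polynomials as
# coefficient lists: (p(x), c) |-> c + x*p(x), i.e. prepend c to p's coefficients.

def iso(p, c):
    """Map (p, c) in R[x] (+) R to a single polynomial in R[x]."""
    return [c] + list(p)

def iso_inv(q):
    """Inverse: split off the constant term."""
    return (q[1:], q[0])

p, c = [3.0, 4.0, 5.0], 2.0          # (3 + 4x + 5x^2, 2)
q = iso(p, c)
assert q == [2.0, 3.0, 4.0, 5.0]     # 2 + 3x + 4x^2 + 5x^3, as in the example
assert iso_inv(q) == (p, c)          # bijective: the map can be undone
```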
For problem 3: Take v in V and w in W. An element of (V/U)/(W/U) is of the form (v mod U) mod (W/U). Writing v = v' + w, where v' lies outside W, we get v mod U = v' mod U + w mod U; since w mod U is in W/U, we have (v mod U) mod (W/U) = (v' mod U) mod (W/U). We have thus 'removed' w and set it equal to zero*. This is the same as taking v mod W, so this gives a natural isomorphism with V/W!
* (and that is the only thing it did. This is because, by removing w, we have removed any trace of an element of U, because U sits inside W. The leftover v' is thus as far as we can go.)
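A small coordinate sanity check (my own toy example, not from the video): take V = R³, U = span(e1), W = span(e1, e2), so that quotienting by a coordinate subspace just forgets those coordinates:

```python
# Toy coordinates for (V/U)/(W/U) ~ V/W: V = R^3, U = span(e1),
# W = span(e1, e2), with U contained in W. Vectors are tuples.

def mod_U(v):
    """V -> V/U: forget the e1 coordinate."""
    return v[1:]

def mod_W_over_U(vbar):
    """V/U -> (V/U)/(W/U): forget the e2 coordinate."""
    return vbar[1:]

def mod_W(v):
    """V -> V/W: forget e1 and e2 at once."""
    return v[2:]

v = (7.0, -2.0, 9.0)
assert mod_W_over_U(mod_U(v)) == mod_W(v) == (9.0,)   # the two quotients agree
```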
For problem 3, this is a correct isomorphism. :)
Pb 1:
No for both cases: V will be isomorphic to the external sum V+W iff there exists an isomorphism f from V+W to V, but there is no such linear map f. In fact, as long as dim(W) > 0, no linear map from V+W to V can be injective, so it won't be bijective. It is not injective because we can take a nonzero vector x from W and, writing 0 for the zero vector of V, we have f(0,x) = f(0,2x) = 0, even though (0,x) ≠ (0,2x) since x is nonzero (x can't equal 2x or 3x, ...) => f is not injective => not bijective => not an isomorphism.
Not quite: a linear map f: V + W -> V doesn't need to be of the form f(v, w) = v (for example, if W is a subspace of V, then f(v,w) = v-w also works). The correct answer is *not* "no for both cases"!
@@LetsSolveMathProblems Ah, I see 🤔
Pb 2:
I'll refer to the map as f. f is an isomorphism iff f is bijective and f is a linear map.
#f is a linear map:
Let x=(a,b), y=(c,d) be vectors of R² and let z be a real number.
f(zx+y): w=(n,m) |----> (z.x+y).w = (az+c,bz+d).(n,m)= azn+cn + bzm+dm = z(an+bm) + (cn+dm) = z.(x.w)+(y.w)
and zf(x): w |---> z.(x.w) and f(y): w|---> y.w so we have indeed f(zx+y) = zf(x)+f(y) => f is a linear map.
#f is bijective:
1st: f is injective?
Let 0 be the zero vector of (R²)* meaning 0: w in R² |----> 0 in R.
Suppose v=(a,b) in ker(f) => f(v) = 0 => for every w in R², v.w=0. Choosing w=(1,0) gives a=0, and w=(0,1) gives b=0 (we could also choose w=v => a²+b²=0, which holds only if a=b=0). Either way, v=(0,0), so ker(f) C {(0,0)}; the other inclusion {(0,0)} C ker(f) is trivial since f is a linear map, so ker(f) = {(0,0)} => f is injective.
2nd: f is surjective?
Let g be a vector of (R²)*. g is a linear map, so for every x=(a,b) in R² we can write g(x) = a.g(1,0)+b.g(0,1); this guides us to choose v=(g(1,0),g(0,1)), and then we have f(v)=g => f is surjective.
=> f is bijective.
Conclusion: f is an isomorphism.
Note: we could also prove that f is bijective by proving just one of injectivity or surjectivity, together with the fact that dimR²=2=dim(R²)*.
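If it helps, here's a quick numerical spot-check of this proof (my own sketch, with functionals represented as closures):

```python
import numpy as np

# Spot-check of the map f: R^2 -> (R^2)*, v |-> (w |-> v.w).

def f(v):
    return lambda w: float(np.dot(v, w))

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
z, w = 4.0, np.array([5.0, 6.0])

# Linearity: f(z*x + y) and z*f(x) + f(y) agree on a test vector w.
assert f(z*x + y)(w) == z*f(x)(w) + f(y)(w)

# Kernel check: f(v) kills both (1,0) and (0,1) only when v = 0.
v = np.array([0.0, 0.0])
assert f(v)(np.array([1.0, 0.0])) == 0.0 and f(v)(np.array([0.0, 1.0])) == 0.0
```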
For Problem 2, this is correct. :)
@@LetsSolveMathProblems Thx ^^
Pb 3:
(V/U)/(W/U) = V/W by multiplying the numerator and denominator by U XD ^^....
Well, we want to prove that E=(V/U)/(W/U) is isomorphic to F=V/W, in this case we can try to prove E=F.
x in E => y in V , u1,u2 in U, w1 in W and x = y + u1 + w1 +u2 but since U C W w1+u1+u2 =w in W so x=y+w which is in F.
x in F => y in V , w in W and x=y+w = y+0+w+0 and we have 0 in U so x should be in E as well.
Conclusion: E=F => E is isomorphic to F
There's a pretty subtle mistake: You can't say "x in E => y in V , u1,u2 in U, w1 in W and x = y + u1 + w1 +u2" because technically y+u_1 is *not* in V/U, but {y+u: u in U} is in V/U, implying that to get the correct quotient (V/U)/(W/U), you can't just fix u_1. Indeed, E and F are *not* equal to each other.
@@LetsSolveMathProblems Hmmm 🤔 I'll need to learn more about quotient spaces, I guess... thank you!