3:00 "that's the close your eyes operator". dead 😂
These lectures are exceptional. Schuller's clarity and thoroughness are simply amazing - he makes the average diff. geom. text look like it was written by apes. The non-witchcraft treatment of the determinant towards the end makes me weep for all the dismal linear algebra texts I've ploughed through - how have I never seen this before?
not only that, but the stamina to lecture for over two hours about technical material while remaining well-spoken... remarkable
@@rrtroutslayer and no looking at notes either
haha i share the sentiment, diff geo books sometimes obscure these basic ideas
Well, technically the average differential geometry textbooks were written by apes, along with any other book ever :D
agree, his lecturing is razor blade sharp
A gentleman only chooses a basis if he really must.
aha
15:46, 20:03, 21:04, 23:20 (field for the vector space of functions from V to W is the same as W's; V in principle can have a different field), 25:58, 28:16 (T, V*=Hom(V,k) and V are over field k), 29:16, 32:20 (Hom(V,W) has the same field as W, but linear maps demand V has the same field too), 35:15, 40:32, 46:49, 48:37, 52:53, 1:00:44, 1:08:27, 1:09:40 (how do vectors act on covectors? 1:45:00), 1:24:46, 1:30:54, 1:39:29, 1:42:00, 1:49:21, 1:52:36 (orange product definition gives V), 1:58:26, 1:59:18, 2:00:25, 2:01:18, 2:03:53, 2:04:48 (e.g., the d+1 entry v_1 is switched with the 1st entry), 2:11:53, 2:14:25, 2:21:49
thanks!
At 45:09 he goes on to define T̂ in terms of T, he puts the covector w in the place of the second argument of T i.e. T(_,w). But T should take the covector as the first argument. So, the calculation should be done with T(w,_).
Yes, he seems to mix this up multiple times (also when defining the tensor product, as noted in a different comment): covectors come before vectors, since a tensor is a map T: V* x … x V* x V x … x V -> K and not T: V x … x V x V* x … x V* -> K
finally some1 who knows how to explain this properly
agreed
What a nice definition of the determinant! It is somewhat abstract, but in my experience it is the clearest definition of the determinant there is.
3:27 field
7:29 vector space
15:27 linear map
21:49 Hom(V,W)
26:27 End(V),Aut(V),V*
28:05 (p,q) tensor
33:30-41:16 tensor product
49:40 Hamel basis
1:02:00-1:10:13 component of T
1:10:33 change of basis
1:42:28 change of component under change of basis
1:56:31 det
2:01:19 n-form
2:08:13 volume form
2:10:15 det
Around 39:19 when he defines the tensor product of two tensors he changes the order of the slots.
The elements of the dual space should appear first and then the elements of V.
He didn't make a mistake per se, though. Consider a very simple tensor space: T_0,1 := V* := Hom(V, K) := {T: V → K}. As you can see, it has a star in one place but not in another. What he could also have written is
V** ⊗ ... ⊗ V** ⊗ V* ⊗ ... ⊗ V*, but he wrote V instead of V**, because V is almost exactly the same as V**. If you wanna know about this "sameness" of V and V**, look up "double dual space".
@@orangus01 I know what you mean, but I think this was not what he intended there.
Professor Schuller's classes are very rigorous, and once he gives a definition he tends to stick to it.
@@orangus01 I think he indeed made a mistake, or at least he mixed conventions. The tensor T eats p covectors and q vectors, but then he writes T(v_1,..., v_p, w_1, ..., w_q), implying that T eats p vectors and q covectors.
I think it is not a mistake. I saw this discrepancy in another lecture (Gravity and Light). But it is the p covectors that are eaten by p vectors and the q vectors that are eaten by q covectors. So we have w x w x w x ... (p times), v x v x v x ... (q times), yielding v ⊗ w. Actually, I will need to review and study more, because I think you guys' claim that he made a mistake has now confused me again. @@kyubey3166
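To make the "sameness" of V and V** from a few comments up concrete: the canonical map sends a vector v to "evaluation at v", which is also the answer to the "how do vectors act on covectors?" question in the timestamp list. A minimal Python sketch (the covector eps1 and the vector are made up, purely illustrative):

```python
# A tiny illustration of the canonical map V -> V**: a vector v becomes the
# evaluation functional omega |-> omega(v). Covectors are modelled as plain
# Python functions.

def double_dual(v):
    """Return the element of V** canonically associated with v."""
    return lambda omega: omega(v)

eps1 = lambda v: v[0]   # a covector on R^2: picks out the first component

v = (3.0, 4.0)
vv = double_dual(v)     # v, viewed as a map that eats covectors
print(vv(eps1))         # 3.0, the same as eps1(v)
```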
The dual basis is very natural. You choose it because it gives you the i-th coordinate of the vector v = v_1 e_1 + ... + v_n e_n. That is, epsilon_i(v) = v_i. It works like a "filtering device" that gets rid of the other components. It is analogous to an orthonormal basis when you have an inner product.
mm, I think the best explanation of the dual basis is g_ij(e_i) = epsilon_j, then showing that epsilon_i is unique such that epsilon_j(e_i) = delta_ij.
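The "filtering device" picture is easy to check numerically: if the columns of a matrix B are a basis, the rows of B^{-1} are the dual basis. A quick numpy sketch (the basis here is an arbitrary made-up example):

```python
import numpy as np

# If the columns of B form a basis (e_1, e_2) of R^2, then the rows of
# B^{-1} are the dual basis (epsilon^1, epsilon^2), since B^{-1} B = I
# means epsilon^i(e_j) = delta^i_j.

B = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # columns: e_1, e_2
dual = np.linalg.inv(B)           # rows: epsilon^1, epsilon^2

v = 3 * B[:, 0] + 5 * B[:, 1]     # v = 3 e_1 + 5 e_2
print(dual @ v)                   # [3. 5.]: each epsilon^i filters out v^i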
@31:00 If a p,q tensor takes in p inputs from the dual and q inputs from V, why does he flip the p and q in the definition of the tensor space?
Thanks for pointing that out, I was confused about that but now I'm sure it's a mistake.
@39:08 when we were defining the tensor product of two tensors. It's a new tensor over the same vector space V. Shouldn't the first entries be covectors instead of vectors? The way a tensor is defined is via cartesian product of the dual vector space V* and the vector space V. So naturally the first p slots of a (p,q) tensor should be covectors and the later q slots should be vectors. Right?
Yes, that seems to have been a mistake. Many people pointed it out in the comments.
I know some people have pointed this out, but the comments are difficult to find (at least for me).
There is a mistake at 39:00, when he defines the tensor product. According to his definition, a tensor T of rank (p,q) eats p covectors and q vectors, but then he writes T(v_1, ..., v_p, w_1, ..., w_q), implying that T eats p vectors and q covectors. Also, the input order is reversed. I think what he meant to write is
(T⊗S)(w_1, ..., w_p, w_{p+1}, ..., w_{p+r}, v_1, ..., v_q, v_{q+1}, ..., v_{q+s}) = T(w_1, ..., w_p, v_1, ..., v_q) S(w_{p+1}, ..., w_{p+r}, v_{q+1}, ..., v_{q+s}).
This is correct.
Dude, I'll explain to you what's bothering you...
yeah, tiny mistake; check the notes by Simon Rea.
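For anyone who wants to see the corrected formula above in action, here is a minimal Python sketch modelling tensors as functions that take their covector arguments first; the two sample tensors on R^2 are made up for illustration:

```python
# A minimal model of the corrected formula: tensors are Python functions that
# take their covector arguments first, then their vector arguments.

def tensor_product(T, p, q, S, r, s):
    """(T (x) S)(w_1..w_{p+r}, v_1..v_{q+s}) =
       T(w_1..w_p, v_1..v_q) * S(w_{p+1}..w_{p+r}, v_{q+1}..v_{q+s})."""
    def TS(*args):
        ws, vs = args[:p + r], args[p + r:]
        return T(*ws[:p], *vs[:q]) * S(*ws[p:], *vs[q:])
    return TS

omega = lambda v: v[0] + v[1]        # a (0,1)-tensor: eats one vector
vec = lambda w: w((1.0, 2.0))        # a (1,0)-tensor: eats one covector

prod = tensor_product(vec, 1, 0, omega, 0, 1)     # a (1,1)-tensor
print(prod(lambda v: v[0], (3.0, 4.0)))           # 1.0 * 7.0 = 7.0
```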
1:11:22 Why is A^b_a written as a component of the (1,1) tensor? Could we do the same by taking a (0,2) tensor that acts on a pair of vectors directly, rather than including covectors in the picture?
Initially, your (good) question made me think. In this equation, the numbers A^b_a are not to be interpreted as the components of a (1,1) tensor, but as components of an endomorphism in End(V). The confusion is that (in the finite-dimensional case) the tensor space T^1_1(V) is canonically isomorphic to the space End(V). So they have the same behaviour. For some further details, see the document "geometrical anatomy of theoretical physics_Jacques" at the link: drive.google.com/file/d/1rFmpXAH_t_PZ84iUfIHSkQ_SHYcpELEC/view?usp=drive_link
See especially appendix A, section A4.
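The canonical isomorphism is short enough to demonstrate directly: send an endomorphism A to the (1,1)-tensor T(omega, v) := omega(A v); its components on the basis and dual basis recover the matrix entries. A hedged numpy sketch (the matrix A is an arbitrary example):

```python
import numpy as np

# Sketch of the canonical isomorphism End(V) ~ T^1_1(V) in finite dimensions:
# a matrix A becomes the (1,1)-tensor T(omega, v) := omega(A v), and the
# components T^b_a = T(epsilon^b, e_a) recover the matrix entries A^b_a.

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

T = lambda omega, v: omega @ (A @ v)   # covectors modelled as row vectors

e = np.eye(2)                          # standard basis and its dual basis
comps = np.array([[T(e[b], e[:, a]) for a in range(2)] for b in range(2)])
print(np.allclose(comps, A))           # True
```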
There is a mistake at 23:17. In particular, if V and W were over different fields then Hom(V,W) wouldn’t be the set of all linear maps from V to W and so this would be an abuse of notation and terminology.
If you want to generalise, I think maybe you could define a function space i.e. the set of all maps from X to V, denoted say F(X,V), where X is any set and V is any k-vector space. This set can be given a k-vector space structure in the manner described by Doctor Schuller.
In this case, if f in F(X,V), x in X and r in k then (rf)(x):=rf(x) makes sense but f(rx) may be meaningless unless X is endowed with a k-vector space structure.
Edit: This mistake is addressed at 32:17.
One thing that is worth mentioning is the fact that we are always considering an ordered basis. It may seem a pedantic requirement at first sight, but if you ignore the order it's impossible to talk about matrices, for example.
What is the relevant algebraic structure? An (ordered) tuple?
@@jonathansharir-smith6683 The relevant algebraic structure is that of a vector space.
Every vector space has a basis, and any two bases (of a fixed vector space) have the same cardinality.
When working in a finite-dimensional space (say of dimension n), one usually talks about a basis {v_1, v_2, ..., v_n} while implicitly assuming that the order of the vectors is important.
@@mathjitsuteacher Sorry, I should have been clearer. I know we're talking about a basis of a vector space, what I meant is how would you formalize the notion of an ordered basis? As an ordered tuple?
@@jonathansharir-smith6683 Usually one labels the basis vectors using an ordered set for example when we write {e_1,e_2,e_3} we are implicitly defining an order. So when you work with coordinates and matrices everything must respect this order.
If you relabel the vectors, you still have a basis, but since you changed the labels, you will get different coordinates and matrices.
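Concretely, relabelling the basis permutes the coordinates, which is easy to see numerically. A small sketch (the basis and the vector below are arbitrary examples):

```python
import numpy as np

# Reordering the basis permutes the coordinates: the columns of B are the
# ordered basis, and the coordinates of v are solve(B, v).

B = np.array([[1.0, 1.0],
              [0.0, 2.0]])             # ordered basis (e_1, e_2)
v = np.array([4.0, 6.0])

print(np.linalg.solve(B, v))           # coords w.r.t. (e_1, e_2): [1. 3.]
print(np.linalg.solve(B[:, ::-1], v))  # coords w.r.t. (e_2, e_1): [3. 1.]
```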
The automorphisms do not form a vector space, because it is not closed under addition. It forms a group, however, with the composition operator
If L(V) is the set of all linear transformations from the vector space V to itself (endomorphisms), then Aut(V) is a subspace of L(V). When T, T' ∈ L(V), then T + T' is the mapping whose value at v ∈ V is T(v) + T'(v).
indeed, the two automorphisms v -> -v and v -> v sum to the zero homomorphism.
To add a bit more, it's not quite enough to say addition doesn't work so automorphisms don't form a vector space. Perhaps a different binary operation could be considered on the set - and like you said, composition works! So we could try to consider automorphisms as a vector space with the "addition" being composition. Or maybe there's another binary operation we could use....
But scalar multiplication is going to fail, no matter what the "addition" operation is. When T is an automorphism, then for any vector v, 0*(Tv) = T(0*v)=T(0) = 0, so 0*T is the zero operator, by virtue of linearity. No matter what the underlying field is (every field has a 0, and by the axioms of a vector space, it must behave like this), or the underlying binary operation (which we didn't use at all here), this is going to happen. So there's no way to turn automorphisms into a vector space.
agreed
@@bentoomey15 your argument for scalars is true, but substructures are always defined to inherit the operations from the superstructures. In this case the subspace should have the same operations of addition and scalar multiplication, and here both fail, not just one.
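The scalar-multiplication obstruction above is easy to see with matrices. A minimal numpy sketch (T is an arbitrary invertible matrix):

```python
import numpy as np

# However you define "addition" on Aut(V), the vector space axioms force
# 0*T to be the zero operator, which is not invertible.

T = np.array([[2.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.det(T) != 0)     # True: T is invertible, so T is in Aut(R^2)

zeroT = 0 * T                    # scalar multiplication inherited from End(V)
print(np.linalg.det(zeroT) == 0) # True: 0*T is the zero map, not in Aut(R^2)
```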
20:55 This one took me a while to get. If V and W are R-modules and if f in Hom(V,W) then for each r in R the function rf need not lie in Hom(V,W).
To see this, let v in V and s in R and consider (rf)(sv):=rf(sv)=rsf(v). Now, if rf is in Hom(V,W) then this would mean (rf)(sv)=s(rf)(v) which need not be true unless R is commutative.
So, if R is not commutative then all we can say about Hom(V,W) is that it is an abelian group.
1:53:53 I still don't get why it can be turned around. 🤨
Indeed, this is not explained in the lecture. An extensive description of this topic can be found in the appendix of the document "anatomy of theoretical physics_Jacques" at the link:
drive.google.com/file/d/1rFmpXAH_t_PZ84iUfIHSkQ_SHYcpELEC/view?usp=drive_link
In case of any questions/remark you can inform me.
Master lecture, thank you very much!!!
he is extremely good
2:09:24 isn't there a way to define determinants without referencing a choice of volume form? Namely, define it as the top exterior power of the endomorphism?
I can't seem to derive that det(phi ∘ xi) = det(phi) det(xi) at 2:15:59 from the given definition. Specifically, I can't seem to get another factor of omega(e_1, ..., e_n) in the denominator after trying to expand the numerator like at 1:32:29.
If det(psi) = 0 then psi maps the basis to a set of linearly dependent vectors, and applying phi to these will give linearly dependent vectors again, so then det(phi ∘ psi) = 0.
If det(psi) ≠ 0 then we can use (psi(e1), ..., psi(en)) as a second basis and write (using w for omega):
det(phi ∘ psi) = w(phi(psi(e1)), ..., phi(psi(en))) / w(e1, ..., en)
= ( w(phi(psi(e1)), ..., phi(psi(en))) / w(psi(e1), ..., psi(en)) ) * ( w(psi(e1), ..., psi(en)) / w(e1, ..., en) )
= det(phi) * det(psi)
This uses the fact that the determinant is independent of the chosen basis.
The trick of multiplying by 1 = w(psi(e1), ..., psi(en)) / w(psi(e1), ..., psi(en)) resembles how the chain rule for differentiation is derived.
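And the argument is easy to check numerically with the lecture's definition of det. A quick sketch (the factor 2.7 in the volume form and the random matrices are arbitrary; any nonzero top form works):

```python
import numpy as np

# Check of det(phi o psi) = det(phi) det(psi), using the definition
# det(f) := w(f(e_1), ..., f(e_n)) / w(e_1, ..., e_n). Here w is 2.7 times
# the standard determinant of the column vectors.

n = 3
rng = np.random.default_rng(0)
phi, psi = rng.normal(size=(n, n)), rng.normal(size=(n, n))

w = lambda *vecs: 2.7 * np.linalg.det(np.column_stack(vecs))
e = list(np.eye(n))                                  # standard basis vectors

det = lambda f: w(*[f @ v for v in e]) / w(*e)
print(np.allclose(det(phi @ psi), det(phi) * det(psi)))   # True
```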
1:18:09 - 1:25:43 when does the summation convention work?
When everything is linear or multilinear
I suppose the K for field is from German “Körper” (body)?
yes, in Spanish it's called cuerpo (body).
Latin corpus
@@lucianoosinaga2980 in Portuguese we call it corpo (which means body in English).
The argument given at 45:30 can simply be reversed: one can merely construct a map, say A: V --> V, which would then prove that the (1,1) tensor space is isomorphic to V instead of V*. I think the problem here is the fact that for an infinite-dimensional vector space the dim(V)
Hello there! I'm having difficulties with the construction of an End(V*) element from a bilinear tensor and the other way around, and I can't find references for this on the internet; I've only found isomorphisms with End(V). Does someone have a nice reference for that, or can explain it in more detail? At 42:37. Btw, I think we should have T(w, -) and not T(-, w), as w is in V*.
At 46m15s he defines T(v,w), but as a map from VxV* to K. But to show that T is a (1,1)-tensor on V, he needs to show that T maps from V*xV to K, right?
Vectors are like nouns. Covectors are like verbs. It's easy to think of a collection of nouns in a bag, but not a collection of verbs in a bag.
aha nice one
A k-form is like a measure on the vector space: use a 1-form for length, a 2-form for area, a 3-form for volume, etc.
You forgot the distributive axiom in your field definition
Frederic went to one hell of a high school; I never learned about fields when I was there lol.
He probably just means that you learned examples of fields like Q and R without necessarily learning the general field structure.
1:31:04 "I choose this (your great great grandfather) because you have no emotional attachment to him".
Me: What does that mean? (Remembers where this university is.) Oh.
german humour XD
Field also requires distributivity.
At 23:27, if k is the complex field (with multiplication op *) for V and k' is the real field (with multiplication op x) for W, then (Lambda diamond g)(m*v) = Lambda x m x g(v);
here, Lambda is from the reals, m from the complexes, and v from V.
But the operation x (defined on the reals) cannot handle a complex m. What then?
I believe it was a mistake. Without adjusting the definitions it doesn't make sense to talk about linear maps between vector spaces over different fields. The construction of the vector space is fine however. If S is an arbitrary set, and W is a vector space over some field K, then the set of all functions F(S, W) can be made into a vector space over K as the professor described. S could be just a set, a smooth manifold, poset, vector space over a different field E, or whatever else and it wouldn't matter.
But still if we take S = V some vector space over a field E, in F(V, W) we are considering all functions from V to W not just the "linear" ones as we don't know what it means to be "linear" between V and W when they are over different fields. If one really wanted to, one could consider the set Hom_{+}(V, W) of maps that preserve addition and do not care about the scalar multiplication, and that would be a vector space (actually a subspace of F(V, W)).
One way to talk about "linear" maps between vector spaces over different fields would be to relate the two fields somehow. If one provided a nice map h:E to K between fields such as a ring/field homomorphism, which preserves addition, multiplication, and "1" (the multiplicative neutral/identity element) between the fields then one could talk about what are called semilinear maps between V (over E) and W (over K). A semilinear map T: V to W (with respect to this h) would be a map that preserves addition and such that T(a*v) = h(a) x T(v) for vectors v in V, scalars a in E, and the scalar multiplications being * and x respectively.
This cannot be done if E is the complex numbers and K is the real numbers (there is no map h). But it can the other way around, if one just takes the inclusion of the reals into the complex numbers.
For a more complicated example, you may take E to be the set of real numbers a + b*2^(1/2) with a, b rational and K to be the set of real numbers a + b*2^(1/4) + c*2^(1/2) + d*2^(3/4) with a, b, c, d rational and then take
h(a + b*2^(1/2)) = a + b*2^(1/2) or h(a + b*2^(1/2)) = a - b*2^(1/2)
Another example: take E to be the set of rational expressions with coefficients in another field (say the reals), meaning the set of expressions p(x)/q(x) where p(x) and q(x) are polynomials in a variable x. Then we may consider the map h from E to itself which turns p(x)/q(x) into p(x^2)/q(x^2).
There are many such maps, and understanding the maps and when a map between fields can be created comes down to familiarity with fields.
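A ready-made example of a semilinear map in the sense above: take E = K = C and h = complex conjugation (a field automorphism of C); then conjugation on C^2 is additive and h-semilinear, i.e. antilinear. A quick numpy check (the scalar and vector are arbitrary):

```python
import numpy as np

# T(v) := conj(v) on C^2 is additive and satisfies T(a*v) = h(a)*T(v)
# with h = complex conjugation, so it is semilinear but not C-linear.

h = np.conj
T = lambda v: np.conj(v)

a = 2.0 + 3.0j
v = np.array([1.0 + 1.0j, 4.0 - 2.0j])

print(np.allclose(T(a * v), h(a) * T(v)))   # True: semilinear w.r.t. h
print(np.allclose(T(a * v), a * T(v)))      # False: not C-linear
```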
1:32:54 why?????
i wonder, too
oh, I think at this point he just uses the representation of phi as a linear map by its matrix, and so what he writes down here is just the image of e_b, which is the linear combination of the basis vectors of V whose coefficients are the entries of the b-th column of the matrix.
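That reply is one line of numpy, for anyone who wants to see it (the matrix is an arbitrary example):

```python
import numpy as np

# Applying the matrix of phi to the b-th basis vector returns the b-th
# column, i.e. phi(e_b) = phi^a_b e_a.

phi = np.array([[1.0, 2.0],
                [3.0, 4.0]])
e_2 = np.array([0.0, 1.0])    # second standard basis vector
print(phi @ e_2)              # [2. 4.], the second column of phi
```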
I think Frederic got the vectors and covectors mixed up in the definition of the tensor product. Covectors should have come before the vectors in the argument list.
Yes, I think so.
Yes indeed
This is the point at which the "index notation" used by physicists finally becomes superior to the mathematically more rigorous "map notation".
I don't know why I like it so much when he wipes the board with the sponge and then with the squeegee.
Wonder how they clean that classroom. Do they wait for the chalk to dry and vacuum it?
@1:58:06 He writes T with 0 up and 2 down, which means p=0 and q=2 in his tensor notation. Doesn't that mean V* x V*? And if you try to build g according to the notation that he has used in this video, it turns out to be a g with both indices up. But he has... Where am I going wrong?
It means it maps from VxV to K. Therefore its components are obtained by acting on (all combinations of) e_a and e_b (basis vectors). Thus, according to the "rule" he gave, the components will also have bottom indices. The tensor itself is an element of the tensor space V* ⊗ V*.
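That "rule" for components is easy to check. A small sketch (the symmetric matrix G defining the sample bilinear form is arbitrary):

```python
import numpy as np

# Components of a (0,2)-tensor: g_ab := g(e_a, e_b). Both indices naturally
# sit downstairs, matching the reply above.

G = np.array([[1.0, 0.5],
              [0.5, 2.0]])
g = lambda u, v: u @ G @ v     # g: V x V -> K, bilinear

e = np.eye(2)                  # standard basis vectors as rows
g_ab = np.array([[g(e[a], e[b]) for b in range(2)] for a in range(2)])
print(np.allclose(g_ab, G))    # True
```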
Groovy. Freddy is The Man.
Thanks
For dim V to be well-defined, we need a proof that all Hamel bases must have the same cardinality.
truly amazing lectures
The definition of field is disturbing. Schuller does not require that multiplication involving zero be commutative or associative, nor that 1 behave as an identity on zero. Is this an error, or can these be derived from the other field axioms?
Symmetries or cemeteries.
I absolutely love this lecture.
26:00 dual vector space
43:00
:00;00 caveat of construction with a basis of a vec space
1:06:00 dual basis of the dual vec space of V; comment on why this specific choice of a basis of the dual space
1:01:00 components of a tensor on a vec space, derived from the chosen basis of the vector space
12:30
summarize mistakes
40:10 covector w_i first, and then v_i
45:16 T(w, .) ,not T(., w)
At 21:16, I think his argument is somewhat flawed. You cannot talk about commutativity between the 'scaling' and λ since the 'dot' operation between them is that of W and not of the ring they belong to.
I think he meant after using the "associativity", which yields 𝛌⦿(𝛍⦿w) = (𝛌✄𝛍)⦿w, where I use ✄ to denote the ring multiplication. What you would want for the map to be linear is to get (𝛍✄𝛌)⦿w.
The problem he mentions arises for R a non-commutative ring, since you can apply the linearity axioms of the maps and their s-mult to, say, (l.g)(m.v) in two different orders.
Say l, m are in R and g is in Hom(V,W), i.e. the linear maps (of R-modules), and s-mult in Hom, i.e. R x Hom -> Hom, is
s.g defined by (s.g)(v) = s.(g(v)) (s-mult in W).
FIRST WAY: (l.g)(m.v) 1= m.((l.g)(v)) 2= m.(l.(g(v))) 3= (m.l).g(v)
using 1: l.g is linear, 2: s-mult in Hom, 3: the module axiom r.(s.v) = (rs).v
SECOND WAY: (l.g)(m.v) 1= l.(g(m.v)) 2= l.(m.(g(v))) 3= (l.m).g(v)
using 1: s-mult in Hom, 2: g is linear, 3: the same module axiom r.(s.v) = (rs).v
Then the two RHS coming from the same LHS are
(m.l).g(v) and (l.m).g(v), which are not necessarily equal if R is a non-commutative ring. So linearity and s-mult are not compatible in this case, and you don't have an R-module structure on Hom(V,W).
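The two evaluation orders really do disagree for a concrete non-commutative R. A minimal sketch with R the ring of 2x2 real matrices acting on V = W = R^2 and g = identity (the matrices l, m and the vector v are arbitrary non-commuting examples):

```python
import numpy as np

# The first way gives (m.l).g(v), the second (l.m).g(v); these differ
# whenever l m != m l.

l = np.array([[0.0, 1.0], [0.0, 0.0]])
m = np.array([[0.0, 0.0], [1.0, 0.0]])
v = np.array([1.0, 1.0])

print((m @ l) @ v)   # first way:  [0. 1.]
print((l @ m) @ v)   # second way: [1. 0.]  -> the two answers disagree
```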
Not quite clear to me around 2:00:45: there is a (0,2) tensor (2 indices down, zero up) and he's saying the determinant won't be meaningful for it. However, if g_ij is a metric tensor, it's quite often the case that I need to compute the determinant of that; in fact, the square root of the determinant, for example to compute the Laplace operator in some curvilinear system (the Laplace-Beltrami operator). At any rate, I really like this lecture, even if there's no way around cemeteries when it comes to programming to actually compute something rather than philosophizing about it on the blackboard :-)
How can Aut(V) be a vector space? the 0 map is not invertible and therefore cannot be in Aut(V)
The additive identity of Aut(V) is the identity map, not the zero map.
Also, the scalar field for Aut(V) as a vector space is the same as for V itself.
You are right, in my experience Aut(V) is not thought of as a vector space but a group with the identity map as the neutral element and composition as the group operation. Even in the rare cases when Aut(V) with this operation as "addition" can be made into a vector space, it wouldn't be a subspace of something like End(V) as the addition operation we've defined on Aut(V) is not the one inherited by End(V), which would have as a consequence that the 0 of End(V) lies in Aut(V) like you said.
I believe there are only two instances where Aut(V) can be made into a vector space if you take composition as addition (and in both cases Aut(V) is 0-dimensional). These two cases are when V itself is 0-dimensional, or when V is the field of two elements. To prove this it is enough to show that if V isn't 0 dimensional then V is 1-dimensional over a field of characteristic 2, which can be done by considering (-1) times the identity map.
Question: tensors are multilinear maps T: VxV...xV x V*x...xV* -> R. Can they be seen as elements of the dual space W*, where W = VxV...xV x V*x...xV*?
Yes, but you will be hiding information about the underlying structure.
What about the determinant of a metric tensor? Isn’t that also a (0,2) tensor or 2-form?
Yes, watch the end of the video. It's a tensor density, and densities are precisely the objects one can integrate over manifolds because their transformation behaviour exactly cancels that of the chart integration operator under change of charts. That's why you always see it all over the place inside integrals and as a factor in things that are supposed to be integrated.
Hi, thanks for this beautiful lecture. I have a really simple doubt. I know we write vector components with upper indices because each component is an element of the field, which is equivalent to being an element of the collection of all homomorphisms from V to K. So, what's the specific reason we write covector components with lower indices? Or is it just a notational convention?
I think, based on what he said, it is the convention. And the perk is that it makes the Einstein summation convention applicable and consistent.
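The convention in practice: an upper index on vector components pairs with the same lower index on covector components, and the repeated index is summed. np.einsum makes that rule explicit (the numbers are arbitrary examples):

```python
import numpy as np

# w_a v^a: the repeated index a is summed, pairing one index up with one
# index down.

v = np.array([1.0, 2.0, 3.0])    # v^a
w = np.array([4.0, 5.0, 6.0])    # w_a

print(np.einsum('a,a->', w, v))  # w_a v^a = 4 + 10 + 18 = 32.0
```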
You defined a linear map between two vector spaces, but you seem to have implicitly required the two vector spaces to be over the same field K. Is that requirement necessary?
+Jim Newton good, he addresses this question at minute 22:30. I'm glad a student asked because I wanted to ask the same question.
+Jim Newton at the end of the video, Schuller changed his mind and requires K = K-prime. I think this is too restrictive. It would probably work if K is a subfield of K-prime. For example, if V is a vector space over the reals and W is a vector space over the complex numbers, you can easily add and multiply reals with complex numbers, but you get a complex number.
My suspicion is that there are not very many fields anyway; especially if you require them to be complete (if you want convergent sequences), there is probably only R, C, and maybe the quaternions (if they even count as a field), so the point is probably moot.
As I recall (30 years ago) when I studied linear algebra, the professor insisted that the field was either R or C, and claimed nothing else was interesting.
+Jim Newton i have a question! at 39:15 he says "T eats up p vectors and q covectors", but shouldn't it be the opposite way?
+Jim Newton The requirement k = k' is indeed a bit too restrictive. In fact you only need a morphism of fields h: k -> k', so that you can define a linear map F from a k-v.s. V to a k'-v.s. W by requiring that F(av) = h(a)F(v). However, this depends on the map h, which cannot always be chosen in a canonical way.
At the point that you've chosen a field homomorphism h : K -> K', you've equipped V' with the structure of a K-vector space, so there's no point in making such a distinction.
1:47:50
So a tensor is basically an algebraic object rather than a geometrical object
Please, what is the first input in the tensor, an element of V* or of V?
V*
If you like Schuller you should all go read Bourbaki my dudes! Perfect for complements in algebra and topology.
Mr Schuller should know however that Aut(V) is not a vector space...
I am speechless!
Matrices as number cemeteries. Sounds sinister. What's the analogy of funerals then? When you're running out of twiddle indices? :-)
how do I find the problem sheet?
I'd like to see the problem sheets, too.
Good. Let's hope someone helps us find them. @@paulmcc8155
3:25 I don't think I knew about the 'field K' in high school, lol
Go off king
1:27:50 “witch mathematics” 🤣🤣
"You've seen this kind of Witch Mathematics before" 😂 1:27:50
chalk can write on wet board o_o
that's first world for ya
the definition of vector subspaces is not meaningful, since U being the empty set is not excluded
He forgot to list that a vector subspace must contain the appropriate identity elements, which would then exclude the empty set.
I am so amused by what Dr. Schuller thinks is done in high schools, elementary schools, etc.
yes.. I was still learning 1+1 when he was learning about tensor spaces in high school, lmao
I feel like things escalated very quickly in complexity (and taking speed) in the last 2 lectures... having a hard time following this one.
From my experience I've found that courses like this really require that you do the problem sets, not just follow along with the lectures. Often essential proofs and definitions are in there, and the practice applying the concepts really builds the familiarity you need in order to advance the subject. Unfortunately I don't know where to find them.
1:46:30 Twiddle untwiddle .... I’m outtwiddled . 😂
me at 1:42:25
Damn this guy really hates matrices
witchcraft mathematics haha!
It's cemetery, not cemetary.
English isn't everyone's native language
@@HilbertXVI So, a person should not offer corrections to mistakes?
...a pity that his English is only so-so...