Outstanding explanation; how you show the equivalence of the orthonormal coordinate system and the dual space vector (function) is insightful. The proof was elegant. Then showing a non-Euclidean example to introduce the entire idea of a metric tensor is brilliant.
This is a very, very good explanation. Thank you for this outstanding contribution!🙏
Thank you very much!
Glad it helped!
It's a great explanation. I think I would grasp it more firmly if there were some numerical examples, where we could see the numerical expression of non-orthogonal bases and the necessity of the metric tensor when we deal with them. Otherwise, it feels like it all just follows some rules.
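For anyone wanting such an example, here is a minimal numerical sketch (my own illustration, not from the video; the basis vectors e1 = (1, 0), e2 = (1, 1) and the components (2, 3) are arbitrary picks):

```python
import numpy as np

# A deliberately non-orthogonal basis, written in standard Cartesian coordinates
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])  # not perpendicular to e1

# Metric tensor: g_ij = e_i . e_j
E = np.column_stack([e1, e2])
g = E.T @ E
print(g)  # [[1. 1.], [1. 2.]] -- off-diagonal entries are nonzero because e1 . e2 != 0

# A vector with components u^i = (2, 3) in this basis
u = np.array([2.0, 3.0])

# Naive sum of squares (only valid for an orthonormal basis):
print(u @ u)      # 13.0 -- NOT the true squared length here

# Correct squared length uses the metric: |u|^2 = g_ij u^i u^j
print(u @ g @ u)  # 34.0

# Sanity check in Cartesian coordinates: u = 2*e1 + 3*e2 = (5, 3)
u_cart = 2 * e1 + 3 * e2
print(u_cart @ u_cart)  # 34.0 -- agrees with the metric computation
```

The gap between 13 and 34 is exactly what the metric tensor is there to fix: in a non-orthogonal basis, the components alone don't know about the angles between the basis vectors.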
Currently taking bilinear algebra, I'm starting to see some heavy similarities between these two subjects.
Since we use the dot product to define the metric tensor, does that mean that we don't have a metric for a general vector space, since vector spaces are not always inner product spaces?
Yes, that's correct. You must have an inner product to define the metric tensor.
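To make that concrete: every entry of the metric is itself an inner product of basis vectors, so without an inner product there is nothing to fill the entries with:

g_{ij} = ⟨e_i, e_j⟩

A bare vector space gives you addition and scaling of vectors, but no notion of length or angle until you add that extra structure.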
Which software do you use to record these videos?
4:20 in eq. (3): that's for sure not how you multiply matrices. In fact, you can't multiply those two. It feels to me that what you do is multiply a row matrix with d^1 all the way to the n-th column with the u vector, plus a row matrix with d^2 all the way to the n-th column with the u vector, and so on up to a row matrix with d^n all the way to the n-th column with the u vector, to get the RHS result. What's going on?
You’re right that you can’t multiply two column matrices like the ones in equation 3, but I’m not multiplying matrices in that equation; I’m doing a dot product of two column vectors, which is different altogether.
@@FacultyofKhan Firstly, thanks for your answer. Now let me rephrase my problem. Assuming, of course, that the right way to compute a dot product is via the sums in eq. (3), there you get mixed terms. On the contrary, in methods 1 and 2 you don't get those mixed terms (e_1 · e_2). So in my view the problem does not arise when you transpose the covector from row to column: it's still there in method 1 when it's a row vector, and you still don't get the mixed terms. The problem is that the mixed terms don't show up in either method. So why are you saying that the problem is in the transpose?
Yes, I feel the same way.
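For anyone following this thread, here is a sketch of where the mixed terms come from when the dot product is expanded in a general basis (standard index notation, with g_{ij} = e_i · e_j as in the video):

u · v = (u^i e_i) · (v^j e_j) = u^i v^j (e_i · e_j) = g_{ij} u^i v^j

Mixed terms like g_{12} u^1 v^2 survive only when e_1 · e_2 ≠ 0. In an orthonormal basis g_{ij} = δ_{ij}, so they vanish and the naive component-wise sum happens to give the right answer, which would explain why the purely component-wise methods never show them.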
@@FacultyofKhan Is it correct that when you use the subscript, you are considering it in the dual vector space, whereas when the superscript i is used, you put it in the normal vector space, since you use the same basis vector e there? In other words, when you transpose the vector, do you change the vector space?
@@皇甫累 Yes, vector components with a subscript represent the components of the dual vector, while vector components with a superscript represent the components of a regular/normal vector.
What I want to emphasize though is that taking the transpose of a vector doesn't necessarily give you its dual vector: this rule only applies in Euclidean space with an orthonormal basis. In every other situation, you need to use the metric tensor to lower the index and convert a vector to its dual.
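A minimal numerical sketch of that last point (my own illustration, reusing the non-orthogonal basis e1 = (1, 0), e2 = (1, 1) from the example further up the thread):

```python
import numpy as np

# Metric for the non-orthogonal basis e1 = (1, 0), e2 = (1, 1): g_ij = e_i . e_j
g = np.array([[1.0, 1.0],
              [1.0, 2.0]])

u_up = np.array([2.0, 3.0])  # contravariant components u^i

# Lowering the index with the metric: u_i = g_ij u^j
u_down = g @ u_up
print(u_down)  # [5. 8.] -- different from u_up, so a plain transpose is NOT the dual here

# In an orthonormal basis the metric is the identity, and lowering changes nothing:
print(np.eye(2) @ u_up)  # [2. 3.] -- only then does 'transpose = dual vector' hold
```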