What is a Tensor 9: TPS transformations

  • Published Jan 28, 2025

Comments • 57

  • @clifforddicarlo9178
    @clifforddicarlo9178 5 years ago +12

    Finally, a coherent presentation of covariance and contravariance.

  • @PunmasterSTP
    @PunmasterSTP 2 years ago +1

    TPS transformations? More like "Terrific demonstrations!" Thanks again for uploading these videos, and I'll definitely be using more watch-time to help the concepts sink in...

  • @blacksunprison13
    @blacksunprison13 8 years ago +4

    You are an awesome internet teacher! Thank you!

  • @XylyXylyX
    @XylyXylyX  8 years ago +2

    Thank you for your question and your nice compliment. I will show the proof in my next lesson!

  • @garytzehaylau9432
    @garytzehaylau9432 5 years ago +1

    I get stuck and need your help.
    1. Why do you get rid of those matrix indices and let the "i" go away at 12:59? (I think you need to keep i as the index for e... or am I wrong?)
    2. Why does e^v become e^u, e^i become e^v, and e^k become e^lambda?
    How can you change the indices? (What is the rule that lets you do this in this step?)
    Thanks

  • @littlenarwhal3914
    @littlenarwhal3914 4 years ago +1

    At 6:30 I get that e-hat is a covariant object, but I'm not sure why we use lambda in this direction: we used lambda for covariant objects in the opposite direction, so shouldn't we use lambda inverse as our covariant matrix for this direction, and lambda as the contravariant one?

  • @jakobulmer2997
    @jakobulmer2997 8 years ago +2

    How do you know that at 3:30 e^v has to equal the inverse of lambda times the unhatted basis?
    Sorry for my English, and thanks a lot.

    • @jakobulmer2997
      @jakobulmer2997 8 years ago

      I understand that it has to be that way because it is contravariant, but how would you show it formally?
      By the way, thanks for the great videos!

    • @nonshole
      @nonshole 5 years ago

      Good question. In the last lecture he showed that the coordinates are contravariant, and actual vectors are covariant, but how do we show that dual space vectors (linear maps) are contravariant vectors?
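
      A minimal sketch of that argument, assuming the conventions used elsewhere in this thread: the basis transforms as $\hat e_\mu = \Lambda^\nu{}_\mu\, e_\nu$, and the duality condition $e^\mu(e_\nu) = \delta^\mu_\nu$ must hold in every basis. Demand $\hat e^\mu(\hat e_\nu) = \delta^\mu_\nu$ and try $\hat e^\mu = (\Lambda^{-1})^\mu{}_\sigma\, e^\sigma$:

      $$\hat e^\mu(\hat e_\nu) = (\Lambda^{-1})^\mu{}_\sigma\, \Lambda^\rho{}_\nu\; e^\sigma(e_\rho) = (\Lambda^{-1})^\mu{}_\sigma\, \Lambda^\rho{}_\nu\, \delta^\sigma_\rho = (\Lambda^{-1})^\mu{}_\rho\, \Lambda^\rho{}_\nu = \delta^\mu_\nu .$$

      Only the inverse matrix preserves duality, so the dual basis must transform contravariantly.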

  • @kahowong7952
    @kahowong7952 8 years ago +2

    Excuse me, I have a question and I hope you can help me with it.
    In your video you originally use a superscript to denote the index for dual vectors,
    but later you also use the superscript for the index of a vector's components in the traditional vector space, and you explained that this is just the Einstein summation convention.
    I want to ask: are these two uses of the superscript independent? Or is there some
    motivation behind adopting the superscript in both cases?
    I'd appreciate any help with this. Thanks.

    • @XylyXylyX
      @XylyXylyX  8 years ago +1

      Ka ho Wong The uses are related in a very important way. The position of the scripting tells us whether the object transforms in the same way as a basis vector in V or as a basis vector in V*. The summation convention is designed so that we can quickly sum things and *also* keep track of how those things transform. Vector components transform in the *same way* as the basis vectors of the dual space, and the components of dual space covectors transform the same way as basis vectors of the original space V. The scripting is there to constantly remind us of this. In many GR texts they treat the components as though they *are* the tensor, omitting the basis vectors, and then only the scripting remains to tell us how the object transforms.
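
      The pairing described here can be seen in one line (a sketch, assuming the conventions $\hat e_\mu = \Lambda^\nu{}_\mu e_\nu$ and $\hat v^\mu = (\Lambda^{-1})^\mu{}_\nu v^\nu$): the inverse factor on the components is exactly what cancels the factor on the basis, leaving the vector itself unchanged:

      $$\hat v^\mu\, \hat e_\mu = (\Lambda^{-1})^\mu{}_\nu v^\nu\; \Lambda^\rho{}_\mu e_\rho = v^\nu\, \delta^\rho_\nu\, e_\rho = v^\nu e_\nu = v .$$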

  • @hugoqueiroz7648
    @hugoqueiroz7648 8 years ago +1

    Excellent lessons! Is there a book or an exercise list that you recommend so I can get some practice?

    • @XylyXylyX
      @XylyXylyX  8 years ago +1

      Hugo Queiroz Thank you for the nice compliment. I have been recommending "Introduction to Vectors and Tensors" by Bowen and Yang. It is published by Dover. Good luck!

    • @subhasnaik6133
      @subhasnaik6133 7 years ago +1

      XylyXylyX
      I think it is Wang

  • @Cubinator73
    @Cubinator73 7 years ago +1

    Are these really the right directions at 5:15 and 5:36? The equation (ê_µ) = (Λ^ν_µ)(e_ν), or Ê = ΛE (where E and Ê are just the matrices with columns e_µ and ê_µ), tells me that Λ converts a vector in terms of the unhatted basis to the same vector in terms of the hatted basis. Shouldn't those arrows at 5:15 and 5:36 be pointing in the other direction then?

    • @XylyXylyX
      @XylyXylyX  7 years ago +1

      Cubinator73 I think you are correct. During the video I was probably thinking of replacing e-hat with Lambda-e and A-hat with inverse-Lambda-A, which goes right to left. But as a transformation, Lambda takes e and returns e-hat. Did you check the video on a desktop computer? I have already annotated this, and annotations are not visible on mobile devices. Thanks for spotting this!
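
      For reference, both readings of the relation at issue (assuming the convention used at 3:30):

      $$\hat e_\mu = \Lambda^\nu{}_\mu\, e_\nu \qquad\Longleftrightarrow\qquad e_\nu = (\Lambda^{-1})^\mu{}_\nu\, \hat e_\mu .$$

      As a map, $\Lambda$ takes $e$ and returns $\hat e$; used as a substitution rule inside an expression, the same equation replaces $\hat e$ by $\Lambda e$, which is the right-to-left reading described in the reply above.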

  • @enricolucarelli816
    @enricolucarelli816 7 years ago +6

    Here I am going through your excellent course a second time. At 10:30 you say something that I don't understand. If I have a basis "e" and I want to transform to a new basis "ê", I would expect to have a matrix that, multiplied by my old basis "e", gives me the new basis "ê". I actually understood it perfectly when you went from the "e" basis to the "f" basis in the previous lesson. Why do you suddenly reverse the criterion?

    • @Vuehrjer
      @Vuehrjer 7 years ago +1

      Yeah, that got me too, but I think it's just a definitional choice that the change-of-basis matrix works that way: the coordinates of a vector in the new vector can be expressed as the matrix times the vector in the old base

    • @Vuehrjer
      @Vuehrjer 7 years ago +1

      *in the new base

    • @slaveryisfreedom
      @slaveryisfreedom 6 years ago +1

      Yeah. He got it backwards. As written, the transformation is from e-hat to e, not from e to e-hat. Lambda multiplies e-hat to get e. To go from e to e-hat we would need to multiply Lambda inverse by e. He gets this backwards in a couple of other places, too.

  • @frankbennett2429
    @frankbennett2429 7 years ago

    At 2:00, why did you put the co-vector in the second position in the bracket, in apparent violation of the convention, or doesn't it matter?

    • @XylyXylyX
      @XylyXylyX  7 years ago

      Frank Bennett I do try to place the covector first, but remember, a vector is the co-vector of a co-vector, if you know what I mean! So it is OK and it doesn't really matter. The object on the left is the mapping, the object on the right is the thing being mapped.

  • @kingplunger1
    @kingplunger1 3 months ago

    So far I was able to follow quite well, which is a compliment to your lectures, as I tried to get into tensors a couple of times over the last months and it felt rather... overwhelming.
    In this one I don't get something, though:
    Why do the dual basis vectors transform covariantly? I watched the last one on transformations 3 times to make sure I didn't miss anything, and I am pretty sure I haven't. I get the transformation of the vector space basis, but how come the dual basis is contravariant? Apart from "the notation indicates it".
    Couldn't I do the exact same thing you did with vectors from V in the last video with elements of V*? They are vectors too, after all. In the end just the indices would be different, as far as I understand.
    Are you going to establish a connection between the two spaces/bases later, with the connection showing how the dual space basis transforms, meaning inversely to the vector space basis?
    I also don't understand why Lambda^-1 transforms the dual space basis when Lambda transforms the vector space basis. I feel like I am missing something important.

  • @RicoRodriguezIII
    @RicoRodriguezIII 3 years ago

    I'm struggling to make sense of what you're saying at 10:00 - that switch you're doing shouldn't be "legal". Whatever is on the LHS of that change-of-basis equation represents the "target", and thus the forward direction of the transformation - the thing being transformed is the set of basis vectors getting passed to the linear transformation (Lambda). If that's the case, then e_mu should be replaced by Lambda inverse times e-hat_nu, since going from e-hat to e is, by the way we have defined covariance, going "backward".
    Doing it that way then makes the substitution and factoring make sense - the transforms move "outside" of the basis and "onto" the components, applying the expected transformations (backward transformations to the contravariant vector components, forward transformations to the covariant covector components).

  • @michaeljohnson9991
    @michaeljohnson9991 6 years ago +3

    You have an interesting comment at 21:01 "every subatomic particle (in the SM) is identified by how it transformed under Lorentz & Poincare transformations". Could you elaborate in a future video? It would be much appreciated.

  • @roykhoury5181
    @roykhoury5181 7 years ago +2

    Thanks for a great job; at last tensors are starting to make sense to me. A small question, though: when you get the product of the three lambdas, since they are sort of matrices and one of them is the inverse of the others, shouldn't they simplify to just lambda inverse? I mean, lambda times lambda inverse is the identity matrix, right?

    • @XylyXylyX
      @XylyXylyX  7 years ago +3

      All the Lambdas come in pairs: one for the component and one for the actual basis vector. When you look at just the component you will find, perhaps, three transformation matrices or transformation inverses. However, the index structure of those transformation matrices is not arranged so that any two of them will matrix-multiply to the identity matrix. That is hidden in my presentation because I stopped putting the indices on the \Lambda matrices. I think if you look at Lesson 8 you will see some examples where I kept the indices, and you will see what I mean.
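
      Restoring the indices the presentation drops, the component transformation for the (1,2)-rank example discussed below reads (a sketch under the conventions used in these comments):

      $$\hat A^\mu{}_{\nu\lambda} = (\Lambda^{-1})^\mu{}_\alpha\; \Lambda^\beta{}_\nu\; \Lambda^\gamma{}_\lambda\; A^\alpha{}_{\beta\gamma} .$$

      Each of the three factors contracts only with an index of $A$, never with another $\Lambda$, so no pair of them matrix-multiplies to the identity; they are three independent real coefficients inside one multiple sum.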

  • @georgeorourke7156
    @georgeorourke7156 8 years ago +3

    COULD YOU JUSTIFY ONE POINT: At minute 1:09 you state that the dual basis e (superscript mu) will transform contravariantly because of the upper positioning of the index. As far as I can tell, so far we have placed the index in the upper position for the dual basis simply to distinguish it from the basis of the underlying vector space. We have never actually shown that this custom also agrees with the "contravariant transformation" convention. I hope I haven't missed something. Again, thank you for very enjoyable lectures.

    • @XylyXylyX
      @XylyXylyX  8 years ago +1

      If I understand your question, you would like to see a proof of the fact that the dual basis vectors transform contravariantly. To show this, take a transformed vector space basis and find out what covector space basis it would take to make the mapping equal the delta function. The result will be that the covector basis is transformed contravariantly!

    • @georgeorourke7156
      @georgeorourke7156 8 years ago

      Thank you - I now see the way to establish that the covector basis transforms contravariantly. Hopefully it was not a completely silly question.
      I hope you don't mind the queries, but we the viewers are just making sure you don't pull any fast ones on us! Hopefully it also shows you the interest your course is generating.

    • @heylofellas
      @heylofellas 4 years ago

      Hi! So I had a similar problem. We never really said dual basis vectors transform contravariantly; our choice of notation just forces it. Earlier, e_nu and e^nu were just two different, disconnected things. It was just notational. So I could write the dual basis vectors as f_nu as well, right, if it's just notational? And then I'd retrieve similar results there to what we got for the vector space V.

    • @robertbrandywine
      @robertbrandywine 4 months ago

      @@XylyXylyX What did you mean by "with be the cobras is"?

  • @ochotorenautube
    @ochotorenautube 7 years ago +2

    Great lectures and great work, thank you! I've followed everything up to this point. Why/how is the capital Lambda transformation matrix, defined to transform between two different bases in V, related to or defined for two different bases in the dual space? Is it because the basis e-subscript-mu in V is the same as the basis e-superscript-mu in V*?

    • @XylyXylyX
      @XylyXylyX  7 years ago +1

      ochotorenautube If I understand your question... they are inverses of each other! Lambda transforms basis vectors of V, and Lambda inverse transforms basis vectors of V*.

    • @ochotorenautube
      @ochotorenautube 7 years ago +1

      Thanks for getting back to me! Copy that, I understand. Lambda was defined for V. Lambda inverse can operate on V* basis vectors because of the delta relationship arbitrarily chosen between the basis vectors of V and V*?

  • @debendragurung3033
    @debendragurung3033 7 years ago +1

    I have mixed feelings. A transformation from one set of covariant basis vectors to another uses the same inverse transformation operation, and as we transform between a covariant basis and its contravariant counterpart, the components also transform in an inverse fashion. Given that: suppose we were handed a transformation matrix; how can we actually find the fate of a vector under the transformation - whether it's going covariant to covariant, or contra to co...?

  • @maheshudupa944
    @maheshudupa944 7 years ago

    Around 14:00... I understand A(^mu)(_nu)(_lambda) as scalars, which are the objects undergoing the contravariant transformation. But a transformation of scalars - is that right, what I've understood? Can you please help me with this?

    • @XylyXylyX
      @XylyXylyX  7 years ago

      mahesh udupa Here $A^\mu{}_{\nu\lambda}$ are the components of a (1,2)-rank tensor. Each individual component is a real number. Lambda is a matrix that is used to transform each index. Nothing here is a (0,0) tensor.

    • @maheshudupa944
      @maheshudupa944 7 years ago

      XylyXylyX Understood, but now it has raised a couple of questions. 1) mu, nu, lambda, in terms of Einstein's convention, are dummy indices, right? 2) When we do a transformation it's the bases that get changed; how does changing the components justify calling it a transformation? In my head all the transformation is doing is scaling the bases! Please help me clarify this concept!

  • @robertbrandywine
    @robertbrandywine 4 months ago

    The way you originally introduced covariance and contravariance makes more sense to me. Vectors in vector spaces are contravariant because their coefficients and basis vectors move in opposite directions and covectors are covariant because their coefficients and basis covectors move in the same direction.

    • @XylyXylyX
      @XylyXylyX  4 months ago

      @@robertbrandywine That’s right. The words are attached to the components, not the actual vectors!
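
      A one-dimensional illustration of that point (the factor of 2 is hypothetical, chosen only for this example): if the basis vector is doubled, the component must halve so that the vector itself survives:

      $$\hat e = 2e, \qquad \hat v^1 = \tfrac{1}{2}\, v^1 \;\Longrightarrow\; \hat v^1\, \hat e = \tfrac{1}{2} v^1 \cdot 2e = v^1 e .$$

      The basis "co-varies" with the change, the component "contra-varies" against it, and the vector is the invariant object.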

  • @nafriavijay
    @nafriavijay 6 years ago +2

    Hi Professor, another great lecture, BTW - great explanation. I have a question, but mostly this comment will serve as a self-note.
    I am coming back to this lecture, and here you begin with the basis transformation of the dual space (which transforms contravariantly), but there was NO MENTION of it at all in the previous lecture 8. In the previous lecture we ONLY did basis transformations WITHIN the same vector space. There we learned that WITHIN the same VS the basis transformation is covariant and the component transformation (BTW, can I call them coefficients?) is contravariant, but we didn't define such things in the dual space.
    So if I get this correctly... if it's all about defining the basis transformation in the dual space, we could use either the inverse or the regular transformation matrix - meaning we define some basis A in terms of some basis B or vice versa, since both will be genuine matrices. However, we take them as such in the dual space, where we define covariant transformations for components and contravariant transformations for the basis, to keep in mind that we have the vector space at the center of all our discussion. And in the VS it's covariant for the basis and contravariant for the components, so things appear in an inverted sense. Now we have the basis in the VS and components in the DS transforming covariantly, and the basis in the DS and components in the VS transforming contravariantly. So that is exactly my question: why pick them like so?

  • @jorenjanssens141
    @jorenjanssens141 6 years ago +1

    Hi, thanks for another great video in this series. I really enjoy them.
    I have some questions about this lecture.
    At 15:10 I think your first expression in the box is correct and the correction you put in afterward is false; am I correct, or am I seeing this wrong? If so, what am I seeing wrong?
    Question two: where did all the indices on the lambda matrices go?
    Question three: while watching your videos I sometimes start googling or searching Wikipedia for some of the concepts you use, and I have a question. What is the difference between a multilinear map and a tensor? And why is it that there are so many different definitions of a tensor?
    Thanks in advance for the answers to my questions.

    • @XylyXylyX
      @XylyXylyX  6 years ago +1

      Joren Janssens I can't find the spot you are asking about regarding 15:10... sorry. Regarding lambda indices: I just got tired of writing them. The point is to know when to use lambda and lambda inverse. Put in any indices that make sense.
      There is no substantive difference between a multilinear map and a tensor. There is the technical fact that "tensors" must have a rank that can be written "(p,q)", which means that to be a tensor a multilinear map must have the vector factors to the left of the covector factors. So all tensors are multilinear maps, but not all multilinear maps are tensors. However, this is just a naming convention. We could create "(p,q,r)"-rank objects, I suppose, with p vectors followed by q covectors followed by another r vectors. We could call them "Janssens", I suppose :)

    • @jorenjanssens141
      @jorenjanssens141 6 years ago +1

      Thanks for the answers.
      About 15:10: you edited your video with a correction line at the bottom about the boxed expression.
      I think that what you wrote in the box is correct and that the correction you put at the bottom is false.
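
      Written out, the naming convention described earlier in this thread (a standard definition, stated here for reference): a rank-$(p,q)$ tensor is an element of

      $$\underbrace{V \otimes \cdots \otimes V}_{p} \otimes \underbrace{V^* \otimes \cdots \otimes V^*}_{q},$$

      equivalently a multilinear map taking $p$ covectors followed by $q$ vectors to $\mathbb{R}$, with all vector factors to the left of all covector factors. A multilinear map with interleaved factors, say $V \otimes V^* \otimes V$, is perfectly well defined but carries no $(p,q)$ label until its factors are reordered.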

  • @evgenykovalev7462
    @evgenykovalev7462 8 years ago

    A small matter, but isn't there an error at 15:08? The inverse of the product of three matrices is the product of the inverses taken in the reversed order. So it should be A = Â · {Lambda} · {Lambda} · {Lambda}^-1.

  • @dewookus
    @dewookus 7 years ago

    Another excellent video. Question though: At 15:10 I think you were correct the first time. IF the three lambdas were matrices, then the inverse of the product would be the product of the inverses in reverse order. But, the three lambdas in this expression are not matrices, they are just numbers (their indices are not shown), and the expressions on the right hand side of each of the last two equations are sums, each consisting of 64 terms.
    If you derive the last equation (finding the favored basis components in terms of the hatted components) using the same method as you derived the first equation, don't you get what you originally had [(lambda inverse)(lambda)(lambda)]?
    You have done a great job of explaining this material. My background is in pure math, and the way engineers and physicists used tensors always confused me. Your videos are really clearing this up for me. Thanks!

    • @XylyXylyX
      @XylyXylyX  7 years ago

      dewookus I'll check it out a bit later, but in the meantime have a look at the lesson on a non-mobile device. Your comment may have already been discovered and annotated.
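
      A sketch of the index-by-index inversion at issue in this thread (assuming the forward law has the form below; the same manipulation works for any arrangement of factors): starting from

      $$\hat A^\mu{}_{\nu\lambda} = (\Lambda^{-1})^\mu{}_\alpha\, \Lambda^\beta{}_\nu\, \Lambda^\gamma{}_\lambda\, A^\alpha{}_{\beta\gamma},$$

      contract both sides with $\Lambda^\rho{}_\mu\, (\Lambda^{-1})^\nu{}_\sigma\, (\Lambda^{-1})^\lambda{}_\tau$; each pairing collapses to a Kronecker delta, giving

      $$A^\rho{}_{\sigma\tau} = \Lambda^\rho{}_\mu\, (\Lambda^{-1})^\nu{}_\sigma\, (\Lambda^{-1})^\lambda{}_\tau\; \hat A^\mu{}_{\nu\lambda} .$$

      Every $\Lambda$ here is a single real number inside a multiple sum, so each factor inverts independently and no reversed-order rule for matrix inverses applies.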

  • @guitaropathe
    @guitaropathe 7 years ago

    Hi,
    at 10:40 I know you said not to worry about the indices, but I was very confused by the "contraction-like" indices, with the lambda carrying no indices and the ê basis changing indices. You know I really want to get a deep understanding, and I like to dig into each point that seems a little unclear, and this was not a good way of doing it. You shouldn't have changed the ê basis, because your index choice was wrong from the root. It's correct, of course, but not understandable. I emailed a former teacher of mine about it, a particle physicist, and she doesn't approve of this way of doing it.

  • @Gurnoorsingh-v2u
    @Gurnoorsingh-v2u several months ago

    I think your home is near the airport

  • @casaroli
    @casaroli 6 years ago +1

    Isn't it the opposite? Vectors transform contravariantly and covectors covariantly?
    In the next lecture, that's what you say at the end. And it's what my textbooks say.

    • @casaroli
      @casaroli 6 years ago

      Never mind.
      You were talking about the basis, not their components. You explained it 2 minutes later.

    • @XylyXylyX
      @XylyXylyX  6 years ago

      Yes, that is an important point to understand. I'm glad you noticed it and I'm glad you discovered the answer!