Lie theory for the roboticist

  • Published Jan 1, 2025

Comments • 38

  • @БогданМагомета · 9 days ago +2

    Thank you for the lecture. I appreciate it a lot.
    I think the constraints for all the special groups mentioned (SO(2), SO(3), SE(2), SE(3)) should include det(R) = 1.
    If we omit this constraint, the groups are simply O(2), O(3), E(2), E(3).
    The same holds in the paper.

    • @joansola02 · 5 days ago

      You are right!
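
A quick numerical check of the det(R) = 1 point above (a minimal pure-Python sketch, not from the paper or lecture; the rotation vector is made up): the exponential map sends every element of so(3) to a matrix with determinant +1, which is exactly what separates SO(3) from O(3).

```python
import math

# Rodrigues' formula: the exponential of the skew matrix [w]_x maps so(3) onto SO(3).
def exp_so3(w):
    wx, wy, wz = w
    theta = math.sqrt(wx*wx + wy*wy + wz*wz)              # rotation angle |w|
    W = [[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]]  # skew matrix [w]_x
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    if theta < 1e-12:
        return I
    a = math.sin(theta) / theta
    b = (1.0 - math.cos(theta)) / theta**2
    W2 = [[sum(W[i][k] * W[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    return [[I[i][j] + a * W[i][j] + b * W2[i][j] for j in range(3)] for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

R = exp_so3((0.3, -1.2, 0.5))     # an arbitrary rotation vector
assert abs(det3(R) - 1.0) < 1e-9  # det(R) = +1: SO(3), never the det = -1 half of O(3)
```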

  • @parth8044 · 2 months ago +1

    Beautiful introduction to Lie theory! Thank you!

  • @zhaoxingdeng5264 · 6 months ago +3

    Very clear and useful. Thank you!

  • @fangbai8238 · 6 months ago +2

    Thank you so much, Joan!

  • @yen-linchen7398 · 1 year ago +4

    Thank you so much. It's really helpful and clear.

  • @mMaximus56789 · 2 years ago +6

    I loved this course, probably the best introduction to Lie groups on this platform!
    Is there, by any chance, a possibility of a course like this but on Riemannian manifolds?

  • @chineduecheruo8872 · 1 year ago +2

    Thank you, Joan Solà!

  • @puguhwahyuprasetyo3927 · 1 year ago +2

    This video is amazing. Thank you, Professor Solà.

  • @Foo-i1v · 8 months ago +2

    Thank you, Sir!

  • @5ty717 · 1 year ago +2

    Excellent

  • @PengfeiGuo-yn7hu · 10 months ago +1

    Thank you for sharing this great video, it's very helpful to me. And could I get the slides file?

  • @mohammedtalha4649 · 2 years ago +1

    Thanks a lot for this, man! Loved it.

  • @longfeihan2100 · 1 year ago +1

    Very nice and comprehensive video! Thanks a lot! I'm wondering whether the link to the video in the last slide will be maintained. Currently it is not available.

    • @joansola02 · 1 year ago +1

      All videos can be found by searching for "Lie theory for the roboticist" on YT. There are starting to be a few of them! They are all roughly the same, but not equal!

  • @mohammadhusseinyoosefian · 2 months ago +1

    Thanks for the great Lecture. When you define the Adjoint action, you say that it maps from a tangent space at X to the Lie algebra. However, as far as I know, the Adjoint is in fact a conjugation that maps from Lie algebra to itself. In your paper (A micro Lie theory), eq. (29) reflects this; it says that the Adjoint is a map from \mathfrak{m} to \mathfrak{m}. So, this is a bit confusing to me.

    • @joansola02 · 2 months ago +1

      The confusion is due to terminology. All tangent spaces are Lie algebras, but the one at the identity is "the Lie algebra". Does this make sense?

    • @mohammadhusseinyoosefian · 2 months ago +1

      @@joansola02 Thank you for your explanation, I wasn't aware of that! However, I still think that the adjoint (since it is the derivative of the conjugation at the identity) maps from "the Lie algebra" to "the Lie algebra".
      I guess in your paper you used \mathfrak{m} to denote "the Lie algebra" (and not the other Lie algebras), right? In this case, eq. (29) in your paper makes sense; you have Ad_X: \mathfrak{m} \to \mathfrak{m}. This means that the adjoint maps from "the Lie algebra" to "the Lie algebra". However, if you look at the third bullet point on page 4 of the paper, you wrote, and I quote: "Vectors of the tangent space at X can be transformed to the tangent space at the identity E through a linear transform. This transform is called the adjoint." I think this is what confuses me!

    • @joansola02 · 2 months ago +1

      @@mohammadhusseinyoosefian No, the adjoint maps from a local Lie algebra to the global Lie algebra. But all Lie algebras are the same space; just their Cartesian axes are related by some transformation (the adjoint). It's like the space R3 in global coordinates and the space R3 in local coordinates: they are the same space (R3), but their representations are related by a rotation and a translation. The same happens with all the possible Lie algebras: they are the same space, but a transformation applies when going from one to another. This transformation is the Adjoint.
      In the case of SO(3), for example, the tangent spaces are related by the rotation R. The Adjoint of SO(3) is precisely Ad_R = R, the same rotation matrix.

    • @mohammadhusseinyoosefian · 2 months ago +1

      @joansola02 Thank you very much for the neat explanation! I think I almost got it. There is a small glitch in my understanding that I would appreciate your help figuring out.
      Here is what I think: Consider two vectors "a" and "b", where "a" is expressed in body frame (i.e., local) and "b" is expressed in the spatial frame (i.e., global). Now, I can transform both of these vectors to the global Lie algebra using the hat operator and then write the adjoint for them: "a^ = g.b^.g^{-1}", where g is the group element. This makes sense.
      Now, we know that if "b^" is in the global Lie algebra, we can transform it to a local Lie algebra at g using "b* = g.b^", where "b*" is the element of the local Lie algebra, right? Now replace "b^" in adjoint with "g^{-1}.b*" to get "a^ = g.g^{-1}.b*.g^{-1} = b*.g^{-1}". So what I am saying is that if "b*" is in the local Lie group and "a^" is in the global Lie group, the adjoint between them should be "a^ = b*.g^{-1}", which is not the same as what is in the paper, unless "b*" is meant to be in the global Lie group! I really thank you for the time you allocate to answer my comments!

    • @joansola02 · 2 months ago +1

      @@mohammadhusseinyoosefian You have two concepts wrong.
      1. As you have described them, vectors 'a' and 'b' are unrelated. They are just two vectors, one in the global frame and one in the local frame, but they are two vectors with no relation whatsoever.
      2. The vector 'b' in the local frame does not transform to the global frame with 'g * b^'. This operation does not transform b from local to global. The operation that does this is the adjoint, and this is by the definition of the adjoint. So you have 'b*^ = g * b^ * g.inv', or, if you remove the hats, b* = Ad_g * b.
      I recommend you improve your notation. Let 'a' be one vector. Let this vector be noted 'a_0' when expressed in the global frame, and 'a_L' when in the local frame. You have
      a_0 = Ad_g * a_L
      See that vector 'b' does not appear here, because it has nothing to do with 'a'.
      Now consider that 'b' is a modified 'a' through the action of 'g'. In the global frame, you can write
      b_0 = g x a_0
      where 'x' is now the action of 'g' onto vectors of the type 'a'.
      Realize that the action is not a reference transformation, but a true modification of a vector 'a' into a vector 'b'. This is not the same as expressing 'a' in two different references --> this you do with the adjoint.
      To know how 'g' acts on 'a', but expressed in the local frame, you need to chain all the relations so far:
      b_L = Ad_g.inv * b_0 = Ad_g.inv * ( g x a_0 )
      See that you have matrix products '*' and actions 'x'.
      Different groups have different actions, and so one cannot generalize the action operations. We can only generalize the Adjoint, the inverse, and the group composition law.
      If you put your attention to a particular group, then you can sort things out. For example, in SO(3) the adjoint of R is R. This might make you think that the action of R is the same as the frame transform of R. But this is not the case in other groups.
      In SO(3):
      b_L = Ad_R.inv * ( R x a_0) = R.tr * R * a_0 = a_0
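
That last SO(3) line can be checked numerically. A minimal pure-Python sketch (the rotation angle and the vector are made up): since Ad_R = R, mapping back with Ad_R.inv = R.tr recovers a_0 exactly.

```python
import math

def rotz(theta):
    # Rotation about the z axis: an element of SO(3).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

R = rotz(0.7)                    # the group element g = R
a_0 = [1.0, 2.0, 3.0]            # vector 'a' expressed in the global frame

b_0 = matvec(R, a_0)             # action: b = R x a, still in the global frame
b_L = matvec(transpose(R), b_0)  # frame change with Ad_R.inv = R.tr

# b_L = R.tr * R * a_0 = a_0, as stated in the reply above
assert all(abs(x - y) < 1e-12 for x, y in zip(b_L, a_0))
```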

  • @iamyouu · 1 year ago +1

    May I get a link to the slides? Thank you

  • @urewiofdjsklcmx · 1 year ago +1

    Let's say I have two processes that both estimate the same group element (e.g. an element of SO(3)), and for both I have an associated covariance. Then the covariances are defined in different tangent spaces (at the individual estimates), right? So in order to combine them I somehow have to transform them with the Jacobians such that they are mapped to the same tangent space before I can combine them? The hypothetical application that I have in mind is two Kalman filters that estimate the same system. Before I watched the video I would have naively fused the two covariance matrices directly, which is apparently not the correct way.

    • @joansola02 · 1 year ago

      It is unclear how you "combine" both estimates. If you provide the formulas for such a combination in the case of vector spaces, I can then hopefully guide you through the process of doing the equivalent thing in Lie groups.

    • @urewiofdjsklcmx · 7 months ago +1

      @@joansola02 OK, a bit of a late response: in vector spaces I can simply add the information matrices (inverse covariance matrices) of state estimates x_1 and x_2, like so: I_fused = I_1 + I_2. But if x_1 and x_2 belong to a Lie group, my understanding is now that I_1 and I_2 are defined in the tangent spaces at x_1 and x_2. So I guess I cannot just add them up like in a vector space? I probably need to first transform both I_1 and I_2 to some common tangent space and then add them afterwards?

    • @joansola02 · 6 months ago +1

      @@urewiofdjsklcmx You are right. If the covariances or the info matrices are defined locally, then you have to combine them in the same reference space. You can use for this the adjoint operator, which can transform covariance matrices from one tangent space to another.
      Let X and Y be two elements of the group. Let E be the identity. Let Ad_X be the adjoint at X, and Ad_Y that at Y. Now, Ad_XY = Ad_X.inv * Ad_Y transforms vectors from the tangent at Y to the tangent at X. You transform covariances as Q_X = Ad_XY * Q_Y * Ad_XY.transpose. You can easily sort out the equivalent conversion for info matrices.
      You can also express all info matrices at the identity E. To do so, you do e.g. Q_E = Ad_X * Q_X * Ad_X.tr. You can then directly add I_X + I_Y = Q_X.inv + Q_Y.inv.
      The conversions for the info matrices are easy: I_X = (Q_X).inv = (Ad_XY * Q_Y * Ad_XY.tr).inv = Ad_XY.inv.tr * I_Y * Ad_XY.inv
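
A pure-Python sketch of this covariance transport for SO(3), where the adjoint is simply the rotation matrix (the angles and the covariance values below are made up for illustration):

```python
import math

def rotz(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

# Hypothetical estimates X and Y in SO(3); for SO(3) the adjoint is Ad_X = R_X.
R_X, R_Y = rotz(0.4), rotz(-1.1)

# Covariance expressed in the tangent space at Y (made-up diagonal values).
Q_Y = [[0.04, 0.0, 0.0], [0.0, 0.09, 0.0], [0.0, 0.0, 0.01]]

# Ad_XY = Ad_X.inv * Ad_Y moves tangent vectors at Y to tangent vectors at X,
# so the covariance transports as Q_X = Ad_XY * Q_Y * Ad_XY.tr.
Ad_XY = matmul(transpose(R_X), R_Y)
Q_X = matmul(matmul(Ad_XY, Q_Y), transpose(Ad_XY))

# Sanity check: Ad_XY is orthogonal here, so the total variance (trace) is preserved.
tr = lambda M: M[0][0] + M[1][1] + M[2][2]
assert abs(tr(Q_X) - tr(Q_Y)) < 1e-12
```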

  • @Aleksandr_Kashirin · 2 years ago +2

    Very nice lecture!
    Could you please make these slides available for other viewers?
    Also, I have a question:
    Could you please emphasize the key differences between EKF and IEKF that you showed on the slides?
    Why do we want to use Lie Algebra in localization tasks, especially in EKF?
    Thank you!

    • @joansola02 · 2 years ago

      What do you mean by IEKF? Invariant? Iterative? Information? Indirect? They are all possible choices. In the course, however, I don't remember referring to any of them.
      I suppose, then, that you refer to the ESKF, or error-state KF.
      All ESKF work with a nominal state and an error state. All Lie-based KF are indeed ESKF because the error is defined in the tangent space.
      For example, let the state be a quaternion q \in S3 \subset R4. The tangent space is isomorphic to R3.
      Now given a computed Kalman gain K, the update on the state q for EKF and ESKF are:
      EKF: q_new = q + K * ( y - h(q) ) --- here dq = K * ( y - h(q) ) \in R4
      ESKF: q_new = q * Exp ( K' ( y - h(q) ) ) = q (+) ( K' * (y - h(q) ) ) --- here dq = K' * ( y - h(q) ) \in R3
      so the updates are indeed quite different, but the shortcut (+) makes them look the same. Remark that K is for EKF and K' is for ESKF; they are not equal.
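
A minimal pure-Python sketch of the two updates (the correction vectors are made-up stand-ins for K * ( y - h(q) ) and K' * ( y - h(q) )): the right-(+) update through Exp keeps q on the unit sphere S3, while the additive R4 update drifts off it.

```python
import math

def qmul(p, q):
    # Hamilton product of two quaternions (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def Exp(v):
    # Capitalized Exp: R^3 -> S^3, rotation vector to unit quaternion.
    n = math.sqrt(sum(x*x for x in v))
    if n < 1e-12:
        return (1.0, 0.5*v[0], 0.5*v[1], 0.5*v[2])
    s = math.sin(0.5*n) / n
    return (math.cos(0.5*n), s*v[0], s*v[1], s*v[2])

q = (1.0, 0.0, 0.0, 0.0)        # current quaternion estimate
dq3 = (0.1, -0.2, 0.05)         # made-up stand-in for K' * ( y - h(q) ) in R^3

q_new = qmul(q, Exp(dq3))       # ESKF: q (+) dq3 = q * Exp(dq3)
assert abs(math.sqrt(sum(x*x for x in q_new)) - 1.0) < 1e-12  # still a unit quaternion

dq4 = (0.0, 0.05, -0.1, 0.025)  # made-up stand-in for K * ( y - h(q) ) in R^4
q_add = tuple(a + b for a, b in zip(q, dq4))  # EKF: plain addition in R^4
assert abs(math.sqrt(sum(x*x for x in q_add)) - 1.0) > 1e-3   # off the unit sphere
```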

    • @urewiofdjsklcmx · 2 years ago +1

      Will it make a big difference in practice if I apply the (+) operator only to the "group variables" and keep the regular + for the remaining states (for instance, if I also want to include sensor biases)?

    • @joansola02 · 2 years ago

      @@urewiofdjsklcmx All variables that can be described as pertaining to R^n can be treated normally with a '+' sign. In fact, the R^n spaces are also Lie groups under addition, and the (+) operator in R^n boils down to the '+' operation. Even more, since R^n under addition is a commutative group, then left-(+) and right-(+) are both the same and equal to regular '+'.

    • @urewiofdjsklcmx · 2 years ago +1

      @@joansola02 Hmm, but this will decouple the error states, right? If I understood the invariant EKF (IEKF) by Barrau and Bonnabel correctly, they stay in the SE_n(3) group to define the error. Apparently this is more accurate, but I guess also quite complicated if you need to consider biases and other states.

    • @joansola02 · 2 years ago +1

      @@urewiofdjsklcmx Yes, in this manner the errors are decoupled. The question is how much, and the answer is: not much. But again, "not much" might be too much depending on the application, objectives, and particular numeric values of the involved variables. The advantage of decoupling is that you have all the algebra you need for each one of the blocks. If you want a completely coupled state, then sometimes you will not have all the closed forms you need (the exponential map, adjoint matrix, and right Jacobian being the 3 key elements for which you would like to have closed forms -- all the other forms can be deduced from these three).

  • @franciscodominguezmateos5528 · 10 months ago +3

    Hi Joan, Do you know about Geometric Algebra?

    • @joansola02 · 10 months ago +1

      No, I never approached this topic...

    • @Anon-z4h · 9 months ago +2

      Objects in GA also have a natural Lie group and algebra structure related by the exp and log math shown here. Thanks for the presentation!