SLAM Course - 05 - EKF SLAM (2013/14; Cyrill Stachniss)

  • Published Nov 21, 2024

Comments • 49

  • @madeautonomous
    @madeautonomous 2 years ago +1

    Very well explained. Would recommend to everyone who wants to start learning SLAM!!!
    Thank you!

  • @Ednmirr
    @Ednmirr 2 years ago +1

    I much appreciate this great lecture! Thank you, Prof. Stachniss.

  • @MrCobraTraders
    @MrCobraTraders 2 months ago +1

    Thank you, professor.
    - from India

  • @adityavardhanjain
    @adityavardhanjain 5 months ago

    I wish to apply this practically. I have only thought of the graph implementation.

  • @phdrn
    @phdrn 6 years ago

    I'm from Uni Kassel; your lectures help me a lot. Thank you very much!

  • @MSApro123
    @MSApro123 10 years ago +5

    That was absolutely great! You mentioned in the video that you would provide some worksheets and homework. Could you please post some links?
    Do you have a website? Books?
    I hope you will offer a MOOC so that more people can learn about this topic.
    Thanks again for sharing such great material.

    • @yousofebneddin7430
      @yousofebneddin7430 6 years ago +7

      ais.informatik.uni-freiburg.de/teaching/ws13/mapping/

    • @fauzanmarwan2182
      @fauzanmarwan2182 4 years ago

      @@yousofebneddin7430 Thanks!

  • @sergeysobolnikov8870
    @sergeysobolnikov8870 8 years ago +8

    Dear Professor Stachniss,
    Thanks for the great material!
    On slide 21 (around 30 min) you used the velocity-based motion model. Why didn't you apply the 3rd parameter (the final rotation) in this model?

    • @rayryeng
      @rayryeng 3 years ago

      I figure it's an oversight, but in the end you are free to use either one. The computed Jacobian for the velocity-based model still ends up the same, because the third parameter has no dependence on x, y, or theta. So whichever version you use, it only affects the prediction of the pose; the correction step stays the same.
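
A minimal sketch of the point above, assuming the standard velocity-based motion model with controls (v, w) applied over a time step dt; an optional final rotation only adds a constant to the predicted heading and never appears in the Jacobian with respect to (x, y, theta):

    import numpy as np

    def velocity_motion_jacobian(theta, v, w, dt):
        # Jacobian G of the velocity-based motion model w.r.t. (x, y, theta).
        # Only the x/y increments depend on theta, so an optional final rotation
        # changes the predicted heading but not this matrix.
        if abs(w) < 1e-9:  # (near-)straight motion: avoid dividing by w
            return np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                             [0.0, 1.0,  v * dt * np.cos(theta)],
                             [0.0, 0.0,  1.0]])
        r = v / w
        return np.array([[1.0, 0.0, -r * np.cos(theta) + r * np.cos(theta + w * dt)],
                         [0.0, 1.0, -r * np.sin(theta) + r * np.sin(theta + w * dt)],
                         [0.0, 0.0,  1.0]])

The sines and cosines stay in the Jacobian as functions of theta; at each prediction step they are evaluated at the current mean (the linearization point), e.g. G = velocity_motion_jacobian(mu[2], v, w, dt), which yields a purely numeric matrix.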

  • @chaitanyabandi5026
    @chaitanyabandi5026 6 years ago +1

    Dear Mr. Cyrill Stachniss,
    Thank you for your lectures. They are just fantastic.
    I have a question regarding the functions f(.) and h(.). How can we model these functions for different applications?
    Currently I am working on tracking the 6 DoF of an object. How can I model the functions?
    Please clarify.
    Please forward me an example of 6-DoF tracking equations if one is available.
    Thank you.

  • @shrinivasiyengar5799
    @shrinivasiyengar5799 5 years ago +1

    Somewhere near 36:12, we still need to eliminate the sine and cosine functions, right? If not, then we technically haven't linearised them. The last step of the linearization is to evaluate the Jacobian at the linearization point, is it not?

  • @vivekgr3001
    @vivekgr3001 4 years ago

    Perfect explanation!!!

  • @lilimeng1103
    @lilimeng1103 9 years ago

    For the initialization of the covariance, it was assumed that the landmark covariance values should be infinite. But that leads to numerical instability, as explained on p. 329 of Probabilistic Robotics: the output of the robot pose becomes -nan. What is the best way of initializing the covariance in the homework of sheet 4? It seems to work only when these covariance values are set to zero. Thanks!
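
A common workaround in practice (not stated in the lecture) is to replace the infinite initial landmark variance with a large finite value; a minimal sketch, where the landmark count N and the constant 1e6 are chosen only for illustration:

    import numpy as np

    N = 5                                  # number of landmarks (illustrative)
    dim = 3 + 2 * N                        # robot pose (x, y, theta) + 2D landmarks

    mu = np.zeros(dim)                     # robot starts at the origin
    Sigma = np.zeros((dim, dim))           # pose perfectly known at t = 0
    LARGE = 1e6                            # "practically infinite" landmark variance
    Sigma[3:, 3:] = LARGE * np.eye(2 * N)  # finite, so the update produces no NaNs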

  • @emilianogagliardi3324
    @emilianogagliardi3324 7 years ago

    Thanks for sharing, professor.
    I have a question about the slide showing the correlation matrix. It seems that the orientation of the robot is not correlated with anything, since there are two white lines where the robot's orientation correlates with the other variables. Why? I think that uncertainty over the orientation should be reflected in the uncertainty of the landmark positions, leading to something like a banana-shaped distribution of the landmark pose (I know we would not observe that, because of the linearization).
    Thanks in advance.
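
For reference, the correlation matrix shown on that slide is just the covariance matrix normalized by the marginal standard deviations; a minimal sketch:

    import numpy as np

    def correlation_matrix(Sigma):
        # Normalize a covariance matrix to correlations in [-1, 1].
        std = np.sqrt(np.diag(Sigma))
        return Sigma / np.outer(std, std)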

  • @EngBandar1
    @EngBandar1 9 years ago

    Line 10 at 58:25 gives the initial values of the landmark's estimate, but the initial values of its uncertainty are not stated, not even in the book. Really sad. How is the covariance matrix being enlarged?

    • @CyrillStachniss
      @CyrillStachniss  9 years ago +1

      The initial uncertainty is infinite (see initialization). As a result of that, the Kalman Gain will "fully trust" the measurement and you will have the correct uncertainty assigned to the landmark (observation and robot's pose uncertainty).

    • @EngBandar1
      @EngBandar1 9 years ago

      Cyrill Stachniss Thank you so much for the reply. I've seen the initial values, eq. (10.9), but that is a mere implication, so it shouldn't be used that way. At the start, the robot operates with an empty map and assumes the start point is the origin of the global frame, which means the covariance matrix equals the covariance matrix of the robot's pose, whose initial values are set to zero. Once the robot observes a landmark, the estimate of this landmark is obtained via the inverse measurement function, and its uncertainty is truly affected by the uncertainty of the robot's pose and the uncertainty of the sensor. But you are saying the filter should fully trust these measurements, which implies the uncertainty of the landmark's estimate the first time should be set to zero. I got confused.
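
The professor's point can be illustrated in one dimension (values assumed for illustration): the prior landmark variance is set huge, not zero, so the Kalman gain approaches 1, the posterior mean takes the measurement, and the posterior variance collapses to roughly the measurement noise (in the full filter, the robot's pose uncertainty is folded in as well):

    sigma_prior = 1e6   # "practically infinite" initial landmark variance (not zero)
    sigma_meas = 0.04   # measurement noise variance (assumed value)

    K = sigma_prior / (sigma_prior + sigma_meas)  # Kalman gain, very close to 1
    sigma_post = (1.0 - K) * sigma_prior          # approximately sigma_meas

    print(K, sigma_post)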

  • @thrunsalmighty
    @thrunsalmighty 10 years ago

    I am surprised that the 2D case has not been formulated in terms of complex numbers. Perhaps there is no real advantage in this sort of problem. Except notationally, maybe.

  • @GCOMRacquet
    @GCOMRacquet 10 years ago

    Hey, on slide no. 9 the covariance is not symmetric. I guess in column 3, starting at row 4, there should be something like sigma_{m_{x,1},theta}, and so on. Is that right, or do I misunderstand something? Best, Sascha.

  • @jinchew5549
    @jinchew5549 8 years ago

    I see the formula for the prediction of a new/unseen landmark, but that only holds for a point landmark like a beacon. If you extract line features and wish to derive a point landmark from them (i.e. the point of shortest distance to the line from the global [0,0]), so that (mx, my) is the same regardless of the robot's position, how do you go about formulating that so that I can get the Jacobian for adding the new landmark? Thanks

  • @saswatibhattacharjee7387
    @saswatibhattacharjee7387 6 years ago

    At 31:48, what are the initial values of x, y, theta, and the landmarks? Are they all zero initially?

  • @oldcowbb
    @oldcowbb 4 years ago +1

    Is loop closure a special step you have to add to the algorithm, or does it happen naturally?

    • @notmyproudest
      @notmyproudest 2 years ago

      Please go through 1:00:10 again, particularly the part about adding new landmarks.
      Basically, if we are seeing a landmark for the first time, the predicted landmark equals the updated landmark.
      When we observe the landmark again, we compute the difference between the prediction from our previous estimate and the new observation, and that gives us the update, i.e. the loop closure.
      This is what I understood; happy to stand corrected and open to new inferences.
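
In other words, closing a loop is just the regular correction step applied to a landmark that is already in the state. A minimal sketch of the innovation it computes, assuming the range-bearing observation model from the lecture and a state layout [x, y, theta, m1x, m1y, ...]:

    import numpy as np

    def range_bearing_innovation(mu, j, z):
        # Innovation z - h(mu) when re-observing landmark j.
        dx = mu[3 + 2 * j] - mu[0]
        dy = mu[3 + 2 * j + 1] - mu[1]
        q = dx * dx + dy * dy
        z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
        nu = z - z_hat
        nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))  # wrap bearing to (-pi, pi]
        return nu

If the re-observed landmark disagrees with the prediction, this innovation (weighted by the Kalman gain) pulls both the pose and the landmark estimates back into agreement; no separate loop-closure machinery is needed in EKF SLAM.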

  • @saswatibhattacharjee7387
    @saswatibhattacharjee7387 6 years ago

    Sir, in your slides you said to initialize a landmark when it is observed. This observation is in the local frame, so how do I obtain the position of the landmark with respect to the global frame? And what if the robot moves in a circle and sees the same landmark again and again?
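
The initialization used in the lecture maps the local (range, bearing) measurement into global coordinates through the current pose estimate; a minimal sketch (when the same landmark is seen again, it is not re-initialized but corrected through the normal update step):

    import numpy as np

    def init_landmark(pose, z):
        # Map a local (range, bearing) observation to global landmark coordinates.
        x, y, theta = pose
        r, phi = z
        return np.array([x + r * np.cos(theta + phi),
                         y + r * np.sin(theta + phi)])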

  • @FarooqKifayat
    @FarooqKifayat 9 years ago

    Qt in the video is the measurement noise covariance and Rt is the process noise covariance?

  • @JochefoRodriguez
    @JochefoRodriguez 9 years ago

    Thanks a lot for the series of videos! I have one question: in the EKF_SLAM_Prediction(μt-1, Σt-1, ut, zt, ct, Rt) example, what changes from the Fx of step 2 to the Fx^T of step 3?
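
For context: the Fx^T of step 3 is simply the transpose of the Fx defined in step 2. Fx projects the full state down to the three pose components, and Fx^T lifts 3-dimensional quantities (the pose increment, the motion noise) back into the full state dimension. A minimal sketch, with the landmark count and noise values assumed for illustration:

    import numpy as np

    N = 4                                              # number of landmarks (illustrative)
    Fx = np.hstack((np.eye(3), np.zeros((3, 2 * N))))  # step 2: selects the pose from the state

    mu = np.zeros(3 + 2 * N)
    pose_increment = np.array([0.5, 0.1, 0.02])        # some motion update for (x, y, theta)
    mu_bar = mu + Fx.T @ pose_increment                # step 3: Fx.T embeds it in the full state

    R_pose = np.diag([0.01, 0.01, 0.001])              # pose motion noise (assumed values)
    R_full = Fx.T @ R_pose @ Fx                        # same trick for the noise covariance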

  • @mingmingzhang4309
    @mingmingzhang4309 9 years ago

    Hello, sir! What should we do if we don't know the number of landmarks, as that is the most common case? And in the book the title is "SLAM with known correspondences": what is a correspondence?

  • @ShahNawazKhan-jz8wl
    @ShahNawazKhan-jz8wl 7 years ago

    Dear Professor Stachniss, I am new to SLAM. I am confused about the landmarks: how should they be distinguishable from one another, by their shape or just by their location? I want to use SLAM in agriculture. If I detect the trees with a convolutional neural network, would that be suitable, given that all the trees are detected with the same characteristics? Please help me with this. Thanks

  • @SaiManojPrakhya
    @SaiManojPrakhya 11 years ago +1

    Hi Sir,
    This is regarding the property of EKF SLAM which says that, in the limit, the covariance matrix cannot drop below the initial uncertainty the robot had when it saw the first landmark.
    If the robot observes a landmark at its initial position (0,0,0), can it then be said that in the limit the uncertainty reduces to zero, since there is a landmark whose position is 100% certain? Please correct me if I got something wrong.

    • @CyrillStachniss
      @CyrillStachniss  11 years ago

      No, that minimal uncertainty is just the lower bound; it does not mean that you reach it.

    • @SaiManojPrakhya
      @SaiManojPrakhya 11 years ago

      Let's say we know three landmarks at the position (0,0,0). In this case, can it be said that the pose of the robot is certain, but the covariance of these three landmarks will be the uncertainty of the sensor model? (Is that right?)
      So when is it possible for the robot to achieve the lower bound, i.e. the uncertainty of the sensor model when these landmarks are initialized?

    • @CyrillStachniss
      @CyrillStachniss  11 years ago +1

      1. That is correct.
      2. It is possible to achieve that, but in general you cannot guarantee that the robot reaches this state at a given point in time.

  • @boggyBeast22
    @boggyBeast22 4 years ago

    I do have a question, and I hope you still reply to these comments. Q: Is it necessary (in practice) to grow your matrices every time you observe a new landmark, or can you just add a number of landmarks at the beginning and then keep updating them according to the measurements? For example, I want to implement this algorithm on a robot using a Raspberry Pi, and I hope I can constantly update a large preset number of landmarks that I haven't discovered yet instead of adding each newly observed landmark and enlarging the matrices. Thank you. (Both options are sketched after this thread.)

    • @alexandrelarribau4083
      @alexandrelarribau4083 3 years ago +1

      I think Q is a 2 by 2 matrix and doesn't depend on the number of landmarks
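
A minimal sketch of the two options contrasted in the question above (the sizes, names, and the 1e6 "near-infinite" variance are assumptions for illustration): growing the state whenever a new landmark appears, versus preallocating a fixed maximum number of landmarks and only filling slots in as they are observed:

    import numpy as np

    # Option A: grow mu and Sigma whenever a new landmark is observed
    def add_landmark(mu, Sigma, landmark_xy, init_var=1e6):
        mu = np.concatenate((mu, landmark_xy))
        n = Sigma.shape[0]
        Sigma_new = np.zeros((n + 2, n + 2))
        Sigma_new[:n, :n] = Sigma
        Sigma_new[n:, n:] = init_var * np.eye(2)  # new landmark: near-infinite variance
        return mu, Sigma_new

    # Option B: preallocate a fixed maximum and only ever update the observed slots
    MAX_LANDMARKS = 50
    mu = np.zeros(3 + 2 * MAX_LANDMARKS)
    Sigma = np.zeros((3 + 2 * MAX_LANDMARKS, 3 + 2 * MAX_LANDMARKS))
    Sigma[3:, 3:] = 1e6 * np.eye(2 * MAX_LANDMARKS)   # unseen landmarks: huge uncertainty

Option B matches the fixed-dimension formulation used in the lecture and keeps the per-step cost predictable on a small board like a Raspberry Pi; Option A saves memory when the total number of landmarks is unknown.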

  • @MaggieStanley-s8w
    @MaggieStanley-s8w months ago

    Helen Ferry

  • @anuragpbox
    @anuragpbox 8 years ago

    Can we download the slides anywhere, please? And let me convey my gratitude to Professor Cyrill Stachniss.

    • @CyrillStachniss
      @CyrillStachniss  8 years ago +3

      See my website at www.ipb.uni-bonn.de

    • @TKe-ic4md
      @TKe-ic4md 7 years ago +2

      The slides cannot be found on the website.

  • @rahul122112
    @rahul122112 3 years ago

    Can someone help @47:44:
    On slide 35, I am unable to understand why we use the "observed location of landmark j" (calculated on slide 34) to generate the observation prediction h(μt). Shouldn't the prediction be independent of the observation?
    As I understood from the previous lectures, since the map is known and the data associations are known, the landmark locations are known. Since we already know which landmark a particular observation belongs to, shouldn't the map location of the landmark (m_{j,x} and m_{j,y}) be used instead of (μ_{j,x} and μ_{j,y}) on slide 35?
    I am not sure why this confuses me. I understand it has something to do with the observation being made for the first time, but I am not sure what that signifies.
    Also, what happens when a landmark is seen again? Do we still use (μ_{j,x} and μ_{j,y}), or do we use (m_{j,x} and m_{j,y})?

    • @alexandrelarribau4083
      @alexandrelarribau4083 3 years ago

      Hey Rahul, I am working on this problem, and I think the point is to compare these two things: on one side the landmark you just observed, and on the other side the same landmark you have in memory. You take the difference between them and multiply it by K in the fifth step. Tell me if this is not clear.

    • @rahul122112
      @rahul122112 3 years ago

      @@alexandrelarribau4083 Hey Alexandre, thanks for the explanation. I was able to implement a working EKF SLAM to test my understanding, and it is as you say. The only issue was that I wasn't able to understand what would happen if we had never seen a landmark before; that caused the confusion in my understanding.

  • @siddharthanrajasekaran8977
    @siddharthanrajasekaran8977 9 years ago

    Dear Professor,
    Are we using the observed location of landmark j (slide 34, 46:28) in slide 35 (47:41) to find the expected location? If yes, then won't q be the same as the $r_t^i$ of the observed location?
    Thank you

    • @CyrillStachniss
      @CyrillStachniss  9 years ago

      +Siddharthan P R If the landmark has NEVER BEEN OBSERVED, we can initialize it with the observation. This corresponds to an arbitrary initial position of the landmark (e.g. 0,0) and an infinitely large variance. So if the landmark has never been observed, use the observation; otherwise you fuse the prediction and the observation with the update step.

    • @siddharthanrajasekaran8977
      @siddharthanrajasekaran8977 9 years ago

      +Cyrill Stachniss Thanks a lot professor!!

  • @임성진-y9h
    @임성진-y9h 8 years ago

    Dear Professor,
    Could you tell me how to do this: computing the derivative of atan2(delta y, delta x) for the Jacobian? (A sketch follows after this thread.)

    • @anuragpbox
      @anuragpbox 8 years ago

      Thank you very much.
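
On the atan2 question above: with dx = m_x - x, dy = m_y - y and q = dx^2 + dy^2, the useful identities are d atan2(dy, dx)/d dx = -dy/q and d atan2(dy, dx)/d dy = dx/q; chaining through dx and dy gives the low-dimensional measurement Jacobian from the lecture. A minimal sketch:

    import numpy as np

    def low_dim_H(pose, landmark):
        # 2x5 Jacobian of (range, bearing) w.r.t. (x, y, theta, m_x, m_y).
        x, y, theta = pose
        mx, my = landmark
        dx, dy = mx - x, my - y
        q = dx * dx + dy * dy
        sq = np.sqrt(q)
        # first row: derivative of sqrt(q); second row: derivative of atan2(dy, dx) - theta
        return np.array([[-sq * dx, -sq * dy, 0.0, sq * dx, sq * dy],
                         [      dy,      -dx,  -q,     -dy,      dx]]) / q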
