Very well explained. Would recommend to everyone who wants to start learning SLAM!!!
Thank you!
I much appreciate this great lecture! Thanks, Prof. Stachniss.
Thank you professor.
- from India
I wish to apply this practically. I have only thought of the graph implementation.
I'm from Uni Kassel; your lectures help me a lot. Thank you very much!
That was absolutely great! You mentioned in the video that you will provide some worksheets and HWs. Could you please post some links?
Do you have a website? Books?
I hope you will offer a MOOC course so more people can learn about this topic.
Thanks again for sharing such great material.
ais.informatik.uni-freiburg.de/teaching/ws13/mapping/
@@yousofebneddin7430 Thanks!
Dear Professor Stachniss,
Thanks for the great material!
On slide 21 (30 min) you used the velocity-based motion model. Why didn't you apply the third parameter (the final rotation) in this model?
I figure it's an oversight, but in the end you are free to use either one. The computed Jacobian for the velocity-based model when updating the pose will still end up the same, since the third parameter has no dependence on x, y, or theta. Therefore, whichever version you use, it only affects the predicted pose; the update will still be the same.
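To convince yourself of this, here is a small numerical check (my own sketch, not part of the lecture material, with made-up controls): the pose Jacobian of the velocity-based motion model, computed by finite differences, comes out the same with and without the final-rotation term gamma.

```python
import math

def motion(x, y, th, v, w, dt, gamma=0.0):
    # Velocity-based motion model (as in Thrun et al.); gamma is the
    # final rotation, which does not depend on the state (x, y, theta).
    return (x - v / w * math.sin(th) + v / w * math.sin(th + w * dt),
            y + v / w * math.cos(th) - v / w * math.cos(th + w * dt),
            th + w * dt + gamma * dt)

def jacobian(fn, x, y, th, eps=1e-6):
    # Numerical Jacobian of fn with respect to the pose (x, y, theta).
    base = (x, y, th)
    cols = []
    for i in range(3):
        p = list(base); p[i] += eps
        hi = fn(*p)
        p = list(base); p[i] -= eps
        lo = fn(*p)
        cols.append([(h - l) / (2 * eps) for h, l in zip(hi, lo)])
    return list(map(list, zip(*cols)))  # rows = outputs, cols = inputs

# Same controls and pose, with and without the final rotation gamma:
f0 = lambda x, y, th: motion(x, y, th, v=1.0, w=0.5, dt=0.1)
f1 = lambda x, y, th: motion(x, y, th, v=1.0, w=0.5, dt=0.1, gamma=0.2)
J0 = jacobian(f0, 1.0, 2.0, 0.3)
J1 = jacobian(f1, 1.0, 2.0, 0.3)
assert all(abs(a - b) < 1e-6 for r0, r1 in zip(J0, J1) for a, b in zip(r0, r1))
print("pose Jacobians agree with and without the final rotation")
```

The gamma term only shifts the predicted heading by a constant, so every partial derivative with respect to x, y, and theta is unchanged.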
Dear Mr. Cyrill Stachniss,
Thank you for your lectures. They are just fantastic.
I have a question regarding the functions f(.) and h(.). How can we model these functions for different applications?
Currently I am working on tracking the 6 DoF of an object. How can I model the functions?
Please clarify, and please forward me an example of the 6-DoF tracking equations if available.
Thank you for your solution.
who talks like this
Somewhere near 36:12, we still need to eliminate the sine and cosine functions, right? If not, then technically we haven't linearized them. The last step of the linearization is to evaluate the Jacobian at the linearization point, is it not?
Perfect explanation!!!
For the initialization of the covariance, it was assumed that the landmark covariance values should be infinite. But this leads to the numerical instability explained on p. 329 of Probabilistic Robotics: the output of the robot pose will be -nan. What is the best way of initializing the covariance in the homework of sheet 4? It seems that it only works when these covariance values are set to zero. Thanks!
Thanks for sharing professor.
I have a question about the slide showing the correlation matrix. It seems that the orientation of the robot is not correlated with anything, since there are two white lines where the robot's orientation correlates with the other variables. Why? I think the uncertainty over the orientation should be reflected in the uncertainty of the landmark position, leading to something like a banana-shaped distribution of the landmark position (I know that we would not observe it, because of the linearization).
Thanks in advance.
Line 10 at 58:25 represents the initial values of the landmark's estimate; however, the initial values of its uncertainty are not stated, not even in the book. Really sad. How is the covariance matrix being enlarged?
The initial uncertainty is infinite (see initialization). As a result of that, the Kalman Gain will "fully trust" the measurement and you will have the correct uncertainty assigned to the landmark (observation and robot's pose uncertainty).
Cyrill Stachniss Thank you so much for the reply. I've seen the initial values in eq. (10.9), but this is a mere implication, so it shouldn't be used that way. At the start, the robot operates with an empty map and assumes the start point is the origin of the global frame, which means the covariance matrix is equal to the covariance matrix of the robot's pose, whose initial values are set to zero. Once the robot observes a landmark, the estimate of this landmark is obtained via the inverse measurement function, and its uncertainty is truly affected by the uncertainty of the robot's pose and the uncertainty of the sensor. But you are saying the filter should fully trust these measurements, which implies the uncertainty of the landmark's estimate the first time should be set to zero. I got confused.
I am surprised that the 2D case has not been formulated in terms of complex numbers. Perhaps there is no real advantage in this sort of problem. Except notationally, maybe.
Hey, on slide no. 9 the covariance is not symmetric. I guess in column 3, starting at row 4, there should be something like sigma_{m_{x,1},theta}, and so on. Is that right, or do I misunderstand something? Best, Sascha.
I see the formula for the prediction of a new/unseen landmark, but that only holds for point landmarks like beacons. If you extract line features and want to derive a point landmark (i.e., the point at the shortest distance from the global [0,0] to the line), so that (mx, my) is the same regardless of the robot's position, how do you formulate that so I can get the Jacobian for the addition of the new landmark? Thanks
At 31:48, what are the initial values of x, y, theta, and the landmarks? Are they all zero initially?
Is loop closure a special step you have to add to the algorithm, or does it happen naturally?
Please go through 1:00:10 again, particularly the part about adding new landmarks.
Basically, if we are seeing a landmark for the first time, the predicted landmark = the updated landmark.
When we observe the landmark again, we compute the difference between the prediction from our previous estimate and the new observation, and we get a new update or a loop closure.
This is what I understood. Happy to stand corrected and open to new inferences.
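To illustrate the re-observation case with a small sketch (my own example, assuming a standard range-bearing sensor model; not code from the lecture): from the current pose and landmark estimates we predict the measurement, and the difference to the actual observation is the innovation used in the update.

```python
import math

def expected_measurement(robot, landmark):
    # Predicted range-bearing observation h(mu) from current estimates.
    rx, ry, rth = robot
    dx, dy = landmark[0] - rx, landmark[1] - ry
    return math.hypot(dx, dy), math.atan2(dy, dx) - rth  # (range, bearing)

def normalize(a):
    # Wrap an angle to (-pi, pi] so bearing innovations stay small.
    return math.atan2(math.sin(a), math.cos(a))

robot = (0.0, 0.0, 0.1)   # current pose estimate (x, y, theta)
landmark = (2.0, 1.0)     # current landmark estimate (mu_jx, mu_jy)
z = (2.30, 0.40)          # a new observation of the same landmark (made up)

z_hat = expected_measurement(robot, landmark)
innovation = (z[0] - z_hat[0], normalize(z[1] - z_hat[1]))
print(innovation)  # this difference is what gets multiplied by the Kalman gain
```

For a landmark seen for the first time, the state is initialized from the observation itself, so this innovation is zero and the prediction equals the update, exactly as described above.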
Sir, in your slide you said to initialize a landmark when it is observed. This observation is in the local frame, so how do I express the position of the landmark with respect to the global frame? And what if the robot moves in a circle and sees the same landmark again and again?
Qt in the video is the measurement noise covariance and Rt is the process noise covariance?
Thanks a lot for the series of videos! I have one question: in the EKF_SLAM_Prediction(ut-1, Et-1, ut, zt, ct, Rt) example, what changes from the Fx of step 2 to the Fx^T of step 3?
Hello, sir! What should we do if we don't know the number of landmarks, as this is the most common case? Also, in the book the title is "SLAM with known correspondence". What is the correspondence?
Dear Sir Cyrill Stachniss, I am new to SLAM. I am confused about the landmarks: how should they be distinguishable from one another, by their shape or just by their location? I want to use SLAM in agriculture. If I am detecting trees with a convolutional neural network, would that be suitable? Because all the trees are detected with the same characteristics. Please help me with this. Thanks
Hi Sir,
This is regarding the property of EKF SLAM that says that, in the limit, the covariance matrix cannot reduce below the initial uncertainty of the robot when it sees the first landmark.
If the robot observes a landmark at its initial position (0,0,0), can it then be said that, in the limit, the uncertainty reduces to zero, as there is a landmark whose position is 100% certain? Please correct me if I got something wrong.
No, that minimal uncertainty is just the lower bound, it does not mean that you reach it.
Let's say we know three landmarks at position (0,0,0). In this case, can it be said that the pose of the robot is certain, but the covariance of these three landmarks will be the uncertainty of the sensor model? (Is that right?)
So when is it possible for the robot to achieve the lower bound, i.e., the uncertainty of the sensor model, when these landmarks are initialized?
1. That is correct.
2. It is possible to achieve that, but in general you cannot guarantee that the robot reaches this state at a given point in time.
I do have a question and I hope you still reply to these comments. Q: Is it necessary (in practice) to grow your matrix every time you observe a new landmark, or can you just add a number of landmarks at the beginning and then keep updating them according to the measurements? For example, I want to implement this algorithm on a robot using a Raspberry Pi, and I hope I can constantly update a large preallocated number of landmarks that I haven't discovered yet, instead of adding each newly observed one and enlarging the matrices. Thank you
I think Q is a 2 by 2 matrix and doesn't depend on the number of landmarks
Can we download the slides anywhere, please? And let me convey my gratitude to Professor Cyrill Stachniss.
See my website at www.ipb.uni-bonn.de
The slides cannot be found on the website.
Can someone help @47:44?
On slide 35, I am unable to understand why we use the "observed location of landmark j" (calculated on slide 34) to generate the observation prediction h(μt). Shouldn't the prediction be independent of the observation?
As I understood from previous lectures, since the map is known and the data associations are known, the landmark locations are known. Since we already know which landmark a particular observation belongs to, shouldn't the map location of the landmark (m_{j,x} and m_{j,y}) be used instead of (μ_{j,x} and μ_{j,y}) on slide 35?
Not sure why I am confused by this. I understand it has something to do with the observation being made for the first time, but what that signifies, I am not sure I understand.
Also, what happens when a landmark is seen again? Do we still use (μ_{j,x} and μ_{j,y}), or do we use (m_{j,x} and m_{j,y})?
Hey Rahul, I am working on this problem and I think the point is to compare these two things: on one side you have the landmark you just observed, and on the other side the same landmark you have in memory. You take the difference between them and multiply it by K in the fifth step. Tell me if this is not clear.
@@alexandrelarribau4083 Hey Alexandre, thanks for the explanation. I was able to implement a working EKF SLAM to test my understanding. It is as you say. The only issue was that I wasn't able to understand what would happen if we had never seen a landmark before; that caused the confusion in my understanding.
Dear Professor,
Are we using the observed location of landmark j (slide 34, 46:28) in slide 35, 47:41 to find the expected location? If yes, then won't q be the same as the $r_t^i$ of the observed location?
Thank you
+Siddharthan P R If the landmark has NEVER BEEN OBSERVED, we can initialize it from the observation. This corresponds to an arbitrary initial position of the landmark (e.g., (0,0)) and an infinitely large variance. So if the landmark has never been observed, use the observation; otherwise, you fuse the prediction and the observation in the update step.
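A small sketch of that first-observation case (my own example, assuming a standard range-bearing inverse measurement model, not code from the lecture): the landmark estimate is placed by projecting the first observation through the current pose estimate into the global frame.

```python
import math

def init_landmark(robot, z):
    # Inverse measurement function: place the landmark in the global frame
    # from the current pose estimate and the first range-bearing observation.
    rx, ry, rth = robot        # current pose estimate (x, y, theta)
    r, phi = z                 # observed range and bearing
    return rx + r * math.cos(phi + rth), ry + r * math.sin(phi + rth)

robot = (1.0, 2.0, math.pi / 2)
z = (3.0, 0.0)                 # landmark 3 m straight ahead (made-up values)
mx, my = init_landmark(robot, z)
print(mx, my)  # ≈ (1.0, 5.0): 3 m along the robot's heading
```

Because the landmark's initial variance is (conceptually) infinite, the very first update then pulls this estimate entirely toward the observation, exactly as described in the reply above.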
+Cyrill Stachniss Thanks a lot professor!!
Dear Professor
Could you tell me how to do it: computing the derivatives of atan2(delta y, delta x) with respect to delta x and delta y for the Jacobian?
Thank you very much.
Thank you very much.