the best videos a person can find free of cost
Thank you Cyrill, you've made our lives as students easier
This lecture really helped me understand the idea behind the Bayes filter. I had read a little about it and heard a few basic concepts, but this really connected all of the loose ends. Thank you!
Thank you for keeping these videos up, and even for the new video you posted on the Bayes filter! Very good lectures; they even make me want to go to where you're teaching!
Thanks for the great lecture professor.
I would like to make a clarification for 46:04, where you explain the max-range effect of the measurement model. I think the reason for that part is not the 5m-away obstacle for a 4m-range sensor. I think the real reason is: sometimes the sensor may fail to measure an object even if it is right in front of it (say, 2m away), and reports the max range (4m) instead, which is a sensor failure. That part models this kind of error, I guess, doesn't it?
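To make that failure mode concrete, here is a toy sketch of a beam-model likelihood mixing three effects: a Gaussian around the expected range, a spike at max range for failed detections (the case described above), and uniform random noise. This is my own illustrative code, not from the lecture; all weights and the sigma are made-up placeholders, and the max-range spike is a simplification rather than a properly normalized continuous density.

```python
import math

def beam_likelihood(z, z_expected, z_max=4.0,
                    w_hit=0.7, w_max=0.2, w_rand=0.1, sigma=0.2):
    """Toy beam-model likelihood p(z | expected range).

    Mixes three effects (weights are made-up placeholders):
      - hit:  Gaussian around the expected range, truncated to [0, z_max]
      - max:  a spike at z_max for failed detections
      - rand: uniform noise over [0, z_max]
    """
    p_hit = 0.0
    if 0.0 <= z <= z_max:
        p_hit = math.exp(-0.5 * ((z - z_expected) / sigma) ** 2) \
                / (sigma * math.sqrt(2.0 * math.pi))
    p_max = 1.0 if abs(z - z_max) < 1e-6 else 0.0
    p_rand = 1.0 / z_max if 0.0 <= z <= z_max else 0.0
    return w_hit * p_hit + w_max * p_max + w_rand * p_rand
```

With these numbers, a max-range reading of 4m gets a noticeably higher likelihood than an ordinary wrong reading of 3m when the obstacle is actually 2m away, which is exactly the point of the max-range component.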
Amazing lecture series - a good distillation of the probabilistic concepts. However, a question: when predicting x(t) using odometry, why do we use a triple transform = initial rotation, translation, final rotation? Given that the Bayes formulation is inherently recursive, i.e. x(t) => x(t+1) => x(t+2), one would imagine that the second rotation would naturally be the initial part of the next estimation, i.e. of x(t+2). Otherwise it appears (naively) that we might end up double counting the second rotation.
amazing class
For autonomous cars we probably should not use the rotation-translation-rotation model, because cars cannot rotate in place. Therefore, the circular motion model described at 35:18 should be more appropriate.
That's wrong. My autonomous car rotates in place.
great lectures
wonderful courses, thanks!
At 5:50 it is mentioned that if we have sensor bias, then the previous measurement can help us get a better estimate. I want to know how, because the previous measurement was also taken from the same sensor, so it has the same bias?
Thank u!
At 5:00 I didn't quite get how you applied Bayes' rule. How are you able to swap out only z_t?
In the velocity-based model, where we assume that the robot receives the command with parameters (v,w) and executes them for a predefined time interval delta-t, shouldn't we also include in the model some uncertainty about the robot's internal clock ? Or can it just be represented as a scaling term for the uncertainty of executing (v,w) ?
I have a small doubt with respect to the Markov assumption used to reduce the first complex term to p(z_t | x_t). As you said that having previous observations and control commands helps to get better estimates, why is this assumption made?
Otherwise we would not end up with such an easy and effective algorithm - and the approximation error can be assumed to be small, especially when eliminating systematic errors beforehand through calibration.
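To illustrate how the Markov assumption buys us that easy recursive algorithm, here is a minimal discrete Bayes filter step on a 1-D grid. This is my own sketch, not from the lecture; the motion kernel, likelihood, and grid size are made-up numbers for illustration.

```python
import numpy as np

def bayes_filter_step(belief, motion_kernel, likelihood):
    """One recursive Bayes filter step on a discrete 1-D grid.

    belief:        prior belief over cells (sums to 1)
    motion_kernel: p(x_t | x_{t-1}, u_t) as a convolution kernel (prediction)
    likelihood:    p(z_t | x_t) evaluated at each cell (correction)
    """
    # Prediction: convolve the belief with the motion model
    predicted = np.convolve(belief, motion_kernel, mode="same")
    # Correction: multiply by the likelihood, then normalize.
    # The normalizer is the sum, i.e. the denominator discussed below
    # gets absorbed into a single constant.
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.full(5, 0.2)                          # uniform prior over 5 cells
kernel = np.array([0.1, 0.8, 0.1])                # mostly stay put, some diffusion
likelihood = np.array([0.1, 0.1, 0.6, 0.1, 0.1])  # sensor favors cell 2
belief = bayes_filter_step(belief, kernel, likelihood)
```

Because only the previous belief is needed (not the full history z_{1:t-1}, u_{1:t-1}), each step has constant cost, which is exactly what the Markov assumption gives us.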
Hi Cyrill... Thanks for your great videos on SLAM and robotics. At 35:58 - Are you basically just saying this is a 2D quaternion using spherical linear interpolation (slerp)?
Where can we see the tutorials and exercises for this course?
What happens in the velocity model when w = 0? What can you do in that case?
Could anyone tell me what's the difference between beam-endpoint model and ray-casting model ?
Hi all, I have a silly question. Around 27 mins in, when we are talking about the odometry model, why do we measure the translation as the Euclidean distance between the two poses? While that does make sense, I thought the odometry model meant measuring the rotation of the robot wheels, so I was expecting some formula that included RPM and the wheel radius. I am sure I am missing something silly, however.
Great lecture btw. Looking forward to watching the rest :)
+Kevin Farrell Most robot control systems provide the pose in a robot coordinate frame, so for the prediction step, you need to compute the rigid body transformation between two poses and use a noise model for it. The Rotate-Translate-Rotate model is just one possible choice; you can take others as well.
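A minimal sketch of that Rotate-Translate-Rotate decomposition (my own illustrative code, not from the lecture; poses are assumed to be (x, y, theta) tuples in a common frame, and no noise model is applied):

```python
import math

def odometry_params(p0, p1):
    """Decompose the relative motion between two poses (x, y, theta)
    into (delta_rot1, delta_trans, delta_rot2)."""
    x0, y0, th0 = p0
    x1, y1, th1 = p1
    d_trans = math.hypot(x1 - x0, y1 - y0)          # Euclidean translation
    d_rot1 = math.atan2(y1 - y0, x1 - x0) - th0     # turn towards the new pose
    d_rot2 = th1 - th0 - d_rot1                     # remaining final rotation
    return d_rot1, d_trans, d_rot2

def apply_odometry(p, d_rot1, d_trans, d_rot2):
    """Apply the relative motion to a pose, e.g. in the prediction step."""
    x, y, th = p
    x += d_trans * math.cos(th + d_rot1)
    y += d_trans * math.sin(th + d_rot1)
    th += d_rot1 + d_rot2
    return x, y, th
```

Since the decomposition is relative, the recovered parameters can be re-applied from any pose hypothesis, which is what makes this useful as a prediction model.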
I am a bit confused by p(x(t) | x(t-1), u(t)). In a motion model for prediction, at (t-1), we need u(t-1) and x(t-1) to predict x(t). u(t) is a future move at time t. I guess it should be p(x(t) | x(t-1), u(t-1)). Kindly let me know where I am going wrong.
It depends on how you define u_t. I used the notation from the Probabilistic Robotics book, where u_t leads from x_{t-1} to x_t. I guess you mean the right thing but are probably used to the other notation.
Hiii Dr
Hi guys
I have a question: how can I add this additional noise term to the final orientation in the velocity model?
could you help me ?
I have a doubt regarding the Bayes expansion of bel(x_t) on slide 4. As per my derivation, the denominator should have a term P(z_t | z_{1:t-1}, u_{1:t}). I am not able to understand how this term is subsumed in the constant, as z_t is dependent on u_t.
The whole denominator sits in the normalization constant.
Thanks for the reply. I am not able to comprehend it. P(z_t | z_{1:t-1}, u_{1:t}) is not constant, as it is dependent on u_t.
Nitin Dhiman The original expression is bel(x_t), which is a function of x_t only. It states that bel(x_t) = p(x_t | z_{1:t}, u_{1:t}), which indicates that z_t and u_t are given (i.e. known), so P(z_t | z_{1:t-1}, u_{1:t}) is a constant.
At 13:42, shouldn't we assume that the control action u_t depends on the current state x_t? Control can't be oblivious to the system state, can it? If so, then knowing u_t we may be able to infer something about x_{t-1}, and the two will not be independent. Is this true? And if it is, then we can't use independence to ignore u_t in the last equation.
Yes, that's the approximation/simplification which he says may or may not be true, at ~13:30. So yes, u_t might reasonably depend on x_{t-1}, so knowledge of u_t would inform p(x_{t-1}). But since u_t is informed by x_{t-1}, which only depends on z_{1:t-1}, u_{1:t-1}, it can't provide any more info about x_{t-1}.
Why is omega on the denominator at around 32:56 ?
v = omega x r, the cross product of the angular velocity and the vector from the rotation axis, so the radius is r = v/omega.
Initially (when the orientation is theta), the offset is (v/omega) sin(theta), and after delta t it is (v/omega) sin(theta + omega*delta t).
So the final robot pose is (x', y', theta') = (x, y, theta) + displacement in delta t = (x, y, theta) + (r_final - r_initial).
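In code, that circular-arc derivation looks roughly like the sketch below (my own, noise-free version; it also guards the omega = 0 case asked about above by falling back to the straight-line limit):

```python
import math

def velocity_motion(x, y, theta, v, w, dt, eps=1e-9):
    """Noise-free velocity motion model: drive with speed v and
    angular velocity w for dt seconds along a circular arc."""
    if abs(w) < eps:
        # As w -> 0 the arc degenerates to straight-line motion
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    r = v / w  # radius of the circular arc
    xp = x - r * math.sin(theta) + r * math.sin(theta + w * dt)
    yp = y + r * math.cos(theta) - r * math.cos(theta + w * dt)
    return xp, yp, theta + w * dt
```

For example, v = 1, w = pi, dt = 1 from the origin drives half a circle of radius 1/pi, ending at (0, 2/pi) facing backwards.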
Thanks a lot for the lecture. How can we find the lecture slides of the whole course?
Send me an email
When you are talking about the odometry model, does the robot have to make these motions in reality, or is that the effective path it will take in the end? If we have a car, for instance, that drives a lot of curves, but the effective path can be summarized as shown in your odometry model slides (two rotations and a translation), could the movement still be simplified like that, or does the full path traveled, with all the different curves, have to be taken into account? In that case the model would be very complicated, right? This was not entirely clear to me, so thanks for any help :)
Yannick M The model describes the intended motion of the robot/car between two time steps. From t to t+1, we consider a simple rigid body transformation, basically from the start configuration to the end configuration at t+1. But if you chain all commands starting from t = 1 ... T, you get a (discretized) trajectory.
Cyrill Stachniss Thank you for the reply! I got it now :)
thank you
Could someone explain to me what kind of information the odometry measurements report to us (talking about the measured ones)? I mean, does the robot have an internal coordinate system, and what relation does it have to the global coordinates? Or do we simply measure every time from point x_{t-1} = 0 to x_t = 2.5 (for example) meters and use this information to get the rotations and the translation? Just what kind of information are x_{t-1} and x_t? Since in the example we need x, y, and orientation to calculate the 3 steps, the robot must have some kind of coordinate system, or am I totally wrong ^^
It depends on the platform. Most systems (e.g., a Pioneer) have an internal coordinate system and integrate the motion commands within that local frame (which drifts). The pose in this frame is reported to the outside world. Thus, in most cases, one uses the internal coordinate frame to compute the relative motion, which is used as the odometry in the methods presented here.
Will you make your assignments public?
Yes, the assignments are public, see the Course Website in WS 13/14 taught by myself at Freiburg University. The solutions, however, are not public.
Solutions are not public, but here are my solutions if you want to take a look. Let me know if you spot anything wrong with them: github.com/conorhennessy/SLAM-Course-Solutions
Quite unlike other stellar videos that you have published. This is far too abstract! I have no clue what problem we are attempting to solve. You lost me.