Thanks for the simplified explanations! Your channel is really helpful for anyone wanting to grasp the key concepts of computer vision and robotics quickly. Greatly appreciate your initiative.
Thanks, great to hear that
Relative Orientation: 00:00 to 22:05
Fundamental Matrix: 22:05 to 44:40
Essential Matrix: 44:40 to 51:07
Popular parameterizations for the relative orientation: 51:07 to
Thanks for making this nice explanation public and freely accessible.
Thanks so much for your crisp explanation!
Thank you for this amazing video. I do have a question though. At 32:29, when you describe that both expressions are the same, where does the x transpose come from?
Thanks a lot
I thought if we have the translation vector, we would know the distance between the 2 cameras. However, I forgot about the lambda property of homogeneous coordinates. Thank you for showing that the epipolar axis only gives the direction, not the distance between the 2 camera centers.
Thank you Professor! That's really useful for me
12:02 In three dimensional world, don't we need three parameters for the direction vectors like 'B'?
I'm not sure why in the video (and on the slides), the professor says only 2 params! Would greatly appreciate it if someone can clarify.
You can only determine 2 parameters; the scale (= the vector length) cannot be recovered.
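One way to see the 2-parameter count: a direction (a unit vector) can be written with just two angles, e.g. azimuth and elevation. A minimal sketch of the round trip (function names are my own, not from the lecture):

```python
import math

def direction_params(b):
    # Normalize the baseline vector: its length is not observable,
    # so only the direction (2 parameters) remains.
    norm = math.sqrt(sum(c * c for c in b))
    bx, by, bz = (c / norm for c in b)
    azimuth = math.atan2(by, bx)   # rotation in the x-y plane
    elevation = math.asin(bz)      # angle above/below the x-y plane
    return azimuth, elevation

def direction_from_params(azimuth, elevation):
    # Recover the unit direction vector from the two angles;
    # the original length is gone for good.
    return (math.cos(elevation) * math.cos(azimuth),
            math.cos(elevation) * math.sin(azimuth),
            math.sin(elevation))
```

Any baseline (1, 2, 2) and its scaled copy (10, 20, 20) map to the same two angles, which is exactly why the scale is unrecoverable.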
@@CyrillStachniss Thank you so much Prof. for such a quick response. I understood your point regarding the vector length. However, don't we need three parameters, i.e. roll, pitch, and yaw, for the direction of a vector?
(From my understanding a rigid body in 3D has 6 DOF: 3 for translation and 3 for rotation. If possible, please do correct me if I'm mixing that up with a different concept. Thanks a lot in advance!)
thank you, thank you so much
Nice explanation. Thank you.
You are welcome!
Hi Professor, at 5:25, you mentioned that in DLT we need 5 control points to estimate 11 parameters for one camera. But, in DLT, don't we need at least 6 points for one camera? (source: 34:44 in your video on DLT for Camera Calibration and Localization - th-cam.com/video/3NcQbZu6xt8/w-d-xo.html)
Thanks in advance!
Correct, that is a mistake on my side. I should have said 6 and not 5.
@@CyrillStachniss Thanks a lot professor for such a quick response and for the clarification.
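For anyone else wondering where the 6 comes from: in the DLT each control point contributes two linear equations (one per image coordinate), so

```latex
% 11 unknowns: 12 entries of the 3x4 projection matrix minus overall scale
% 2 equations per control point
\left\lceil \tfrac{11}{2} \right\rceil = 6 \quad \text{control points needed}
```

With 5 points you only get 10 equations, one short of the 11 unknowns.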
I don't understand why, in the uncalibrated case, we have 15 parameters for the relative orientation.
I don't get the projective transformation part in relation to a pair of cameras.
Can you please specify which specific parameters can't be solved for, and why?
Is this applicable only to a camera pair, or to a camera-lidar pair as well?
At 21:17 I think your slide is wrong. The 5 and the 7 on the top line should be swapped.
There are these mysterious 7 DOF you can extract, explained by the fact that 22 - 15 = 7... I don't understand what the 7 DOF actually are.
The answer apparently involves specifying a conic and the epipoles; see p. 252 of the Bible (Multiple View Geometry).
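For what it's worth, the same 7 also falls out of counting the parameters of the fundamental matrix F directly:

```latex
% F has 9 entries, but is only defined up to scale (-1)
% and must satisfy the rank-2 constraint det(F) = 0 (-1):
9 - 1 - 1 = 7
% consistent with two projective cameras (2 x 11 = 22 parameters)
% minus the 15-parameter 3D projective ambiguity:
22 - 15 = 7
```

So the 7 DOF are exactly the information a fundamental matrix carries about the camera pair.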
For creating the fundamental matrix, you are using the inverse of K. But K is a 3x4 matrix; how can it have an inverse?
No, K is 3x3 and invertible
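To make that concrete: the 3x4 matrix is the full projection matrix P = K[R|t]; the calibration matrix K itself is 3x3 upper triangular with a nonzero diagonal, so it is always invertible. A small sketch with a closed-form inverse (the parameter names f, m, s, cx, cy are my own shorthand for focal length, aspect/scale factor, skew, and the principal point, not notation from the lecture):

```python
def calibration_inverse(f, m, s, cx, cy):
    """Closed-form inverse of the calibration matrix
        K = [[f, s,   cx],
             [0, m*f, cy],
             [0, 0,   1 ]].
    K is upper triangular with nonzero diagonal (f != 0, m != 0),
    hence always invertible."""
    mf = m * f
    return [[1.0 / f, -s / (f * mf), (s * cy - mf * cx) / (f * mf)],
            [0.0,      1.0 / mf,     -cy / mf],
            [0.0,      0.0,           1.0]]
```

Multiplying K by this result gives the identity, which is all F = K'^(-T) E K^(-1) needs.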