OpenCV Python Camera Calibration (Intrinsic, Extrinsic, Distortion)
- Published Jul 3, 2024
- In this video, I will go over how to do camera calibration in OpenCV using Python in VS Code. I will show you how we can take several images of a chessboard pattern and use them to find our camera parameters (intrinsics and extrinsics). This will let us find our focal length, camera center, distortion coefficients, and rotation and translation vectors. I will then show you how to remove distortion from an image.
0:00 Introduction
0:19 What is camera calibration? (Intrinsic, Extrinsic, Pinhole Model)
3:37 Why do we need camera calibration?
3:50 How does camera calibration work?
5:20 Code
Thanks for watching! If you found this video helpful, please like, subscribe and share:
/ @kevinwoodrobotics
Social:
LinkedIn: / kevinwoodrobotics
Github: github.com/kevinwoodrobotics
Instagram: / kevinwoodrobotics
Twitter: / kevinwoodrobotics
Code and Doc: kevinwoodrobotics.com/product/opencv-python-camera-calibration/
OpenCV Python Playlist Code and Doc: kevinwoodrobotics.com/product/opencv-python-tutorials-full-playlist/
Here to watch your videos 😊
thank you for the helpful video!
Thanks!
Hi! New subscriber here! Happy to see you cover the implementation of classic computer vision concepts in Python. I recently started learning computer vision and was getting a bit discouraged with some nebulous concepts floating around my head. Your channel is uplifting; glad to have found it.
Welcome! I’m glad my work is helping you out! Let me know if there are any topics you’d like to see.
@@kevinwoodrobotics Thank you. Will be sure to let you know!
I had muscled my way through some theory with "CBCSL Teaching" & "First principles of Computer Vision" videos.
I do hope you get recommended to others as I had earlier been actively searching for a channel like yours. Thanks again and best regards!
@@electric_sand no problem! If you know anyone that may benefit feel free to share my channel! Thanks
@@kevinwoodrobotics Will do! 🦾
Thank You.
Thanks!
Yup, it's sure helpful!
Thanks!
👩🏻💻✨✨✨
How can I learn more about the linear algebra that is happening here? I generally understand what is happening, but how we actually get some of the matrices, such as the rotation one, seems really interesting.
Craig's book Introduction to Robotics has a pretty good primer on linear algebra and some applications to these rotation and transformation matrices. I plan to make some videos on that, so stay tuned!
Great video. Does this help in removing perspective distortion? I am doing a project on automatic garment measurement using computer vision. I have found the key points of the garment, but when taking measurements, the pixel-to-distance scale doesn't stay valid throughout: parts closer to the camera have a different pixel-to-distance ratio than parts further away. Will using this code help fix that?
This is for lens distortion. For your problem, I would recommend looking into homography to transform the perspective view into a 2D planar view. I have a video on that.
@@kevinwoodrobotics Thanks for the reply. Another doubt: is it possible not to crop the original image when removing distortion? It seems like it provides an ROI and the image needs to be cropped.
This is incredibly helpful - thank you! Do you share your code anywhere please? - It looks like your Github profile doesn't have any public repositories...
Thank you! Yes you can find my code here: kevinwoodrobotics.com/product/opencv-python-camera-calibration/
Hi, Kevin! It's nice to see your tutorial videos, because they really help me get through my digital image processing project. But as a beginner, I have a question (I'm so sorry if you've already explained it in your video; I just still didn't understand how this calibration works). Let's say I want to estimate measurements with image processing. Some papers say that I should do the calibration first before I do the measurement. If I've already done the calibration, will the fixed parameters (intrinsic and extrinsic) be "saved" in the camera? How could we do that? And if so, when I want to capture an image with my camera, do I not have to run the calibration code (since the parameters are already "set" in the camera)? Or should I run the code again, or merge the calibration code with my image processing code? Sorry for the question; I'd really appreciate it if you'd be willing to explain.
Good question. So the idea is that you take calibration images to get the camera parameters which you can save somewhere. Then you can use those parameters on new images to correct for distortion for your specific task or use that for depth estimation and reconstruction.
@@kevinwoodrobotics Ah, I see. But some libraries (like OpenCV) have several functions for this kind of thing, right? Such as cv.getPerspectiveTransform or cv.warpPerspective. Does that mean I don't have to use those functions, since I've already "taught" and "set up" my camera? Or do I still have to use them?
@@dyahlangensari4802 Yeah, you still have to modify the images after calibrating.
@@kevinwoodrobotics Oh, may I ask you one more question? What if the camera I want to calibrate is a wide-angle camera? They have more complex distortion than a regular camera, don't they, since they view at such a wide angle? Do they follow "the same principles" as the regular one, or different principles?
Hey, a really good video, but could you tell me where I can see the equations? I want to understand them; I would appreciate it.
docs.opencv.org/4.x/d9/d0c/group__calib3d.html
I am getting this error: UnboundLocalError: local variable 'imgGray' referenced before assignment
Make sure your image path is correct and your imgGray variable is within scope. A common cause is cv2.imread returning None for a bad path, so imgGray never gets assigned.
At ~4:20, you mention "mapping between world points and image points", but isn't this equation relating two (2D) image points [x_i, y_i] and [X_i, Y_i] from the left and right image, rather than describing the relationship between (2D) image and (3D) world points? Otherwise, why is there no Z_i component in the [X_i, Y_i, 1] vector?
The matrix H stands for "homography", which is a projective transformation that maps coordinates on one (planar) image to another image.
Great question. Here we are assuming the world frame is on the chessboard, so Z = 0 for every world point on that plane. That lets us drop the Z component.
@@kevinwoodrobotics Another question: by assuming the Z components are all the same, the distances in the X, Y plane could still differ, but in the code the world X, Y coordinates are all fixed, which implies the corner spacings are all the same rather than different. How should I understand this?
@@songpandy9590 So the idea is to assume that this grid is attached to some known location in the world. For simplicity, we attach it to the origin of the world frame, coincident with the XY plane. Since the checkerboard has known spacing, those values are fixed locations. Hope this helps.
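The fixed world coordinates discussed in this thread can be seen directly in how the object points are typically built. The 9x6 grid and 25 mm spacing below are assumed values:

```python
import numpy as np

cols, rows, square = 9, 6, 25.0  # inner corners and square size (assumed)

# World points on the board plane: Z = 0 everywhere, X/Y at fixed grid spacing
objp = np.zeros((cols * rows, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square
```

Every corner gets the same known spacing in X and Y, and the whole grid sits on the Z = 0 plane of the world frame, which is exactly the assumption that makes the homography-based calibration work.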