Camera Calibration Software and High Precision Calibration Boards: camera-calibrator.com/
NGL, this is the best calibration tutorial video ever. SPLENDID!!!
Thanks a lot mate! Means a ton to me
Thanks, it was really helpful. At first i thought i had the speed set on x1.25
Thanks for watching! At least I saved u some time with the speed ;)
x0.75 saves the day
Hi Nicolai. Thank you so much for your videos. Everything about the image processing field is amazing. I just have one piece of advice about your video lists and channel structure: I think you could add a video link, or something like that, whenever you talk about topics you have explained before, because it is easier for new people to follow everything when they know about, and can watch, the previous videos.
Again, your channel and code are so helpful. Thanks!
Thank you so much for watching and the advice! I'll definitely take that into account. Very helpful
I got this error. I did exactly the same. Can you help me please?
File "Webcam_calib_pics.py", line 43, in
ret, cameraMatrix, dist, rvecs, tvecs = cv2.calibrateCamera(
cv2.error: OpenCV(4.5.3) /tmp/pip-req-build-afu9cjzs/opencv/modules/calib3d/src/calibration.cpp:3694: error: (-215:Assertion failed) nimages > 0 in function 'calibrateCameraRO'
Good stuff. u did the honest work!
Thank you very much! Really appreciate it
Thanks for the video series, it's very helpful. Although I find images like those in the video are commonly used in calibration examples, I don't understand why the calibration pattern, i.e., the chessboard, isn't placed against a black background with carefully controlled lighting. What is the motivation? Is it just to keep photo session setup simple or do the pseudo-random image compositions serve some purpose?
Hello, very good video. I have two questions btw. 1) Why do objpoints on line 18 (or line 15) have to be 3D points? 2) Why is there no scaling needed on the objpoints to match the imgPoints in pixel space?
I am using images of a dot pattern, not a checkerboard or chessboard, and I'm stuck at the calibration part. How can I get the XYZ points of the 3D real world and all the parameters (camera matrix, translation and rotation vectors)? Thank you
Really helpful video! You earned a sub.
I have a question regarding camera calibration though:
While running this script on images taken from a pretty bad drone-mounted camera, after cropping I receive a cropped image that is 40x10 pixels instead of the original 324x224 pixels. When I removed the cropping part of the script, my "calibrated" image was even more distorted, as if it were taken by a fisheye lens 🤣 I got a total error of 0.58 which is pretty bad compared to your 0.04 👀
Thank you very much! The image dimensions might be too small to be able to do proper camera calibration
@@NicolaiAI your engagement with your comment section is unbelievable! Do you think it is still possible to calibrate this camera? Anyway I will keep trying
@@ObliviousBanana thank you once again, i try to help as many as possible. What is the reason ur image is cropped/reduced so much?
Thanks for the nice tutorial about camera calibration. Could you please share the images you used during this tutorial ?
Thanks for good lectures.
Thanks for watching! Really means a lot
Tnx for the tutorials man. These videos are great.
Thank you very much! Really appreciate it
Hi, I wanted to ask: how can I get the 3D coordinates of a moving point, given that I have two webcams set orthogonally and both can track a point (in this case I'm using an LED at the tip of a pen)? I want to plot or log the 3D coordinates of the LED in real time as I move it. Please give me an idea. Thanks a lot!
Hi i have several videos about that here on my channel :)
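For reference, a minimal sketch of the triangulation idea those videos build on, assuming both webcams are already calibrated (the K, R, t values below are placeholders, not a real setup):

```python
import cv2 as cv
import numpy as np

# Placeholder intrinsics/extrinsics -- substitute your own stereo calibration results
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R1, t1 = np.eye(3), np.zeros((3, 1))                 # camera 1 at the origin
R2, t2 = np.eye(3), np.array([[-0.1], [0.], [0.]])   # camera 2, 10 cm baseline

P1 = K @ np.hstack([R1, t1])   # 3x4 projection matrix of camera 1
P2 = K @ np.hstack([R2, t2])   # 3x4 projection matrix of camera 2

# Pixel position of the tracked LED in each view (2xN arrays, N = 1 here)
pt1 = np.array([[330.], [242.]])
pt2 = np.array([[310.], [242.]])

pts4d = cv.triangulatePoints(P1, P2, pt1, pt2)   # 4x1 homogeneous coordinates
xyz = (pts4d[:3] / pts4d[3]).ravel()             # divide out w to get metric XYZ
print(xyz)   # run this per frame to log the LED position in real time
```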
Thanks @NicolaiAI.
But can you help me clear two doubts?
1. Does this camera calibration change with the lighting conditions in the room? Say when first taking the images to determine the parameters the lights were bright, but once we have the parameters we start using them to take and correct images of the real (intended) objects under different lighting conditions. Will this have an effect (are the parameters going to change as the lighting conditions change)?
2. If I use a mobile camera or DSLR in auto mode, and take images and calibrate with it, is this set of parameters valid? Or do the parameters keep changing when we use auto mode?
Thank u for posting. Super helpful
Thanks for watching! Glad that i could help
It's on this line:
err = cv2.norm(imgpoints[i], imgPoints2[i], cv2.NORM_L2) / len(imgPoints2)
Hey! First of all, congrats on the quality of the videos, you make it so easy to understand this complex world of computer vision!
I have a question regarding the distance to calibrate the stereo system. Does it matter how far you place the chessboard from the cameras to take the pictures? Or does it depend on the depth you want to calculate afterwards? Because I'm building a setup that should cover a minimum distance of 10 meters.
Thanks in advance for your help!
Thank you very much for the kind words! Really appreciate it. If ur setup is at that distance u might want to have some images of the chessboard further away. As long as it can detect the corners it should be fine
At 17:44, shouldn't corners2 be appended to the imgPoints instead of corners?
Corners2 are more accurate, right?
Yup but it requires a good calibration board and img quality to run that step and not ruin the calibration. I have neither of these and the sub corners are just destroying the calibration
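For anyone following along, the refinement step being discussed is roughly the standard OpenCV pattern below (a sketch, assuming gray and corners come from cvtColor and findChessboardCorners as in the video):

```python
# Sub-pixel refinement of the detected corners; with a clean board and sharp
# images this improves accuracy, but with poor inputs it can hurt instead
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
corners2 = cv.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
imgpoints.append(corners2)   # append the refined corners, not the raw ones
```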
I'm trying to use this to get the orientation of an object for a pick-and-place application. How can I get the orientation value of the object in real time? Also, is it possible to get the coordinates of the object being tracked? These 2 values (orientation and coordinates of the object) are required for the robot to pick up the object.
I have a video here on the channel in the computer vision playlist about pose estimation
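The core of that pose-estimation approach is cv2.solvePnP. A rough sketch, reusing cameraMatrix and dist from the calibration in this video; the 50 mm square part and its detected pixel corners are made-up example values:

```python
import cv2 as cv
import numpy as np

# 3D model points of the object in its own frame (a flat 50 mm square, Z = 0)
objp = np.array([[0, 0, 0], [50, 0, 0], [50, 50, 0], [0, 50, 0]], dtype=np.float32)
# The same four corners as detected in the current frame (pixel coordinates)
imgp = np.array([[320, 240], [420, 242], [418, 338], [318, 336]], dtype=np.float32)

ok, rvec, tvec = cv.solvePnP(objp, imgp, cameraMatrix, dist)
R, _ = cv.Rodrigues(rvec)   # 3x3 rotation matrix: the object's orientation
# tvec is the object's position in camera coordinates (same unit as objp, mm)
```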
What is considered a low or good mean_error? Mine is ~2.4 and I don't know how good that is. Thanks!
I have same question.
How can I scale this to 3D pose estimation of an object? What is the correct approach to understanding it?
Hi, great tutorial. I have a question: does it have to be a chessboard? Or can it be any object with known dimensions?
Thanks a lot!
With this code u have to use a chessboard. Calibration can be done with other objects too, but then u will have to implement most of it urself
After I corrected the fisheye images with the calibrated matrix I lost some field of view... How much do I lose exactly? Or better asked: which section of the image is getting corrected?
Yeah u will lose some fov depending on how much fisheye distortion there is. Can't really tell how much u are losing, but u can try to take a look at the distortion parameters and see if u can get something out of them
@@NicolaiAI
After "correcting" the image projection, you get an ideal image, but how is ideal actually defined?!
Is the code fast enough to do undistortion on drone footage in real time, like 15-30 fps or at least 5 fps? I'm going to detect ArUco markers with my drone and if the camera distortion is too much, I reckon undistorting the images will be a great help, but I don't know if the drone can do it fast enough to be practical real-time. Great video!
Great stuff and thank you, I hope it will be practical for what I have to do, but I think if you manage to speak a little more slowly, condense your info, and discard unneeded details, this could turn into a really awesome tutorial. Now it is great, but it could be awesome.
Can I ask you about my practical problem now, please? I'm not sure what you taught here solves my issue. My input is some pictures taken by a, let's say, randomly positioned and oriented camera, but I need to find out from that picture the camera coordinate system (translations & rotations) + projection settings, so that I can then, programmatically speaking, generate some new projections later on: real-world coordinates to x,y pixel projections. In the input image I can insert any sort of checkerboards and so on, and I can also find out the world coordinates of those markers and of the camera itself, but I cannot measure the rotational model of the camera, or its projection equation. But I need to simulate all of these later on, programmatically, with decent accuracy (no trembling objects :) ). Can you point me to the technique I must use for this, please? Or how to search for it on Google, since camera calibration seems to be a little different from what I want to achieve; I'm pretty sure you used a simpler model, where you assumed the 3D objects are already in the camera's local coordinate system.
Hi buddy, great video! Quick question: I must have missed the explanation, but I don't understand what you do with the total error at the end of the program. Is that value used to correct the image, or is it some kind of focal distance? Thanks!
Nope u will use the distortion parameters to correct the image, the error at the end is the reprojection error which is used to see how good ur calibration is
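The computation at the end of the script is roughly the loop below; the result is a mean distance in pixels between detected and reprojected corners, so lower is better:

```python
# Reproject the 3D object points with the estimated parameters and compare
# them against the corners that were actually detected in each image
mean_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv.projectPoints(objpoints[i], rvecs[i], tvecs[i],
                                     cameraMatrix, dist)
    error = cv.norm(imgpoints[i], imgpoints2, cv.NORM_L2) / len(imgpoints2)
    mean_error += error
print("total error:", mean_error / len(objpoints))
```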
@@NicolaiAI Got it. Now, let's say that I want to measure distances using the live feed from a camera, how useful this calibration would be?
Hi,
Thanks for these tutorials.
May I know the length of the squares in the checkerboard you used? Or if you have the printable file, can you please share it?
Hi thanks for watching. The distance between the squares is 15mm 🙂
ret, cameraMatrix, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, frameSize, None, None)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\calib3d\src\calibration.cpp:3694: error: (-215:Assertion failed) nimages > 0 in function 'cv::calibrateCameraRO'
I'm getting this error; what should I do?
The path to the images is not correct
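A quick way to confirm that diagnosis before calling calibrateCamera; the folder, extension, and board size below are examples to adjust to your own setup:

```python
import glob
import cv2 as cv

images = glob.glob('images/*.png')        # adjust folder and extension
print(len(images), "files found")         # 0 files here explains 'nimages > 0'
for fname in images:
    img = cv.imread(fname)
    assert img is not None, f"could not read {fname}"
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    ret, corners = cv.findChessboardCorners(gray, (24, 17), None)
    print(fname, "corners found" if ret else "NO corners - check chessboardSize")
```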
IDK why, but I found this video so confusing. It could've been delineated in a step-wise manner. Of course, this video is for folks having background knowledge of computer vision, but for a beginner like me it is not easy to follow along.
I am working on a project where I have to calculate the height of a person through a camera. Can you give any suggestions or advice for that? Thanks in advance
Hi, great tutorial. One quick question: I notice that a chessboard paper needs to be in an image for it to be distorted. How can we apply this without attaching the chessboard image to the surface? Do we just store the imgpoints and objpoints we obtained and use them directly? Also, are there any specific tips on how the chessboard should be placed (angles) on the surface?
Not sure I get ur question but u don't need the chessboard to have a distorted image. We just use the chessboard to calibrate the camera and get the distortion parameters. Then we can remove the chessboard and undistort the images with the distortion parameters
@@NicolaiAI Excellent that was what I wanted to understand. So just get objpoint and imgpoint(which are the distortion parameters). Then we perform distortion, correct?
@@nischalreddychandra1506 No, the object points are the chessboard corners in world coordinates and the image points are the chessboard corners in the frame, i.e. in image (pixel) coordinates. We use that relation to find the distortion parameters for the camera, and then we undistort the images with those parameters
@@NicolaiAI But to undistort we just need the parameters. The board wouldn't be required anymore correct?
That's correct
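A small sketch of that workflow with example file names: calibrate once, save the parameters, then undistort any later image without the board:

```python
import cv2 as cv
import numpy as np

# After calibrating once, persist the parameters...
np.savez('calibration.npz', cameraMatrix=cameraMatrix, dist=dist)

# ...then, in any other program, undistort new images without a chessboard
data = np.load('calibration.npz')
img = cv.imread('new_image.png')          # example file name
h, w = img.shape[:2]
newMtx, roi = cv.getOptimalNewCameraMatrix(data['cameraMatrix'], data['dist'],
                                           (w, h), 1, (w, h))
undistorted = cv.undistort(img, data['cameraMatrix'], data['dist'], None, newMtx)
```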
Hello, nice information about the calibration bro
Thanks a lot for watching bro!
@@NicolaiAI Can I ask a question? I tried to take a frame of my chessboard and calibrate, and I got an error like this:
~\AppData\Local\Temp/ipykernel_15672/542108933.py in
ret, cameraMatrix, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, frameSize, None, None)
error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\calib3d\src\calibration.cpp:3694: error: (-215:Assertion failed) nimages > 0 in function 'cv::calibrateCameraRO'
How do I solve it?
@@JustaCat755 U did not load in the images correctly, try specifying the whole path
Amazing Tutorial
Thank you very much!
Can you please explain what frameSize defines? Does it denote your webcam's pixels, the pixel size of the image you took, or something else?
Yes, it is the width and height of the image in pixels
@@NicolaiAI hmm okay thnkq
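In other words, frameSize is the image resolution in pixels; the only gotcha is the ordering, since NumPy and OpenCV disagree on it:

```python
# img.shape is (height, width, channels), but calibrateCamera expects (width, height)
h, w = img.shape[:2]
frameSize = (w, h)
```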
Hello bro, ur tutorial is amazing! But I still have some problems with my project.
I do the camera calibration for deep learning.
The input image size of my model is different from the one I directly get from the camera.
Which image size do u suggest for calibrating the camera?
Thank you!
Thank you very much. If ur deep learning model takes a specific input and u can't change that, then u will have to reshape ur image from the camera. The image size for camera calibration does not really matter that much
Hi, I got a question: I tried to calibrate and then undistort photos from my GoPro. I'm using input photos of size 3840 × 2160, but after undistortion I'm getting images of size only 1675 × 1323. Can you please tell me why? When I look at your pictures they look much less cropped than mine. Thanks
Hello, I have a question: I used exactly the same code and exactly the same photos that you took, however my camera matrix is different from what you got. The focal center in your matrix is (720,548) and what I got is (740,589). Certainly you got a good calibration, because I checked that center with Paint hehe. My question is: why does this happen?
The values can vary a bit. It doesn't seem like that big of a difference. If u run the whole script again u might get other values
I can't seem to understand what these two lines of code are doing. Can you please explain a little bit?
objp=np.zeros((cheesboard_size[0] * cheesboard_size[1], 3), np.float32)
objp[:,:2]=np.mgrid[0:cheesboard_size[0], 0:cheesboard_size[1]].T.reshape(-1,2)
It is just creating an array for the object points, which are ur 3D points that u reproject and optimize against ur img points
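Spelled out with comments, assuming an example board size (24 x 17 inner corners):

```python
import numpy as np

chessboard_size = (24, 17)   # inner corners per row and column (example value)
# One 3D point per corner; Z stays 0 because the board is flat
objp = np.zeros((chessboard_size[0] * chessboard_size[1], 3), np.float32)
# Fill X and Y with the grid indices (0,0), (1,0), (2,0), ... so neighbouring
# corners sit 1 unit apart; multiply by the real square size (e.g. 15 mm)
# if metric units are needed
objp[:, :2] = np.mgrid[0:chessboard_size[0], 0:chessboard_size[1]].T.reshape(-1, 2)
```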
@@NicolaiAI Thank you
@@NicolaiAI One more request, can you please provide the pics that you used in the tutorial?
Hello mate! I'm involved in a university project and I'm stuck at the calibration part. How can I get the XYZ points of the 3D real world once I have all the parameters (camera matrix, translation and rotation vectors)? Thank you
With only a monocular camera u don't have any information about the depth in the image, so u can't find XYZ. With the extrinsic parameters u can only describe the camera's pose (translation and rotation) with respect to the world
@@NicolaiAI OK. I'm trying to make an autonomous car and I don't know how to advance on this. I've got the extrinsic parameters but I don't know what to do with them. This is killing me... thank you for everything! Your work is awesome
Thank you very much! U should definitely check out the videos with stereo vision and the projects i have done. U will need 2 cameras to get the depth to objects
Hi, thanks for everything. I am working on SfM (structure from motion) with 2 images (left and right). I want to know how I can calculate the intrinsic parameters of the K matrix, knowing that I obtained the correspondences between the 2 images with SIFT. This problem has been beyond me for a long time
Hi, thank you very much! With the K matrix u mean the camera matrix? If u want to get the intrinsic parameters u will need to have corresponding points both in the image plane and in world coordinates, as it seems like u have. I have a formula here on my channel, in one of the first videos about the pinhole model, on how u can relate those points with the intrinsic parameters as we would do in camera calibration
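The relation being referred to is the standard pinhole projection:

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = K \, [\, R \mid t \,]
    \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
```

where (u, v) are pixel coordinates, (X, Y, Z) are world coordinates, and calibration fits K (plus the distortion terms) from many such point correspondences.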
@@NicolaiAI Can I have the link to the video please?
I'm not at the computer right now, so i can't really find it for u if it's urgent
@@NicolaiAI No problem, after when you have time. I really like your availability to answer quickly, thank you very much and courage to you.
@@kabahabib5838 have you been able to solve this in your case? If so, can you share it with me via my email?
Pin cushion distortion. Literally a cushion that you can stick pins into while doing needle work to keep them to hand. Traditionally shaped like a cuboid with concave edges.
Hello Nicolai, I have a question: 1) Why do we use only a black and white board? Can't we use any other color?
U can use whatever as long as it detects the corners of the board. That's the most important thing. But not sure why u would use a coloured board tho
Hey, given a set of images from different angles, how can I create a 3D model out of them?
I'd definitely recommend checking out my videos with NeRF
Thanks for the video,
Is it possible to undistort an image that does not contain a chessboard but another object of known dimensions?
Hey, I'm trying to find out object size in a plane after camera calibration. Is it possible to calculate the actual object size without an ArUco/reference image?
Hi, when u have calibrated the camera u know the distance between the object points/image points, so u will have kind of a reference in that way. But it will require the object u need to measure to be at the same distance from the camera every time
@@NicolaiAI Yes, my object-to-camera distance is constant, it will not change. Can you share a reference regarding this?
My reference would just be my calibration videos: take the corners, take the distance between them, and then use that as a measure like mm/pixel. U could probably find it directly somewhere on Google, but probably not under camera calibration; that would probably be with a measurement of an object u make urself and then use as a reference. But when u do camera calibration u already know the real-life distance between the squares of the chessboard
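A sketch of that mm/pixel idea, assuming corners2 holds refined corners from findChessboardCorners/cornerSubPix and the 15 mm square size mentioned earlier in the thread:

```python
import numpy as np

square_size_mm = 15.0   # printed size of one chessboard square (assumed)
# Consecutive corners in the detection order are neighbours along a row,
# so their pixel distance corresponds to one square:
px_per_square = np.linalg.norm(corners2[1] - corners2[0])
mm_per_px = square_size_mm / px_per_square
# Any length at the SAME camera distance: length_mm = length_px * mm_per_px
```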
@@NicolaiAI thank you
I did the same procedure for my camera. Now I want to apply the distortion parameters to different images. How can I do that, and do I need to save those distortion parameters to use them in a different program?
I saved the parameters in a file in my stereo calibration video if u want to check that out. U can just follow the steps there :)
@@NicolaiAI Yes I saw that video too, so I just load a new image and load those files to apply the distortion parameters to the new image, right?
@@NicolaiAI got it thank you once again
Glad that i could help
What is the size of the chessboard? Is the size actually important at all?
Size doesn't matter too much, it's more about the quality of the calibration, but I would prob not go lower than 9x6 or smt
@@NicolaiAI I will try with 27x17 chessboard, as you did in your code. I was wondering what size of board is optimal, but you made it clear that it's not that important. Thank you
I find this video very helpful! Keep it up! I see that the focal length in your example here is 1187.2505 pixels. How can I convert that to mm?
I want to calibrate from the images I have (of cats, for example); how do I do it? Because when I try with this code I get errors
In this example here u will have to have a chessboard to calibrate the camera. U can also do it with another object where u know the object points and corresponding image points, but it makes it a bit more complicated, and often u don't have the exact measurements of those objects
@@NicolaiAI In this case, do I have to use other functions instead of cv2.findChessboardCorners(...) and cv2.drawChessboardCorners(...)?
Nope, u will need to do ur own corner detection; if u don't have a chessboard u can't really use findChessboardCorners. Then u will need to find those corners for ur img points, and u will need ur object points from real life with distance measurements of those detected corners. It's harder to do and less accurate to calibrate the camera without a chessboard unless it's done very well, but it's doable
@@NicolaiAI Thank you for your answers. Can I calibrate from my image (which I have) using the chessboard? This is my problem
Obviously not if u don't have a chessboard in the images
Bro, what changes would I have to make to the code if I am using a different size of chessboard pattern?
I have a variable at the start of the code named "chessboardSize"; u can change that to ur chessboard size, i.e. the number of corners u want to find
@@NicolaiAI One more thing: while doing calibration, does our camera have to be still to remove distortion? What happens if we change the position of the camera a little bit; will the new images be undistorted, or would I need to recalibrate for the new position?
@@ZulkaifAhmed1 u can move the camera around as u want to after u have calibrated it. The calibration just finds the distortion parameters, which do not depend on the position of the camera
@@NicolaiAI Thank you bro, your response time is great, thanks for ur help.
Thank u for watching!
Thank you so much! Where is the source code in Python? I couldn't find it here, only the C++ one
After calibration, how do we obtain the XYZ coordinates of an object captured by the camera?
Hi, I am looking for the script on GitHub but cannot find it. Where can I find it?
Thanks!
Thank you so much Bert! Appreciate u dropping by
Hey, thank you for this great tutorial. Can I know whether I can use the same method if I need to calibrate the camera so that I can get the width and height of a label at any focal length, and then decide whether the detected label is in the correct alignment? (Here I decide whether the label is correctly aligned using the rectangular contour parameters of a correctly aligned label.) Please help me with this.
Thank you very much for watching. I don't really think u can use anything from camera calibration to do what u want. But u can get better and more accurate results after doing camera calibration in ur project
Thanks for the reply. Yeah what I want is more accuracy so that I can get the alignments at any focal length. I can use this same method for that right?
Yes that should improve ur accuracy. But it will only work with 1 focal length, don't know where u get different focal lengths from unless u have different cameras. If u just mean different distances to the objects then this will work perfectly
@@NicolaiAI Yeah, actually what I meant was different distances to the label. Cuz here I detect the alignments while the objects are moving on a conveyor belt.
Then camera calibration and this video can help u a lot with the accuracy and distortion
How can I calibrate a camera like the insta360 x3?
U can try with the fisheye module for calibration. But not sure how much it can do
The focal length generated by the code is very high; it is around 770 for my camera and also about 100 for yours. I wonder what its unit is.
It is in pixels in the intrinsic matrix
@@NicolaiAI but when you convert it to mm, the value is impossible (770 pixels is around 200 mm); that's why I am not sure about the unit.
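The unit really is pixels. Converting to mm needs the physical sensor size rather than a DPI-style conversion; the numbers below are assumed example values:

```python
sensor_width_mm = 4.8    # look up your camera's sensor spec (example value)
image_width_px = 1280    # width of the images used for calibration (example)
f_px = 770.0             # fx taken from the intrinsic matrix
f_mm = f_px * sensor_width_mm / image_width_px   # roughly 2.9 mm here
```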
Great tutorial, thanks for this video,
I have a question: does resizing my images to a different resolution affect the calibration? I have images at 4000x1844 px and that's a lot to process
Yes, the cx and cy components of your intrinsic parameters come directly from your image size, so they will change.
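If you do resize, the intrinsics can be rescaled instead of recalibrating; a sketch assuming uniform scaling of both axes:

```python
s = 1000 / 4000                 # e.g. shrinking 4000 px wide images to 1000 px
K_scaled = cameraMatrix.copy()  # cameraMatrix from the original calibration
K_scaled[0, :] *= s             # scales fx and cx
K_scaled[1, :] *= s             # scales fy and cy
# The distortion coefficients are defined in normalized coordinates,
# so uniform scaling leaves them unchanged
```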
Great video thank you!
I would like to ask if such a method will calibrate my camera if I need to take pictures of patients' faces rather than chessboards, or does it only work through corner detection?
Thanks for watching. It is not just corner detection; u need to know the exact dimensions of the object u want to calibrate with, since u need to use that information to go from 3D to 2D. U will have to do the math behind the functions urself if u use other objects; in most cases a chessboard is used since it is simple. A face would not be precise at all and it would be very hard to do
@@NicolaiAI thank u for the very quick reply!
Would it be possible to just calculate the distortion of my camera lens through this method?
In theory it should be possible, but i would not recommend it at all since u will need some references like lines. Without knowing ur project i would say that u should use a chessboard instead to calibrate unless it is impossible
From what I read u need to take pictures of a patient's face, but u can still calibrate ur camera with a chessboard and then use that to undistort the images with faces
@@NicolaiAI My project involves recording patients' faces and predicting the corresponding blood pressure, therefore with the ultrafast camera I'm using I need to choose the lens that gives the lowest distortion. So what I hope I can do is measure the distortion by taking multiple images with each lens.
Hello @ The Coding Lib, thanks for the videos. My application requires determining object lengths and widths in real-world units. So I did camera calibration following your steps, but how do I use these matrices to obtain the length and width of an object in the image?
Did you find out
Nicolai PLEASE HELP, I've been getting this error for so long and I have tried everything. It's the "((-215:Assertion failed) nimages > 0)"; I swear I've tried everything, the file name is 100% correct and the images can be opened. Please help
File "e:\stereoVisionCalibration\stereovision_calibration.py", line 70, in
retL, cameraMatrixL, distL, rvecsL, tvecsL = cv.calibrateCamera(objpoints, imgpointsL, frameSize, None, None)
cv2.error: OpenCV(4.8.1) D:\a\opencv-python\opencv-python\opencv\modules\calib3d\src\calibration.cpp:3752: error: (-215:Assertion failed) nimages > 0 in function 'cv::calibrateCameraRO'
What unit is the mean error in? Is it a percentage error?
Why are the horizontal and vertical numbers on the chess board (25,18) but (24,17) in the code?
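Most likely because findChessboardCorners counts inner corners rather than squares:

```python
# A board with 25 x 18 squares has (25-1) x (18-1) = 24 x 17 inner corners
chessboardSize = (24, 17)
```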
What additions do I have to make to run this code on a Jetson Nano?
U only need OpenCV and python
@@NicolaiAI but I couldn't run the code; after the 3rd pic it freezes and the Nvidia image screen turns black and white
@@karatugba might be processing power and memory. I think there is not enough memory available on a nano to run OpenCV unless u do some tweaks
@@NicolaiAI Is it compatible with CSI cams?
@@karatugba then u would need to use gstreamer to read the images from the camera
Hey, can you please provide the photos used by you?
Also, I had one question.
I determined the camera matrix for a particular set of images and noted down the focal lengths
Then I created the subsets of those images and determined camera matrix for those images.
Now, theoretically, the focal lengths in the camera matrix of the parent set should be similar to the focal lengths in the camera matrices of the subsets.
But when I ran the code, I got a noticeable difference in the focal lengths in the camera matrices of the two sets - the parent set and the subsets.
Can you please provide a reason for this and also a possible solution?
Depends on the quality of your images from that subset. If there are some bad photos in that subset, your results could be bad. Since camera calibration is set up as a minimization problem by fitting points to a model, the more data points you have (i.e. more images) the more accurate your results.
would this work the same with a 360 degree camera ? (Ricoh Theta V)
No, you would essentially be flattening a sphere
Could I use the camera of my phone ?
Yes
Can this code be used with input from a webcam?
Yes, u can take images from your webcam and run them through this script as well
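A minimal capture loop for that, assuming an images/ folder already exists; press 's' to save a frame for calibration and 'q' to quit:

```python
import cv2 as cv

cap = cv.VideoCapture(0)            # default webcam
n = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv.imshow('frame', frame)
    key = cv.waitKey(1)
    if key == ord('s'):             # save the current frame as a calibration image
        cv.imwrite(f'images/img{n}.png', frame)
        n += 1
    elif key == ord('q'):
        break
cap.release()
cv.destroyAllWindows()
```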
@@NicolaiAI Thank you for your response. I'm actually doing a head pose estimation project in python so if I added 3d model points and 2d image points as my objpoints/ imgpoints as my parameters would the code still be applicable?
@@irishrepublican756 yes i actually have a video about pose estimation where i use this calibration script here first and then use the object and image points
@@NicolaiAI Awesome, could you send me a link or the name of the video? Great videos by the way :)
Thank you very much. I think it's in my computer vision playlist and the title is pose estimation
Can we convert a 2D image point to a 3D point?
I have this error in your code:
h, w = img.shape[:2]
AttributeError: 'NoneType' object has no attribute 'shape'
Why is that? Am I missing a module that I need to import?
Is the image loaded in correctly? The version on my github should run fine
I had the same issue, it is due to the wrong file path in the undistortion section (line 63). Depending on your folder structure and where you have the image that you want to undistort, this line should be looking something like this:
img = cv.imread('calibration/Image__2018-10-05__10-31-59.png')
and you should write the path to this image in your code.
Getting only 12-15 fps from my webcam, pls help!!!!
@The Coding Library Thanks for making this video. I'm getting this error cv2.error: OpenCV(4.1.2) /io/opencv/modules/calib3d/src/calibration.cpp:3677: error: (-215:Assertion failed) nimages > 0 in function 'calibrateCameraRO' while running the script, as I'm passing my own image during camera calibration, not of the chessboard. How can I resolve this issue?
It looks like the images are not loaded in correctly, since u get an assertion error that nimages should be greater than 0
How do you solve this problem, abhinav anil?
How did you solve this problem?
I solved it by changing chessboardSize; I made a mistake entering the size. When I entered the right size, the problem was solved.
Checker size doesn't matter?
U will have to specify the size of the chessboard u are using for ur calibration
Watching it x0.75 speed
Even camera calibration coding sessions get the "YouTube face" treatment nowadays
It’s about helping people
Is it possible to calibrate the camera without a checkerboard? The goal is to measure the dimensions of an object such as a window or door in an image taken from an iPhone or Android. This is for an app, so we don't want the user to have to get a checkerboard into the picture.
Very cool!
Thank you for the video, it's very helpful. I'm using it for a deep learning project, but I got stuck with an error and I couldn't understand what caused it. Here is the error:
Traceback (most recent call last):
File "C:\Users\HP\PycharmProjects\classifi_m\Rcnn.py", line 83, in
err = cv2.norm(imgpoints[i], imgPoints2[i], cv2.NORM_L2) / len(imgPoints2)
cv2.error: OpenCV(4.9.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\norm.cpp:1071: error: (-2:Unspecified error) in function 'double __cdecl cv::norm(const class cv::_InputArray &,const class cv::_InputArray &,int,const class cv::_InputArray &)'
> Input type mismatch (expected: '_src1.type() == _src2.type()'), where
> '_src1.type()' is 13 (CV_32FC2)
> must be equal to
> '_src2.type()' is 6 (CV_64FC1)
I would appreciate it if you could help me
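One likely fix, since the message says the two arrays passed to cv2.norm have different types: cast the projected points to the same float32 (N, 1, 2) layout that findChessboardCorners produces. A sketch:

```python
import numpy as np

imgPoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i],
                                  cameraMatrix, dist)
# Match the dtype and shape of the detected corners before comparing
imgPoints2 = np.asarray(imgPoints2, dtype=np.float32).reshape(-1, 1, 2)
err = cv2.norm(imgpoints[i], imgPoints2, cv2.NORM_L2) / len(imgPoints2)
```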