Identify and Measure Objects' Distance Precisely | with Deep Learning and Intel RealSense
- Published on Jun 23, 2021
- Source code and files: pysource.com/2021/06/24/ident...
This tutorial will teach you how to accurately measure the distance of multiple objects using OpenCV, Python, deep learning, and the Intel RealSense D435i depth camera.
➤ Full Video courses:
Object Detection: pysource.com/object-detection...
➤ Follow me on:
Instagram: / pysource7
LinkedIn: / pysource
➤ For business inquiries:
pysource.com/contact
#opencv #intelrealsense #distance - Science & Technology
A really wonderful video, I got a lot from it. Thanks!!
I just love your videos and explanation :)
Great video ,thank you for sharing
I like this program......👍👏
I'm gonna use this for FRC
Great video!
Nice Example
Thank you
One YouTuber acknowledging another fellow YouTuber... Nice work... Keep growing, both!
2 kings 👏🏽
A very useful video that explains efficiently how this works.
Hello Sir,
I'm trying to send the depth video stream over HTTP (my idea is to send the RGB + depth streams to another machine which processes everything), but every depth value is a uint16, so I have to convert it to uint8 before sending; otherwise the values get truncated (only the first 8 bits, 0 to 255, survive, and everything from 256 to 65535 is cut off). Have you ever tried something like this?
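A common workaround for the uint16 problem above is to split each 16-bit depth value into two 8-bit planes before transmission and recombine them on the receiving machine. A minimal NumPy sketch (the function names are my own, not from the tutorial):

```python
import numpy as np

def split_uint16(depth):
    """Split a uint16 depth image into high-byte and low-byte uint8 planes."""
    high = (depth >> 8).astype(np.uint8)   # upper 8 bits
    low = (depth & 0xFF).astype(np.uint8)  # lower 8 bits
    return high, low

def merge_uint16(high, low):
    """Recombine the two uint8 planes into the original uint16 image."""
    return (high.astype(np.uint16) << 8) | low.astype(np.uint16)
```

Each plane can then be sent as an ordinary 8-bit image (e.g. two JPEG/PNG streams, though lossy compression would corrupt the depth values).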
Excellent video, got me up to a basic understanding fast.
Buying the complete courses was an easy decision.
That's great sir. ♥️
I have also measured the distance from an object to the camera 📷 using a simple webcam, just by detecting the face and estimating the distance from its size.
Good workaround. If you have a face, you can actually get an accurate distance estimate by detecting the size of the iris.
@@pysource-com Thank you so much, sir ♥️. I will try that as well.
I must appreciate your efforts first; I have learned a lot from this channel.
Great video! I would like to know how to use the mask extracted from Yolact to measure wear on a metallic surface, can you help me on this path?
Hi @pysource, I have a doubt: is it possible to get all three dimensions of an object in real time, similar to how you got the distance information? I want to know the height, width, and thickness (length) of a detected object in 3D space using an Intel RealSense camera. Can you help me with this? Currently I am using YOLOv3/4/5 for object detection (I know all three), so whatever gives me the W×H×L information is fine.
Thank you very much. May I ask how to accelerate the program with CUDA/cuDNN on Ubuntu? It seems that I cannot run the Mask R-CNN detection on the GPU although my laptop has one.
Hope to see you answer
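To move the Mask R-CNN inference to the GPU, OpenCV's DNN module can be pointed at the CUDA backend, provided OpenCV itself was built with CUDA/cuDNN support (the standard pip wheels are not). A sketch, assuming the tutorial's model files are in the working directory:

```python
import cv2

# Load the network as in the tutorial (these are the tutorial's Mask R-CNN files)
net = cv2.dnn.readNetFromTensorflow(
    "frozen_inference_graph_coco.pb",
    "mask_rcnn_inception_v2_coco_2018_01_28.pbtxt")

# These calls only take effect if OpenCV was compiled with CUDA support
# (check cv2.cuda.getCudaEnabledDeviceCount() > 0); otherwise the DNN
# module silently falls back to the CPU.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
```

If the pip build of OpenCV is installed, the usual route is to uninstall it and compile OpenCV from source with `-DWITH_CUDA=ON -DWITH_CUDNN=ON -DOPENCV_DNN_CUDA=ON`.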
Thanks for the video. I like your channel a lot! 2 questions please: 1) How can I measure the size of an object using a D455, and 2) How would I measure the distance between two objects/points in 3D space? Thanks!
you can try using the pixel distance between the two and scaling that
and then use trig to find the actual distance since you know the depth of both objects
Please run some models like LiDAR-based detection, tracking, segmentation, and compression. Please make a video on this, I am looking forward to it...
Great content! Can you let me know how to increase the speed of the detections/frames? I have CUDA installed on my laptop and for YOLO it works fine, but for this I am facing an issue.
I'm seeking your guidance, please. I just started with LiDAR and point clouds. I want to use them to locate an object on a shelf (for example, a supermarket shelf) and grab it with a robot. What are the steps I need to perform such a task? I need the location of the object from the camera, and then I have to pass this information to the robot... right?
Cool
Nice one. Just curious: have you tried measuring the object size using Mask R-CNN? Will it be able to detect the shape of an object? (For example, in my case I am interested in whether it can detect cardboard boxes, like the ones from couriers.)
I never worked with Mask R-CNN due to time constraints. I have used the RealSense depth image to find object shapes, but I would rather have a reliable method like R-CNN or YOLO that works under various conditions.
If you train Mask R-CNN properly, it will get the shape of the object.
If you know the distance and the shape, you can then also calculate the area and size of the object with good accuracy.
@@pysource-com Can you please elaborate on how to do that? If we know the distance and assume the shape is a rectangle, how can I calculate the size of the rectangle?
@@mohamadn6116 Good question. I explored it as well and still haven't found a method to find the size of the object when the distance to it is known. Maybe someone knows?
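Regarding the size question in this thread: once the distance is known, the pinhole camera model gives the real-world extent as size in pixels × distance ÷ focal length in pixels. A minimal sketch (the function name and example numbers are mine; on a RealSense the focal length comes from the stream intrinsics, e.g. `profile.as_video_stream_profile().get_intrinsics().fx`):

```python
def pixel_extent_to_meters(pixel_extent, depth_m, focal_px):
    """Pinhole model: real size = size in pixels * distance / focal length (px)."""
    return pixel_extent * depth_m / focal_px

# A bounding box 200 px wide, 1.5 m from the camera, focal length ~600 px:
width_m = pixel_extent_to_meters(200, 1.5, 600)  # 0.5 m
```

For a rectangle, applying this to the box width and height (in pixels) gives its approximate real width and height, and their product gives the area, assuming the surface faces the camera roughly head-on.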
How can I use the distance algorithm with my own detection algorithm?
can you tell me what version of opencv you used?
is it possible to achieve the same result by using depth display and not RGB (colour stream) as in this case?
Hi, thanks for the great explanation. I'm having a problem with frozen_inference_graph_coco.pb: somehow it isn't being read by my computer and I can't open it, so when I write mrcnn = MaskRCNN I get an error. What do you recommend I do?
What is the best camera for building an AR Sandbox?
If I use an Intel SR300, can I get the same result?
Or should I use another library?
Great video for distance measure.
Is it possible to implement the same concept and use the Intel RealSense depth camera to check the smoothness/flatness of flat surfaces such as a floor or wall?
I guess no
hey Pysource
Thank you for these videos. I want to implement the same project on a Raspberry Pi, but the RealSense camera is quite expensive. Is there any other way? 🙂
Can I create some RealSense-camera content with Unreal Engine? I've figured out it can be done with Unity, but there is no information about UE4 :)
Can you show how to use CUDA libraries for OpenCV for this project?
How do I get the readings from the built-in IMU in the D455?
How do you measure the accuracy of the distance from the object to the camera?
Will this work on a Jetson Nano? .. any chance of a tutorial on that if it does? Great Channel, keep up the awesome work!
Nope, you will need at least a Jetson Xavier to make this work, plus a lighter segmentation algorithm.
On a Jetson Nano I would go with YOLO + Intel RealSense (instead of Mask R-CNN).
@@pysource-com Shame, the Nano is a great device for giving most of your CV tutorial stuff a try. Thanks for the reply👍
What if the RGB frame and the depth frame have different resolutions?
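When the two streams differ (as asked above), pyrealsense2 can register the depth frame onto the color frame with `rs.align`, so both end up at the color stream's resolution and viewpoint. A sketch assuming a live camera is attached:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Deliberately different resolutions for the two streams
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

# Map every depth pixel onto the color image
align = rs.align(rs.stream.color)

try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_frame = aligned.get_depth_frame()  # now 1280x720, registered to color
    color_frame = aligned.get_color_frame()
finally:
    pipeline.stop()
```

After alignment, the same (x, y) pixel coordinate indexes both images, so the detection box from the color frame can be used on the depth frame directly.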
hello
It's a great project.
Can you please illustrate which cv2 function or other technique I should use to make the center stable and the depth accurate? Or a hint so I can figure it out by myself.
Thank you Sir.
There are different approaches you could use; I'll give you a couple of tips:
- either you take a bigger area instead of just one point at the center: sample more points (e.g. a 10x10 area, so 100 points) and average them;
- or you implement object tracking so that the bounding box stays stable while following the object.
Thank you so much
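The first tip above (sampling an area instead of a single pixel) could look like this; I use the median rather than the mean because RealSense depth frames contain zero-valued invalid pixels that would drag an average down (the function name is mine):

```python
import numpy as np

def depth_at_center(depth_image, cx, cy, half=5):
    """Median depth in a window around (cx, cy), ignoring invalid (zero) pixels.

    depth_image: 2-D uint16 array, e.g. np.asanyarray(depth_frame.get_data()).
    """
    h, w = depth_image.shape
    patch = depth_image[max(0, cy - half):min(h, cy + half),
                        max(0, cx - half):min(w, cx + half)]
    valid = patch[patch > 0]           # drop pixels with no depth reading
    return float(np.median(valid)) if valid.size else 0.0
```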
Why is the depth map the distance from the object to the camera?
sir, thank you for your great video.
and i have a question.
Can I apply the same code you linked to the Intel® RealSense™ Depth Camera SR305?
I haven't personally tested that camera but most likely it should work with the same code
hello,
from realsense_camera import *
for some reason this import doesn't work with my pyrealsense2 package
Can I run this on a Raspberry Pi or a Beaglebone by any chance?
I am not able to get the confirmation email link from your website, and because of that I can't download the files needed to run this. Please resolve this issue.
Hello Sergio, is this code also possible with the LiDAR R2000?
Great video. How would I measure the distance between two objects/points in 3D space?
you can try using the pixel distance between the two and scaling that
and then use trig to find the actual distance since you know the depth of both objects
Thanks @@camdennagg6419. In the end I used the functions .get_depth_frame() and .get_distance() (in x and y) on aligned frames, then used trig.
@@danielbell7483 Eyy, nice job. It's nice when something works out haha.
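The deprojection-plus-trig approach described in this thread can be sketched without the SDK: back-project each pixel to a 3-D camera-space point using the intrinsics (the same math librealsense's rs2_deproject_pixel_to_point performs, ignoring lens distortion), then take the Euclidean distance. Function names and example intrinsics are mine:

```python
import math

def deproject(px, py, depth_m, fx, fy, ppx, ppy):
    """Back-project pixel (px, py) at depth depth_m to a 3-D point in meters."""
    x = (px - ppx) * depth_m / fx
    y = (py - ppy) * depth_m / fy
    return (x, y, depth_m)

# Two detections, both 1 m away, 600 px apart horizontally (fx = fy = 600,
# principal point at the image center of a 640x480 frame):
p1 = deproject(320, 240, 1.0, 600, 600, 320, 240)
p2 = deproject(920, 240, 1.0, 600, 600, 320, 240)
dist = math.dist(p1, p2)  # 1.0 m apart in 3-D space
```

With pyrealsense2, `rs.rs2_deproject_pixel_to_point(intrinsics, [px, py], depth)` does the same job and also handles distortion.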
Can this work on D455?
Hi, how do I get frames from a *.bag file recorded with the RealSense?
Please help :)
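pyrealsense2 can play back a recorded *.bag file through the normal pipeline by pointing the config at the file before starting; "recording.bag" below is a placeholder path:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Read from the recorded file instead of a live camera;
# repeat_playback=False stops at the end of the file
rs.config.enable_device_from_file(config, "recording.bag", repeat_playback=False)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        # ... process the frames as with a live camera ...
except RuntimeError:
    # wait_for_frames raises once playback reaches the end of the file
    pass
finally:
    pipeline.stop()
```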
Great video , can we use the same technique using Pi camera?
Nope, you need a Depth camera for this
If I use a simple camera instead of the RealSense, can I still estimate the distance under the following assumptions: 1) the camera location is fixed; 2) the object I am detecting is known in advance? I would think that in this case the size of the detected object can be translated into a distance.
Yes, a good way would be to use an ArUco marker.
You can check this other video th-cam.com/video/lbgl2u6KrDU/w-d-xo.html
There you will learn how to get the size of the object, and you can adapt it to obtain the distance.
@@pysource-com Thanks ! Such a simple solution.
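For the fixed-camera, known-object case above, the pinhole relation can also be inverted directly: distance = real width × focal length (px) ÷ width in pixels. A minimal sketch (names and numbers are mine; the focal length must be calibrated once, e.g. by photographing the object at a known distance):

```python
def distance_from_known_width(known_width_m, focal_px, pixel_width):
    """Pinhole estimate of camera-to-object distance in meters."""
    return known_width_m * focal_px / pixel_width

# A 0.20 m wide box that appears 120 px wide with a ~600 px focal length:
d = distance_from_known_width(0.20, 600, 120)  # 1.0 m
```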
I want to run this code with my laptop webcam. What should I do? Please tell me.
Where can I find the files, not just the code?
I used both my personal and institutional email accounts and I didn't receive any email, so I can't download the files. Is there a solution?
I'm getting this error when I try to run it:
C:\Users\Acer\AppData\Local\Microsoft\WindowsApps\python3.9.exe C:/test/measure_object_distance.py
Loading Intel Realsense Camera
Traceback (most recent call last):
File "C:\test\measure_object_distance.py", line 7, in <module>
rs = RealsenseCamera()
File "C:\test\realsense_camera.py", line 17, in __init__
self.pipeline.start(config)
RuntimeError: Couldn't resolve requests
can someone help?
Hi, I also got the same error and fixed it by changing the resolution of the camera in the realsense_camera.py
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
hope this helps
Great video, it helped me a lot,
but I have trouble installing pyrealsense2:
error: no matching distribution found for pyrealsense2
I am using Ubuntu.
Please help me fix it.
I recommend using Python 3.8. And it should be on a desktop computer, not an Nvidia Jetson or a Raspberry Pi, as pyrealsense2 is not available via pip install for them.
Dear, can you do this with a LiDAR camera, please?
I might do that with LIDAR in the future
Yes, definitely; I tried it on the Intel RealSense L515.
How could we train Mask R-CNN on custom pictures or a dataset?
you can do that by following this tutorial th-cam.com/video/WuvY0wJDl0k/w-d-xo.html
@@pysource-com Hi, thank you for your great video. How is the .h5-format model used in this project?
Hi, I cannot download the code file.
Can these files be used for Intel realsense L515?
They should work, as the library is the same for all the intel realsense cameras
yes it works
Can this work on the raspberry Pi too
Nope, a Raspberry Pi is too weak to handle object segmentation.
On a Raspberry Pi you could alternatively use MobileNet object detection + Intel RealSense.
@@pysource-com Any chance of releasing a tutorial on how to do this? LMAO, I am stuck :( Thank you
Bro, I'm not able to download the source code.
Great tutorial, I tried running
from realsense_camera import *
rs = RealsenseCamera()
but I get an error
Traceback (most recent call last):
File "C:/Users/owner/PycharmProjects/Yolo/yolo.py", line 4, in <module>
rs = RealsenseCamera()
File "C:\Users\owner\PycharmProjects\Yolo\realsense_camera.py", line 19, in __init__
self.pipeline.start(config)
RuntimeError: Couldn't resolve requests
realsense_camera.py Line 13
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
Replace them as below,
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
@@user-mk5xs6tl8o Wow, thank you so much, it works!
thanks
How did you solve it? I can't see the reply.
@@jonparker8832 Hi, adjust the resolution in the following lines to (640, 480). This solved the problem in my case:
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
Your example is not working; the code below does work. What is the difference? My environment is a Jupyter notebook on Windows 10 with Anaconda.
--------------------------------------
# Setup:
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
# cfg.enable_device_from_file("../object_detection.bag")  # original example code
cfg.enable_record_to_file('test.bag')  # added code: record the session to a .bag file
profile = pipe.start(cfg)

# Skip the first 5 frames to give the auto-exposure time to adjust
for x in range(5):
    pipe.wait_for_frames()

# Store the next frameset for later processing:
frameset = pipe.wait_for_frames()
color_frame = frameset.get_color_frame()
depth_frame = frameset.get_depth_frame()

# Cleanup:
pipe.stop()
print("Frames Captured")
-----------------------------------