Man, I'm about to cry right now.
It took me 3 months to find a solution for my robot, which works with a depth cam (based on just 2 cams on a chip), and I couldn't find one.
Then once I saw your vid I jumped out of my seat. Man, thank you so much!
I got introduced to your channel by a Master's student whom I am supervising. This is the first video I have watched, and I am super impressed. I don't know if it comes to you easily, or if it is the result of a lot of research, but all I can say is please keep up the good work. I am now addicted to your channel :).
Thank you for your kind words!
@@ArticulatedRobotics Anytime 😊
I'm really impressed with the quality of your videos... I can tell you work really hard on the editing. Your RGB and video overview, showing the compression via the frames in the video we were watching, was great. You're very good at general presentation as well... thanks man :) Hope you have a good day today!
I just joined the Patreon. My first time ever. This education is incredible. Thank you for putting it all together!
I am so thankful that I found you! Thank you so much for so many helpful and easy tutorials for everything! I wish I could support you somehow! Keep going with these tutorials. You don't have many followers yet, but that's okay, because you help people who really want to be like you in the future!
Thanks for your kind words :)
Seriously, I am definitely in the deep end of the pool, but I'm managing thanks to your methodical outlining of what you're doing and why, along with the issues you ran into or even made yourself. Thanks a ton.
Your videos are very well explained and structured.
Thanks for the work, looking forward to new videos.
I absolutely agree!!!
Waiting for the rest of the series
Sorry I've been busy - next one is up now!
Very useful information, thanks! I would really appreciate it if you could add a small video using this camera with its streamed IMU data. Can you build RGB-D SLAM or RTAB-Map with it?
Thanks, it is definitely on my to-do list! I didn't want to make it a core part of the series as not everyone will have a depth camera, but it is at the top of the list of "extension projects".
What a wonderful intro: "We're going 3D!" :)
First one here. Thank you Josh for making these videos. They are really helping me complete my internship project.
Wow, that was quick! Thanks so much, it makes me so glad to hear that people find it useful! :D
I'm excited to have found your channel, as I'm planning to develop a firefighting robot designed for navigating smoke-filled environments. This robot would:
Generate a 3D map of the room to navigate through smoke.
Detect cracks and assess structural integrity to predict potential collapses.
Incorporate thermal imaging for identifying hotspots and victims.
Utilize AI for autonomous decision-making and obstacle avoidance.
Feature a water or foam dispenser for firefighting efforts.
I'd greatly appreciate any guidance or collaboration on this project.
Awesome video! Here's an idea, or a bunch of ideas, for upcoming videos... I want to build a smart RV. I would want it to cook, clean, find my stuff and bring that stuff to me. I need to build a set of robotic arms that run along a track hung from the ceiling (it could fold itself up and be out of the way when not needed). I would like to use visual SLAM to map and constantly update said map as the robot moves (obstacle avoidance, where is my cup of coffee, etc.), and obviously it also has to do object recognition.
I have seen that, using machine learning, the robot AI could learn how to do tasks from watching YouTube videos using pose estimation, then use AI to find the most efficient way to perform the task, and adapt to new problems as they arise by running simulations of the task several times before performing it, learning as it goes!
I'm also not sure about the best way to run this... The computation (AI) is going to need a semi-decent gaming laptop at minimum, but the robot arm is only going to require a Raspberry Pi 5 with an SSD. It would essentially just run the stepper motors and sensors (pressure sensors in the grippers, cameras, etc.). Then the main brain needs to communicate with the ROS system in the arm to perform tasks. I would want it all to run locally with a GUI.
I like your videos; they are very informative and well structured... Can you make a tutorial about camera calibration with ROS 2? I would like to see how you would explain that.
Great work, please continue your efforts!
Very useful! Well structured and clear.
Thanks!
Can you please show installation steps for other cameras like the Orbbec Astra 3D, RealSense, ZED, etc.?
Thank you so much for such a great tutorial video. Do you have any plan to create a video on how we can use a depth camera with collaborative robots and do some pick-and-place applications? If not, are there any sources you might introduce me to? I am new to ROS 😊
Please make videos on sensor fusion too, I'm really looking forward to it.
+1
Very nice video. Would you suggest a fixed-focus or auto-focus OAK-D Lite for mobile robot vision navigation?
Wow very informative video ! Keep this good content coming 🙏
Thanks!
Hi, I recently found your channel and love your content. If possible, please make videos on ROS 2 Control. Thank you.
How do you do SLAM from a point cloud?
rtabmap_ros
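If it helps, here is a minimal sketch of running RTAB-Map against an RGB-D camera in ROS 2; the topic and frame names are assumptions for a typical setup, not this robot's exact ones:

```bash
# Sketch: run RTAB-Map against an RGB-D camera in ROS 2.
# All topic/frame names below are assumptions; remap them to your robot's.
ros2 launch rtabmap_launch rtabmap.launch.py \
    rgb_topic:=/camera/image_raw \
    depth_topic:=/camera/depth/image_raw \
    camera_info_topic:=/camera/camera_info \
    frame_id:=camera_link \
    approx_sync:=true
```

RTAB-Map then publishes map and odometry topics you can add in RViz.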
Second one here, thank you for the videos. Very helpful with my work
Thanks Hamish :)
Very nice tutorial. Keep it up man!
Great video, thanks for all the details!
Just a question: is the OAK-D Lite fixed-focus or auto-focus?
It depends on the model. Mine is auto-focus and I have found the autofocus a bit iffy. I really haven't tried to fix it though so it may be a driver setting that can be tweaked. I believe you can also manually control the focus on that model, which I would probably do.
I am waiting for your reply!
Hi,
Thanks for the amazing videos! Have you tried to build an OctoMap from a simulated depth camera in MoveIt 2?
Thanks very much. Please keep up the good work!
Thanks!
Is it possible to do the point cloud step in Gazebo & RViz with Intel's RealSense D435/D415? I need to do it as part of another task.
Just wondering if this could be a complete replacement for the RealSense D435i.
Yeah I had hoped the same but so far I think not, as the depth seems to be a bit worse and it doesn't have an IMU. I think the OAK-D (rather than OAK-D Lite) is a closer competitor.
I still have hope that it will be good for entry-level prototyping and simpler projects.
Great job!
PS: I like how your hair changes throughout the video, lol.
Hi! I'm looking for a stereo camera to use for a heads-up project, and I was hoping you would have some recommendations for one that is compatible with the Raspberry Pi 4B running the newest release of Raspberry Pi OS.
Have you figured out how to use depthai-ros with object detection? I'm using an OAK-D Lite camera.
Hi! Are you considering using RTAB-Map?
Hi, yes I do plan to at some point, but I'm not certain when just yet :)
Okay, thanks! Love your channel.
Thanks for this great tutorial. Unfortunately, in my workspace some things aren't quite working as in yours. For example, I don't get to see the point cloud in RViz, and I'm also not able to select the camera info topic. Do you have an idea what might be the problem?
Hey, I have the same issue. Have you fixed it? Edit: launching the launch file with the Gazebo world parameter caused the error.
Anyone have a link to the blog post referenced?
Hey Nigel - I'll be honest here. My current process is basically: write a script for myself, follow the script for the camera, then turn the script into a blog post. It used to be the other way around (starting with the blog post) which was better, because now unless I get the blog post up within a couple of days of the video I find myself feeling the pressure to get the next video out and the blog post never gets published.
So I do have an 80% finished blog post for this one which I definitely intend to publish *some time* but I'm not making any promises as to when.
If there are particular questions you have, head over to the discussion thread for this video on the forums and I'll try to answer them. (Because the forums only went up last week there's not been much activity there yet).
discourse.articulatedrobotics.xyz/t/discussion-depth-cameras-and-ros-making-a-mobile-robot-pt-10/35
Hey, I'm having this problem with my depth camera. I'm using the Hector quadrotor package, which has an ASUS Xtion PRO depth camera, and I'm trying to use it for 3D mapping in a custom Gazebo world (an apartment). The problem is that when I subscribe to /camera/rgb/image_raw in RViz with a Camera display, it can see only certain objects, but most of the objects, for example walls, postboxes etc., are just gray. The depth /camera/depth/image_raw works just fine and I can see objects, but still only in shades of gray. I tried switching the format in the URDF of the camera from R8G8B8 to B8G8R8 and RB8, but nothing changed... Do you have any idea what could be the problem?
Hey, I actually managed to fix this problem by running this in the terminal: "$ export SVGA_VGPU10=0", right before launching my .launch file that starts Gazebo and RViz.
It seems that the color not being visualised is a bug; I didn't find an explanation for it. You can also append it to your .bashrc so you don't need to type it every time -> $ echo "export SVGA_VGPU10=0" >> ~/.bashrc
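For anyone else hitting the gray-texture problem (commonly reported when running Gazebo inside a VMware VM), the whole workaround might look like this sketch; the package and launch file names are hypothetical placeholders:

```bash
#!/bin/bash
# Workaround for Gazebo textures rendering gray inside a VMware VM:
# fall back to the older virtual GPU interface before starting the simulation.
export SVGA_VGPU10=0

# "my_robot" and "sim.launch" are hypothetical placeholders for your own package.
roslaunch my_robot sim.launch

# Optional: persist the workaround for every new shell.
echo "export SVGA_VGPU10=0" >> ~/.bashrc
```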
I'm glad you got that sorted and I hope others can find this comment if they have the same trouble. I definitely would have been stumped!
Has anyone figured out a way to get the size of an object using the depth camera in Gazebo? I am currently having trouble with it. Any help is appreciated.
I am working on SLAM with an Intel D435i camera, but I have a problem building a map and localising. Can you explain that topic? Thank you.
Hi, does this only work in ROS 2? I have tried it in ROS 1 and I am not getting the depth topics in RViz.
Wowww, this is awesome!!! Just wanna say, I found this video when I was in great need of it for my work. You really helped me. A big thanks to you. I really mean it.
Thanks, I always love hearing that people find my videos helpful!
Thank you so much, your video helps a lot. However, I've run into a problem when trying to do the same thing with the OAK-D Pro PoE: I can only see the rgb_image, but cannot echo anything from the left/right cameras. Do you have any idea about this problem?
Unfortunately not, but I hope you were able to resolve it!
You may have luck with this alternative driver: github.com/Serafadam/depthai_ros_driver
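If you try that driver, a minimal smoke test could look like the sketch below; the ROS distro name and the grep filter are assumptions, so check the repo's README for the exact steps:

```bash
# Install the packaged driver (distro name is an assumption; adjust to yours).
sudo apt install ros-humble-depthai-ros

# Bring the camera up with the driver's default launch file.
ros2 launch depthai_ros_driver camera.launch.py

# In another terminal, check which image topics the camera publishes.
ros2 topic list | grep -i oak
```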
Is this tutorial compatible with other depth cameras, like the Kinect v1? 15:30
'Ey! Amazing video. Do you know how you can build a 3D map using this kind of camera (and not 3D LiDARs)?
Thanks! That is a bit of a complex topic which I hope to cover at some point but not sure when :)
Hi, how are you getting the topics automatically? I am doing everything the same; the only difference is that I am using an .sdf file and not xacro (in ROS Noetic). During the simulation I can see the image of the object shown by the camera rectangle, but no topic is created in the rostopic list. You write "depth" and all the depth topics get created; how?
Why is my RobotModel showing red in RViz?
It might be because you have not added or defined the material in the URDF; see the sketch below.
Will try!
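For reference, a hypothetical URDF fragment showing what a defined material looks like (the names and dimensions are placeholders, not this robot's actual values):

```xml
<!-- Hypothetical URDF fragment: define a material, then reference it from a visual. -->
<material name="white">
  <color rgba="1 1 1 1"/>
</material>

<link name="chassis">
  <visual>
    <geometry>
      <box size="0.3 0.2 0.1"/>
    </geometry>
    <material name="white"/>
  </visual>
</link>
```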
3:58 "farthest", not "furthest"
You remind me of Jon Safran
Hello sir
Does this work for ROS Noetic as well?
I went to the previous tutorial for a regular camera and had no issues; however, I didn't see many of the other topics being published. Nonetheless, I managed to get the image_raw topic functioning in RViz.
The issue now is that when I attempt to change to type="depth" it does not display the topics as expected. I am only able to display an image when I change to filename="libgazebo_ros_depth_camera.so", under the image topic /depth/image_raw, and it shows an image where far objects are white while near ones are black.
For the PointCloud2, however, when I try to use the /points topic, there is an error that says: Transform [sender=unknown_publisher] For frame []: Frame [] does not exist
and in the terminal: [ WARN] [1716868128.356292091, 23.441000000]: Invalid argument passed to canTransform argument source_frame in tf2 frame_ids cannot be empty
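Those empty-frame errors suggest the point cloud is being published without a TF frame, which with the classic Noetic-era plugin is often caused by an unset <frameName> tag. A hypothetical sketch of the plugin block (all names and topics here are placeholder assumptions, not the tutorial's exact code):

```xml
<!-- Sketch for ROS Noetic / Gazebo Classic. The key line is <frameName>:
     if it is left empty, TF lookups fail with "frame_ids cannot be empty". -->
<gazebo reference="camera_link">
  <sensor type="depth" name="camera">
    <update_rate>10</update_rate>
    <camera>
      <horizontal_fov>1.089</horizontal_fov>
      <image>
        <format>R8G8B8</format>
        <width>640</width>
        <height>480</height>
      </image>
      <clip><near>0.05</near><far>8.0</far></clip>
    </camera>
    <plugin name="depth_camera_controller" filename="libgazebo_ros_depth_camera.so">
      <cameraName>camera</cameraName>
      <!-- Must not be empty; use an optical frame that exists in your URDF. -->
      <frameName>camera_link_optical</frameName>
      <imageTopicName>/camera/image_raw</imageTopicName>
      <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
      <pointCloudTopicName>/camera/points</pointCloudTopicName>
      <pointCloudCutoff>0.3</pointCloudCutoff>
    </plugin>
  </sensor>
</gazebo>
```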