Robust Odometry and Fully Autonomous Driving Robot | ATCrawler x FAST-LIO x Navigation2

  • Published on Dec 31, 2024

Comments • 48

  • @fra5715
    @fra5715 7 months ago +2

    Dude, this is amazing. Thank you for the video.

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 7 months ago +1

      Hope it's useful information 😁

    • @fra5715
      @fra5715 7 months ago

      @@stepbystep-robotics6881 it definitely is 👍👍

  • @김영욱-n5u1r
    @김영욱-n5u1r 4 months ago +1

    Nice video!
    As I understand it, AMCL publishes map to odom tf, while FAST-LIO publishes odom to robot's base frame tf. Will this method work well even if the robot platform experiences significant roll or pitch movements, such as in two-wheeled balancing robots?

  • @wongpokhim
    @wongpokhim 4 months ago

    Very simple explanation! Thank you

  • @madeautonomous
    @madeautonomous 6 months ago +2

    Great video. We tried FAST-LIO2 before, but it quickly drifted.

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 6 months ago +1

      I have seen that a couple of times, but after restarting it and placing it on a proper vehicle, it works most of the time.

  • @thomasluk4319
    @thomasluk4319 6 months ago +2

    Can you explain a bit how you extract the laser scan from the FAST-LIO registered point cloud? And what's the relay?

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 6 months ago +1

      The laser scan is extracted by a package called pointcloud_to_laserscan.
      The term “relay” in my package just means repeating the Odometry topic from FAST-LIO but replacing the frame_id with “odom”, and also publishing the odom to base_link TF (a rough sketch of that idea follows below).
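
For reference, a minimal sketch of such a relay node. The /Odometry topic and the odom/base_link frame names are assumptions based on FAST-LIO defaults and common ROS2 conventions, not the exact code from the video:

```python
# odom_relay.py - hypothetical sketch: republish FAST-LIO odometry under the
# conventional "odom" frame and broadcast the odom -> base_link transform.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
from geometry_msgs.msg import TransformStamped
from tf2_ros import TransformBroadcaster


class OdomRelay(Node):
    def __init__(self):
        super().__init__('odom_relay')
        self.pub = self.create_publisher(Odometry, 'odom', 10)
        self.tf_broadcaster = TransformBroadcaster(self)
        # FAST-LIO publishes nav_msgs/Odometry on /Odometry (assumed default).
        self.create_subscription(Odometry, '/Odometry', self.callback, 10)

    def callback(self, msg: Odometry):
        # Rewrite the frame ids so Nav2 sees a standard odom -> base_link chain.
        msg.header.frame_id = 'odom'
        msg.child_frame_id = 'base_link'
        self.pub.publish(msg)

        # Publish the same pose as TF so the rest of the stack can use it.
        t = TransformStamped()
        t.header = msg.header
        t.child_frame_id = 'base_link'
        t.transform.translation.x = msg.pose.pose.position.x
        t.transform.translation.y = msg.pose.pose.position.y
        t.transform.translation.z = msg.pose.pose.position.z
        t.transform.rotation = msg.pose.pose.orientation
        self.tf_broadcaster.sendTransform(t)


def main():
    rclpy.init()
    rclpy.spin(OdomRelay())


if __name__ == '__main__':
    main()
```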

  • @Wolf-dx6jk
    @Wolf-dx6jk a month ago +1

    What software do you use to actually tell your motors how they should rotate? You only point your robot to a point on the map where it should go, and it goes. But what data does the robot use as input to understand which direction it should move? And where is this data stored before being converted into motor rotation?

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 a month ago

      Sorry, maybe I didn't explain robot navigation in much detail.
      The robot I am using is from my previous video here th-cam.com/video/xYo6Bpzbto8/w-d-xo.htmlsi=TwPnH5ht5e6_Qp3H
      Simply speaking, the motors are given a PWM signal, and to generate the PWM I have a ROS package that makes it easy to send the command.
      About the map and goal point, I am using the Navigation2 package, which is the best-known navigation stack in ROS2; you can try searching for it.

    • @Wolf-dx6jk
      @Wolf-dx6jk a month ago

      @ Sorry, but I still didn't get it.
      After you define the point in your room, the robot starts moving toward it. So the motor drivers must be connected somehow to receive the path the robot is going to move along. I'm trying to understand which package you use to convert the path to the goal into the sequence of commands the motors must execute, and how to connect the motors so they consume that sequence as an input signal (or maybe after parsing).
      For example, among the nav2 packages I found the shim_controller package (I guess it's a PWM controller, though). If this controller automatically converts the path into a sequence of commands, then obviously I need to find out how to make my robot get signals from this package. But I'm not sure I'm on the right track.

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 a month ago

      @@Wolf-dx6jk Okay, let me explain a bit more.
      I made a package called md100a_ros2_utils github.com/attraclab/md100a_ros2_utils
      This package listens on the /cmd_vel topic. Once any publisher sends /cmd_vel, the md100a_ros2_utils package converts it into a PWM signal for the motor driver to move the wheels, as I explained in this video th-cam.com/video/xYo6Bpzbto8/w-d-xo.htmlsi=jc-wiGhkRY6EyIlo&t=515
      So you have to understand how we can control this robot via the /cmd_vel topic first, before moving on to the next step (a generic sketch of that idea is below).
      Next, I said I was using the Navigation2 package, which handles Localization (knowing the position on the map) and Navigation (moving the robot by sending a goal point). This package does everything: generating the path and creating /cmd_vel. If you already have a static map and a decent odometry source, then once you run the Navigation2 package you just need to pick a point on the map, and boom, you get /path and /cmd_vel topics from it. The package keeps sending /cmd_vel until the robot reaches the goal on the map.
      I am not sure about your ROS2 knowledge, but Navigation2 is a well-known package in ROS2, so maybe you can try looking into it; I think it could help.
      Good luck ;)
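
A generic sketch of the /cmd_vel-to-motor idea described above. This is not the actual md100a_ros2_utils code; the wheel separation, speed scale, and the final PWM write are placeholder assumptions:

```python
# cmd_vel_to_pwm.py - hypothetical sketch, not the actual md100a_ros2_utils
# code: turn Nav2's /cmd_vel into signed PWM duties for two brushed motors.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

WHEEL_SEPARATION = 0.30  # m, distance between the wheels (assumed)
MAX_WHEEL_SPEED = 1.3    # m/s, full-scale speed (from the comment above)


class CmdVelToPwm(Node):
    def __init__(self):
        super().__init__('cmd_vel_to_pwm')
        self.create_subscription(Twist, 'cmd_vel', self.callback, 10)

    def callback(self, msg: Twist):
        # Differential-drive kinematics: split (v, w) into wheel speeds.
        v_left = msg.linear.x - msg.angular.z * WHEEL_SEPARATION / 2.0
        v_right = msg.linear.x + msg.angular.z * WHEEL_SEPARATION / 2.0
        # Scale wheel speed to a signed duty cycle in [-100, 100] percent.
        duty_left = max(-100.0, min(100.0, 100.0 * v_left / MAX_WHEEL_SPEED))
        duty_right = max(-100.0, min(100.0, 100.0 * v_right / MAX_WHEEL_SPEED))
        # A real driver node would now write these duties to the motor driver
        # (serial, I2C, GPIO, ...); here we just log them.
        self.get_logger().info(f'PWM L={duty_left:.0f}% R={duty_right:.0f}%')


def main():
    rclpy.init()
    rclpy.spin(CmdVelToPwm())


if __name__ == '__main__':
    main()
```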

    • @Wolf-dx6jk
      @Wolf-dx6jk a month ago

      @@stepbystep-robotics6881 Oh, thank you very much. That's a really clear explanation.
      However, I've got another question: is fl_nav2_helpers your own package, which I can't get anywhere?

  • @nanastos18060
    @nanastos18060 7 months ago

    Super sweet technology, thank you for sharing!!

  • @Wolf-dx6jk
    @Wolf-dx6jk a month ago

    Hi again! Did you change anything in the slam_toolbox files to get the map created from your laserscan topic?

    • @Wolf-dx6jk
      @Wolf-dx6jk 29 days ago

      Sorry for being annoying, but I really need your help. At 9:47 I saw you are using a custom config file. Can you please tell me what you changed in the original one to make everything work with the Livox MID-360?

    • @Wolf-dx6jk
      @Wolf-dx6jk 28 days ago

      Okay, I saw my first map, but I guess I misunderstood how to use the static transform publisher.
      I set up the transforms like this: body (the default frame of the laserscan) -> Odometry -> map -> odom.
      But this only lets me see the map right after initialization, not beyond that.
      So I'm here again to ask: how do you transform your laserscan frame to odom?
      It would be best if you could share the TF tree in RViz that works for creating the map.
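
For reference, the conventional chain Nav2 expects (REP 105) is map -> odom -> base_link, with the lidar/laserscan frame attached to base_link by a fixed transform, rather than the chain described in the comment. A minimal sketch of broadcasting such a fixed base_link -> body transform; the frame names and offset are assumptions:

```python
# static_body_tf.py - sketch under assumed frame names: publish the fixed
# base_link -> body transform so the laserscan frame hangs off the robot base.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from tf2_ros import StaticTransformBroadcaster


class StaticBodyTf(Node):
    def __init__(self):
        super().__init__('static_body_tf')
        self.broadcaster = StaticTransformBroadcaster(self)
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = 'base_link'    # robot base frame
        t.child_frame_id = 'body'          # FAST-LIO / laserscan frame
        t.transform.translation.z = 0.20   # lidar 20 cm above the base (assumed)
        t.transform.rotation.w = 1.0       # identity rotation
        self.broadcaster.sendTransform(t)  # static TF is latched; once is enough


def main():
    rclpy.init()
    rclpy.spin(StaticBodyTf())


if __name__ == '__main__':
    main()
```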

  • @PCBWay
    @PCBWay 7 months ago +3

    Well done!

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 7 months ago

      Thank you to PCBWay for the nice 3D printed part. It's really neat, and the delivery was fast!

  • @hrmny_
    @hrmny_ 6 months ago +1

    What's the minimum vertical angle of the lidar?
    As in, if it's mounted flat, can it see the ground? It seems like you have some ground data in your neighborhood scan.

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 6 months ago

      The min. FOV angle is -2 degrees. The lidar is mounted flat, and yes, with this minimum FOV angle it will see the ground, but only farther away from the robot.
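
A quick worked example of that geometry: a flat-mounted lidar with a -2 degree minimum FOV first hits the ground at roughly height / tan(2°). The mount height below is an assumed example value:

```python
# ground_range.py - distance at which a flat-mounted lidar with a -2 degree
# minimum FOV first sees the ground; the mount height is an assumed example.
import math

mount_height = 0.5  # m above the ground (assumed)
min_fov_deg = 2.0   # downward angle from the comment above

distance = mount_height / math.tan(math.radians(min_fov_deg))
print(f'Ground first visible at ~{distance:.1f} m')  # ~14.3 m for 0.5 m height
```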

  • @TNERA
    @TNERA 6 months ago

    Very nice explanation of the tech and how you put it together!

  • @AbdrahamaneKOITE
    @AbdrahamaneKOITE 2 months ago +1

    Hi Rachid, thanks for the video! I see you're using Ubuntu 20.04 with ROS1 Noetic and ROS2 Galactic, but ROS2 Humble is recommended for the LiDAR. How did you manage to run all these versions on the same system?

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 2 months ago +1

      @@AbdrahamaneKOITE Hi, yes, we can install ROS2 Humble from source. I have two ROS2 distros on the same system, Galactic and Humble.

    • @AbdrahamaneKOITE
      @AbdrahamaneKOITE 2 months ago

      @@stepbystep-robotics6881 Great, thank you!

    • @junyang1710
      @junyang1710 a month ago +1

      @@stepbystep-robotics6881 Must we install it from source? Is it possible to just use a Docker image?

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 a month ago

      @@junyang1710 I haven't tried Docker, but it seems possible as well.
      If you can install the package inside Docker and manage to get the ROS topics back to the local environment, that could work too, I guess.
      I don't think the package has a pre-built Docker image, so we would need to make our own.

  • @TinLethax
    @TinLethax 6 months ago +1

    Cool video! I wonder if FAST-LIO can be used with a depth camera (with depth-image-to-point-cloud conversion). I have a special version of the Asus Xtion that is depth-only instead of the typical RGB-D.

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 6 months ago +1

      If the point cloud from the depth camera is fast enough, I think it might work. And you still need an IMU too.

    • @TinLethax
      @TinLethax 6 months ago

      @sencis9367 That's what I already had, and it's depth-only, so visual odometry was not available here. I plan to mount a camera on the front and back of the robot, as I have two of them, and then write a C++ ROS2 node to combine the point clouds into a single cloud message (roughly the idea sketched below).
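
A rough sketch of that merge idea (in Python rather than the C++ node the comment mentions). Topic names are placeholders, and it assumes both clouds are already expressed in a common frame with an identical point layout; real code would TF-transform each cloud first:

```python
# cloud_merge.py - hypothetical sketch: time-sync two depth-camera clouds and
# concatenate them into one PointCloud2 (assumes a shared frame and layout).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from message_filters import Subscriber, ApproximateTimeSynchronizer


class CloudMerge(Node):
    def __init__(self):
        super().__init__('cloud_merge')
        front = Subscriber(self, PointCloud2, 'front/points')  # placeholder topic
        back = Subscriber(self, PointCloud2, 'back/points')    # placeholder topic
        sync = ApproximateTimeSynchronizer([front, back], 10, 0.05)
        sync.registerCallback(self.callback)
        self.pub = self.create_publisher(PointCloud2, 'merged/points', 10)

    def callback(self, a: PointCloud2, b: PointCloud2):
        # Concatenate the raw point buffers as one unorganized cloud.
        merged = PointCloud2()
        merged.header = a.header          # both clouds share this frame (assumed)
        merged.fields = a.fields
        merged.is_bigendian = a.is_bigendian
        merged.point_step = a.point_step
        merged.data = bytes(a.data) + bytes(b.data)
        merged.height = 1
        merged.width = len(merged.data) // merged.point_step
        merged.row_step = len(merged.data)
        merged.is_dense = a.is_dense and b.is_dense
        self.pub.publish(merged)


def main():
    rclpy.init()
    rclpy.spin(CloudMerge())


if __name__ == '__main__':
    main()
```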

    • @TinLethax
      @TinLethax 6 months ago

      @sencis9367 I have tested the Asus Xtion (640x480 depth resolution) with various laser odometry SLAM packages, but not all of them. I ported SSL_SLAM to ROS2, and this seems to give the best result. Still, the problem remains when the robot rotates: the estimated odometry just flies off as soon as the robot starts to rotate. My guess is the narrow horizontal FoV compared to a 3D LiDAR, but something like the Livox Horizon or MID-40 seems to work despite a similarly narrow FoV. I have yet to test it with KISS-ICP and other IMU-coupled SLAM packages.

    • @PointLab
      @PointLab 6 months ago +1

      @@TinLethax In my opinion, KISS-ICP is useless for real-world applications. It is fast and easy to understand, but it has no backend optimization mechanism to handle long-term drift.

  • @F2MFireDraftingDesignsIn-ch4tw
    @F2MFireDraftingDesignsIn-ch4tw 3 months ago

    Hi, great video. Do you have any videos on how to set up LOAM, SLAM, or FAST-LIO using a Livox Avia lidar?

  • @nicholastan170
    @nicholastan170 3 months ago

    Great video! Could you also share the code for your fl_nav2_helper package please?

  • @Flynn3778
    @Flynn3778 7 months ago

    I wonder what the point cloud would look like with the Livox MID-360 for a wire mesh fence. Our baseball field is surrounded by 6-foot-high mesh fencing. I'm in the process of building a mower to mow the field and need a way for the mower to see the perimeter.

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 7 months ago

      I believe the wire mesh could be detected by the MID-360. It can detect some power lines, so in the case of a planar fence it should be no problem.
      The lidar itself is not too expensive, so you might as well give it a try. 🙂

  • @KognitiveLearn
    @KognitiveLearn 2 months ago

    How do we run this on a GPU?

  • @mr2tot
    @mr2tot 7 months ago +2

    Hi, what is the maximum speed that your robot can achieve? And what maximum speed have you tested with this algorithm? Thanks :D

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 6 months ago +2

      Max speed with an 18V battery is 1.3 m/s.
      They're DC brushed motors, so with a higher voltage it could go faster. The AT Motor Driver I am using can take up to 24V, or a 6-cell battery.
      I have tried running it at full speed outdoors, and the odometry from FAST-LIO is still pretty reliable :) They really did good work.

    • @mr2tot
      @mr2tot 6 months ago +1

      @@stepbystep-robotics6881 Thank you for your answer. Besides the battery voltage, does the robot's speed also depend on the ROS processing speed? If I'm not mistaken, robots developed on ROS only have a maximum speed of about 3 m/s?

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 6 months ago

      @@mr2tot I don't think any ROS robot is limited to 3 m/s, because the robot's physical speed doesn't relate to the ROS processing speed. The vehicle's physical speed depends on the ESC or motor driver.
      Personally, I don't use ROS for the whole control level; sometimes we need real-time data at the low level, and then it's better to use an RTOS. ROS is better for middle- to higher-level software, I feel.

  • @sebastianrada4107
    @sebastianrada4107 7 months ago +1

    Great video
    Does it work with 2D lidars?

    • @stepbystep-robotics6881
      @stepbystep-robotics6881 6 months ago

      FAST-LIO only works with a 3D lidar. But for Nav2, just a 2D laserscan is enough.
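
For reference, a sketch of how a pointcloud_to_laserscan launch for this setup could look. The /cloud_registered input is FAST-LIO's registered-cloud topic; the target frame and the height/range limits are assumed example values:

```python
# pc_to_scan.launch.py - sketch: run pointcloud_to_laserscan to feed Nav2 a
# 2D /scan derived from the 3D lidar cloud (parameter values are assumptions).
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='pointcloud_to_laserscan',
            executable='pointcloud_to_laserscan_node',
            name='pointcloud_to_laserscan',
            remappings=[
                ('cloud_in', '/cloud_registered'),  # FAST-LIO output cloud
                ('scan', '/scan'),                  # 2D scan consumed by Nav2
            ],
            parameters=[{
                'target_frame': 'base_link',  # flatten the scan in this frame
                'min_height': 0.1,            # drop ground returns (assumed)
                'max_height': 1.0,            # drop overhanging points (assumed)
                'range_min': 0.3,
                'range_max': 30.0,
            }],
        ),
    ])
```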

  • @FactorSocial4797
    @FactorSocial4797 4 months ago

    Nice one

  • @sutanmuhamadsadamawal715
    @sutanmuhamadsadamawal715 6 months ago

    Nice project. I am working on a similar project for an agriculture robot here in Japan. It would be nice if I could contact you for some advice.

  • @salh2665
    @salh2665 6 months ago

    ❤❤❤❤