When building the project from scratch, what steps did you take? Did you create a workspace with the src folder inside it, then create the algorithm package (e.g. td3) inside the src folder and clone the simulation package from velodyne_simulator next to it? Please state the steps.
@@robotmania8896 there is a problem, I can't reach the velodyne_simulator repo for ROS2. I see it on ROS Index, but the page never opens. Can you send me the package, please?
Thank you for the video. One of my last questions: after the model is trained to a point where you are satisfied with its performance, is there anything else that has to be done to run the tester? When I run the tester I get an error saying that it is failing to load the parameters from the .pth files.
Hi kamren james! Sorry for the late response. Have you run the script using “ros2 launch” command? If you run the script directly from “td3/scripts” directory, python will not be able to find “pth” file unless you change the code.
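As a side note, if the script has to run from an arbitrary working directory, one common workaround is to resolve the model path relative to the script itself. This is a minimal sketch, not the author's code; the "pytorch_models" folder and file name are assumptions taken from the errors quoted elsewhere in these comments:

import os

# Resolve the model directory relative to this script instead of the current working directory
script_dir = os.path.dirname(os.path.abspath(__file__))
model_dir = os.path.join(script_dir, "pytorch_models")
actor_path = os.path.join(model_dir, "td3_velodyne_actor.pth")
print(actor_path)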
@@robotmania8896 Thank you for the response, I used the command: ros2 launch td3 test_simulation.launch.py to launch the testing script. However, I still get the error: [test_velodyne_node.py-3] raise ValueError("Could not load the stored model parameters") [test_velodyne_node.py-3] ValueError: Could not load the stored model parameters
I have downloaded my code from google drive and executed it. It seems to me that it runs fine. At least in my environment. What version of pytorch are you using? I am using pytorch 2.1.2 with python3.8.10 .
Hi, when I use colcon build to compile, a lot of errors occur in one of the folders inside velodyne_simulator. I use ROS2 Iron on Ubuntu 22.04 with Gazebo 11.0. Can you give any advice about this issue? Like, what version do you use?
Hi Shaobo Yang! Thanks for watching my video! I have used ROS2 Foxy for this project. Errors occur probably because “velodyne_simulator” package is not suitable for ROS Iron. Please go to “velodyne_simulator” repository and clone proper version.
Can you open this page? github.com/RobotnikAutomation/velodyne_simulator/tree/ros2-devel You can clone this repository with ROS2 branch using this command $ git clone github.com/RobotnikAutomation/velodyne_simulator.git -b ros2-devel
Thank you for the video. I am having a problem with the test simulation script. It does not run after the model parameters are created. When launching the test simulation launch file, I get an error saying the parameters have failed to load and the parent directory doesn't exist:
[test_velodyne_node.py-3] raise ValueError("Could not load the stored model parameters")
[test_velodyne_node.py-3] ValueError: Could not load the stored model parameters
Hi Ming Tseng! Thanks for watching my video! I have downloaded my code from google drive and executed it. It seems to me that it runs fine. At least in my environment. What version of pytorch are you using? I am using pytorch 2.1.2 with python3.8.10 .
That may have been the issue; it is now fully functioning. Thank you. An additional question I have is: how do I modify the testing node script so that the goal is only spawned at a single location rather than randomly changing? I tried modifying the change_goal function by removing the random points and specifying a single point; however, I received several errors or the robot would not move. @@robotmania8896
@MingTseng-ev3tn What kind of errors did you have? I think you are thinking in the right direction. But note that if the goal is set to a point where obstacles are placed, it will cause an error. So, you should fix not only the goal but the obstacles as well.
@@robotmania8896 Thank you for the response. I seem to have solved the errors and it is performing great. My next question is: where in the train_velodyne_node.py script are the goal points being generated? I wanted to experiment with whether having multiple instances of targets on the map at once will increase learning. I found several places where the goal might be generated, but I am not sure what needs to be changed to spawn more goal points.
@@MingTseng-ev3tn In this script there is only one goal node, and it is generated in the “train_velodyne_node.py” script at lines 297, 298. The goal position is changed every episode. This is done in the “change_goal” function (line 505). So, if you want to create several goal points, you probably have to create an array of goal nodes.
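If you do experiment with multiple simultaneous goals, a rough sketch (the variable names are hypothetical, this is not code from the repository) is to keep an array of goal positions and base the state or reward on the nearest one:

import numpy as np

# Hypothetical array of simultaneous goal positions (x, y); keep them away from obstacles
goals = np.array([[1.5, 2.0], [-2.0, 1.0], [3.0, -1.5]])

def distance_to_nearest_goal(robot_x, robot_y):
    # The state/reward computation could then use the closest of the goals
    d = np.linalg.norm(goals - np.array([robot_x, robot_y]), axis=1)
    return d.min()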
Hi Henok Adisu Mebratu! Thanks for watching my video! For example, you can compare rewards for each parameter set. For better parameters, reward will be bigger.
Hi Mr. Robot Mania, Thank you very much for your hard work! I have a question, and please try to respond. I am working on implementing DQN on a real robot and am having trouble configuring the architecture of my workspace, particularly with the implementation or use of DQN. Could you let me know what the general and basic structure of a typical workspace is?
Hi Imen Mabrouk! Thanks for watching my video! What do you mean by “workspace”? Do you mean workspace of ROS? If so, there is nothing special in the workspace with reinforcement learning implementation. Here is my video with DQN implementation. I hope it will help you. th-cam.com/video/9GLVB6Trn10/w-d-xo.html
@robotmania8896 I apologize for the delay. Yes, I am referring to the ROS 2 workspace. My second question is: how can I define the real environment where the robot will perform the training tasks? I intend to skip the simulation step.
Building a robot from scratch requires some experience in mechanical engineering. So, the fast way is to use production robots such as “TurtleBot”. Also, you will need someone who will check the robot during training.
Thank you for the video. I have a question about TensorBoard. After I start the training and then go to VS Code to launch TensorBoard, it keeps showing up as inactive and there is no event running. What has to be done to fix this?
Hi kamren james! Thanks for watching my video! It is difficult to say, but as a first measure, can you update your visual studio code to the latest version?
I got it working; I wasn't in the correct directory. I had another question: what code would I change if I just wanted to change the goal position in the testing script? I want to run a test to see performance every 100 iterations or so. I'm still kind of new to programming, so I thought I would ask.
@@kamrenjames2862 The goal coordinates are changed in the “change_goal” function. So, if you want to change goal coordinates, you have to modify this function.
@@robotmania8896 I see, what if I wanted to change the start position of the robot in the testing script as well? And if I wanted to change the world that spawns in?
@@kamrenjames2862 If you want to change the starting point of the robot, you should edit “pose” of “td_robot” model in the “td3.world” file. It is line 2175. Also, you have to alter the “td3.world” file if you want to change the world.
Hey, thanks for your video. I have a small question. In the navigation stack we have DWA for local planning, and it also has a cost function. So why do we use RL for the local plan? What are its pros compared with DWA?
Hi nguyenduygiang_official! Thanks for watching my video! In this approach we are not using any pre-generated cost maps or grid maps, so you don’t have to do map generation. Though we need a map for efficient planning of a path for long distances.
Hi chance macarra! Thanks for watching my video! The test script should work right after the model is generated and no additional steps are required. Do you have any errors in the terminal?
How is the robot moving in rviz at 8:15? When I run the training_simulation.launch.py file, the robot is in rviz but it is not moving anywhere. Could you please help me with this?
Thank you very much for your video. However, I used the original parameters in the code and trained for over two days, but it did not converge, and the robot kept spinning in place. Do you have any suggestions?
Hi farhadhamidi7260! Thanks for watching my video! This error may occur in the beginning when you launch the simulation, but it should go away eventually.
@@robotmania8896 Okay, but it is not going away. I also changed the fixed frame to base_link, but it is not completely fixed. Could you please give me your email address so that I can send you error images?
Hi, I would like to ask about this autonomous navigation implementation: is it global path planning combined with deep reinforcement learning for local path planning? Is the global path planning algorithm using TEB?
Hi 陈易圣! Thanks for watching my video! I would say it is only local path planning. I think the technique described in this video is suitable for local path planning. It is not meant to be used for long distances.
Hi! When I try to launch the training simulation with the command 'ros2 launch td3 training_simulation.launh.py', I get an error saying "file 'training_simulation.launh.py' was not found in the share directory of package 'td3' which is at '/home/ashwath/Desktop/DRL_robot_navigation_ros2/install/td3/share/td3'", and in the files it doesn't show a training_simulation file present within the td3 directory. I just extracted the files from the Google Drive provided in the description as they are, so I am sort of lost as to what is actually wrong and how to fix it. Would really appreciate some help!!
Hi Ashwath Shivram! Thanks for watching my video! Can you please extract the zip file into your home directory (/home/"username") and try to build it once again?
Hi James! Thanks for watching my video! Please check if “a_in”(line 560) in “test_velodyne_node.py” contains values other than 0. Also, please check if topic “cmd_vel” contains values other than 0.
@@robotmania8896 Thank you for the response but, I keep getting the error in terminal when running the tester, failed to load stored model parameters, rviz and gazebo continue to load however the model doesn't move.
@@robotmania8896 Yes, it is failing to load the model. The error message that I see in terminal is the following: [test_velodyne_node.py-3] ValueError: Could not load the stored model parameters After which it continues to load Rviz and Gazebo with no errors. However the robot doesn't move. The only other error I see is the following: [test_velodyne_node.py-3] FileNotFoundError: [Errno 2] No such file or directory: './DRL_robot_navigation_ros2/src/td3/scripts/pytorch_models/td3_velodyne_actor.pth'
How long is the training required? I trained for half an hour, but during the test the robot still crashes. Should I change its location and repeat the test?
Hi Ahmed Aljbry! Thanks for watching my video! You should do training until loss and Q values converge. I don’t remember how long it took in my case, but much longer than half an hour.
Thank you for the video robot mania! I had a question for you. When running the training script, after about an hour of training the robot tends to stop training, errors out, and proceeds to drive in circles. The error seen is posted below:
[train_velodyne_node.py-3] Traceback (most recent call last):
[train_velodyne_node.py-3] File "/home/edit/kj_maze/install/td3/lib/td3/train_velodyne_node.py", line 784, in
[train_velodyne_node.py-3] network.save(file_name, directory="./DRL_robot_navigation_ros2/src/td3/scripts/pytorch_models")
[train_velodyne_node.py-3] File "/home/edit/kj_maze/install/td3/lib/td3/train_velodyne_node.py", line 237, in save
[train_velodyne_node.py-3] torch.save(self.actor.state_dict(), "%s/%s_actor.pth" % (directory, filename))
[train_velodyne_node.py-3] File "/home/edit/.local/lib/python3.8/site-packages/torch/serialization.py", line 618, in save
[train_velodyne_node.py-3] with _open_zipfile_writer(f) as opened_zipfile:
[train_velodyne_node.py-3] File "/home/edit/.local/lib/python3.8/site-packages/torch/serialization.py", line 492, in _open_zipfile_writer
[train_velodyne_node.py-3] return container(name_or_buffer)
[train_velodyne_node.py-3] File "/home/edit/.local/lib/python3.8/site-packages/torch/serialization.py", line 463, in __init__
[train_velodyne_node.py-3] super().__init__(torch._C.PyTorchFileWriter(self.name))
[train_velodyne_node.py-3] RuntimeError: Parent directory ./DRL_robot_navigation_ros2/src/td3/scripts/pytorch_models does not exist.
[ERROR] [train_velodyne_node.py-3]: process has died [pid 20095, exit code 1, cmd '/home/edit/kj_maze/install/td3/lib/td3/train_velodyne_node.py --ros-args'].
Do you have an idea of why this happens?
Hi Kamren James! Thanks for watching my video! This error says that directory “./DRL_robot_navigation_ros2/src/td3/scripts/pytorch_models” does not exist. Have you modified package structure? I think that in the zip file I have provided there should be this directory.
Hi Towerboi! Thanks for watching my video! Since in this tutorial the robot is controlled using a diff-drive-controller, you should not need any packages related to ros2-control. Are there any errors in the terminal?
Thank you for your reply. I'm not really sure which part exactly I should show, but here are some of the log messages in my terminal:
[train_velodyne_node.py-3] 2024-05-21 13:15:20.076166: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
[train_velodyne_node.py-3] 2024-05-21 13:15:20.285949: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
[train_velodyne_node.py-3] To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
[train_velodyne_node.py-3] 2024-05-21 13:15:20.954416: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[train_velodyne_node.py-3] /usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and
@@towerboi-zg3it It seems to me that there are no error messages in the log you have pasted, though it does look like your SciPy version is too old for the numpy version you have installed. If the robot is not moving, can you check whether the “cmd_vel” topic has any values using the “ros2 topic echo cmd_vel” command?
@@robotmania8896 Hi, I think the problem is solved! Maybe my computer is too slow, so it takes a few minutes for everything to work properly. At the beginning Gazebo had not been launched, and then when I gave up and stepped away for a moment, everything was working and /cmd_vel had values! Thank you very much!
Hello, thank you for the video. I followed the steps and launched the training file. However, the mobile robot does not move in either Gazebo or rviz to show the training and testing outcome. Do you have any idea about this?
The log is:
[rviz2-5] [INFO] [1689653675.176403766] [rviz2]: Message Filter dropping message: frame 'velodyne' at time 0.200 for reason 'the timestamp on the message is earlier than all the data in the transform cache'
[rviz2-5] [INFO] [1689653675.176581294] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0.210 for reason 'the timestamp on the message is earlier than all the data in the transform cache'
[rviz2-5] [INFO] [1689653675.176650750] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0.200 for reason 'the timestamp on the message is earlier than all the data in the transform cache'
Hi yuxincai5409! Thanks for watching my video! In the log you attached, I don’t see any errors. If the robot doesn’t move, there should be another reason. Is “cmd_vel_nav” topic published and does it have values?
@robot mania, it's a very nice video and thank you for sharing the code, I run into the same issue, and there is no “cmd_vel_nav” topic published, any suggestion?
Hi BELABED Abdelkader! Thanks for watching my video! Are you sure that positions of the boxes are not changed? Positions of the boxes should change each time “reset” function is called (line 519 “random_box” function).
I checked the world file. I found out that I forgot to add the “gazebo_ros_state” plugin to the world file. Add the next part of the code to the “td3.world” file; “cardboard_box” probably will move then.
Hello, thanks for the video. I would like to ask how to fix the start and end points when training. There is also a small question: does this obstacle avoidance rely on a map, or is it map-free obstacle avoidance?
Hi user-wp3gg2so4w! Thanks for watching my video! Are you talking about the start and end points of the robot? In that case, the starting point of the robot is already (0, 0). And you cannot fix the end (goal) point of the robot, since the robot will not learn the right policy if the goal is always in the same place. No, this obstacle avoidance does not rely on a map.
@@robotmania8896 Hello, thank you for your help. I still have a few questions about reinforcement learning I would like to ask you. When the trained model is imported, how do we set the target point for the robot to go to? I see you said this example does not depend on the map, and in the test code the target is written as a random place. If there is no map, how does the robot know where the target point is? And if I want to set the target point, do I set it through rviz, or can I only set it through the code?
@@陈易圣 In this simulation we assume that the robot knows the goal coordinates. And it moves according to the policy that it has learned during training. The action is decided depending on the distance between the robot and the goal (test_velodyne_node.py lines 197~199).
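For illustration, the goal-related part of the state is typically just the distance and heading angle to the goal computed from odometry; a minimal sketch (the names are placeholders, not the exact code at lines 197~199):

import math

def goal_features(robot_x, robot_y, robot_yaw, goal_x, goal_y):
    # Distance and bearing to the goal; the policy input uses these instead of a path
    dx, dy = goal_x - robot_x, goal_y - robot_y
    distance = math.hypot(dx, dy)
    angle = math.atan2(dy, dx) - robot_yaw
    # Wrap the angle to [-pi, pi]
    angle = math.atan2(math.sin(angle), math.cos(angle))
    return distance, angle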
Hi sir, when I change the world, rviz2 and Gazebo do not work. Can I change the world or not, and does changing the Gazebo world require another environment setup?
Hi Antonio! Thanks for watching my video! Yes, you can change the world. But you have to keep the robot model and several objects, positions of which are changed each episode.
When I brought in the velodyne_simulator package, what steps did you take to build the project? Are there any steps? I am trying to build my own project from the beginning. I put the package in the src folder, but when I run colcon build it refuses and requests catkin. Can you clarify the steps?
Hi Ahmed Aljbry! Thanks for watching my video! There are velodyne_simulator packages for both ROS1 and ROS2. You probably have brought in the ROS1 version. So, please use the ROS2 version of the velodyne_simulator package.
Do you mean libraries or packages that were installed using apt? If so, I am not remembering all libraries but regarding this particular tutorial there were no special ROS packages.
@@ahmedaljbry1160 Originally, I used this repository. github.com/ToyotaResearchInstitute/velodyne_simulator But it seems that this repository also has a ROS2 branch. github.com/RobotnikAutomation/velodyne_simulator
Hi dakota tudor! Thanks for watching my video! If you would like to set specific goal in the test simulation, please set x and y to constant values in lines 252, 253 in the “test_velodyne_node.py” script.
@@robotmania8896 iam getting this warning [rviz2-5] Warning: Invalid frame ID "odom" passed to canTransform argument target_frame - frame does not exist [rviz2-5] at line 133 in /tmp/binarydeb/ros-foxy-tf2-0.13.14/src/buffer_core.cpp
Thank you for the video. The src folder contains two folders, "td3" and "velodyne_simulator". What does the "velodyne_simulator" package contain that is essential for the DRL navigation and ROS environment to work? I deleted the package and built the workspace, and it built without errors; "ros2 launch td3 training_simulation.launch.py" still works. Does deleting the "velodyne_simulator" folder, and consequently the package, cause any hidden problems? Also, Gazebo keeps pausing at the 200 millisecond mark and I have to manually unpause it. This could cause the launch to fail, since odom doesn't start publishing without modifying the vanilla code. Did anyone run into this? If so, any help is appreciated.
Hi ali sulyman! Thanks for watching my video! "velodyne_simulator" package contains plugins for gazebo. So if you delete that package, no lidar data will be generated.
@@robotmania8896 Thank you for the reply, but have you checked that? Because I did delete it, built the workspace, and ran "ros2 launch td3 training_simulation.launch.py"; then in a second terminal I echoed the topics /velodyne_points and /front_laser/scan, which I assume are the lidar data, and they seem to be working (there is data). Another question I have is: where are the cameras, sensors (lidars), and tf frames defined so that they are loaded with the robot in the simulation? I have been looking in all the URDF files and couldn't find them.
edit: I found them in the td_robot.sdf file. I thought they should be defined in a URDF file, according to the tutorials I have gone through. Does the SDF get converted to URDF at run time? I am not sure how this works, because the SDF file is not even called in the launch file.
edit2: the sensors are in the td_robot.sdf file, which is loaded in the td3.world file.
@@alisulyman7824 Yes, SDF files are loaded in the world file. In my understanding, URDF files are used for ROS and SDF files are used for Gazebo. I manually converted the URDF to an SDF file. But I am planning to use another method in my future tutorials, since every time I make some changes to the XACRO file, I have to manually generate the URDF and SDF. All sensors are defined in the SDF file, which is in the “models” folder. You have to add a “gazebo_model_path” statement in the XML file so that Gazebo can read the SDF file.
@@robotmania8896 What still confuses me is why td_robot.urdf is launched in robot_state_publisher.launch.py if td_robot.sdf is called in td3.world. To me it seems redundant to use td_robot.sdf if we can just use td_robot.urdf, so why are they both launched in the project? Is it necessary, or is it convenient in some way? Also, could you tell me whether you have checked if the project works normally if we delete the "velodyne_simulator" package? It seems to work fine without it, but I am not sure if there are some hidden issues. Thank you again for the response.
I had issues with reading mesh from URDF file, so I have used SDF to visualize a model in gazebo and URDF for state publisher. But it seems there is a better way, so I am planning to quit using this method.
Hello, thanks for the video and the code sharing, I have the similar issue of robot not moving, and there is no “cmd_vel_nav” topic published, only following three topic published /default/td_robot/base_link/camera1/image /default/td_robot/base_link/camera2/image /introspection/mljibz/items_update Any suggestion?
Hi Dong Wang! Thanks for watching my video! Yes, there is no “cmd_vel_nav” topic, the “cmd_vel” topic is the correct one. There should be around 23 topics in total. In this case it seems that this is not an issue with training process, but with ROS environment. Were there any errors while you were launching the simulation?
@@dongwang2226 I have a question now. My /cmd_vel topic doesn't seem to have any output. When I use the 'rostopic info' command, it shows that there are no publishers. What could be the reason for this?
Hi, I'm trying to run the simulation with the turtlebot3_burger model, but when the robot collides with an object it doesn't reset and just does flips. Do you know how to fix this?
Hi shoottz! Thanks for watching my video! Collision calculation is done in the “observe_collision” function. Calculation is done based on data obtained from velodyne lidar. Please check whether your robot publishes PointCloud data correctly.
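For reference, such a collision check usually just thresholds the minimum lidar range; a minimal sketch (the threshold and names are assumptions, not the exact observe_collision code):

COLLISION_DIST = 0.35  # assumed threshold in meters

def observe_collision(laser_ranges):
    # laser_ranges: iterable of distances derived from the velodyne point cloud
    min_laser = min(laser_ranges)
    if min_laser < COLLISION_DIST:
        return True, min_laser   # collision detected, episode should reset
    return False, min_laser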
@@robotmania8896 Thank you so much, I got it working now! I want to know how you made a dataset in TensorFlow/TensorBoard? I want to make a new dataset for my environment. I'm new to all of this.
Hello, your video was really informative. I watched the video and followed it, but rviz didn't run. I only get these errors:
[ERROR] [gzclient -2]: process has died [pid 27482, exit code -15, cmd 'gzclient '].
[ERROR] [gzclient -2]: process[gzclient -2] failed to terminate '5' seconds after receiving 'SIGINT', escalating to 'SIGTERM'
[gzclient -2] ** (gzclient:27484): WARNING **: 23:08:13.716: AT-SPI: Could not obtain desktop path or name
I succeeded in colcon build and completed the setup bash, but nothing changed.
Hi 길요한! Thanks for watching my video! Were there any errors prior to those you have pasted? Also, on which machine and ROS version are you executing the program?
@@길요한-y5y It is possible to use this algorithm if lidar publishes a PointCloud2 type message. But still, you have to do some modifications in the code, since lidar specification is different.
Hi aarontilahun4248! Thanks for watching my video! Yes, I think you should be able to execute this simulation with the Humble without any code modifications.
@@robotmania8896 I've tried your code using ROS2 Foxy, but when I start the training simulation the robot doesn't move in rviz or Gazebo. Can you help me please?
here is the last log: [rviz2-5] [INFO] [1692221912.671674966] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0.210 for reason 'Unknown'
Hello, thanks for the video. I had ROS1, so I used the code from GitHub and followed the instructions there. I ran the training but the model didn't learn. I ran it for 1700 episodes; the average Q value flat-lined at -120. I noticed that the robot was turning only in one direction. Before this I used this reference to write my own setup, and there I also noticed the same thing, the robot turning in only one direction and going in circles, but the reward function there was different. So I thought of testing this one first, and it is also not learning. I made no changes to the code. What might be happening?
Hi ramanjeetsingh8684! Thanks for watching my video! I can only guess, but since learning at the first steps is largely stochastic, maybe something there went wrong and the agent started to learn a wrong policy. Can you try the simulation several more times and see whether the results will be the same?
Hello @@robotmania8896, I am also facing the same issue. My robot is moving in circles and not learning any policy. Its reward after 3000 episodes is -160. Please tell me the way to solve it.
@@vamsivamsi2903 Hi vamsi vamsi! Thanks for watching my video! I think the robot has learned a wrong policy. Unfortunately, I don’t have enough time to look into this problem more closely. Can you do the simulation several times and see whether the robot learns properly?
Hello, great video, thanks. I have a question, please reply. I tried the Ackermann steering controller; it's working, but the movement is not controllable. It moves by itself, I can't control it, and its movement is also not smooth. Can you please help me understand what kind of issue this is and how I can get out of it?
Hi Samra Urooj! What do you mean by ‘moves by itself’? I have several videos in which I operate a robot with Ackerman steering geometry. Maybe the code from this video will help you. th-cam.com/video/BFLfVW9f60E/w-d-xo.html
@@SamraUrooj-d4u Do you mean that the robot moves by itself during training? If so, that’s absolutely normal. Or you mean that you are trying to control the robot using joypad but the robot moves by itself?
Hello, it's great work, thanks for sharing. I have a question: I trained DDQN for this robot for 24 hours, 2500 episodes, using a CPU, but sometimes the robot swings right and left in front of the goal point or obstacles and does not move forward. Any tips to improve training? These are the hyperparameters used in the training:
Lr = 0.001
Discount_factor = 0.99
tau = 1e-3
update target every = 4 steps
Batch_size = 64
Hi Abdelkader Belabed! Thanks for watching my video! I think that the hyperparameters are fine. I think this problem can be negated to some extent by giving a proper reward. You may make the reward bigger when the robot reaches the goal.
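As an illustration of that idea, a reward with a dominant goal bonus is often shaped roughly like this (the constants are assumptions for the sketch, not the values used in this project):

def compute_reward(reached_goal, collided, action_linear, min_laser):
    # Large terminal rewards dominate: make the goal bonus clearly bigger than anything else
    if reached_goal:
        return 100.0
    if collided:
        return -100.0
    # Small dense term: encourage moving forward and keeping distance from obstacles
    obstacle_penalty = (1.0 - min_laser) / 2.0 if min_laser < 1.0 else 0.0
    return action_linear / 2.0 - obstacle_penalty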
Hello, awesome video. I am trying to develop my own world and attach it to the program, but when I replace the td3 world and run the training, the robot does not appear and the program gets stuck. Do you have any recommendations on how to do this and train in my world? Thanks for everything.
Hi megasonec! Thanks for watching my video! In this case, replace step by step small components in td3 world with your new components and see at which point the problem will occur.
Hi chance macarra! Thanks for watching my video! To change robot model, please modify “rs_robot.sdf” file which is inside the “/src/td3/models/td_robot/” directory.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from squaternion import Quaternion
Do I have to download the mentioned libraries?
Hi Ahmed Aljbry! Thanks for watching my video! Yes, you have to install those libraries. I have not tried to run this simulation using ROS Iron, but I think it will run with small modifications or even may run as it is.
@@robotmania8896 Thank you for your answer. If I want to modify the current model that contains many obstacles, what should I do? There are many model files, such as the models inside td3 and the models inside velodyne_description. Should I modify both?
Thank you for the video… I've installed everything and it runs; however, I don't see the Launch TensorBoard Session option in VS Code in train_velodyne_node… please help, and thank you again.
Hi chance macarra! Thanks for watching my video! If you have installed the tensorboard correctly, “Launch TensorBoard Session” message should appear just above the “from torch.utils.tensorboard import SummaryWriter” line. Please check your installation.
@@robotmania8896 I also wanted to ask where the training data gets stored. And is there additional configuration needed to view the TensorBoard data? It currently tells me it is inactive. And lastly, after the model is trained, how would you add this to a physical robot of a similar type?
@@bestofchance The training data is stored in “runs” and “results” folders under “scripts” directory. After the robot is trained you can use trained model just as I did in “test_velodyne_node.py” script.
@@robotmania8896 ….one last question about TensorBoard: is there anything additional that needs to be written? And what would you do if you wanted to add a different map to this project? Thank you again for your help.
@@bestofchance No, you don’t have to do any additional settings. Just install tensorboard using pip. If you want to do a simulation using a different map, you have to alter the “td3.world” file inside the “worlds” directory.
It's a really good tutorial. Can you please tell me which folder the main algorithm is in? And how can I reduce my training time? Thanks in advance.
Hi Adam Crux! Thanks for watching my video! The network is described by 3 classes: “Actor”, “Critic” and “td3”. You can find them all in the “train_velodyne_node.py” script. Reducing training time is difficult. You have to come up with better learning method to do that.
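For orientation, the actor part of TD3 for this task is just a small fully connected network that maps the state to the two velocity commands; a minimal sketch (the layer sizes are assumptions, not necessarily those in train_velodyne_node.py):

import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 800), nn.ReLU(),
            nn.Linear(800, 600), nn.ReLU(),
            nn.Linear(600, action_dim), nn.Tanh(),  # actions bounded to [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

# Example: a 24-dimensional state (laser bins + goal distance/angle + last action) and 2 actions
actor = Actor(24, 2)
print(actor(torch.zeros(1, 24)).shape)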
Hi sir, it's a great video. I have two questions! 1. Why does this project use two lidars (Hokuyo and Velodyne)? 2. To change to an Lplidar 2D, is the modification just in the URDF plus the environment subscriber, or in other files too?
Hi Stephane! Thanks for watching my video! 1. 2D lidar is used just for visualization in Gazebo, for algorithm, velodyne is used. 2. 2D lidar is already defined in the “td_robot.sdf” file (lines 324~351), so you just need to modify parameters so that the sensor resembles Lplidar. And you have to subscribe to the “/front_laser/scan” topic.
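A minimal rclpy subscriber for that topic could look like this (the node and callback names are placeholders):

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class FrontLaserListener(Node):
    def __init__(self):
        super().__init__('front_laser_listener')
        # Subscribe to the 2D lidar topic published by the robot model
        self.subscription = self.create_subscription(
            LaserScan, '/front_laser/scan', self.scan_callback, 10)

    def scan_callback(self, msg):
        # msg.ranges holds the 2D scan; adapt the processing to your lidar's resolution
        self.get_logger().info(f'min range: {min(msg.ranges):.2f} m')

def main():
    rclpy.init()
    rclpy.spin(FrontLaserListener())
    rclpy.shutdown()

if __name__ == '__main__':
    main()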
Hi Antonio! Thanks for watching my video! Yes, you can implement other methods for this simulation. I also made several videos with PPO and DDQN. I hope it will help you. th-cam.com/video/gREIOD-czf8/w-d-xo.html th-cam.com/video/9GLVB6Trn10/w-d-xo.html
Hello Robot Mania, I have a problem when trying to run the training simulation with 'ros2 launch td3 training_simulation.launch.py':
Package 'td3' not found: "package 'td3' not found, searching: ['/opt/ros/foxy']"
Thank you so much for your tutorial. Could you please give more detail about how to run the setup.bash? I have already installed ROS2 and PyTorch and also downloaded your project file.
Hi To Xuan Dung! Thanks for watching my video! Please download the file from the google drive. Then move to the project folder and execute the “colcon build” command. After that, you should be able to execute the commands I explained in the tutorial.
Hello, good job again. I am following you for my project, but there is something I can't understand and don't know how to do. I have a good path planning algorithm for one track with a mission, but I don't know where I am. With local localisation I start at the (0,0) location in the world. This is okay, but how can I know my location while I am moving? IMU and VLP-16 are good but not stabilized. For example, I move 0.5 meters but in Gazebo it moves 0.38 meters, and this is not good for my path planning algorithm. I need really good local localisation. I thought about sensor fusion and a Kalman filter, but it is so hard to understand and apply. I have a ZED2 stereo camera, VLP-16, Here3 GPS, etc. In your video, while the car is moving it draws a path, and it is very stable, just as an example. So if you have any ideas about my project, please teach me. Thank you for the video and the help.
Hi Erim Onay! Thanks for watching my video! Are you talking about real robot or gazebo simulation? In case of gazebo simulation, you can get link positions which are 100% accurate in the gazebo world.
@@robotmania8896 Hi, in Gazebo with your simulation package, yes, I can get nearly 100% accuracy in my project, but when I try to communicate with ROS2 and take the path topic in LIO-SAM, it gives an error about QoS compatibility; I am still looking into it. By the way, after simulating it, I have to move my project to real life. If you have anything about sensor fusion or whatever else improves localisation in real life, it would make a good video. There is no good video on the internet about it. Thank you again for your effort to answer everybody!
@user-sl5ob1ji6s I personally recommend using RTK-GNSS at open environment and switching to methods with point cloud in places where GPS signal is not reaching. But still, it is very challenging to get accurate self-position in any environment.
Hi huseynhaydarov5887! Thanks for watching my video! Yes, you can implement it in omni wheel robot. You have to create an omni wheel robot model. You can use the model I have used in my another tutorial. th-cam.com/video/hcsS-9OIer4/w-d-xo.html
Hi BOUKERMOUCHE Mohammed! Thanks for watching my video! As far as I remember, I have modified most of the files in the “/src/td3” directory. I think the only file that I haven’t modified is “replay_buffer.py”.
@@robotmania8896 the error is:
[ERROR] [gzclient -2]: process[gzclient -2] failed to terminate '5' seconds after receiving 'SIGINT', escalating to 'SIGTERM'
[ERROR] [gzserver-1]: process[gzserver-1] failed to terminate '5' seconds after receiving 'SIGINT', escalating to 'SIGTERM'
[INFO] [gzclient -2]: sending signal 'SIGTERM' to process[gzclient -2]
[INFO] [gzserver-1]: sending signal 'SIGTERM' to process[gzserver-1]
[ERROR] [gzclient -2]: process has died [pid 5208, exit code -15, cmd 'gzclient '].
[ERROR] [gzserver-1]: process has died [pid 5205, exit code -15, cmd 'gzserver /home/korkyt/colcon_ws/install/td3/share/td3/worlds/td3.world -s libgazebo_ros_init.so -s libgazebo_ros_factory.so -s libgazebo_ros_force_system.so '].
I am not sure whether it is a pytorch version issue. Can you please run training until the first model is generated, and then load that model? If you load the model successfully, it is probably a pytorch version compatibility issue.
Hi, when I try to train the model I get this error:
[INFO] [1715365625.676300703] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0.212 for reason 'Unknown'
Hello sir, thanks for the video. I tried to run this simulation (test_simulation.launch.py) on my ROS2 Humble. Gazebo is working fine, but there is no camera visual, and in rviz it shows a robot model error (status error). In the terminal it shows:
[ERROR] [test_velodyne_node.py-3]: process has died [pid 4267, exit code 1, cmd '/home/woops/DRL_robot_navigation_ros2/install/td3/lib/td3/test_velodyne_node.py --ros-args'].
[rviz2-5] [INFO] [1711357500.563694057] [rviz2]: Stereo is NOT SUPPORTED
[rviz2-5] [INFO] [1711357500.564159690] [rviz2]: OpenGl version: 3.3 (GLSL 3.3)
[rviz2-5] [INFO] [1711357500.916516468] [rviz2]: Stereo is NOT SUPPORTED
[rviz2-5] [INFO] [1711357502.687828427] [rviz2]: Stereo is NOT SUPPORTED
[gzclient-2] context mismatch in svga_surface_destroy
[gzclient-2] context mismatch in svga_surface_destroy
[gzserver-1] [INFO] [1711357555.259258199] [camera_controller]: Publishing camera info to [/camera1/camera_info]
[gzserver-1] [INFO] [1711357555.782993244] [gazebo_ros_laser_controller]: Velodyne laser plugin missing , defaults to no clipping
[ERROR] [gzserver-1]: process has died [pid 4263, exit code -11, cmd 'gzserver /home/woops/DRL_robot_navigation_ros2/install/td3/share/td3/worlds/td3.world -slibgazebo_ros_init.so -slibgazebo_ros_factory.so -slibgazebo_ros_force_system.so'].
What can I do to solve this? Can you please help me?
Hello sir, I created a dynamic box in my environment, but when I launch the training of the DRL algorithm the box stops. Do you have an idea or a solution for this problem?
@@robotmania8896 I set kinematics to true and added a velocity in the SDF file of the world (the pasted SDF snippet lost its XML tags here; only the numeric pose and velocity values remained). No error in the terminal, but when I run the training the box stops while the robot moves. I want to train this DRL with dynamic obstacles, any solution?
To move an object in Gazebo, you can do something like this:
import rospy
from gazebo_msgs.msg import ModelState

rospy.init_node('object_mover')
# Publisher that Gazebo listens to for externally set model poses
pub_state = rospy.Publisher('/gazebo/set_model_state', ModelState, queue_size=10)

object_state = ModelState()
object_state.model_name = 'YOUR_OBJECT'  # name of the model in the world file

rate = rospy.Rate(50)  # publish the new pose at 50 Hz
while not rospy.is_shutdown():
    # new_x_pos / new_y_pos: the next position of the object, computed by your own motion pattern
    object_state.pose.position.x = new_x_pos
    object_state.pose.position.y = new_y_pos
    pub_state.publish(object_state)
    rate.sleep()
Thank you man, I learn a lot from your videos. Keep doing the good work, man.
Hi Phung Hoa!
Thank you for watching my videos!
It is my pleasure if my videos have helped you!
Hi, I had tried something similar a few months ago with PPO, and it did work. However, training takes a lot of time since we have to wait for the next state; if we take the next state at a very high frequency, then the current state and the next state will be very similar. Any workaround for this?
Hi anuj patel!
Thanks for watching my video!
In this case, I think the way to reduce learning time is to set a sampling frequency appropriate for the problem you are trying to solve. For example, if your robot is moving at only 0.1 m/s, there is no reason to sample every 1 ms because the robot will move only 0.1 mm each time step.
That's some cool stuff! Glad my repository could be easily transferred to work in ROS2 :)
Hi IreinisI!
Never thought that the author of the initial work would see this video!
Thank you for sharing your work!
[rviz2-5] Warning: Invalid frame ID "odom" passed to canTransform argument target_frame - frame does not exist
[rviz2-5] at line 133 in /tmp/binarydeb/ros-foxy-tf2-0.13.14/src/buffer_core.cpp
I keep getting this error... How do I fix it?
Hi h01042266056!
Thanks for watching my video!
I did a little research, but could not find any valuable information to solve this problem. Often it is just a startup sequencing issue that resolves itself in steady state.
Here are some similar issues I have found
github.com/cra-ros-pkg/robot_localization/issues/660
[ROS2] TF2 can't find existing frame inside a node - ROS Answers: Open Source Q&A Forum
Hello, thank you for this video and for sharing your results. I was trying to train it, but the boxes are not changing location even though there is a block of code in your code to change the boxes' locations. Could you tell me why this may happen, and whether your network was trained with boxes that change location in the environment?
Also, how could we generate a map while running the test file?
Hi Perla Sammour!
Thanks for watching my video!
As far as I remember, the model is trained with boxes that are changing location. It is difficult to answer why in your case boxes are not moving. Is “gazebo/set_model_state” topic published? If you need to generate a map, you can do it using “slam-toolbox”.
@robotmania8896 Thank you so much. Do you have any tips on how to use this toolbox? I tried, but the map topic is publishing empty messages, probably because I should connect it somehow to the robot, and I'm a beginner. If you have any tutorial or tips to follow, I would be grateful.
@@perlasammour892 Here is a tutorial for slam toolbox. navigation.ros.org/tutorials/docs/navigation2_with_slam.html
I hope it will help you. You also may use cartographer. I am using cartographer in this tutorial. th-cam.com/video/hcsS-9OIer4/w-d-xo.html
@@robotmania8896 Thank you so much!
Could you explain the application of deep reinforcement learning in this project, which helps the robot navigate from the initial point to the goal point? Is the task of the deep RL to generate paths and track those paths?
Hi Nhat Net!
Thanks for watching my video!
In this project, the objective is to learn a policy which enables the robot to navigate from the start point to the goal point. The policy will not generate a path. It will generate translational velocity and angular velocity in each time step.
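In practice those two action values are scaled and published as a geometry_msgs/Twist on the cmd_vel topic; a rough sketch (the exact scaling of the raw [-1, 1] actions is an assumption):

from geometry_msgs.msg import Twist

def action_to_twist(action):
    # action[0]: translational velocity in [-1, 1], action[1]: angular velocity in [-1, 1]
    cmd = Twist()
    cmd.linear.x = float((action[0] + 1.0) / 2.0)  # e.g. map to [0, 1] m/s so the robot never reverses
    cmd.angular.z = float(action[1])               # rad/s
    return cmd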
Hello, thanks for the video.
The reset() function randomly initializes the state of the gazebo environment by initializing robot position, orientation, obstacle position and goal position after each episode. The problem is that this function has no effect: it only changes the goal position, while the others (the obstacles and the robot) initialize at the same position. Even though the data is published in the '/set_model_state' topic, I don't know where the problem lies. Can you help me?
Hi user-zy6iw9jp3k!
Thanks for watching my video!
I haven’t looked deeply into this problem yet, but does this problem occur only with my ROS2 code or this problem persists with original ROS1 code too?
@@robotmania8896 Thank you for your quick response.
I haven't tried the ROS1 version yet. But with this ROS2 version I've noticed that the data for the environment reset is indeed published in the 'set_model_state' topic, but I don't think this data reaches the environment to take effect.
the command "ros2 topic echo gazebo/set_model_state" returns :
model_name: r1
pose:
position:
x: -0.7638727065138555
y: 0.17986490957978774
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.5171582388341937
w: 0.8558898036581083
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
model_name: cardboard_box_0
pose:
position:
x: -3.116492159638
y: 3.1576827630575153
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
model_name: cardboard_box_1
pose:
position:
x: 3.9417658862142186
y: 4.232268068907231
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
model_name: cardboard_box_2
pose:
position:
x: 0.7901411450245304
y: -0.9571975623281563
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
model_name: cardboard_box_3
pose:
position:
x: 3.238458176028665
y: 4.19627851705426
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
model_name: r1
pose:
position:
x: -4.45708460012157
y: -1.7893097247111696
z: 0.0
orientation:
x: 0.0
y: 0.0
z: -0.9968902750214566
w: 0.07880215458757854
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
model_name: cardboard_box_0
pose:
position:
x: -0.07471123903843502
y: -3.1570772973051486
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
model_name: cardboard_box_1
pose:
position:
x: 4.217072119106872
y: 3.3806770395690293
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
model_name: cardboard_box_2
pose:
position:
x: 2.6348969729394085
y: 3.918223589929326
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
model_name: cardboard_box_3
pose:
position:
x: 1.757266708766977
y: -3.766887224290775
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
reference_frame: ''
---
@@AhmedREGRAGUI-l8q have you figured it out? I was wondering the same thing.
Amazing project and video pal,
Just one question: in the video at 9:03 you show the TensorBoard graph, and it says it took 20 hrs of training. Is that accurate? If not, how much time did it take to converge?
Hi Serapf-p!
Thanks for watching my video!
It depends on the computational capabilities of your machine. As far as I remember, I used the CPU for training in this project. Using a GPU, the training time will be shorter, but I cannot tell you by how much exactly.
Hello. I wanted to ask: besides installing PyTorch, TensorFlow, ROS2, and Gazebo, what else needs to be installed? When launching the test, it does not work.
Hi chance macarra!
Thanks for watching my video!
How exactly the simulation doesn’t work? Are there any error messages?
Can someone tell me if the goal coordinates should be changed in every iteration (even on collision), or only when the previous goal was reached?
Hi Radosław Tomzik (Di3gos)!
Thanks for watching my video!
Yes, coordinates will be changed each episode.
Thank you for your video. I have a question now. My /cmd_vel topic doesn't seem to have any output. When I use the 'rostopic info' command, it shows that there are no publishers. What could be the reason for this?
Hi 李一铎!
Thanks for watching my video!
Since this is ROS2, can you please try “ros2 topic info” command?
Thanks for your video! I have some questions:
1. Do I have to retrain the model once I want to implement to my own environment, like my office?
2. What should I do if I want to use the trained model? Should I create my own map? then load the model and give the target position?
Hi 蔡欣儒!
Thanks for watching my video!
1. I think the robot will move to some extent with a new environment like your office, but if you retrain the model, results will be better.
2. If you would like to validate the trained model, you can execute the “test_velodyne_node.py” script.
Hello, thanks for making this tutorial. I have a question. I've trained the model for more or less 10 hours, but the robot ended up spinning in place. When it comes to validating, the goal is always at [-1, 1]. Have you ever experienced it?
Hi Amaldi Tri Septyanto!
Thanks for watching my video!
That is strange. Is the robot not learning anything even if you run the training script several times?
@@robotmania8896 Yes, I have run it several times and it has not given any good results. Could I have your email? I can send you screenshots of the training.
@@amalditriseptyanto441 Here is my e-mail:
robotmania8867@yahoo.com
Hello Mr. Robot Mani,
Thank you for the video, how did you install the GPU drivers in order for TensorFlow to use the GPU rather than the CPU? What steps did you take to install it properly so it can be utilized?
Hi Kamren James!
Thanks for watching my video!
In this tutorial I am using PyTorch. You just have to install PyTorch using pip. If a GPU is available, the program will automatically detect and use it.
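For reference, PyTorch scripts usually pick the device with a pattern like this (a generic sketch, not a quote from my scripts):
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # falls back to the CPU when no GPU is visible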
Amazing video, robot mania!!!! Could you tell me how I could change the starting point of the robot? For example, if I wanted the robot to start in the square just right of the triangle, where would I have to go to change that? And can I add any map/world for this, or does it need specific measurements, i.e. 10x10 or 11x11? Thank you again, and great tutorial video.
Hi brad carpenter!
Thanks for watching my video!
If you want to change the starting point of the robot, you should edit “pose” of “td_robot” model in the “td3.world” file. It is line 2175. Also, you have to alter the “td3.world” file if you want to change the world.
You don’t need any specific measurements.
Thanks for your video. This model works well on a 2-wheel robot. I have a 4-wheel mecanum model; is there any difference in its use, such as modifying the code or retraining it from scratch?
Hi Anime Lovely!
Thanks for watching my video!
If you would like to use a mecanum robot, you should modify the code to convert the robot's translational and rotational velocity into wheel rotational velocities, since you cannot use the diff-drive controller.
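As a rough sketch, the standard mecanum inverse kinematics looks like this (vx, vy, wz are the body velocities, r is the wheel radius, l and w are half the wheelbase and half the track; the function name is just for illustration):
def mecanum_wheel_speeds(vx, vy, wz, r, l, w):
    # wheel angular velocities [rad/s]: front-left, front-right, rear-left, rear-right
    fl = (vx - vy - (l + w) * wz) / r
    fr = (vx + vy + (l + w) * wz) / r
    rl = (vx + vy - (l + w) * wz) / r
    rr = (vx - vy + (l + w) * wz) / r
    return fl, fr, rl, rr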
robot mania....I've downloaded the file and built it. I have the model trained and can run the test_simulation as well. Could you please tell me how to have the trained model go to a specific goal every time vs going to a random goal point? Where in the script do I make the adjustments without breaking the code? Thank you again.
@user-fx8nk6ew4s You have to modify the “change_goal” function. Set “self.goal_x” and “self.goal_y” to constant values. But note that there will be an error if the goal point overlaps with an obstacle, so you have to avoid this situation.
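For example, a minimal sketch (the coordinates here are just an example; pick values that do not overlap the obstacles):
def change_goal(self):
    # fixed goal instead of a randomly sampled one
    self.goal_x = 3.0
    self.goal_y = -1.5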
Thank you for the video robot mania. I had a few questions, do you recommend running this on a dedicated linux machine or will a virtual machine suffice to run the scripts you demonstrated?
Hi kamren james!
Thanks for watching my video!
If you have a dedicated linux machine with GPU, it is the best. But this simulation will run with virtual machine too.
Does the type of GPU matter? Would an Intel GPU or an Nvidia GPU be better in this case?
And I also had another question. I installed this repository on a Jetson Nano, and I installed all the dependencies (tensorflow, tensorrt, pytorch, etc.) and I keep getting the following errors:
[rviz2-5] [INFO] [1703747640.905973404] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0,200 for reason 'Unknown'
[rviz2-5] [INFO] [1703747640.906650187] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0,210 for reason 'Unknown'
[rviz2-5] [INFO] [1703747640.906892743] [rviz2]: Message Filter dropping message: frame 'velodyne' at time 0,200 for reason 'Unknown'
Im not too sure why, by chance do you have a possible solution to why this is happening?
@@kamrenjames2862 This error is generated by rviz. So, in my opinion, it should not have negative effects on the simulation. I often see people asking about this kind of error on the forums, but I am not sure how to eliminate it.
@@kamrenjames2862 I think nvidia GPUs are used most often for ML. So, if you have nvidia GPU, use it.
Did you use any type of sensors in the simulation? Like a LiDAR or something else?
Hi Ben Youssef Malek!
Thanks for watching my video!
Yes, I am using velodyne lidar in this simulation.
When building the project from scratch, what steps did you take? Create a workspace with the src folder inside it, then inside the src folder create the algorithm package, e.g. (TD3), and next to it clone the simulation package velodyne_simulator? Please state the steps.
Yes, I did exactly as you have described.
@@robotmania8896 there is a problem , i cant reach the repo for veldyne_simulator for ros2 ? in ros index i see it but the page never opens, can you send me the package please?
Thank you for the video. One of my last questions is: after the model is trained to a point where you are satisfied with its performance, is there anything else that has to be done to run the tester? When I run the tester I get an error saying that it is failing to load the parameters from the .pth files.
Hi kamren james!
Sorry for the late response. Have you run the script using the “ros2 launch” command? If you run the script directly from the “td3/scripts” directory, Python will not be able to find the “.pth” file unless you change the code.
@@robotmania8896 Thank you for the response,
I used the command: ros2 launch td3 test_simulation.launch.py to launch the testing script. However, I still get the error:
[test_velodyne_node.py-3] raise ValueError("Could not load the stored model parameters")
[test_velodyne_node.py-3] ValueError: Could not load the stored model parameters
I have downloaded my code from google drive and executed it. It seems to me that it runs fine. At least in my environment. What version of pytorch are you using? I am using pytorch 2.1.2 with python3.8.10 .
Hi, when i use colcon build to compile, i found a lot of errors occur from one of the folder inside the velodyne_simulator. I use ros2 iron in ubuntu 22.04 with gazebo 11.0. Can you give any advice about this issue? Like what version do you use?
Hi Shaobo Yang!
Thanks for watching my video!
I have used ROS2 Foxy for this project. Errors occur probably because “velodyne_simulator” package is not suitable for ROS Iron. Please go to “velodyne_simulator” repository and clone proper version.
can you give us the packages for ros2 please , i had a problem i cant open the page from ros index ,the page not loaded
Can you open this page?
github.com/RobotnikAutomation/velodyne_simulator/tree/ros2-devel
You can clone this repository with ROS2 branch using this command
$ git clone github.com/RobotnikAutomation/velodyne_simulator.git -b ros2-devel
Thank you for the video. I am having a problem with the test simulation script. It does not run after the model parameters are created. I get the error when launching the test simulation launch file saying. Parameters have failed to load, and the parent directory doesn't exist.
[test_velodyne_node.py-3] raise ValueError("Could not load the stored model parameters")
[test_velodyne_node.py-3] ValueError: Could not load the stored model parameters
Hi Ming Tseng!
Thanks for watching my video!
I have downloaded my code from google drive and executed it. It seems to me that it runs fine. At least in my environment. What version of pytorch are you using? I am using pytorch 2.1.2 with python3.8.10 .
That may have been the issue; it is now fully functioning. Thank you. An additional question I have is: how do I modify the testing node script so that the goal is only spawned at a single location rather than randomly changing? I tried modifying the change_goal function by removing the random points and specifying a single point; however, I received several errors or the robot would not move. @@robotmania8896
@MingTseng-ev3tn What kind of errors did you have? I think you are thinking in the right direction. But note that if the goal is set to a point where obstacles are placed, it will cause an error. So, you should fix not only the goal but the obstacles as well.
@@robotmania8896 Thank you for the response. I seem to have solved the errors and it is performing great. My next question is: where in the train_velodyne_node.py script are the goal points being generated? I wanted to experiment with whether having multiple instances of targets on the map at once will increase learning. I found several places where the goal might be generated, but I am not sure what needs to be changed to spawn more goal points.
@@MingTseng-ev3tn In this script there is only one goal node, and it is generated in the “train_velodyne_node.py” script at lines 297, 298. The goal position is changed every episode. This is done in the “change_goal” function (line 505). So, if you want to create several goal points, you probably have to create an array of goal nodes.
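A minimal sketch of what I mean (illustrative only; it assumes numpy is imported as np, a distance threshold constant exists, and the odometry member names match your copy of the script):
self.goals = [(3.0, 1.0), (-2.0, 2.5), (1.5, -3.0)]   # several goal points at once
reached = any(
    np.linalg.norm([self.odom_x - gx, self.odom_y - gy]) < GOAL_REACHED_DIST
    for gx, gy in self.goals
)
The reward calculation and the episode termination check would then have to use this list instead of the single goal.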
Great job. I have a question, if i setup and adjust some params . How can i compare my results quantitatively?
Hi Henok Adisu Mebratu!
Thanks for watching my video!
For example, you can compare rewards for each parameter set. For better parameters, reward will be bigger.
@@robotmania8896 thanks
Hi Mr. Robot Mania,
Thank you very much for your hard work! I have a question, and please try to respond. I am working on implementing DQN on a real robot and am having trouble configuring the architecture of my workspace, particularly with the implementation or use of DQN. Could you let me know what the general and basic structure of a typical workspace is?
Hi Imen Mabrouk!
Thanks for watching my video!
What do you mean by “workspace”? Do you mean workspace of ROS? If so, there is nothing special in the workspace with reinforcement learning implementation. Here is my video with DQN implementation. I hope it will help you.
th-cam.com/video/9GLVB6Trn10/w-d-xo.html
@robotmania8896 I apologize for the delay. Yes, I am referring to the ROS 2 workspace. My second question is: how can I define the real environment where the robot will perform the training tasks? I intend to skip the simulation step.
@@imenmabrouk6769 What do you mean by “define real environment”? You mean that you want to build a real robot and do training in the real world?
@robotmania8896 yes exactly
Building a robot from scratch requires some experience in mechanical engineering. So, the fast way is to use production robots such as “TurtleBot”. Also, you will need someone who will check the robot during training.
Thank you for the video. I have a question about tensorboard. After i start the training and then going to vscode to launch tensorboard, it keeps showing up as inactive and there is no event running. What has to be done to fix this?
Hi kamren james!
Thanks for watching my video!
It is difficult to say, but as a first measure, can you update your visual studio code to the latest version?
I got it working, I wasn't in the correct directory. I had another question: what code would I change if I just wanted to change the goal position in the testing script? I wanted to perform tests to see performance every 100 iterations or so. I'm still kind of new to programming, so I thought I would ask.
@@kamrenjames2862 The goal coordinates are changed in the “change_goal” function. So, if you want to change goal coordinates, you have to modify this function.
@@robotmania8896 I see, what if I wanted to change the start position of the robot in the testing script as well? And if I wanted to change the world that spawns in?
@@kamrenjames2862 If you want to change the starting point of the robot, you should edit “pose” of “td_robot” model in the “td3.world” file. It is line 2175. Also, you have to alter the “td3.world” file if you want to change the world.
Hey, thanks for your video. I have a small question. In the navigation stack we have DWA for local planning, and it also has a cost function. So why do we use RL for local planning? What are its pros compared with DWA?
Hi nguyenduygiang_official!
Thanks for watching my video!
In this approach we are not using any pre-generated cost maps or grid maps, so you don’t have to do map generation. Though we need a map for efficient planning of a path for long distances.
@@robotmania8896 Well, this means that we don't need a prebuilt map for robot navigation. The RL approach is suitable for environments that have not been mapped beforehand.
Hi Sir!
Can I stop the training and then start it again from the same stopping point because it didn't work for me?
Hi Stephane!
Thanks for watching my video!
Yes, if you feel that something went wrong during the training, you can restart the training.
After the model is trained what is the next task that must be performed in order for the testing script to move. Currently on launch it is not moving
Hi chance macarra!
Thanks for watching my video!
The test script should work right after the model is generated and no additional steps are required. Do you have any errors in the terminal?
@@robotmania8896..I have no errors at all and the test launch doesn’t move at all….any suggestions
Hi can you please tell me from which file the results are generated as from testing python file I am not able to figure it out?
Hi Adam Crux!
Thanks for watching my video!
The model is generated by running the “train_velodyne_node.py” script.
How is the robot moving in RViz at 8:15? When I run the training_simulation.launch.py file, the robot is in RViz but it is not moving anywhere. Could you please help me with this?
Hi Krissh Cool!
Thanks for watching my video!
Can you please see whether the “cmd_vel” topic contains any values?
@@robotmania8896 I'm having the same problem. But I'm still a beginner, so can you give me some details?
Additionally, I found out that there is no cmd-vel in the node list.
@@이동준-l7p It is not the “cmd-vel”. The topic name should be the “cmd_vel”. Please check the topic list once again.
@@robotmania8896 Thank you for your answer. There is a cmd_vel on the topic list. Any guesses on what error it is to not move though?
Good video! But I have a question: does it require a lidar to implement as a prototype using an Arduino/Raspberry Pi?
Hi Maneesh . V!
Thanks for watching my video!
If you would like to implement this method to a real robot, yes, you need at least a 2d lidar.
Thank you very much for your video. However, I used the original parameters in the code and trained for over two days, but it did not converge, and the robot kept spinning in place. Do you have any suggestions?
Hi 李小山!
Thanks for watching my video!
I am not sure why it happens. Can you please try several more times?
@@robotmania8896 Thank you for your reply. After many attempts, we got a convergent result.
@@李小山-k8t I am glad that you got satisfying results!
I encounter the following error in RViz: Fixed Frame [odom] does not exist. What shall I do?
Hi farhadhamidi7260!
Thanks for watching my video!
This error may occur in the beginning when you launch the simulation, but it should go away eventually.
@@robotmania8896 Okay, but it's not going away eventually. I also changed the fixed frame to base_link, but it's not completely fixed. Could you please give me your email address so that I can send you error images?
@@robotmania8896 I try again but it's not fixed, can you give me your email address?
Hi, I would like to ask you about this autonomous navigation implementation. Is it global path planning combined with deep reinforcement learning for local path planning? Is the global path planning algorithm using TEB?
Hi 陈易圣!
Thanks for watching my video!
I would say it is only local path planning. I think the technique described in this video is suitable for local path planning. It is not meant to be used for long distances.
Thank you for the video. Will this project also work on humble?
Hi Tyler!
Thanks for watching my video!
Yes, it should work on Humble.
Hi! when i try to ros2 launch the trining_simulation using td3 using this command 'ros2 launch td3 training_simulation.launh.py' i get an error saying 'file 'training_simulation.launh.py' was not found in the share directory of package 'td3' which is at '/home/ashwath/Desktop/DRL_robot_navigation_ros2/install/td3/share/td3'
and in the files it doesn't show a training_simulation file present within the td3 directory. I just extracted the files from the Google Drive provided in the description as they are, so I am sort of lost as to what is actually wrong and how to fix it. Would really appreciate some help!!
Hi Ashwath Shivram!
Thanks for watching my video!
Can you please extract the zip file into your home directory (/home/"username") and try to build it once again?
Great video, thank you for the video. I had a question I trained the model but when I launch the tester the model doesn't move.
Hi James!
Thanks for watching my video!
Please check if “a_in”(line 560) in “test_velodyne_node.py” contains values other than 0. Also, please check if topic “cmd_vel” contains values other than 0.
@@robotmania8896 Thank you for the response but, I keep getting the error in terminal when running the tester, failed to load stored model parameters, rviz and gazebo continue to load however the model doesn't move.
@@James-s3e4n What is the error exactly? Is it failing to load the trained model?
@@robotmania8896 Yes, it is failing to load the model. The error message that I see in terminal is the following:
[test_velodyne_node.py-3] ValueError: Could not load the stored model parameters
After which it continues to load Rviz and Gazebo with no errors. However the robot doesn't move.
The only other error I see is the following:
[test_velodyne_node.py-3] FileNotFoundError: [Errno 2] No such file or directory: './DRL_robot_navigation_ros2/src/td3/scripts/pytorch_models/td3_velodyne_actor.pth'
How long is training required? I trained for half an hour, but during the test it still crashes. Should I change its location and repeat the test?
Hi Ahmed Aljbry!
Thanks for watching my video!
You should do training until loss and Q values converge. I don’t remember how long it took in my case, but much longer than half an hour.
Just curious. Did you use Nav2 framework in implementing this?
Hi Peera Tienthong!
Thanks for watching my video!
No, I have not used Nav2 framework. The robot is operated only by learned policy.
Hello,
Does the training using cpu cause instability???
Hi Abdelkader Belabed!
Thanks for watching my video!
I don’t think so. At least I have never heard of such thing.
Thank you for the video robot mania! I had a question for you. When running the training script, after about an hour of training the robot tends to stop training and errors out and proceeds to drive in circles. The error seen is posted below:
[train_velodyne_node.py-3] Traceback (most recent call last):
[train_velodyne_node.py-3] File "/home/edit/kj_maze/install/td3/lib/td3/train_velodyne_node.py", line 784, in
[train_velodyne_node.py-3] network.save(file_name, directory="./DRL_robot_navigation_ros2/src/td3/scripts/pytorch_models")
[train_velodyne_node.py-3] File "/home/edit/kj_maze/install/td3/lib/td3/train_velodyne_node.py", line 237, in save
[train_velodyne_node.py-3] torch.save(self.actor.state_dict(), "%s/%s_actor.pth" % (directory, filename))
[train_velodyne_node.py-3] File "/home/edit/.local/lib/python3.8/site-packages/torch/serialization.py", line 618, in save
[train_velodyne_node.py-3] with _open_zipfile_writer(f) as opened_zipfile:
[train_velodyne_node.py-3] File "/home/edit/.local/lib/python3.8/site-packages/torch/serialization.py", line 492, in _open_zipfile_writer
[train_velodyne_node.py-3] return container(name_or_buffer)
[train_velodyne_node.py-3] File "/home/edit/.local/lib/python3.8/site-packages/torch/serialization.py", line 463, in __init__
[train_velodyne_node.py-3] super().__init__(torch._C.PyTorchFileWriter(self.name))
[train_velodyne_node.py-3] RuntimeError: Parent directory ./DRL_robot_navigation_ros2/src/td3/scripts/pytorch_models does not exist.
[ERROR] [train_velodyne_node.py-3]: process has died [pid 20095, exit code 1, cmd '/home/edit/kj_maze/install/td3/lib/td3/train_velodyne_node.py --ros-args'].
Do you have an idea why this happens?
Hi Kamren James!
Thanks for watching my video!
This error says that directory “./DRL_robot_navigation_ros2/src/td3/scripts/pytorch_models” does not exist. Have you modified package structure? I think that in the zip file I have provided there should be this directory.
Are there any prerequisite packages required before running the program? I tried this on Humble and I cannot run the training.
Hi Towerboi!
Thanks for watching my video!
Since in this tutorial the robot is controlled using a diff-drive-controller, you should not need any packages related to ros2-control. Are there any errors in the terminal?
Thank you for your reply, I'm not really sure which part I should exactly tell, but here are some of the log msgs in my terminal
[train_velodyne_node.py-3] 2024-05-21 13:15:20.076166: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
[train_velodyne_node.py-3] 2024-05-21 13:15:20.285949: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
[train_velodyne_node.py-3] To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
[train_velodyne_node.py-3] 2024-05-21 13:15:20.954416: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[train_velodyne_node.py-3] /usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and
@@towerboi-zg3it It seems to me that there are no error messages in the log you have pasted. Even though it seems that your Scipy version is too old for the numpy version you have installed. If the robot is not moving, can you check whether “cmd_vel” topic has any values using “ros2 topic echo cmd_vel” command.
@@robotmania8896 Hi, I think the problem is solved! Maybe my computer is too slow so it take a few minute to make every thing works properly. So at the beginning, the Gazebo has not been launched, and then when I give up and walk to toilet, everything was working and also the /cmd_vel has value! Thank you very much!
You’re welcome!
hello, thank you for the video. I followed the steps and launched the training file. However the mobile robot does not move in both gazebo and rviz to show the training and testing outcome, do you have any idea on this?
The log is:
[rviz2-5] [INFO] [1689653675.176403766] [rviz2]: Message Filter dropping message: frame 'velodyne' at time 0.200 for reason 'the timestamp on the message is earlier than all the data in the transform cache'
[rviz2-5] [INFO] [1689653675.176581294] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0.210 for reason 'the timestamp on the message is earlier than all the data in the transform cache'
[rviz2-5] [INFO] [1689653675.176650750] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0.200 for reason 'the timestamp on the message is earlier than all the data in the transform cache'
Hi yuxincai5409!
Thanks for watching my video!
In the log you attached, I don’t see any errors. If the robot doesn’t move, there should be another reason. Is “cmd_vel_nav” topic published and does it have values?
@robot mania, it's a very nice video and thank you for sharing the code, I run into the same issue, and there is no “cmd_vel_nav” topic published, any suggestion?
Hello sir,
Why do the boxes (cardboard_box_0,1,2,3) not change their positions in this code? Because in ROS1 they change normally.
Hi BELABED Abdelkader!
Thanks for watching my video!
Are you sure that positions of the boxes are not changed? Positions of the boxes should change each time “reset” function is called (line 519 “random_box” function).
@@robotmania8896 Yes, I'm sure they are not changing!!!!
I display the positions in my terminal and they change normally, but the change is not shown in Gazebo.
@@robotmania8896 In the ros2 service list there is no /gazebo/set_model_state service.
I checked the world file. I found out that I forgot to add the “gazebo_ros_state” plugin to the world file. Add the following piece of code to the “td3.world” file; the “cardboard_box” models should then move.
<plugin name="gazebo_ros_state" filename="libgazebo_ros_state.so">
  <ros>
    <namespace>/gazebo</namespace>
  </ros>
  <update_rate>50</update_rate>
</plugin>
@@robotmania8896 ok thank's
Hello, thanks for the video. I would like to ask how to fix the start and end points when training. There is also a small question: does this obstacle avoidance rely on a map, or is it map-free obstacle avoidance?
Hi user-wp3gg2so4w!
Thanks for watching my video!
Are you talking about start and end points of the robot? In that case, starting point of the robot is already (0, 0). And you cannot fix the end (goal) point of the robot since robot will not learn the right policy if the goal always will be in the same place. No, this obstacle avoidance is not relying on the map.
@@robotmania8896 Hello, thank you for your help. I still have a few questions about reinforcement learning I would like to ask you. When the trained model is imported, how do we set the target point for the robot to go to? I saw you said this example does not depend on the map, and in the test code I see that the target place is random. If there is no map, how does the robot know where the target point is? And if I want to set the target point, do I set it through RViz, or can I only set it through the code?
@@陈易圣 In this simulation we assume that the robot knows the goal coordinates. And it moves according to the policy that it has learned during training. The action is decided depending on the distance between the robot and the goal (test_velodyne_node.py lines 197~199).
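In other words, something along these lines (a sketch; the member names in your copy of the script may differ slightly):
distance_to_goal = np.linalg.norm([self.odom_x - self.goal_x, self.odom_y - self.goal_y])
# this distance and the heading angle towards the goal are part of the state fed to the policy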
my man is the real hero
Hi Victor LI!
Thanks for watching my video!
It is my pleasure if this video has helped you!
Hi, sir
When I change the world, RViz2 and Gazebo do not work. Can I change the world or not, and does changing the Gazebo world require another environment setup?
Hi Antonio!
Thanks for watching my video!
Yes, you can change the world. But you have to keep the robot model and several objects, positions of which are changed each episode.
@@robotmania8896 Can you explain a little if the change is in the sdf (file.world) or in another folder
I think you should alter only the “td3.world” file. Leave lines 2173~2177, otherwise the robot will not appear.
<include>
  <uri>model://td_robot</uri>
  <pose>0 0 0.01 0 0 0</pose>
  <static>false</static>
</include>
@@robotmania8896 it's working thankyou very much
When I brought in the velodyne_simulator package, what steps did you take to build the project? Are there any steps? I am trying to build my own project from the beginning. I put the package in the src folder, but when I do (colcon build) it refuses and requests catkin. Can you clarify the steps?
Hi Ahmed Aljbry!
Thanks for watching my video!
There are velodyne_simulator packages for both ROS1 and ROS2. You probably have brought in the ROS1 version. So, please use the ROS2 version of the velodyne_simulator package.
@@robotmania8896 thanks for your answer , last one....can you state all libraries you use it here ?
Do you mean libraries or packages that were installed using apt? If so, I am not remembering all libraries but regarding this particular tutorial there were no special ROS packages.
Can you give me the link of veldyne_simulation for ROS2? I just found one don’t specific if it for ros1 or Ros2?
@@ahmedaljbry1160 Originally, I used this repository.
github.com/ToyotaResearchInstitute/velodyne_simulator
But it seems that this repository also has a ROS2 branch.
github.com/RobotnikAutomation/velodyne_simulator
hello robot....is there a way for the test simulation to goto a specific place every time..can you help with that
Hi dakota tudor!
Thanks for watching my video!
If you would like to set specific goal in the test simulation, please set x and y to constant values in lines 252, 253 in the “test_velodyne_node.py” script.
thank you...@@robotmania8896
Hi sir!
How can i change the robot model like bumperbot or turtlebot3 or....others ?
Hi BELABED Abdelkader!
Thanks for watching my video!
You should replace files inside the “src/td3/models/td_robot” directory with your robot.
@@robotmania8896 and urdf?
Oh, yeah, URDF too.
When I use the test command nothing happens; the robot is still and does not move. How can I fix this? It would be a huge help if you could help.
Hi Rohan Inamdar!
Thanks for watching my video!
Are there any errors in the terminal?
@@robotmania8896 iam getting this warning [rviz2-5] Warning: Invalid frame ID "odom" passed to canTransform argument target_frame - frame does not exist
[rviz2-5] at line 133 in /tmp/binarydeb/ros-foxy-tf2-0.13.14/src/buffer_core.cpp
I think this warning should not affect the simulation in a negative way.
thank you for the video,
The src folder contains two folders, "td3" and "velodyne_simulator". What does the "velodyne_simulator" package contain that is essential for the DRL navigation and ROS environment to work?
I deleted the package and built the workspace, and it built without errors; "ros2 launch td3 training_simulation.launch.py" still works.
Does deleting the "velodyne_simulator" folder, and consequently the package, cause any hidden problems?
Also, Gazebo keeps pausing at the 200 millisecond mark and I have to manually unpause it. This could cause the launch to fail, since odom doesn't start publishing without modifying the vanilla code. Did anyone run into this? If so, any help is appreciated.
Hi ali sulyman!
Thanks for watching my video!
"velodyne_simulator" package contains plugins for gazebo. So if you delete that package, no lidar data will be generated.
@@robotmania8896
Thank you for the reply, but have you checked that? Because I did delete it, built the workspace, and ran "ros2 launch td3 training_simulation.launch.py", then in a second terminal I echoed the topics /velodyne_points and /front_laser/scan, which I assume are the lidar data, and they seem to be working (there is data).
Another question I have is: where are the cameras, sensors (lidars), and the tf frames defined to be loaded with the robot in the simulation? I have been looking in all the URDF files and couldn't find them.
edit: I found them in the td_robot.sdf file. I thought they should be defined in a URDF file, according to the tutorials that I have gone through. Does the SDF get converted to URDF at run time? I am not sure how this works, because the SDF file is not even called in the launch file.
edit2: the sensors are in the td_robot.sdf file which is loaded in the td3.world file.
@@alisulyman7824 Yes, SDF files are loaded in the world file. In my understanding, URDF files are used for ROS, and SDF files are used for Gazebo. I manually converted the URDF to an SDF file. But I am planning to use another method in my future tutorials, since every time I make some changes to the XACRO file, I have to manually regenerate the URDF and SDF. All sensors are defined in the SDF file, which is in the “models” folder. You have to add a “gazebo_model_path” statement in the XML file so that Gazebo can read the SDF file.
@@robotmania8896
What still confuses me is why td_robot.urdf is launched in robot_state_publisher.launch.py if td_robot.sdf is called in td3.world.
To me it seems redundant to use td_robot.sdf if we can just use td_robot.urdf, so why are they both launched in the project? Is it necessary, or is it convenient in some way?
And also, could you answer whether you have checked if the project works normally when the "velodyne_simulator" package is deleted? It seems to work fine without it, but I am not sure if there are some hidden issues.
thank you again for the response.
I had issues with reading mesh from URDF file, so I have used SDF to visualize a model in gazebo and URDF for state publisher. But it seems there is a better way, so I am planning to quit using this method.
Hello, thanks for the video and the code sharing, I have the similar issue of robot not moving, and there is no “cmd_vel_nav” topic published, only following three topic published
/default/td_robot/base_link/camera1/image
/default/td_robot/base_link/camera2/image
/introspection/mljibz/items_update
Any suggestion?
Hi Dong Wang!
Thanks for watching my video!
Yes, there is no “cmd_vel_nav” topic, the “cmd_vel” topic is the correct one. There should be around 23 topics in total. In this case it seems that this is not an issue with training process, but with ROS environment. Were there any errors while you were launching the simulation?
I just upgrade typing-extensions and the robot is moving now! Thank you!😀
@@dongwang2226…hello could you tell me what you upgraded and what extensions used to get things moving
@@dongwang2226 I have a question now. My /cmd_vel topic doesn't seem to have any output. When I use the 'rostopic info' command, it shows that there are no publishers. What could be the reason for this?
Hi, I'm trying to run the simulation with the turtlebot3_burger model, but when the robot collides with an object it doesn't reset and just does flips. Do you know how to fix this?
Hi shoottz!
Thanks for watching my video!
Collision calculation is done in the “observe_collision” function. Calculation is done based on data obtained from velodyne lidar. Please check whether your robot publishes PointCloud data correctly.
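As a rough idea, the check boils down to something like this (the threshold value and names are illustrative):
COLLISION_DIST = 0.35  # [m], illustrative threshold

def observe_collision(laser_data):
    # returns: done, collision, minimum measured range
    min_laser = min(laser_data)
    if min_laser < COLLISION_DIST:
        return True, True, min_laser
    return False, False, min_laser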
@@robotmania8896 Thank you so much, I got it working now! I want to know how you made the dataset in Tensorflow/Tensorboard? I want to make a new dataset for an environment I made. I'm new to all of this.
@@shoottz What do you mean by “dataset”? Do you want to create a new gazebo environment?
I can't find the setup folder; I just found the src folder.
Hi hammouda!
Thanks for watching my video!
Please execute the “colcon build” command in the project directory.
Hello sir,
How can I display the goal point in Gazebo?
Hi Stephane!
Thanks for watching my video!
You may place a thin object at the goal point.
@@robotmania8896 can you explain more?
Hello, your video was really informative.
But I watched the video and followed it, but the rviz didn't run
[ERROR] [gzclient -2]: process has died [pid 27482, exit code -15, cmd 'gzclient '].
[ERROR] [gzclient -2]: process[gzclient -2] failed to terminate '5' seconds after receiving 'SIGINT', escalating to 'SIGTERM'
[gzclient -2] ** (gzclient:27484): WARNING **: 23:08:13.716: AT-SPI: Could not obtain desktop path or name
I only get these errors. I succeeded in colcon build and completed setup bash, but nothing changed.
Hi 길요한!
Thanks for watching my video!
Were there any errors prior to those you have pasted?
Also, on which machine and ROS version are you executing the program?
There were no errors before the ones I pasted. I'm using Ubuntu 20.04 with ROS2 Foxy now.
and also does it work on the lds-02 lidar that comes standard?
@@길요한-y5y It is possible to use this algorithm if lidar publishes a PointCloud2 type message. But still, you have to do some modifications in the code, since lidar specification is different.
HI,
thanks for your video!
I have ROS2 Humble on Ubuntu 22.04. Can I try your project without version conflicts, or is there something else I need to do?
Hi aarontilahun4248!
Thanks for watching my video!
Yes, I think you should be able to execute this simulation with the Humble without any code modifications.
@@robotmania8896 I've tried your code using ros2 foxy but when I start the training simulation, robot doesn't move in Rviz and gazebo. can you help me please?
here is the last log: [rviz2-5] [INFO] [1692221912.671674966] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0.210 for reason 'Unknown'
@@aarontilahun4248 This error is happening at RVIZ, so I think it should not have negative impact on learning process.
Hello, thanks for the video. I had ROS1, so I used the code from GitHub and followed the instructions there. I ran the training but the model didn't learn. I ran it for 1700 episodes; the average Q value flat-lined at -120. I noticed that the robot was turning only in one direction. Before this I used this reference to write my own setup, and there I also noticed the same thing, the robot turning in only one direction and going in circles, but there the reward function was different. So I thought of testing this one first, and it is also not learning; I made no changes to the code. What might be happening?
Hi ramanjeetsingh8684!
Thanks for watching my video!
I can only guess, but since learning at the first steps is largely stochastic, maybe something there went wrong and the agent started to learn a wrong policy. Can you try the simulation several more times and see whether the results will be the same?
Hello @@robotmania8896, I am also facing the same issue. My robot is moving in circles and not learning any policy. Its reward after 3000 episodes is -160. Please tell me the way to solve it.
@@vamsivamsi2903 Hi vamsi vamsi!
Thanks for watching my video!
I think the robot has learned a wrong policy. Unfortunately, I don’t have enough time to look into this problem more closely. Can you do the simulation several times and see whether the robot learns properly?
@@robotmania8896 sure I will try once.Thankyou for the reply
@@vamsivamsi2903 were you able to figure it out?
Hello, great video, thanks.
I have a question, please reply. I tried the Ackermann steering controller; it's working, but the movement is not controllable. It moves by itself, I can't control it, and its movement is also not smooth. Can you please help me understand what kind of issue this is and how I can get out of it?
Hi Samra Urooj!
What do you mean by ‘moves by itself’? I have several videos in which I operate a robot with Ackerman steering geometry. Maybe the code from this video will help you.
th-cam.com/video/BFLfVW9f60E/w-d-xo.html
@@robotmania8896 Sir, I mean the robot moves by itself. I can just move it right, left, and forward, but I can't stop it.
@@SamraUrooj-d4u Do you mean that the robot moves by itself during training? If so, that’s absolutely normal.
Or you mean that you are trying to control the robot using joypad but the robot moves by itself?
@@robotmania8896 Yes, yes, I try to control it myself with the keys; this is not about training.
@@SamraUrooj-d4u In that case, can you see what values are published for controller commands? Since it could be either controller or Gazebo problem.
Hello, it's great work, thanks for sharing.
I have a question. I trained DDQN for this robot for 24 hours, 2500 episodes, using the CPU.
But sometimes the robot swings right and left in front of the goal point or obstacles and does not move forward. Any tips to improve training?
These are the hyperparameters used in the training:
Lr = 0.001
Discount_factor =0.99
tau = 1e-3
update target every = 4 steps
Batch_size = 64
Hi Abdelkader Belabed!
Thanks for watching my video!
I think that the hyperparameters are fine. I think this problem can be mitigated to some extent by giving a proper reward. You may make the reward bigger when the robot reaches the goal.
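A rough sketch of such reward shaping (the numbers are only an example to experiment with; action[0] is assumed to be the translational command and action[1] the angular command):
def get_reward(target_reached, collision, action, min_laser):
    if target_reached:
        return 200.0          # try making this terminal reward larger
    if collision:
        return -100.0
    # shaping term: reward forward motion, penalize spinning and closeness to obstacles
    return action[0] / 2 - abs(action[1]) / 2 - max(0.0, 1.0 - min_laser) / 2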
@@robotmania8896 ok thanks , i will try it
@@robotmania8896 it's working thank you very much
Hello, awesome video. I am trying to develop my own world and attach it to the program, but when I replace the td3 world and run the training, the program gets stuck and the robot does not appear. Do you have any recommendations for doing this and training in my own world? Thanks for everything.
Hi megasonec!
Thanks for watching my video!
In this case, replace step by step small components in td3 world with your new components and see at which point the problem will occur.
Hello Robot Mania….what’s the best way to change the robot to a different model
Hi chance macarra!
Thanks for watching my video!
To change robot model, please modify “rs_robot.sdf” file which is inside the “/src/td3/models/td_robot/” directory.
@@robotmania8896 …thank you again
thanks for this video
I have a problem: when I launch the project, the goal point doesn't appear in the RViz representation and the robot stays in its initial pose.
Hi BOUKERMOUCHE Mohammed!
Are there any errors in the terminal?
@@robotmania8896 when I do colcon build i have this warning
@@robotmania8896 Starting >>> velodyne_description
Starting >>> velodyne_gazebo_plugins
Starting >>> td3
Finished > velodyne_gazebo_plugins
Starting >>> td3
Finished
@@robotmania8896 and when I launch the project with ros2
@@robotmania8896 I have this error
I like this guy and have similar interest
Hi Indramal Wansekara!
Thanks for watching my videos!
I hope you will find videos you are interested in!
@@robotmania8896 definitely, looking for more interesting videos.
Can this work in Iron? I can't make it work. What should I do?
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from squaternion import Quaternion
Do i have to download the mentioned libraries?
Hi Ahmed Aljbry!
Thanks for watching my video!
Yes, you have to install those libraries. I have not tried to run this simulation using ROS Iron, but I think it will run with small modifications or even may run as it is.
@@robotmania8896 Thank you for your answer. If I want to modify the current model that contains many obstacles, what should I do? There are many model files, such as the models inside td3 and the models inside velodyne_description. Should I modify both?
To modify fixed obstacles, you should alter the “td3/worlds/td3.world” file.
Thank you for the video. I've installed everything and it runs; however, I don't see the “Launch TensorBoard Session” option in VS Code for train_velodyne_node. Please help, and thank you again.
Hi chance macarra!
Thanks for watching my video!
If you have installed the tensorboard correctly, “Launch TensorBoard Session” message should appear just above the “from torch.utils.tensorboard import SummaryWriter” line. Please check your installation.
@@robotmania8896 I also wanted to ask where the training data gets stored. And is there additional configuration needed to view the TensorBoard data? It currently tells me it is inactive. And lastly, after the model is trained, how would you add this to a physical robot of a similar type?
@@bestofchance The training data is stored in “runs” and “results” folders under “scripts” directory. After the robot is trained you can use trained model just as I did in “test_velodyne_node.py” script.
@@robotmania8896 One last question about TensorBoard: is there anything additional that needs to be written? And what would you do if you wanted to add a different map to this project? Thank you again for your help.
@@bestofchance No, you don't have to do any additional settings. Just install TensorBoard using pip. If you want to run the simulation using a different map, you have to alter the “td3.world” file inside the “worlds” directory.
It's a really good tutorial.
Can you please tell me which folder the main algorithm is in?
And how can I reduce my training time?
Thanks in advance.
Hi Adam Crux!
Thanks for watching my video!
The network is described by 3 classes: “Actor”, “Critic” and “td3”. You can find them all in the “train_velodyne_node.py” script. Reducing training time is difficult. You have to come up with better learning method to do that.
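For orientation, the “Actor” is just a small fully connected network along these lines (the layer sizes here are illustrative; check the script for the exact ones):
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.layer_1 = nn.Linear(state_dim, 800)
        self.layer_2 = nn.Linear(800, 600)
        self.layer_3 = nn.Linear(600, action_dim)

    def forward(self, s):
        s = F.relu(self.layer_1(s))
        s = F.relu(self.layer_2(s))
        return torch.tanh(self.layer_3(s))  # actions squashed to [-1, 1]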
@@robotmania8896 After the training and testing are complete, how can one view the results? Can you please tell me the process?
What do you mean by “results”? Q values and loss are printed in the tensor board during training.
Can this work without an Nvidia GPU?
Yes, this program will work without GPU.
Hi sir, it's a great video.
I have two questions!
1. Why does this project use two lidars (Hokuyo and Velodyne)?
2. How do I change to an Lplidar 2D? Is the modification just in the URDF plus the environment subscriber, or in other files too?
Hi Stephane!
Thanks for watching my video!
1. 2D lidar is used just for visualization in Gazebo, for algorithm, velodyne is used.
2. 2D lidar is already defined in the “td_robot.sdf” file (lines 324~351), so you just need to modify parameters so that the sensor resembles Lplidar. And you have to subscribe to the “/front_laser/scan” topic.
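Subscribing in a ROS2 Python node would look roughly like this (the callback name is just an example):
from sensor_msgs.msg import LaserScan

# inside your Node subclass
self.scan_sub = self.create_subscription(LaserScan, '/front_laser/scan', self.scan_callback, 10)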
@@robotmania8896 Thank you, but I want to use the 2D lidar directly for the algorithm; I want to delete the Velodyne.
@@Stephane-e2e In that case you have to delete lines 217~243 and 352~389 in the td_robot.sdf file.
Hi it's great video
Can i use this file and apply other algorithm's like DQN or PPO... without any problem ?
Hi Antonio!
Thanks for watching my video!
Yes, you can implement other methods for this simulation. I also made several videos with PPO and DDQN. I hope it will help you.
th-cam.com/video/gREIOD-czf8/w-d-xo.html
th-cam.com/video/9GLVB6Trn10/w-d-xo.html
Hello Robot Mania, I have a problem when trying to run the training simulation:
ros2 launch td3 training_simulation.launch.py
Package 'td3' not found: "package 'td3' not found, searching: ['/opt/ros/foxy']"
Hi Robiti!
Thanks for watching my video!
Did you execute the “source” command as I explained in the video from 7:50?
Thank you so much for your tutorial. Could you please give more detail about how to run the setup.bash? I have already installed ROS2 and PyTorch and downloaded your project file.
The project file only has an src folder; there is no install folder.
Hi To Xuan Dung!
Thanks for watching my video!
Please download the file from the google drive. Then move to the project folder and execute the “colcon build” command. After that, you should be able to execute the commands I explained in the tutorial.
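Assuming you extracted the project into your home directory under the folder name used in the video, the steps are roughly:
$ cd ~/DRL_robot_navigation_ros2
$ colcon build
$ source install/setup.bash
$ ros2 launch td3 training_simulation.launch.py
The “colcon build” step is what creates the install folder (and the setup.bash inside it).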
@@robotmania8896 thanks you so much for the sharing. I wish you all the best and keep forward
Hello, good job again. I am following you for my project, but there is something I can't figure out. I have a good path planning algorithm for one track with a mission, but I don't know where I am. With local localization I start at the (0,0) location in the world. This is okay, but how can I know my location while I am moving? IMU, VLP-16, these are good but not stabilized. For example, I move 0.5 meters but in Gazebo it moves 0.38 meters, and this is not good for my path planning algorithm. I need really good local localization. I thought about sensor fusion and a Kalman filter, but it is hard to understand or apply. I have a ZED2 stereo, VLP16, Here3 GPS, etc. In your video, while the car is moving it draws the path, and it is very stabilized, just as an example. So if you have any idea about my project, please teach me. Thank you for the video and the help.
Hi Erim Onay!
Thanks for watching my video!
Are you talking about real robot or gazebo simulation? In case of gazebo simulation, you can get link positions which are 100% accurate in the gazebo world.
@@robotmania8896 Hi,
In Gazebo with your simulation package, yes, I can get nearly 100% accuracy in my project, but when I try to communicate with ROS2 and take the path topic in LIO-SAM, it gives an error about QoS compatibility; I am still looking into it. By the way, after simulating it, I have to move my project to real life. If you have anything about sensor fusion or whatever to improve localization in real life, it would make a good video. There is no good video on the internet about it. Thank you again for your effort to answer everybody!
@user-sl5ob1ji6s I personally recommend using RTK-GNSS at open environment and switching to methods with point cloud in places where GPS signal is not reaching. But still, it is very challenging to get accurate self-position in any environment.
great video mate! how to apply this to a real robot?
Hi BetaHex!
Thanks for watching my video!
Other than real velodyne sensor, you also need motor drivers and electric motors to rotate wheels.
@@robotmania8896 good to know. cheers mate
Can I implement it on an omni wheel robot? How do I connect it to a robot in order to navigate it?
I am new to robotics and a little bit lost.
Hi huseynhaydarov5887!
Thanks for watching my video!
Yes, you can implement it in omni wheel robot. You have to create an omni wheel robot model. You can use the model I have used in my another tutorial.
th-cam.com/video/hcsS-9OIer4/w-d-xo.html
Thanks a lot for this video, you do great work.
Please, I want to know which files you modified to port the project from ROS1 to ROS2.
Hi BOUKERMOUCHE Mohammed!
Thanks for watching my video!
As far as I remember, I have modified most of the files in the “/src/td3” directory. I think the only file that I haven’t modified is “replay_buffer.py”.
@@robotmania8896 I think also the meshes files
Awesome tutorial ❤
Hi MohitKS!
Thanks for watching my video!
It is my pleasure if this video has helped you!
I also want to make something like this,help me
Hi HANG!
Thanks for watching my video!
What kind of problem exactly do you have?
@@robotmania8896 hello, thank you for your tutorial, I'm getting error when I'm launching training_simulation. What should I do?
@@robotmania8896 the error is:
[ERROR] [gzclient -2]: process[gzclient -2] failed to terminate '5' seconds after receiving 'SIGINT', escalating to 'SIGTERM'
[ERROR] [gzserver-1]: process[gzserver-1] failed to terminate '5' seconds after receiving 'SIGINT', escalating to 'SIGTERM'
[INFO] [gzclient -2]: sending signal 'SIGTERM' to process[gzclient -2]
[INFO] [gzserver-1]: sending signal 'SIGTERM' to process[gzserver-1]
[ERROR] [gzclient -2]: process has died [pid 5208, exit code -15, cmd 'gzclient '].
[ERROR] [gzserver-1]: process has died [pid 5205, exit code -15, cmd 'gzserver /home/korkyt/colcon_ws/install/td3/share/td3/worlds/td3.world -s libgazebo_ros_init.so -s libgazebo_ros_factory.so -s libgazebo_ros_force_system.so '].
@@robotmania8896 I can give more detailed error message
@@robotmania8896 do you have discord or something like that?
Very nice video 😊
Hi Kevin Tchangang!
Thanks for watching my video!
It is my pleasure if this video has helped you.
My PyTorch version is 2.3.1 (replying to the comment).
I am not sure whether it is a PyTorch version issue. Can you please try to run training until the first model is generated, and then load that model? If you can load the model successfully, it is probably a PyTorch version compatibility issue.
nice video!🤩
Hi Simon Lopez!
Thanks for watching my video!
It is my pleasure if this video has helped you!
hi when i try to traing the model i get this error [INFO] [1715365625.676300703] [rviz2]: Message Filter dropping message: frame 'front_laser' at time 0.212 for reason 'Unknown'
and i get this to [gzclient -2] context mismatch in svga_surface_destroy
[gzclient -2] context mismatch in svga_surface_destroy
and i get one new error [rviz2]: Message Filter dropping message: frame 'front_laser' at time 63.962 for reason 'Unknown'
Hi Robert Vlasan!
Thanks for watching my video!
That is not an error since it shows “INFO”. It should not have any negative effect for training.
@@robotmania8896 Hello, well, my robot doesn't move, that's the problem. I left it for around 8 hours and it didn't move.
If you have the time, could we meet so you could help me?
Hello Sir, thanks for the video.
I tried to do this simulation (test_simulation.launch.py) in my ros2 humble. Gazebo is working fine but there is no camera visual and in rviz it shows robot model error (status error). In terminal it shows:
[ERROR] [test_velodyne_node.py-3]: process has died [pid 4267, exit code 1, cmd '/home/woops/DRL_robot_navigation_ros2/install/td3/lib/td3/test_velodyne_node.py --ros-args'].
[rviz2-5] [INFO] [1711357500.563694057] [rviz2]: Stereo is NOT SUPPORTED
[rviz2-5] [INFO] [1711357500.564159690] [rviz2]: OpenGl version: 3.3 (GLSL 3.3)
[rviz2-5] [INFO] [1711357500.916516468] [rviz2]: Stereo is NOT SUPPORTED
[rviz2-5] [INFO] [1711357502.687828427] [rviz2]: Stereo is NOT SUPPORTED
[gzclient-2] context mismatch in svga_surface_destroy
[gzclient-2] context mismatch in svga_surface_destroy
C[gzserver-1] [INFO] [1711357555.259258199] [camera_controller]: Publishing camera info to [/camera1/camera_info]
[gzserver-1] [INFO] [1711357555.782993244] [gazebo_ros_laser_controller]: Velodyne laser plugin missing , defaults to no clipping
[ERROR] [gzserver-1]: process has died [pid 4263, exit code -11, cmd 'gzserver /home/woops/DRL_robot_navigation_ros2/install/td3/share/td3/worlds/td3.world -slibgazebo_ros_init.so -slibgazebo_ros_factory.so -slibgazebo_ros_force_system.so'].
What can i do to solve this? Can you please help me?
Hi achayante aradhakan!
Thanks for watching my video!
Were there any errors prior to those you have pasted?
Hello sir,
I created a dynamic box in my environment, but when I launch the training of the DRL algorithm the box stops. Do you have an idea or a solution for this problem?
Hi Antonio!
Thanks for watching my video!
How are you trying to move the boxes? Do you have any errors in the terminal?
@@robotmania8896 I enabled kinematics (set it to true) and added a velocity in the SDF file of the world:
(
-4.88373 2.96722 0.149 0 -0 0
1 1 1
-4.88373 2.96722 0.149 0 -0 0
0 0 0 0 -0 0
0 -1.9e-05 0 0.000128 2e-06 0
1e-06 -3.8e-05 1e-06 0 -0 0
)
No errors in the terminal, but when I run the training the box stops while the robot moves. I want to train this DRL with dynamic obstacles. Any solution?!
To move an object in Gazebo you can do something like this (note that this snippet uses the ROS1 rospy API):
import rospy
from gazebo_msgs.msg import ModelState

rospy.init_node('box_mover')
pub_state = rospy.Publisher('/gazebo/set_model_state', ModelState, queue_size=10)
object_state = ModelState()
object_state.model_name = 'YOUR_OBJECT'  # name of the box model in your world
rate = rospy.Rate(50)
while not rospy.is_shutdown():
    # new_x_pos / new_y_pos: the coordinates you compute for this time step
    object_state.pose.position.x = new_x_pos
    object_state.pose.position.y = new_y_pos
    pub_state.publish(object_state)
    rate.sleep()
@@robotmania8896 oh, thankyou very much