Position estimation of an object with YOLO using RealSense

  • Published on Oct 2, 2024
  • This is a tutorial on how to estimate the position of an object in the real world using a RealSense camera.
    The program is here
    drive.google.c...
    This program was tested with
    python 3.6.8
    tensorflow-gpu 1.14.0
    keras 2.1.5
    h5py 2.9.0
    numpy 1.17.2
    opencv-contrib-python 4.2.0.34
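
    As a rough illustration of the idea in the video (this is not the exact code from the Drive link), the sketch below back-projects a detected pixel and its depth reading into camera-frame coordinates with pyrealsense2; the stream settings and the pixel are placeholders.

    # Minimal sketch: convert a pixel (u, v) plus its RealSense depth into
    # camera-frame coordinates in meters using the pinhole model.
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    profile = pipeline.start(config)

    # Depth-stream intrinsics: focal lengths (fx, fy) and principal point (ppx, ppy).
    intr = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()

    try:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()

        # In the real pipeline (u, v) would be the center of a YOLO bounding box
        # on the (aligned) color image; here it is just the image center.
        u, v = 320, 240
        z = depth_frame.get_distance(u, v)    # depth in meters
        x = z * (u - intr.ppx) / intr.fx      # pinhole back-projection
        y = z * (v - intr.ppy) / intr.fy
        print(f"camera-frame position: x={x:.3f} m, y={y:.3f} m, z={z:.3f} m")
    finally:
        pipeline.stop()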

Comments • 104

  • @diwakarkumar-qn5hc 3 months ago +2

    1. The equation in the code and the one shown in the image at 3:15 do not match.
    2. I tried both the equation in the image and the one in the code. It works for the x axis, but not for the y and z axes. Please help me out.

    • @zenlabdese 3 months ago

      We are facing the same issue. Is there a way to calibrate the position of the RealSense camera with respect to the real world?

    • @robotmania8896 3 months ago

      Hi diwakar kumar!
      Thanks for watching my video!
      What do you mean exactly by “It is working in x axis, but not for y and z axis”? What values are you getting?

    • @diwakarkumar-qn5hc 3 months ago

      @@robotmania8896 If I move an object in (x, y, z) by, for example, (100, 0, 100), the output of the code gives a movement of approximately (100, 120, 180). That is, I kept the object on the floor and moved it around (y is constant), but the y in the output changed a lot. Only x reflected the actual movement. Also, can you clarify the mismatch in the transformation of the y and z axes between the video and the code?

    • @robotmania8896 3 months ago

      @@diwakarkumar-qn5hc Are the axes in your coordinate system pointing in the same directions as in the left picture of my video at 5:08~5:54?

    • @diwakarkumar-qn5hc 3 months ago

      @@robotmania8896 Yes, it is.

  • @TravelwithRasel. 2 months ago

    I do not get this code, can you prove me this code please?

    • @robotmania8896 2 months ago +1

      Hi TravelwithRasel!
      Thanks for watching my video!
      What exactly do you mean by “prove the code”?

  • @hamidrezaie5866 2 years ago +1

    Hello Robotmania,
    I use this approach in my own project. It works quite well even for larger distances, in the range of 2-4 m. I have just one problem: sometimes the accuracy in the Y-X direction is not good. The reason is that in these cases the camera coordinate system does not match the world coordinate system, e.g. the camera is rotated a little, and this leads to error. In the real scenario, the camera is mounted on a robot and is rotated about all axes relative to the robot coordinate system. Measuring the rotation manually about all axes is very difficult and leads to systematic error. Do you know a method to obtain the rotation of the camera automatically? If we take your video as an example and assume that the camera is rotated, could we obtain the rotation by placing an ArUco marker beside the spider?

    • @robotmania8896 2 years ago

      Hi Hamid Rezaie!
      If I understand correctly what you want to do, placing an ArUco marker beside the spider will not give you the absolute rotation of the camera; it will only give you the rotation of the marker relative to the camera. I don't know how to automatically obtain the rotation of the camera relative to the robot coordinate system. Maybe if you place a geomagnetic sensor on both the robot and the camera and compare their outputs you will be able to calculate the camera's rotation, but I am not sure.

  • @joeljohnson7562 3 months ago

    Hi! It's a pretty cool video. Can the algorithm also detect the rotation of the spider/object along the x, y and z axes?

    • @robotmania8896 3 months ago

      Hi Joel Johnson!
      Thanks for watching my video!
      I tried something similar to what you have asked. I think, to detect rotations about all axes, we should combine these two methods.
      th-cam.com/video/qCNzjCmhI_Y/w-d-xo.html
      th-cam.com/video/L8veSy3LdmU/w-d-xo.html

  • @ДмитрийСмоленский-с5ж 2 months ago

    Hi, I'm a student working on a project, and I wanted to find out whether your code is suitable for a drone that will be at a distance of 1 km and needs to determine the coordinates of objects. The drone has GPS, and I want to use it to find the coordinates of the objects. Can you help me, please?

    • @ДмитрийСмоленский-с5ж 2 months ago

      I really liked the video and I want to apply it to my project. But I have a problem. You wrote: "To run the module, you need to run the API & SPA to send data or set the sendRequestToServer variable in the Premier-eye.DS/config/config.ini file to false." And I can't figure out how to do it. I would be very grateful if you could help me; could you give me your email so I can discuss this with you further?

    • @robotmania8896 2 months ago

      Hi Дмитрий Смоленский!
      Thanks for watching my video!
      This code is suitable only for an RGBD camera. In theory, this method can be used at any distance, but as far as I know there are no RGBD cameras that can measure distances up to 1 km.

    • @ДмитрийСмоленский-с5ж 2 months ago

      @@robotmania8896 Thanks for the answer, but is it possible to change the code somehow so that it works with other cameras?

    • @robotmania8896 2 months ago

      To make this code work, you need a means of measuring the distance to the pixel you are interested in. This is difficult to achieve with anything other than a depth camera.

    • @ДмитрийСмоленский-с5ж 2 months ago

      @@robotmania8896 And what tools do I need to measure the distance to the pixel I'm interested in? Could you help me? Do you have an email address where I can contact you, if you don't mind?

  • @kenanerin393 2 years ago

    Hello Robotmania
    I use this code in my project. I have a RealSense D435i, but the z position (depth) is very bad. How can I increase the z position accuracy?

    • @robotmania8896 2 years ago +1

      Hi KENAN ERIN!
      Thanks for watching my video!
      Since the depth value is obtained using the get_distance function and no other operations are performed on it, maybe it is a hardware issue. Check whether your RealSense camera is mounted parallel to the ground, whether the front screen is clean, and so on.

  • @selimcan6469 1 year ago

    Hi Robotmania,
    will this code work with YOLOv4?

    • @robotmania8896 1 year ago +1

      Hi Selimcan!
      Thanks for watching my video!
      In this code I used yolov3 keras, so it cannot be used for Yolov4. But you can use the part where the coordinates are calculated because the theory is absolutely the same.

  • @real-timeai2812 1 year ago

    Hi, thanks for your great video! I have a question regarding the distance function. There are two ways to get the distance information: one is as in your video (get_distance(x, y)), and the other is to take the value directly from the depth frame. But I think that value is not z_s but rather sqrt(x_s^2 + y_s^2 + z_s^2). Could you explain it to me if I am wrong?

    • @robotmania8896 1 year ago +1

      Hi Real-Time AI!
      Thanks for watching my video!
      I see your point. I also have the same doubts about the depth distance. But generally, the depth frame value is taken as the perpendicular distance from the camera to the object (along the optical axis). Maybe this is because the angle of view of the RealSense is not that big, so the difference is negligible.
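
      A small sketch of the two values being discussed, assuming pyrealsense2: get_distance returns the depth-frame value, while rs2_deproject_pixel_to_point gives the full 3D point, whose norm is the Euclidean distance.

      # Sketch: compare the depth-frame value with the Euclidean distance to the
      # deprojected 3D point for the same pixel.
      import math
      import pyrealsense2 as rs

      pipeline = rs.pipeline()
      pipeline.start()  # default streams
      try:
          frames = pipeline.wait_for_frames()
          depth_frame = frames.get_depth_frame()
          intr = depth_frame.profile.as_video_stream_profile().get_intrinsics()

          u, v = 100, 100                                            # an off-center pixel
          z = depth_frame.get_distance(u, v)                         # value stored in the depth frame
          point = rs.rs2_deproject_pixel_to_point(intr, [u, v], z)   # [x_s, y_s, z_s] in meters
          euclidean = math.sqrt(sum(c * c for c in point))

          print(f"depth-frame value (z_s):  {z:.3f} m")
          print(f"sqrt(x_s^2+y_s^2+z_s^2):  {euclidean:.3f} m")      # larger away from the image center
      finally:
          pipeline.stop()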

  • @thulithjayarathne5658 3 years ago

    Hi! I got a project on an AGV to implement 4-wheel steering and computer vision to track the AGV. I have little knowledge about the software. Could you please recommend some books or tutorial videos to learn the basics from?

    • @robotmania8896 3 years ago +1

      Hi Thulith Jayarathne!
      Thanks for watching my video!
      If you haven't got much experience in software, I recommend you pick simple solutions for your project. Do you really need a 4-wheel steering mechanism? If you need to track the AGV, you will need to implement position and velocity control. Since 4WS robots have 8 actuators, that will be quite challenging. If you need your robot to do omnidirectional movement and you will use the robot on a flat floor, mecanum or omni wheels will work fine, and the control algorithm will be much simpler since you need to control only 4 motors. For pose estimation of the AGV I recommend you use an ArUco marker. There are quite a lot of code samples out there showing how to recognize ArUco markers.
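
      As a minimal ArUco example (using the pre-4.7 cv2.aruco API from opencv-contrib-python; the dictionary, marker size, and intrinsics below are placeholders, not values from the video):

      # Sketch: detect an ArUco marker and estimate its pose with OpenCV (contrib).
      import cv2
      import numpy as np

      camera_matrix = np.array([[600.0, 0.0, 320.0],     # placeholder intrinsics;
                                [0.0, 600.0, 240.0],     # use your own calibration
                                [0.0, 0.0, 1.0]])
      dist_coeffs = np.zeros(5)
      marker_length = 0.05                               # marker side length in meters

      aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
      params = cv2.aruco.DetectorParameters_create()

      cap = cv2.VideoCapture(0)
      ret, frame = cap.read()
      cap.release()

      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
      if ids is not None:
          rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
              corners, marker_length, camera_matrix, dist_coeffs)
          print("marker ids:", ids.ravel(), "translation (m):", tvecs[0].ravel())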

    • @thulithjayarathne5658 3 years ago

      @@robotmania8896 thank you for the recommendation..

  • @isacgeorgethankachan6993 1 year ago

    Hello sir, can I use a lidar in order to find the object coordinates?

    • @robotmania8896 1 year ago

      Hi Isac George Thankachan!
      Thanks for watching my video!
      I think it is quite difficult to find object coordinates using only a lidar, because in order to calculate the coordinates you first have to recognize the object itself, and with only a lidar you cannot do recognition. Or did you mean doing recognition with an RGB camera and using a lidar to find the coordinates?

  • @akhileshdesai8559 2 years ago

    Can I use my normal laptop camera? What changes must be made to do that?

    • @robotmania8896 2 years ago

      Hi Akhilesh Desai!
      Thanks for watching my video!
      In this case, you don’t have a depth camera, so fix the z (distance) value in the code and change your camera parameters.
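
      A rough sketch of that suggestion (the fixed distance, focal lengths, and principal point below are made-up placeholders; use your own camera calibration):

      # Sketch: without a depth camera, fix the distance z and back-project a
      # detected pixel (u, v) with the webcam's own intrinsics.
      FIXED_Z = 1.0            # assumed object distance from the camera, meters
      FX, FY = 600.0, 600.0    # webcam focal lengths in pixels (from calibration)
      PPX, PPY = 320.0, 240.0  # principal point

      def pixel_to_camera_coords(u, v, z=FIXED_Z):
          """Approximate camera-frame coordinates for a pixel at an assumed depth."""
          x = z * (u - PPX) / FX
          y = z * (v - PPY) / FY
          return x, y, z

      print(pixel_to_camera_coords(400, 300))  # e.g. the center of a YOLO bounding box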

  • @gokcesenahocaoglu8631 1 year ago

    Hi, many thanks for the project and the video. I don't have a depth camera. Can I use my computer's camera and get the depth from a LIDAR sensor instead? Can you guide me on this? Thanks ...

    • @robotmania8896 1 year ago

      Hi gokcesenahocaoglu8631!
      Thanks for watching my video!
      In case you are using a 3D lidar, I think it is possible. But you have to calibrate your camera and lidar precisely to get good results. Here is some related work which may help you.
      arxiv.org/pdf/2101.04431.pdf

    • @gokcesenahocaoglu8631 1 year ago

      @@robotmania8896 Thank you for your quick reply. I don't have a 3D LIDAR; I have the 2D RPLIDAR A2M12. I am trying to sync my camera with the 2D LIDAR.

    • @robotmania8896 1 year ago

      @@gokcesenahocaoglu8631 I don't think it is possible to sync a 2D lidar with the camera, since a 2D lidar gives you distance only in one plane. If you only want to see how the algorithm works, you can fix the z value (depth value) and watch how the distance changes when you move an object left/right or up/down.

  • @thanhle-ur2hr 1 year ago

    Can I apply this to YOLOv5? I'm working on my final project about sorting plastic bottles.

    • @robotmania8896 1 year ago

      Hi thanh lê!
      Thanks for watching my video!
      Yes, you can. Please refer to this tutorial.
      th-cam.com/video/oKaLyow7hWU/w-d-xo.html

    • @thanhle-ur2hr 1 year ago

      @@robotmania8896 Thanks for replying, I really appreciate it. Can I send you an email with a detailed question?

    • @robotmania8896 1 year ago

      @@thanhle-ur2hr Yes. This week I will be able to reply on Friday.

  • @sharingmylittleinsight 4 months ago

    You are always there for me, haha. Thank you very much, sir!

    • @robotmania8896 4 months ago +1

      Hi Sharing My Little Insight!
      It is my pleasure if my videos help you!

    • @sharingmylittleinsight 4 months ago

      @@robotmania8896 Hello sir, how do I publish or broadcast the position of the object as a tf2 transform?

    • @robotmania8896 4 months ago +1

      @@sharingmylittleinsight I think this page will help you. Please check it.
      ros2-industrial-workshop.readthedocs.io/en/latest/_source/navigation/ROS2-TF2.html
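
      As a rough sketch (not taken from the linked workshop page), a minimal rclpy node that broadcasts an object position as a tf2 transform; the frame names and the hard-coded position are placeholders:

      # Sketch: broadcast an object's camera-frame position as a tf2 transform (ROS 2).
      import rclpy
      from rclpy.node import Node
      from geometry_msgs.msg import TransformStamped
      from tf2_ros import TransformBroadcaster

      class ObjectTfBroadcaster(Node):
          def __init__(self):
              super().__init__('object_tf_broadcaster')
              self.broadcaster = TransformBroadcaster(self)
              self.timer = self.create_timer(0.1, self.publish_transform)

          def publish_transform(self):
              t = TransformStamped()
              t.header.stamp = self.get_clock().now().to_msg()
              t.header.frame_id = 'camera_link'       # parent frame
              t.child_frame_id = 'detected_object'    # frame of the detected object
              t.transform.translation.x = 0.1         # replace with the estimated x, y, z (meters)
              t.transform.translation.y = 0.0
              t.transform.translation.z = 0.5
              t.transform.rotation.w = 1.0            # identity rotation
              self.broadcaster.sendTransform(t)

      def main():
          rclpy.init()
          rclpy.spin(ObjectTfBroadcaster())

      if __name__ == '__main__':
          main()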

  • @yeohweixiangbenjy1983 1 year ago

    Hello Robotmania,
    This is a nice video, it helped me a lot.
    Btw, I have one question: the 'theta' inside the code needs to be set according to my camera's tilt/rotation angle, is that right?

    • @robotmania8896 1 year ago

      Hi YEOH WEI XIANG BENJY!
      Thanks for watching my video!
      Yes, if your camera is tilted, you should set a proper theta value. Note that you should also apply a translational coordinate transformation if your camera is not tilted around its center.
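
      A small sketch of what that correction can look like, assuming the camera is pitched by theta about its x axis and offset from the reference origin (this is not the exact code from the video):

      # Sketch: rotate camera-frame coordinates by a pitch angle theta (about the
      # camera x axis), then translate by the camera's offset from the world origin.
      import numpy as np

      theta = np.deg2rad(15.0)                   # assumed camera tilt, radians
      camera_offset = np.array([0.0, 0.3, 0.5])  # assumed camera position in the world frame, meters

      def camera_to_world(p_cam):
          rot_x = np.array([[1.0, 0.0, 0.0],
                            [0.0, np.cos(theta), -np.sin(theta)],
                            [0.0, np.sin(theta),  np.cos(theta)]])
          return rot_x @ np.asarray(p_cam) + camera_offset

      print(camera_to_world([0.1, 0.0, 1.2]))    # a point measured in the camera frame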

    • @yeohweixiangbenjy1983 1 year ago

      @@robotmania8896 I see... Do you know whether there is any method or device that can detect the tilt angle automatically?

    • @robotmania8896 1 year ago +1

      @@yeohweixiangbenjy1983 Usually, to detect the orientation of an object, an IMU is used. Here is a tutorial about how to use an IMU with a Raspberry Pi.
      th-cam.com/video/yW22igLjkjY/w-d-xo.html
      I hope this tutorial will help you.

    • @yeohweixiangbenjy1983 1 year ago

      @@robotmania8896 appreciated

  • @minhkhanhphantruong3408 2 years ago

    Can I use a regular camera instead (e.g. an ESP32-CAM, ...)? Thank you so much.

    • @robotmania8896 2 years ago

      Hi Minh Khánh Phan Trương.
      Thanks for watching my video!
      You can use a regular camera for object recognition, but to estimate the object's position you also need a depth camera.

    • @minhkhanhphantruong3408 2 years ago

      @@robotmania8896 Can you tell me the names of some cheap depth cameras? I'm a newbie. Thank you so much.

    • @robotmania8896 2 years ago

      @@minhkhanhphantruong3408 I think the most widely used and affordable RGBD camera is the RealSense D435. There is a lot of information about this camera on the Internet, and even if you run into trouble it will be easy to find support.

    • @minhkhanhphantruong3408 2 years ago

      @@robotmania8896 Thank you so much

  • @AmitKumarMallickmes 3 years ago

    Hi, I need your help regarding the following:
    I have trained a customized model to detect objects with Darknet YOLOv3, and after detection I need to find the pose of the detected object.

    • @robotmania8896 3 years ago

      Hi Amit Kumar Mallick me19s058!
      Thanks for watching my video!
      By pose, do you mean the position and orientation of the object?
      If you want to measure orientation, you probably need to measure the coordinates of 3 different points to calculate the plane orientation relative to the RealSense.
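
      A sketch of that three-point idea with made-up sample points: the plane normal comes from a cross product, and its angle to the camera's z axis gives the orientation.

      # Sketch: estimate the orientation of a flat surface from three measured 3D
      # points by computing the plane's normal vector.
      import numpy as np

      p1 = np.array([0.10, 0.00, 1.00])   # three points on the object, camera frame, meters
      p2 = np.array([0.30, 0.02, 1.05])
      p3 = np.array([0.12, 0.25, 1.02])

      normal = np.cross(p2 - p1, p3 - p1)
      normal /= np.linalg.norm(normal)

      # Angle between the plane normal and the camera's optical axis (z).
      tilt = np.degrees(np.arccos(abs(normal @ np.array([0.0, 0.0, 1.0]))))
      print(f"plane normal: {normal}, tilt relative to camera z axis: {tilt:.1f} deg")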

    • @AmitKumarMallickmes 3 years ago

      @@robotmania8896 Thank you for your response. I got your point. Can you make a tutorial regarding this?

    • @스마트파워트레인연구 3 years ago

      @@robotmania8896 I also request a tutorial. Any help would be greatly appreciated.

    • @robotmania8896 3 years ago

      Hi Amit Kumar Mallick me19s058!
      I think I can make a tutorial. Estimating an orientation using depth camera is an interesting problem. But since I have to make it from scratch, it will probably take me about a month. I will let you know when I publish it.

    • @robotmania8896 3 years ago

      @@스마트파워트레인연구 I think I can make a tutorial. Estimating an orientation using depth camera is an interesting problem. But since I have to make it from scratch, it will probably take me about a month. I will let you know when I publish it.

  • @heitordelesporte3298 2 years ago

    Hi, thanks for the video! In my project I'm using a RealSense D435i camera and YOLOv5, and I need to find the angles of the center point of the bounding box in relation to the camera. Can you help me with this problem? If you could provide another way to talk, I would really appreciate it. Thanks for your attention!

    • @robotmania8896 2 years ago +1

      Hi Heitor Delesporte!
      Thanks for watching my video!
      You can write here
      robotmania8867@yahoo.com
      Do you want to find yaw, pitch, roll angles of the detected object?

    • @heitordelesporte3298 2 years ago

      @@robotmania8896 Sorry about the late answer. Yes, I want to find every angle that I can about the image in relation to the camera. I'm going to send an e-mail with the code I'm using and we can continue the chat. Thanks for the attention and help!!

    • @robotmania8896 2 years ago +1

      Hi Heitor Delesporte!
      I made a video about measuring angles with yolo. It may not be exactly what you wanted, but I hope it helps you.

    • @heitordelesporte3298 2 years ago

      ​@@robotmania8896 sorry about the late answer! Thank you for the other video, it helped me a lot! I will send you an e-mail with a few questions and I hope we can discuss about that.

    • @robotmania8896 2 years ago +1

      Hi Heitor Delesporte!
      It is my pleasure if the video has helped you!

  • @hamidrezaie5866 3 years ago

    Wow, amazing! This is what I need. And yes, a tutorial about estimating the pose would be amazing!

    • @robotmania8896 3 years ago +1

      Hi Hamid Rezaie!
      Thanks for watching my video!
      I am glad that this video has helped you.
      I am going to make the tutorial about orientation estimation next month.

    • @hamidrezaie5866 3 years ago

      @@robotmania8896 Thanks! This would be amazing! I will suggest to my friends that they watch your videos and subscribe to your channel! :D

    • @robotmania8896 3 years ago

      @@hamidrezaie5866 Thanks a lot!
      I really appreciate your support!

    • @hamidrezaie5866 2 years ago

      You are welcome!
      I have a question. I tried this approach for a similar project, but I noticed that if the camera is rotated relative to the object, the pose estimation becomes inaccurate. What would you suggest to solve this problem?

    • @robotmania8896 2 years ago

      Hi Hamid Rezaie!
      If the camera is rotated relative to the object, you have to apply a coordinate transformation. I explained the theory in the “Coordinate transformation” part of this tutorial. I also implemented this method in the “RoboMaster Sentry Robot Launching Projectile Simulation” tutorial, in the pub_command.py script (lines 102~104). Please check the script; it may give you an idea of how to do this.
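
      As a generic sketch of such a transformation (not the pub_command.py code): a full rotation plus translation from the camera frame to a reference frame, with made-up angles and offsets.

      # Sketch: transform a point from the camera frame to a reference (e.g. robot)
      # frame with a rotation (roll, pitch, yaw) and a translation.
      import numpy as np
      from scipy.spatial.transform import Rotation

      R_cam_to_ref = Rotation.from_euler('xyz', [5.0, -10.0, 30.0], degrees=True)  # camera orientation
      t_cam_in_ref = np.array([0.2, 0.0, 0.4])                                     # camera position, meters

      def camera_to_reference(p_cam):
          # p_ref = R * p_cam + t
          return R_cam_to_ref.apply(p_cam) + t_cam_in_ref

      print(camera_to_reference([0.0, 0.1, 1.5]))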

  • @niranjansujay8487 2 years ago

    How do I use this with YOLOv5?

    • @robotmania8896 2 years ago +1

      Hi Niranjan Sujay!
      Thanks for watching my video!
      I implemented this method in the “Estimating angles of object with YOLOv5 and RealSense” video. You can use the code from this tutorial.

    • @robotmania8896 2 years ago

      @@niranjansujay8487 The distance is obtained at line 221 of the yolo.py script. The get_distance function returns the distance in meters, so to convert it to centimeters you have to multiply it by 100. Note that you must also convert the other constants to centimeters to get correct coordinates.
      You can write to this mail address:
      robotmania8867@yahoo.com

  • @burakyasinkarakus 2 years ago

    Hello, thank you for the great tutorial.
    What alterations should I make to have only the X and Y coordinates? In my project, a standard webcam is mounted at the top of the area to give a bird's-eye view, and the playground where the objects are placed is also fixed. So I don't need depth information, since the items do not approach or move away from the camera.
    Thanks in advance!

    • @robotmania8896 2 years ago

      Hi Burak Yasin Karakus!
      Thanks for watching my video!
      In your case, the z value will be the distance between the camera and the playground, so use a fixed distance value for z.

    • @burakyasinkarakus 2 years ago

      @@robotmania8896 Thank you for your response.
      Can I contact you to add position estimation to my existing YOLO-based object detection project? I got confused because, since I do not use a depth camera, I don't have a depth value and therefore cannot make the necessary changes. I also struggle with the notation differences between my project and yours. It's my capstone project and your help is greatly appreciated.
      If you accept, I can send you my zip file with the details through your e-mail address provided in the Google Drive file.
      Thanks again for your interest!

    • @robotmania8896 2 years ago

      @@burakyasinkarakus Please write here.
      robotmania8867@yahoo.com

    • @burakyasinkarakus 2 years ago

      @@robotmania8896 I did! Please check your email.

    • @jayantii22 1 year ago

      @@burakyasinkarakus Hi, I am facing the same difficulty in my capstone project. Did you get your answer? Can you please share it?

  • @niranjansujay8487 2 years ago

    Hi @robotmania, hope you're doing great!
    I have a question: how can I get the height and width from the yolo.py code? I am currently using YOLOv3-tiny for object detection and a RealSense camera. In yolo.py, I inserted this piece of code:
    dist1 = depth_frame.get_distance(int(center_coordinates_array[i][2]), int(center_coordinates_array[i][3]))*100
    height = dist1*(center_coordinates_array[i][3] - intr.ppy)/intr.fy - dist*(center_coordinates_array[i][1] - intr.ppy)/intr.fy
    But when I run this, I get an 'IndexError: list index out of range'. How can I get the height as well as the width, along with the distance, using YOLOv3? Can you please help me out and, if possible, also show me how to write code for getting the width of an object?
    And yes, thank you so much for creating an entire video for me on YOLOv5!

    • @robotmania8896 2 years ago

      Hi Niranjan Sujay!
      You get this error because center_coordinates_array holds only the center coordinates of the images, like [[234, 328], [261, 412],…]. This array doesn’t have elements [i][2] and [i][3]. Instead of x_center and y_center you should pass top, left, bottom, right variables.

    • @niranjansujay8487 2 years ago

      @@robotmania8896 Can you please help me?
      1. Could you write a piece of code to get the height using YOLOv3? I am unable to take left, right, bottom, top and write them in terms of x and y coordinates. If you could write a piece of code that gets the height, it would be awesome.
      2. How do I convert YOLOv4 weights and cfg files to h5 format? I tried a few things from the internet but they did not work out. Could you suggest any methods?
      3. Is there any way to find out the width of the object, apart from the height and distance? If yes, could you tell me how and, if possible, add another piece of code to find the width along with the height using YOLOv3?

    • @niranjansujay8487 2 years ago

      And by width I mean the thickness of an object. Since we can get depth information from the RealSense camera, can we use that to get the thickness of an object?

    • @robotmania8896 2 years ago

      Hi Niranjan Sujay!
      1. The simplest way is the following: in the yolo.py script, change line 155 to
      center_coordinates_array.append([top, left, bottom, right])
      then, inside the “for i in range(len(center_coordinates_array)):” loop in the detect_video function, calculate the real-world coordinates.
      2. The only way I know is to use the convert.py script. But I am not sure whether it is possible to convert YOLOv4 weights to YOLOv3 weights, since the network structure is different.
      3. I think it is difficult to calculate thickness using only YOLO. Maybe you can do this using semantic segmentation, but it will be computationally very demanding, and it will not work on a Jetson Nano or Raspberry Pi.
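
      A rough sketch of the height/width calculation being discussed, using the box corners and the depth at the box center (the intr.fx / intr.ppx naming follows this thread; the function itself is an illustration, not code from the tutorial):

      # Sketch: approximate real-world height and width of a detected object from
      # its bounding box (top, left, bottom, right) and the depth at the box center.
      def box_size_in_meters(depth_frame, intr, top, left, bottom, right):
          u_c = int((left + right) / 2)
          v_c = int((top + bottom) / 2)
          z = depth_frame.get_distance(u_c, v_c)   # depth at the box center, meters

          # Back-project the vertical and horizontal extents at that depth.
          height = z * (bottom - top) / intr.fy
          width = z * (right - left) / intr.fx
          return height, width, z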

    • @niranjansujay8487 2 years ago

      @@robotmania8896 Thank you for explaining the height theory; I will test this out and let you know the results!
      2. I tried to use your tiny-YOLOv3 tutorial, which was indeed helpful, but instead of cloning the keras-yolo3 repo I found another Keras YOLOv4 repo on the internet and cloned it. Following your tutorial, I then tried to convert the YOLOv4 weights (also downloaded from the internet) to Keras format, but it failed to convert to h5 format; there was a constant failure error whenever I tried it. I even tried deleting/uninstalling all the libraries and then reinstalling them as in the tutorial. I don't know whether there is any additional library needed for that repo's convert.py to work. Do you have any idea how to go about "converting yolov4 weights and cfg files to h5 format"?
      3. Yes, I thought so too... do you think a Jetson TX2i/Xavier would be able to handle semantic segmentation?