Bharath kumar
India
Joined 19 Feb 2011
Computer Vision, Deep Learning, IoT, AI, STEM, medical image processing
Visual Odometry
@github.com/bharath5673/testing.git
Visual odometry is the process of determining the location and orientation (trajectory) of a camera by analyzing a sequence of images. It is used in a variety of applications, such as mobile robots, self-driving cars, and unmanned aerial vehicles. Odometry in robotics is a more general term, and often refers to estimating not only the distance traveled but the entire trajectory of a moving robot.
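The frame-to-frame pose chaining at the heart of visual odometry can be sketched as follows. This is an illustrative sketch, not the code from the linked repository: the per-frame relative rotation `R` and translation `t` would normally come from feature matching and essential-matrix decomposition (typically via OpenCV), and are assumed given here.

```python
import numpy as np

def chain_poses(relative_motions):
    """Accumulate per-frame relative motions (R, t) into a global trajectory.

    Each element is (R, t): rotation and translation of frame k relative
    to frame k-1. Returns camera positions in the world frame.
    """
    R_world = np.eye(3)           # global orientation
    t_world = np.zeros(3)         # global position
    trajectory = [t_world.copy()]
    for R, t in relative_motions:
        t_world = t_world + R_world @ t   # move by t expressed in world coords
        R_world = R_world @ R             # compose orientations
        trajectory.append(t_world.copy())
    return np.array(trajectory)

# toy example: turn 90 degrees about z each step while moving 1 unit forward,
# which traces out a unit square and returns to the start
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
traj = chain_poses([(Rz, t)] * 4)
```

Drift in real VO comes from errors in each estimated (R, t) compounding through exactly this chaining step.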
Views: 104
Videos
gps demo
242 views · 8 days ago
Vehicle GPS Visualization using Folium for KITTI and nuScenes @ github.com/bharath5673/testing.git This script demonstrates how to visualize vehicle GPS trajectories on an interactive map using the Folium library. It supports datasets like KITTI and nuScenes, which provide GPS coordinates recorded during autonomous driving experiments. Overview GPS Data Extraction: The script reads GPS coordina...
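Before plotting on a Folium map, KITTI/nuScenes GPS fixes are often converted from latitude/longitude into local metric coordinates. A minimal sketch of that conversion, using an equirectangular approximation that is valid for short trajectories (this is an assumption, not necessarily what the linked script does):

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius in metres

def latlon_to_local_xy(lat, lon, lat0, lon0):
    """Project (lat, lon) to metres east/north of a reference fix (lat0, lon0)."""
    x = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_RADIUS_M
    return x, y

# sanity check: one degree of latitude is roughly 111 km
east, north = latlon_to_local_xy(49.0, 8.43, 48.0, 8.43)
```

For the map itself, the raw (lat, lon) list can be passed to `folium.PolyLine` and added to a `folium.Map` to get the interactive trajectory view.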
simple IPM (BEV)
63 views · months ago
Inverse Perspective Mapping (IPM) Tool @github.com/bharath5673/IPM-Mapping.git You can use this same video for testing with the code above; download the video and try it. Easy IPM, also known as Bird's Eye View (BEV). Overview: This repository provides a simple, easy-to-use tool for transforming a perspective-view image (e.g., a road scene) into a Bird's Eye View (BEV), also known as Inverse Perspe...
simple IPM (BEV)
77 views · months ago
Inverse Perspective Mapping (IPM) Tool @github.com/bharath5673/IPM-Mapping.git Easy IPM, also known as Bird's Eye View (BEV). Overview: This repository provides a simple, easy-to-use tool for transforming a perspective-view image (e.g., a road scene) into a Bird's Eye View (BEV), also known as Inverse Perspective Mapping (IPM). BEV is widely used in applications like autonomous driving, robo...
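The core of IPM is a plane-to-plane homography: pick four points on the road in the camera image and their desired positions in the top-down view, then warp. An OpenCV tool would typically use `cv2.getPerspectiveTransform` plus `cv2.warpPerspective`; below is a dependency-free sketch of the same four-point solve (the point coordinates are illustrative, not taken from the repository):

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 3x3 homography H mapping four src points to four dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h1*x + h2*y + h3) / (h7*x + h8*y + 1), similarly for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Map an image point through H with perspective division."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# trapezoid on the road (image pixels) -> rectangle in the BEV image
src = [(300, 400), (340, 400), (100, 600), (540, 600)]
dst = [(0, 0), (200, 0), (0, 400), (200, 400)]
H = homography_from_points(src, dst)
```

With OpenCV, the same `H` would then be applied to the whole frame via `cv2.warpPerspective(frame, H, (bev_width, bev_height))`.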
autonomous steering and lane detection on nuScenes city traffic scenes
83 views · months ago
#realtime #Crop and #PIP tool @ github.com/bharath5673/crop_and_pip.git A simple python script that crops a section from a video and overlays a Picture-in-Picture (PIP) image, allowing for seamless video editing with custom regions of interest (ROI) and interactive overlays. #nvidia #nvidiartx #jetson #python #deeplearning #deepstream #edgecomputing #edgeai #yolo #yolov5 #github #objectdetectio...
camera and lidar fusion demo ( RGB_PCD )
28 views · months ago
#Realtime #Crop and #PIP tool @ github.com/bharath5673/crop_and_pip.git A simple python script that crops a section from a video and overlays a Picture-in-Picture (PIP) image, allowing for seamless video editing with custom regions of interest (ROI) and interactive overlays. #nvidia #nvidiartx #jetson #python #deeplearning #deepstream #edgecomputing #edgeai #yolo #yolov5 #github #objectdetectio...
Crop and PIP
62 views · months ago
#Realtime #Crop and #PIP tool @ github.com/bharath5673/crop_and_pip.git A simple python script that crops a section from a video and overlays a Picture-in-Picture (PIP) image, allowing for seamless video editing with custom regions of interest (ROI) and interactive overlays. #nvidia #nvidiartx #jetson #python #deeplearning #deepstream #edgecomputing #edgeai #yolo #yolov5 #github #objectdetectio...
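The crop-and-overlay operation the tool describes boils down to two array slices per frame. A hedged sketch in plain NumPy (OpenCV frames are NumPy arrays, so the same slicing applies; the coordinates are illustrative):

```python
import numpy as np

def crop(frame, x, y, w, h):
    """Return a copy of the region of interest frame[y:y+h, x:x+w]."""
    return frame[y:y + h, x:x + w].copy()

def overlay_pip(frame, pip, x, y):
    """Paste a picture-in-picture image onto frame at (x, y), in place."""
    h, w = pip.shape[:2]
    frame[y:y + h, x:x + w] = pip
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a video frame
roi = crop(frame, 100, 50, 200, 150)              # 150x200 crop
pip = np.full((120, 160, 3), 255, dtype=np.uint8) # white thumbnail
out = overlay_pip(frame, pip, 10, 10)
```

In a real script this would run inside the `cv2.VideoCapture` read loop, with the ROI selected interactively.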
BEV 360 Top-Down View on #Nuscenes 🚗
156 views · months ago
Take a moment to enjoy the beauty of computer vision with nuScenes 6-camera fusion: 3D bounding boxes visualized in the Bird's Eye View (BEV). lnkd.in/dvjEuC2m Try 3D Cube R-CNN @ lnkd.in/gH2gNy-T YOLOv8-3D is a #LowCode, lightweight, user-friendly library designed for efficient 2D and 3D bounding box object detection. yolov8-3d @ lnkd.in/dqEhTYd2 Integrated 3D Bounding Box Detection and D...
BEV 360 Top-Down View on #Nuscenes 🚗
51 views · months ago
Take a moment to enjoy the beauty of computer vision with nuScenes 6-camera fusion: 3D bounding boxes visualized in the Bird's Eye View (BEV). lnkd.in/dvjEuC2m Try 3D Cube R-CNN @ lnkd.in/gH2gNy-T YOLOv8-3D is a #LowCode, lightweight, user-friendly library designed for efficient 2D and 3D bounding box object detection. yolov8-3d @ lnkd.in/dqEhTYd2 Integrated 3D Bounding Box Detection and D...
3D Bounding Boxes in Top-Down BEV View on Nuscenes
58 views · months ago
Take a moment to enjoy the beauty of computer vision with accurate 3D bounding boxes visualized in the Bird's Eye View (BEV). 🌐✨ These representations not only demonstrate precision in object detection but also provide a top-down perspective that enhances our understanding of spatial relationships in real-world environments. 🔍 Whether it's pedestrians, vehicles, or obstacles, the 3D bounding b...
3D Bounding Boxes in Top-Down BEV View
369 views · months ago
Take a moment to enjoy the beauty of computer vision with accurate 3D bounding boxes visualized in the Bird's Eye View (BEV). 🌐✨ These representations not only demonstrate precision in object detection but also provide a top-down perspective that enhances our understanding of spatial relationships in real-world environments. 🔍 Whether it's pedestrians, vehicles, or obstacles, the 3D bounding b...
🌟 Object Detection with Depth (Distance) Estimation 🌟
505 views · months ago
🌟 Object Detection with Depth (Distance) Estimation 🌟
🚗 Autonomous Driving with Tesla Vision on CARLA Simulator 🌍
169 views · months ago
🚗 Autonomous Driving with Tesla Vision on CARLA Simulator 🌍
DeepStream Simplified (easy detections, segmentations, face, pose, multi-model)
96 views · a year ago
DeepStream Simplified (easy detections, segmentations, face, pose, multi-model)
realtime simple YOLOv8-3D-BEV-Tracking demo
159 views · a year ago
realtime simple YOLOv8-3D-BEV-Tracking demo
Yeah, kind of useless without sharing the code for the hobbyists.
search.app/ffpSuXHQpQZhcwFu6
cool
Cool, bro. This is something I'm looking for, to estimate road width from video to help with visual inspection for a road condition survey. Maybe you can make a tutorial on how to run the code; I don't know Python, but I'm interested to try. Thank you, bro.
Actually, I'm not a tutor, but if you look into that code you can definitely see how it works. It's simple, easy code.
Code? The code you gave is for a different function.
coollll
Can you tell me how you drew the BEV labels?
Hi Bharath, I'm really impressed by your DeepStream multi-model repository. I want to use your face module in my project. I have tried to run the face module but I am getting some issues. I installed DeepStream 6.3 on Ubuntu 20.04 per the instructions in the repository. Brother, I really need your help, so please reply and let's connect to solve my issue.
Sure
@@bharath5673__ Brother, can we connect on Google Meet?
Bro, please share a tutorial or the GitHub for this.
Hi, where can find this code.
If you make tutorials and share the source code for these kinds of videos, only then will you get subs and views.
indeed
@@bharath5673__ I have subscribed to you, waiting to learn a lot about this stuff, and that steering control video was cool.
Hey can you provide code for this
Greetings, dear Bharath kumar! Could you tell me which specific version of OpenVINO you used to make this, please?
tested on openvino-dev==2023.3.0
try this github.com/bharath5673/OpenVINO-ADAS.git
hi sir i need Source code for this project
Hi, Do you plan to share the code? It looks awesome.
While running the test file I got an error: no such file exists.
Brother, please tell us how you did it; the installation is giving us trouble. Please explain.
Is it really balancing, or did you just place two pins down there? 😂
Thanks for your video
How to calibrate the scale and shift in the MiDaS model? It is a relative depth output.
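One common answer to the scale/shift question above: MiDaS outputs are only defined up to an unknown scale `s` and shift `t`, which can be recovered by least squares against a few known metric depths (from LiDAR, stereo, or a measured object). A sketch, assuming you have sparse ground-truth values `d_metric` aligned with the corresponding predictions `d_rel` (note that MiDaS actually predicts relative *inverse* depth, so in practice the fit is often done in inverse-depth space):

```python
import numpy as np

def fit_scale_shift(d_rel, d_metric):
    """Least-squares s, t such that s * d_rel + t ~= d_metric."""
    A = np.stack([d_rel, np.ones_like(d_rel)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, d_metric, rcond=None)
    return s, t

# synthetic check: metric depth generated with s = 2, t = 1 is recovered exactly
d_rel = np.array([0.1, 0.4, 0.7, 1.0])
d_metric = 2.0 * d_rel + 1.0
s, t = fit_scale_shift(d_rel, d_metric)
```

Once fitted, `s * prediction + t` gives metric estimates for the whole depth map, as long as the scene is similar to where `s, t` were calibrated.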
Hello, good afternoon. I tried to do all the steps, but my computer turned off. What are the minimum requirements to run ORB-SLAM3?
Please provide source code ?
It's in the description box
do u have any github repo ?
In the description box.
Hello Bharath, I am working on depth estimation. I would like to ask if you have a GitHub repo for this experiment. Thanks.
hey, can u provide the source code?
Hi, what are you using to find the orientation and dimensions of the 3D boxes? Your work seems to run close to real time (I'm guessing it's running at 10-12 fps).
nice, what kind of gpu are you using? and which midas model you are using?
I trained my dataset on YOLOv8n, and it's for my final year project.
I am using Nvidia CUDA Version: 12.3
Can I get a reference for your final project, bro? @@YoussoufaAlhadji-w6m
can i get the source code
Anna put a video or short of the thing beyond
Can you please share the code
Where is the slam demo?
What you're looking at is the SLAM demo.
@@bharath5673__ interesting. The movement is too little to notice. Where is the odometry output?
@@KensonLeung0 You have to look at the API; inside, you can uncomment those lines and visualize it. It's plotted with pyplot.
Hi, how can I make that useful in my robot
Please provide me the source code!!!
where is the code
Code is inside the system 🤔
Hi, can you share the code?
How is the performance of YOLOv7-tiny in real-time testing on a CPU?
It depends on the CPU.
Hey, just a heads up: your code has a typo that completely changes how the performance is perceived. When you compute the parsing time, you take the current time in seconds minus the start time, also in seconds, so the result is in seconds, not milliseconds. In reality, instead of 0.14 ms you're getting 0.14 s, i.e. 140 ms, which comes to around 7 fps. Still, the guide was helpful; I couldn't find anything else, I got my models up and running with no issues, and my custom model now runs at 110 ms, beating the 600 ms I was getting before. A great improvement.
noted .. i will update that..
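The fix the comment describes is a units conversion: `time.time()` differences are in seconds, so multiply by 1000 before labeling the result as milliseconds (using `time.perf_counter()` for better resolution). A minimal sketch, independent of the original script:

```python
import time

def timed_ms(fn, *args):
    """Run fn(*args) and return (result, elapsed time in milliseconds)."""
    start = time.perf_counter()                           # seconds
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0   # seconds -> ms
    return result, elapsed_ms

_, ms = timed_ms(time.sleep, 0.05)   # sleeps ~50 ms, so ms is roughly 50
```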
Could you share the source code?
Hi, check out my new repo, much easier and more robust: github.com/bharath5673/StrongSORT-YOLO
Would you mind sharing the source code?
Yeah. I am waiting for this code too
What hardware is this detection running on? Thank you!
i5, 8th gen
How do we install it on a CPU-based laptop, and how do we train and predict? Can you give a proper illustration of usage?
Lol, training would be a disaster on a CPU machine :p
Hello. Here is a blog, and you can use a free GPU on Google Colab. If you don't have a dataset, you can use one from the Roboflow platform too, and export it in YOLOv7 PyTorch format to train following the blog. Good luck: blog.roboflow.com/yolov7-custom-dataset-training-tutorial/
th-cam.com/video/wzhUPFV0b8M/w-d-xo.html
Can you explain this
But it is showing lane lines everywhere on the road 😬
Hi, which stepper motor should I buy?
For this type of build, a NEMA 17 is enough, and this one is also running on the same. But if you want to build bigger, or if you want to load some weight on them, you have to buy a NEMA 23 or 34 or another.
Sir, how can I contact you?
Why, sir? To laugh?
@@bharath5673__ Hehe, I need more info on this project, please.
@@thelaughtermedia9702 sure... Project linked below the video
Great work... please share your code.....
Bharath, did you get this rover to drive autonomously using the OAK-D?
Sir, can I get the source code for this, please?
Nice
Great work Bharath !