Robotics with Sakshay
Joined 12 Jun 2018
All about Robotics:
- Mechanics
- Sensing
- Localization
- Path Planning and Control
- Computer Vision
- Machine Learning
DDPG | Panda Robot Arm | Deep Reinforcement Learning
DDPG (Deep Deterministic Policy Gradient) is a reinforcement learning technique for continuous action spaces that combines Deep Q Learning and Policy Gradients. DDPG is an Actor-Critic based algorithm, where the Actor learns the optimal policy to determine the next action in a state and the Critic acts as a Q-value network to score the actions generated by the Actor. In this video, we apply the DDPG algorithm to the Robot Reacher task using the Panda Robot Arm.
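A minimal sketch of the two networks and their update rules, assuming PyTorch; the layer sizes, the `ddpg_update` helper and the hyperparameters are illustrative, not the notebook's exact code.

```python
# Minimal DDPG actor/critic sketch (PyTorch assumed; sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Maps a state to a deterministic action in [-max_action, max_action]."""
    def __init__(self, state_dim, action_dim, max_action):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)

class Critic(nn.Module):
    """Scores a (state, action) pair with a single Q-value."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=1))

def ddpg_update(batch, actor, critic, actor_t, critic_t, opt_a, opt_c, gamma=0.99):
    # reward and done are column tensors of shape [batch, 1].
    state, action, reward, next_state, done = batch
    # Critic target: r + gamma * Q_target(s', mu_target(s')).
    with torch.no_grad():
        target_q = reward + gamma * (1 - done) * critic_t(next_state, actor_t(next_state))
    critic_loss = F.mse_loss(critic(state, action), target_q)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    # The actor ascends the critic's score of its own actions.
    actor_loss = -critic(state, actor(state)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
```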
Feel free to leave a comment or message me on Twitter/LinkedIn in case of any questions, doubts, suggestions or improvements.
Twitter: MahnaSakshay
LinkedIn: www.linkedin.com/in/sakshaymahna/
Links
Notebook Code: www.kaggle.com/code/sakshaymahna/panda-robot-ddpg/notebook
Deep Reinforcement Learning Playlist: th-cam.com/play/PL0sla3wvhSnblecvfPdyQXBJKGFeBxEkZ.html
Panda Gym Environment: panda-gym.readthedocs.io/en/latest/
DDPG Blog: towardsdatascience.com/deep-deterministic-policy-gradient-ddpg-theory-and-implementation-747a3010e82f
DDPG Paper: arxiv.org/abs/1509.02971
Views: 4,109
Videos
Unboxing the TortoiseBot Robot from RigBetel Labs
1.4K views · 2 years ago
In this video, we are going to unbox and assemble the TortoiseBot Robot from RigBetel Labs. TortoiseBot is a minimalistic mobile robot that runs on ROS - Robot Operating System. The complete robot is Open Source. It can do typical Mobile Robot tasks like Mapping, Localization and Navigation. Buy TortoiseBot Here: rigbetellabs.com/shop Feel free to leave a comment or message me on Twitter/Linked...
Car Following | OpenCV Object Tracker | Machine Learning in ROS
2K views · 2 years ago
Object Tracking is the task of locating and keeping track of a moving object in a video. OpenCV provides us with a number of built-in functions for Object Tracking. In this video, we program an application for a Self Driving Car in ROS (Robot Operating System) to follow the car driving in front of it using Object Tracking techniques. Feel free to leave a comment or message me on Twitter/Link...
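A minimal sketch of OpenCV's built-in tracker API as one way to do this; the CSRT tracker, the video filename and the hard-coded bounding box are assumptions, and on some OpenCV builds the constructor lives under `cv2.legacy`.

```python
# Minimal object-tracking loop with an OpenCV built-in tracker (CSRT assumed).
import cv2

cap = cv2.VideoCapture("car_front.mp4")  # hypothetical input video
ok, frame = cap.read()

# Initial bounding box (x, y, w, h) around the car to follow; hard-coded here,
# in practice it could come from a detector or cv2.selectROI().
bbox = (200, 150, 120, 80)

tracker = cv2.TrackerCSRT_create()  # cv2.legacy.TrackerCSRT_create() on some versions
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, bbox = tracker.update(frame)  # success flag and updated box
    if ok:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```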
DQN | Self Driving Car on Highway | Deep Reinforcement Learning
2K views · 2 years ago
Q Learning is one of the most popular Reinforcement Learning algorithms. Q Learning works by learning the Q-Value function for a given environment and using that to derive the optimal policy. Deep Q Learning makes use of Deep Neural Networks to estimate the Q-Value function in the classical Q Learning technique. In this video, we apply the Deep Q Learning technique to teach a Self Driving Car ...
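A hedged sketch of the core of Deep Q Learning, assuming PyTorch: a network that outputs one Q-value per discrete action and the TD target computed with a separate target network. The `QNetwork` architecture and `dqn_loss` helper are illustrative, not the video's exact code.

```python
# Sketch of the Deep Q Learning target computation (PyTorch assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Maps a state to one Q-value per discrete action (e.g. lane changes, speed)."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state):
        return self.net(state)

def dqn_loss(batch, q_net, target_net, gamma=0.99):
    # action is a 1-D LongTensor of action indices; reward/done are 1-D floats.
    state, action, reward, next_state, done = batch
    # Q(s, a) for the actions actually taken.
    q_sa = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
    # TD target: r + gamma * max_a' Q_target(s', a'), zeroed past terminal states.
    with torch.no_grad():
        target = reward + gamma * (1 - done) * target_net(next_state).max(dim=1).values
    return F.smooth_l1_loss(q_sa, target)
```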
Obstacle Avoidance | Neural Networks | Machine Learning in ROS
6K views · 2 years ago
Neural Networks are a Supervised Learning based Machine Learning technique. In this video, we program the Data Collection Pipeline and train the Neural Network model to avoid obstacles in the environment. The TurtleBot3 robot is used for this task in a ROS (Robot Operating System) based simulation. Feel free to leave a comment or message me on Twitter/LinkedIn in case of any questions, doubts, ...
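A rough sketch of the data-collection half of such a pipeline, assuming rospy and the usual TurtleBot3 topic names (`/scan`, `/cmd_vel`); the node name and CSV format are illustrative, not the video's exact code.

```python
# Data-collection node sketch: log (laser scan, velocity command) pairs
# while teleoperating the robot, for supervised training later.
import csv
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

latest_cmd = Twist()

def cmd_callback(msg):
    # Remember the most recent teleop command as the label.
    global latest_cmd
    latest_cmd = msg

def scan_callback(msg):
    # One training row: laser ranges as inputs, current command as the label.
    writer.writerow(list(msg.ranges) + [latest_cmd.linear.x, latest_cmd.angular.z])

if __name__ == "__main__":
    rospy.init_node("obstacle_avoidance_data_collector")
    with open("training_data.csv", "w", newline="") as f:
        writer = csv.writer(f)
        rospy.Subscriber("/cmd_vel", Twist, cmd_callback)
        rospy.Subscriber("/scan", LaserScan, scan_callback)
        rospy.spin()
```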
Basic Concepts | FrozenLake | Deep Reinforcement Learning
1.1K views · 2 years ago
Reinforcement Learning is a paradigm of Machine Learning algorithms that work on the principle of Learning by Doing. Reinforcement Learning uses several basic ideas about the Agent, Environment, States, Actions, Observations, Policy and Value Functions. The complete setting of Reinforcement Learning, along with these concepts, is discussed. Feel free to leave a comment or message me on Twitter/...
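These concepts map directly onto the basic agent-environment loop; a minimal sketch on FrozenLake, assuming the Gymnasium API (a random policy stands in for a learned one).

```python
# The basic agent-environment loop on FrozenLake (Gymnasium API assumed).
import gymnasium as gym

env = gym.make("FrozenLake-v1")
observation, info = env.reset()      # the environment's starting state

episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()            # a random policy for illustration
    observation, reward, terminated, truncated, info = env.step(action)
    episode_return += reward                      # rewards accumulate into the return
    done = terminated or truncated

print("Episode return:", episode_return)
env.close()
```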
Custom Mobile Robot | Part - 4 | ROS Learning Series
963 views · 2 years ago
This video is Part 4 of the ROS Learning Series. In this video, we discuss how to create a Custom Mobile Robot in ROS. Mainly, the dynamics involved in building the URDF file are discussed. The simulated robot is a differential drive robot with a laser sensor. Feel free to leave a comment or message me on Twitter/LinkedIn in case of any questions, doubts, suggestions or improvements. Twitter: t...
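The URDF itself is XML, but the differential-drive relation underlying it can be sketched in a few lines of Python; the wheel radius and separation below are illustrative values, not the video's robot.

```python
# Differential-drive kinematics behind the URDF: convert a body command
# (linear v, angular w) into left/right wheel angular velocities.
WHEEL_RADIUS = 0.033      # metres (illustrative)
WHEEL_SEPARATION = 0.160  # metres (illustrative)

def body_to_wheel_speeds(v, w):
    """Return (left, right) wheel angular velocities in rad/s."""
    v_left = v - w * WHEEL_SEPARATION / 2.0
    v_right = v + w * WHEEL_SEPARATION / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

# Example: drive forward at 0.2 m/s while turning at 0.5 rad/s.
print(body_to_wheel_speeds(0.2, 0.5))
```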
ROS 2 | Part - 6 | ROS Learning Series
697 views · 2 years ago
This video is Part 6 of the ROS Learning Series. In this video, we discuss ROS 2 and its differences from ROS 1. We discuss all the different commands in ROS 2 and run a simple Dolly example in ROS 2. Feel free to leave a comment or message me on Twitter/LinkedIn in case of any questions, doubts, suggestions or improvements. Twitter: MahnaSakshay LinkedIn: www.linkedin.com/in/saksha...
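For contrast with ROS 1, a minimal ROS 2 node written with rclpy; the node name and the `/dolly/cmd_vel` topic are assumptions for illustration.

```python
# A minimal ROS 2 node with rclpy, publishing velocity commands.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class DollyDriver(Node):
    def __init__(self):
        super().__init__("dolly_driver")
        self.publisher = self.create_publisher(Twist, "/dolly/cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.publish_cmd)  # 10 Hz

    def publish_cmd(self):
        msg = Twist()
        msg.linear.x = 0.2   # drive forward slowly
        self.publisher.publish(msg)

def main():
    rclpy.init()
    node = DollyDriver()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```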
MoveIt! Robot Manipulators | Part - 5 | ROS Learning Series
2.8K views · 2 years ago
This video is Part 5 of the ROS Learning Series. In this video, we discuss how to simulate Robot Arms in ROS using the ROS package MoveIt! We create a simple Motion Planning example using the Panda Robot Arm and then simulate Grasping in a Pick and Place scenario. Feel free to leave a comment or message me on Twitter/LinkedIn in case of any questions, doubts, suggestions or improvements. Twitter...
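A minimal sketch of commanding the arm through the `moveit_commander` Python interface (ROS 1 assumed); `panda_arm` is the Panda's usual planning-group name, and the target pose values are illustrative.

```python
# Minimal MoveIt! motion-planning sketch with moveit_commander.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("panda_motion_planning_demo")

arm = moveit_commander.MoveGroupCommander("panda_arm")

# Target end-effector pose (values are illustrative).
target = Pose()
target.position.x = 0.4
target.position.y = 0.1
target.position.z = 0.4
target.orientation.w = 1.0

arm.set_pose_target(target)
success = arm.go(wait=True)   # plan and execute
arm.stop()
arm.clear_pose_targets()
rospy.loginfo("Motion %s", "succeeded" if success else "failed")
```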
ROS Concepts | Part - 3 | ROS Learning Series
711 views · 2 years ago
This video is Part 3 of the ROS Learning Series. In this video, we discuss the different ROS concepts, like ROS Graph, Publisher-Subscriber, Services and Actions. We also simulate an Obstacle Avoidance based TurtleBot3 AI using the concepts discussed. Feel free to leave a comment or message me on Twitter/LinkedIn in case of any questions, doubts, suggestions or improvements. Twitter: twitter.co...
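A minimal publisher-subscriber sketch in the spirit of that example, assuming rospy and the default TurtleBot3 topics; the distance threshold and speeds are illustrative.

```python
# Publisher-subscriber sketch: read the laser scan, publish a velocity command
# that turns away when something is close ahead.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

def scan_callback(scan):
    cmd = Twist()
    front = min(scan.ranges[0:15] + scan.ranges[-15:])  # narrow cone ahead
    if front < 0.5:          # obstacle closer than 0.5 m: rotate in place
        cmd.angular.z = 0.5
    else:                    # otherwise drive forward
        cmd.linear.x = 0.2
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("obstacle_avoidance")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rospy.Subscriber("/scan", LaserScan, scan_callback)
    rospy.spin()
```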
Python for AI | Live Session
305 views · 2 years ago
The recording of the Live Session: Python for AI. The session covers What is AI and Search Algorithms in AI. The Binary Search algorithm is also implemented in Python, introducing the basic concepts of Python required along the way. Feel free to leave a comment or message me on Twitter/LinkedIn in case of any questions, doubts, suggestions or improvements. Twitter: MahnaSaksha...
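For reference, a generic Binary Search in Python; variable names are my own, not necessarily the ones used in the session.

```python
# Binary search over a sorted list.
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1        # target can only be in the right half
        else:
            high = mid - 1       # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # -> 3
```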
Simulating TurtleBot3 Robot | Part - 2 | ROS Learning Series
2.3K views · 2 years ago
Python for AI Live Session, on 22nd April 2022 at 9.30 PM IST. Sign Up and then Register on this Link: www.lighthall.co/class/30791c3d-bbe4-4a4e-8357-319560dba59a This video is Part 2 of the ROS Learning Series. In this video, we discuss the different components of Mobile Robot: Localization, Mapping and Navigation, and simulate the TurtleBot3 Robot to see these components in action. Feel free ...
Reinforcement Learning | TurtleBot3 Robot | Motion Planning for Robots
4.1K views · 2 years ago
Reinforcement Learning is a paradigm of Machine Learning algorithms that work on the principle of Learning by Doing. Q Learning is one of the most popular Reinforcement Learning algorithms. The algorithm uses Bellman Update Equations to plan paths given the start and goal positions. The algorithm has been demonstrated on the TurtleBot3 robot in a ROS (Robot Operating System) based simulation. Fee...
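A compact sketch of the tabular Q Learning loop with the Bellman update; the grid size, learning rate and epsilon are illustrative, not the video's exact setup.

```python
# Tabular Q Learning with the Bellman update (illustrative values).
import random
import numpy as np

N_STATES, N_ACTIONS = 25, 4            # e.g. a 5x5 grid with 4 moves
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))

def q_update(state, action, reward, next_state):
    """One Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

def choose_action(state):
    """Epsilon-greedy action selection over the current Q table."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(Q[state]))
```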
Introduction to ROS | Part - 1 | ROS Learning Series
4.2K views · 2 years ago
This video is Part 1 of the ROS Learning Series. This video introduces ROS (Robot Operating System), its use cases and applications. Towards the end, installing ROS on a Windows system and running Gazebo and RViz are also demonstrated. Feel free to leave a comment or message me on Twitter/LinkedIn in case of any questions, doubts, suggestions or improvements. Twitter: MahnaSakshay L...
DWA Planner | Husky Robot | Motion Planning for Robots
8K views · 2 years ago
The DWA Planner is a popular local path planning algorithm. The algorithm is quite efficient and is used as a default planner in the ROS Navigation Stack. The algorithm utilizes robot kinematics to generate various candidate paths, and then uses an optimization function to find the best trajectory to follow. The algorithm is discussed in the video and has been demonstrated on the Husky UGV r...
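A simplified sketch of the idea (not the ROS implementation): sample (v, w) candidates within the robot's limits, roll each one out with the unicycle model, and score the resulting trajectories; the weights, limits and obstacle check are illustrative.

```python
# Simplified DWA sketch: sample velocity candidates, simulate short trajectories,
# and score them (weights and limits are illustrative).
import math
import numpy as np

def simulate(x, y, theta, v, w, dt=0.1, steps=20):
    """Roll out the (v, w) command for steps*dt seconds and return the path."""
    path = []
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
        path.append((x, y, theta))
    return path

def score(path, goal, obstacles):
    """Higher is better: progress toward the goal plus clearance from obstacles."""
    gx, gy = goal
    x, y, _ = path[-1]
    goal_cost = -math.hypot(gx - x, gy - y)
    clearance = min(
        math.hypot(ox - px, oy - py)
        for (px, py, _) in path for (ox, oy) in obstacles
    )
    if clearance < 0.2:          # trajectory would hit an obstacle
        return -float("inf")
    return goal_cost + 0.5 * clearance

def dwa_step(pose, goal, obstacles, v_max=0.5, w_max=1.5):
    best, best_cmd = -float("inf"), (0.0, 0.0)
    for v in np.linspace(0.0, v_max, 6):          # candidate linear velocities
        for w in np.linspace(-w_max, w_max, 11):  # candidate angular velocities
            path = simulate(*pose, v, w)
            s = score(path, goal, obstacles)
            if s > best:
                best, best_cmd = s, (v, w)
    return best_cmd

print(dwa_step((0.0, 0.0, 0.0), goal=(2.0, 1.0), obstacles=[(1.0, 0.2)]))
```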
Virtual Force Field | Formula 1 Robot Car | Motion Planning for Robots
1K views · 2 years ago
Frenet Frames | Self Driving Cars | Motion Planning for Robots
4.8K views · 2 years ago
RRT Planner | Spot Robot | Motion Planning for Robots
2.4K views · 2 years ago
Genetic Algorithm | UR5 Robot | Motion Planning for Robots
3.6K views · 2 years ago
A* (A Star) Search | TurtleBot3 Robot | Motion Planning for Robots
10K views · 2 years ago
Siamese Networks | Face Recognition | Computer Vision on Humans
9K views · 2 years ago
DCGAN | Fake Face Generator | Computer Vision on Humans
1.5K views · 2 years ago
Part Affinity Fields | Human Pose Estimation | Computer Vision on Humans
1.8K views · 2 years ago
Local Binary Features | Face Landmark Detection | Computer Vision on Humans
453 views · 2 years ago
Haar Cascade | Human Body and Face Detection | Computer Vision on Humans
2K views · 2 years ago
UNetXST | Camera to Bird's Eye View | Perception for Self Driving Cars
6K views · 2 years ago
SFA 3D | 3D Object Detection | Perception for Self Driving Cars
2.7K views · 2 years ago
Multi Task Attention Network (MTAN) | Multi Task Learning | Perception for Self Driving Cars
1.1K views · 2 years ago
KITTI 3D Data Visualization | Homogenous Transformations | Perception for Self Driving Cars
5K views · 2 years ago
Deep SORT | Object Tracking | Perception for Self Driving Cars
16K views · 2 years ago
Bro, when will you upload the next video?? 😢
It's just not working. In each of your videos there are lots of cars that are not detected, and if they are detected, some are later lost. Tracking through occlusion is also evidently not working.
What is the application called?
Khouya l hendi lizoum
Triplet loss is also used for the same purpose.
Can you help with the BEV data of the dataset?
How can I record data for the dataset?
Thank you very much. You really helped me. Godsend
Really helpful sir. Do you have any idea about these same trajectories using MATLAB?
Mind blowing Work!
Thanks
Sir, I am making a facial recognition attendance system using a Siamese network. The captured images train successfully, but when I try to track an image it does not recognize the registered person. Can you please help me with this? I'll pay charges too, but please don't ignore my comment, I've been in trouble for 2 days.
Your videos are truly gems! Having the code in front of you and explaining it line by line, no course will do this!
Please help me, there is some problem with the root directory path to the dataset.
One of the best 👍🏿
Are multi-task learning and Vision Transformers the same thing?
Good explanation, thanks, upvoted
What input should we give it to run the code?
Great video man, keep up the good work
Absolutely Amazing
How do I combine this with a global path planning algorithm like A* for obstacle avoidance?
How do I set the A* algorithm as my global path planning algorithm?
Awesome, thanks!
This is very useful content. Thank you for everything.
Is it possible to combine A* and RRT for mobile robots to build a hybrid planner?
Not reaching the goal?
thankyouu
Bruh, how do I track a recognized person, with his name?
I was thinking of making a robotics channel, and then I found this guy with 78 videos already. Are you looking for a collaboration?
Thank you for this nice course. Is there any way to also get the efficiency on the test data as a value in the Jupyter notebook?
Should I copy the code and paste it in order into one .py file, or what? Please clarify.
Thanks bro for this.
Unable to download the weights. Can you please help?
I'm trying to make an LFR (line follower robot). So far I have made it run on a black and white line, handle all complex and simple turns, line discontinuities, etc. But I'm facing a problem with loops: how do I make an LFR detect a loop?
I am learning JavaScript; can I make this using TensorFlow.js?
th-cam.com/video/h5vJjWGCtsI/w-d-xo.html So the graph at the above time in the video shows continuous joint angles along with r, p, y. Can there be an instant in time (0 to 1 second) in the above graph where that particular set of joint angles (and r, p, y) is not feasible, meaning a singularity condition appears? In between, good effort on the topic and videos.
Can I use a pretrained model and make an application where I allow students to register their face, and once they are done, they can log in to the app and start verifying their face? I want the app to work in a way where I don't have to take time to collect facial data from every student in the class; I want the data collection to be done when the student registers with the app.
Brother, can you suggest any resources for doing the same? I want to implement my path planning in ROS like you did, but I don't know how to start. I know the algorithm and implemented it in C++ without ROS, but I don't know how to integrate it like you did. Can you help me, brother?
I want some resources to refer to.
How is the truncation calculated during annotation? Is it based on visible pixels or on the actual 3D box?
Excellent effort, thanks for sharing.
Tell me, I did everything as you said, but the following error is displayed and I haven't been able to solve it for a long time. What can I do? /opt/ros/noetic/share/MLinROS/car_ws/src/perception/scripts$ python3 line_follower.py Traceback (most recent call last): File "lane_follower.py", line 5, in <module> from prius_msgs.msg import Control ModuleNotFoundError: No module named 'prius_msgs'
pp_msgs is not present at lerp_motion_planner
model_history = model.fit(dataset['train'].repeat(), epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, validation_data = dataset["val"], validation_steps=STEPS_PER_EPOCH// 10, callbacks = callbacks) ValueError: Unexpected value for `steps_per_epoch`. Received value is 0. Please check the docstring for `model.fit()` for supported values.
I have an environment with multiple continuous actions; what should I do?
How can I use it with ROS 2?
Brother, there are a lot of errors in this.
Why has nobody collected data from the real world? You could attach a pole with a downward-facing camera on top of a bunch of teslas and drive around collecting data. Then you have all the ground truth you need for making 360 parking cameras much better.
Good Job! Please upload more.
But why can we use it semantically?
Please add subtitles.