You've highlighted the system very well. It's especially encouraging to an older engineer who used to program in hex. Thanks very much!
Dave, I'm really glad that it was able to help you, I hope you enjoy the adventure! :)
"I can use this as data" hilarious!
You could train it to search for items as well, like missing keys: show it the keys, then hide them, put the bot in search mode, and it will notify you by phone or an alarm device.
I was going to say "ha, that's overtraining right there", but it turns out to work pretty well!! Very nice work!
I love the incoming light bulb threats. Great video
haha, thank you! :D
It's interesting how good the quality of your videos is and how well you explain stuff, but you haven't even reached 1K
I think you deserve more.
thank you Lila, I'll keep the videos coming regardless, I hope to hit 1k soon!
Awesome, I'm working on my turtlebot project, using ROS's Gazebo for simulation and A3C (a reinforcement learning algorithm) to train the bot. It saves tons of time by avoiding having to gather and label data.
jack flynn, interesting. How is the simulation environment generated? It would be cool to see a side by side comparison of both methods. I would imagine that the ideal training set is a combination of both sims and real data.
@@ZacksLab Well, the problem isn't really the simulation, it's the deep RL algorithms. DeepMind's research, from DQN to DDPG and A3C (deep RL methods), takes raw pixels as input and learns how to avoid obstacles and even navigate through a maze.
@@ZacksLab This is a video by DeepMind showing results on playing TORCS using the A3C method:
th-cam.com/video/0xo1Ldx3L5Q/w-d-xo.html
jack flynn So does the AI in the video you shared always stay inside the simulated environment? What happens when you put an AI that was trained strictly on simulated data into a physical device in the real world and it encounters scenery, lighting conditions, objects, and scenarios that the simulator wasn't able to provide?
The issue with training on data generated in simulators is that the real world throws scenarios at the AI that the simulations just can’t account for. Are you saying that the A3C method solves this issue?
@@ZacksLab Okay, I got it. Actually, RL's main idea is learning by interaction. It lets the agent (the Jetson bot in this case) try moves and gain rewards. If it hits an obstacle, the episode ends with a negative reward. The agent's goal is to maximize total reward, so over many episodes of learning it adjusts the parameters of the agent's NN.
This can be done in a simulated environment or in the real world. The agent is a policy network, which in A3C is also called the actor: when you input a state (an image from the camera), it outputs an action (left, right, forward, or stop). A3C is an RL method, and RL differs from supervised learning in that you don't need to give your learner labeled data.
Back to the simulated env: if training the bot in the real world is difficult, it can be done in a simulated env, and ROS (Robot Operating System, running on Ubuntu) provides the tools for that. Once your agent/actor performs well in the simulation, putting it into the real world is easy (ROS makes sure of it).
I did some experiments: after the robot is trained in the simulated env, it works on every kind of ground surface, maybe because the weights on the pixels related to the ground are small (still working on proving that).
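For anyone curious what the actor described above might look like in code, here is a minimal PyTorch sketch: a small CNN mapping a camera frame to four action logits, trained with a simplified single-worker, REINFORCE-style update rather than the full asynchronous A3C (no critic, no parallel workers). The Gym-style `env` object is a stand-in for whatever simulator wrapper you use.

```python
import torch
import torch.nn as nn

# Minimal actor: camera image in, action logits out (left, right, forward, stop)
class Actor(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=4), nn.ReLU(),
        )
        self.head = nn.Linear(32 * 13 * 13, n_actions)  # sized for a 224x224 input

    def forward(self, x):
        return self.head(self.conv(x).flatten(1))  # action logits

actor = Actor()
optimizer = torch.optim.Adam(actor.parameters(), lr=1e-4)

def run_episode(env, gamma=0.99):
    """One episode of a simplified REINFORCE-style update.
    `env` is assumed to expose reset()/step() like a Gym-style wrapper
    around the simulator and to return image states as 3x224x224 tensors."""
    log_probs, rewards = [], []
    state, done = env.reset(), False
    while not done:
        logits = actor(state.unsqueeze(0))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done = env.step(action.item())  # negative reward on collision
        rewards.append(reward)

    # Discounted returns, then a policy-gradient loss
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    loss = -(torch.stack(log_probs).squeeze() * returns).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```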
Of all the Jetson Nano videos I've seen, all I can say is this is like a practical demonstration, almost a real-world scenario. Congrats, bro! By the way, I hope you can add reading numbers like speed limit signs, simulating a car's speed limiter.
Thank you! Yes, adding road sign detection is actually what I want to work on next. I was thinking about putting a Nano and camera on my car dashboard to collect data and start working on sign and streetlight detection and interpretation.
@@ZacksLab This will be great and awesome. You deserve one subscriber, in 3..2..1, bell button clicked, done.
Thank you. Best demo I have seen, with information that really helps me move forward.
you're welcome!
Awesome stuff Zack. This one I understood a little more than the subscribe counter. Love the video structure and creativity!
This video is so well done, awesome job! I like the JetBot color scheme and Jupyter theme :)
Thanks so much! I ordered 3d printing filament just to get those colors for the chassis :)
I became a big fan of this channel! This is what I wanted to do.
So damn glad you started making videos, they’re really entertaining and inspiring.
Thank you Flavius, that means a lot to me!!
Great video Zach!
I'm looking to get into the NVidia Jetson Nano for signal processing. Nice to see how easy it is to use PyTorch to train the classifier, download it to the Jetson board, and run it. This example you gave is really cool. Liked. Subbed. Smashed the bell.
Thank you very much. I might build this as my first robot!
You’re welcome, let me know how it goes!
I'm loving the shirt man, best game ever!
omg I know, UO ruined every other game for me, nothing will ever compare. what server did you play on? I was on Atlantic from 2002-2004... I was on yamato before that because I had no idea what servers were when I was first starting and I just randomly chose one.
"breakdancing? ups.. I'll consider this a feature"
Went to the comments to see if somebody already commented :D
Wow, I want to invest in this product you engineered from Nvidia. That is complex code and you did an amazing job.
Ariel, thank you! The jetbot is an open source project built by the Nvidia community, I didn't personally design the Jetbot or the code used in this video, it's all available on the Jetbot github for anyone to use/experiment with!
I'm loving these videos
Hi, it's a great video. I just got a Nano and am wondering how to set up training to save my own data. Will you have a step-by-step video? Thanks
Hi Ryan, thank you! If you follow the collision avoidance example that is on the jetbot's github repo under NVIDIA-AI-IOT/jetbot/notebooks/collision_avoidance you will find a jupyter notebook called data_collection.ipynb. Launch this notebook on the jetbot and run through the code, if your jetbot hardware is set up correctly everything should go smoothly.
I can definitely do a step by step video on this but it will take me a bit to get it posted!
@@ZacksLab Thank you, if it takes a while that's OK, but a step-by-step would be very, very helpful. Thanks again, dude
@@ZacksLab I would also like to see that in one of your upcoming videos, hopefully it's still in your plans.
Great video. I don't know much about AI, but your video makes me excited. Unfortunately, I can't buy the motor driver and plastic ball in my country; what should I use to replace them? And after I build the car, how do I use your data? Should I buy the Jetbot's notebook?
I have JetPack 4.6 installed on my 2 GB Jetson Nano and I've interfaced a Raspberry Pi V2 CSI camera.
The issue I'm facing right now is the live execution of the Thumbs task in the free DLI course (sample programs).
The Nano works fine while taking samples of thumbs up and down; in fact, it trains the neural network perfectly.
But during live execution for prediction, it is unable to determine whether I am holding a thumbs up or down.
I've been stuck on this for months; I've even run the same sample on my friend's Nano, but I couldn't find a remedy.
I'll be waiting for a helpful response.
Wonderful video.... Thanks for sharing. I am eagerly waiting for my nano kit ordered from amazon :)
Would be great to see a Drone project made with Jetson Nano
I'd love to do something with the Nano and a drone platform, it's definitely on my project list. I was working for a startup using the Jetson TX2 (the big brother of the Nano) for vision based collision avoidance for industrial drones... I wrote a medium blog post about the hardware development for it if you're interested! medium.com/iris-automation/the-journey-to-casia-part-one-faea27491f02
I would like to know more about how you configured a custom theme for the jupyter notebook.
I believe you can only set the theme with jupyter lab, not jupyter notebook. In jupyter lab, go to settings -> jupyterlab theme
Important point, it DOES NOT support Wifi and Bluetooth out of the box. You need to purchase and install a module. Also, I just learned the hard way that power is an issue too. On mine, after installing the module, it will not turn on with the USB power.
Sohaib Arif, good point, I should have explicitly stated that. Are you using the m.2 key or a USB dongle? I have not had any power issues with the Intel WiFi/BT card. If you're using the dongle and the issue is due to power draw on VBUS, you could try setting the jumper to use power coming from the barrel jack which allows for up to 4A I believe. You'd have to adapt the output of your battery to this connector though.
@@ZacksLab I am using the M.2 key. Interestingly, I tried powering it via a portable USB phone charger that I know sends 5V/2A and it worked but it does seem to be slower now. You are right about the 4 A barrel jack, I will add that soon. Do you have any suggestions for a portable version of that config? I am mostly a software guy so I don't have much experience with the electrical stuff.
I would look for a battery that can source 5V up to 4A from a single port (I think the battery on the bill of materials for the jetbot can do 3A per port, which is likely more than enough although I haven't done a power study on the jetbot). Then, use a USB type A to barrel jack adapter like this one: www.bhphotovideo.com/c/product/1368294-REG/startech_usb2typem_3_usb_to_type.html/?ap=y&gclid=CjwKCAjwq-TmBRBdEiwAaO1enw753uFBGzvPy3oIlOcMy3uRFGAFWwvLlx5PHGL2FudDY-Jb9OE1qhoCOvAQAvD_BwE&lsft=BI%3A514&smp=Y
Make sure you connect the J48 Power Select Header pins to disable power supply via Micro-USB and enable 5V @ 4A via the J25 power jack.
Awesome. Love the breakdancing. LOL
Awesome project ! Was looking for something like this only.
Can the same concept be applied using a Raspberry Pi 3 b+ ?
Please keep posting related stuff, because YouTube has tons of electronics videos as well as tons of DL/NN videos... but electronics combined with AI, there's really not much of that out there.
Subscribed !!
Underrated video
Cool channel Zack. I have a 1080 Ti that will be used for training data, but I'm still waiting on the Nano to be delivered :S
Great Video, I am currently doing a similar project using the waveshare JetRacer. This is a simple question but how do you save the images for training? Also I am doing supervised learning first as I have an oval track to use !
hey graham, thanks! I obtained/saved the images using this jupyter notebook: github.com/NVIDIA-AI-IOT/jetbot/blob/master/notebooks/collision_avoidance/data_collection.ipynb You could use this as a starting point for data gathering and tagging for your oval track.
Any tips on how you collect data such as image, throttle, steering angle into a dataframe for machine learning?
hey! yes, you generally need to work with an image sensor that has a sync or trigger pin that allows you to synchronize a frame capture with data from other sensors.
@@ZacksLab Ohhh boy I got some learning to do, any tips on how to get started on that? So my plan is to use the following:
- Nvidia Jetson Nano on the RC car to run the machine learning model
- A basic computer camera to capture images
- A built RC car with an ESC, motor, and a controller
Is there any specific way to connect these tools to collect the data or will I need something special?
Sorry for the complex questions here haha but any helpful directions would be appreciated! Or if you have videos on this I would love to watch. Thank you!
have you chosen an image sensor? i would start with the datasheet to learn its different capture modes
from there, define your sensors for steering position, throttle, etc... and figure out their interfaces. it's likely you can use the Jetson's GPIO or SPI/I2C (whatever the interface is) to orchestrate all the triggering of data. you'll then need to define some sort of data structure for storing the image data + sensor data.
i doubt something like this exists exactly for your use case, so you'll have to write your own drivers and software for handling all of the above. depending on the image sensor and other sensors you chose, the drivers may actually already exist in the linux kernel, but you'll have to enable them. i don't have any youtube videos on how to do this, but basically you have to reinstall Jetpack and recompile the kernel, device tree, and modules. there really is no easy shortcut for doing this, you will have to go down the rabbit hole of linux.
alternatively, you can add a microcontroller that orchestrates the frame sync with the other data and passes the data over to the jetson side of things, where you handle it in software. it won't be as high performance given the latency through the micro, but if your frame rate is low, it probably won't matter.
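To make the logging idea above concrete, here is a rough sketch of the kind of record structure and capture loop you might start with; the `grab_frame`, `read_steering`, and `read_throttle` callables are placeholders for whatever camera and sensor interfaces you actually end up with.

```python
import time
import numpy as np

def collect_dataset(n_samples, out_path="drive_log.npz",
                    grab_frame=None, read_steering=None, read_throttle=None):
    """Capture synchronized (image, steering, throttle, timestamp) samples.
    The three callables are placeholders for your real sensor interfaces
    (camera driver, ADC/PWM reads over GPIO/I2C/SPI, etc.)."""
    frames, steering, throttle, stamps = [], [], [], []
    for _ in range(n_samples):
        t = time.monotonic()          # one timestamp per synchronized sample
        frames.append(grab_frame())   # HxWx3 uint8 array from the camera
        steering.append(read_steering())
        throttle.append(read_throttle())
        stamps.append(t)

    # Store everything in one compressed archive; easy to load into a dataframe later
    np.savez_compressed(out_path,
                        images=np.stack(frames),
                        steering=np.array(steering, dtype=np.float32),
                        throttle=np.array(throttle, dtype=np.float32),
                        timestamps=np.array(stamps, dtype=np.float64))
```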
@@ZacksLab Thank you so much for your response, I will keep all of these notes in mind going forward. It seems like I have a lot of work ahead of me, and nope, I haven't picked an image sensor yet, but I certainly will soon to get started. If all goes well, in about 8 months I'll have it done and I shall show it to you. Thanks again!
Cool! Thanks for sharing!
You’re welcome, glad you liked it!
Really cool! Could you mate this with an rc car chassis? Could one make an autopilot rc car?
a36538 yes absolutely! This could control anything, an RC car, your car, a drone, heavy machinery, you name it. It’s just a matter of interfacing it properly to all of the sensor and actuators!
How much time does it take to train on the data set? The Jupyter bar has shown "Busy" for the last 45 mins.
With a 1070 ti gpu it took a few mins
Great video! Thanks for sharing it 👍
thank you! and you're welcome :)
This might be a simple question, but how do you transfer your dataset to the desktop PC, and transfer the trained model back to the Nano for demos? I know you mentioned via WiFi, but I'm kind of curious about a more in-depth explanation. Thanks.
i use WinSCP to do secure file transfer. you can open a SFTP connection to the IP address and transfer files to and from your PC and the remote connection (in this case, the Jetbot)
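If you prefer to script the transfer instead of using a GUI, a short paramiko sketch like the one below does the same SFTP pull; the host address, credentials, and file paths are placeholders for your own jetbot's.

```python
import paramiko

# Placeholders: use your jetbot's actual IP address and login
host, user, password = "192.168.1.42", "jetbot", "jetbot"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(host, username=user, password=password)

sftp = client.open_sftp()
sftp.get("/home/jetbot/dataset.zip", "dataset.zip")  # remote -> local
sftp.close()
client.close()
```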
Where did you get wheels from? Or did you print them yourself?
i believe these are the ones i bought: www.pololu.com/product/185
the BOM on github has adafruit listed but they are always sold out, alternatively there are STLs available that you could print. hope that helps!
I just bought a Jetson Nano too. Have you tried running Faster R-CNN on the Nano? I'm still waiting until I get a power brick and battery.
Great video. Thank you. +1 for breakdancing.
Do you know the actual range of the camera ? how far can it detect objects from ? Great video by the way !
i do not know the max detection range of this camera (it's also a function of the size of the object). i have worked with high megapixel cameras capable of doing object classification out to 1km. of course this depends on your computer vision and post processing algorithms as well.
Wow... I bet this could be used to spot potholes on road.
I love this, do you think a mobile 2070mq is a good GPU to learn deep learning and other artificial intelligence things
Thank you Ayobami! Yes, the 2070mq is certainly powerful enough to get started with machine learning and training neural networks! Especially for the Jetbot or other implementations for the Jetson Nano.
why is the camera movement so jittery? How did you do it?
which camera? the one I'm filming with or the jetbot's?
YES this is awesome! i'm going to stick my nano on a quad drone and make it learn how to FLY ITSELF :D
@no one expected the spanish inquisition I'm doing an MSc in AI which has a project; I picked reinforcement learning and am going to get it to learn in a simulator, then transfer it to hardware!
Talk on. I'm looking for a thing-recognition system for blind people, so they can point with their head or hand and the Nano speaks what it sees. This would work almost out of the box with a speaker and camera connected, I hope. There are also those nice 3D mapping cameras, which could help map a blind person's environment. An idea for you.
that's an interesting idea. having it speak what it sees would be relatively easy, the hard part would be accurately determining what the person is pointing at reliably from different camera angles and such.
do you imagine that this device would just get placed somewhere in the room and as the person moves around and points to things it would respond (assuming the person and object are within its field of view)? or would the person hold the device and use it to point? the latter would be much easier, but the former could be solved too.
@@ZacksLab I imagine a wrist band with the camera and lateral blinders to get a narrow, defined angle of view. So it should be a portable system with a battery pack, maybe in a rucksack. You definitely do not want a fisheye camera for this task.
There are Android smartwatches now that have two cameras on them, one facing up, and one looking out away from the hand. Tensorflow lite runs on Android. Personally I don't know why people bother with things like Jetson Nano, when modern smartphones and smart watches have so much capability now. Unless you are doing robotics, in which case these embedded devices have all the IO ports, etc.
This is amazing! I wonder: are you moving the Jetbot back and forth while it avoids obstacles, or do you command it to go to a desired destination (obviously avoiding obstacles by itself in the process)?
Thanks! No, there is no input from me; it just attempts to navigate any environment you put it in while avoiding any collisions. I could modify the program to give it more of a “purpose” rather than just moving around and avoiding things.
@@ZacksLab Oh ok I see, Great work man ! new sub, would absolutely love to see more stuff of this sort. Thinking of doing one school project on this matter.
hey as a beginner i had a question regarding your training data images, did you use augmentation in any form to increase the amount of images that you could have trained your NN on?
hi Surya, no I did not use any sort of augmentation (I believe you're referring to translations, rotations, etc...). I would be interested in seeing how this affects performance if there were a tool that would automatically grow a dataset using this technique. thanks for the question!
Hi Surya, the data augmentations that you can apply depend on the task. For the JetBot collision avoidance, the data augmentation only includes pixel-wise color distortion (brightness, hue, saturation, etc.). Horizontal flipping might also be appropriate for this task, since it doesn't matter whether we're blocked on the left or right. However, cropping, translations, and scaling change the perspective of the camera relative to objects, which would change the objective. For example, if we 'zoom' the image as a form of data augmentation, we would end up seeing some 'far away' objects as nearby; we would want to label those as 'blocked', but they would falsely keep the original tag 'free'.
John, makes sense. So in essence he can get twice the amount of data by flipping images about the vertical axis, but any other form of augmentation is not worthwhile. Did I get that right?
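For anyone who wants to try the augmentations discussed above, a minimal torchvision pipeline along those lines might look like the sketch below; the exact jitter strengths and image size are just illustrative.

```python
import torchvision.transforms as T

# Augmentations that preserve the 'blocked'/'free' label for collision avoidance:
# pixel-wise color distortion plus horizontal flips. No crops/zooms/translations,
# since those change apparent distance to obstacles and could invalidate the label.
train_transforms = T.Compose([
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.1),
    T.RandomHorizontalFlip(p=0.5),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```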
Does the Jetson Nano take any camera that has a CSI(MIPI) interface?
Yes, it supports MIPI-CSI2 cameras, here's an example for getting the raspberry pi v2 camera working with the nano: bit.ly/2oCborL
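As a rough illustration of grabbing frames from that CSI camera once it's wired up, the sketch below opens it through a GStreamer pipeline in OpenCV; the resolution and frame rate are example values, and it assumes OpenCV was built with GStreamer support (as it is in the stock JetPack image).

```python
import cv2

# nvarguscamerasrc reads the CSI camera; nvvidconv + videoconvert hand BGR frames to OpenCV
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ret, frame = cap.read()   # frame is an HxWx3 BGR numpy array if ret is True
cap.release()
```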
Hi Zack! I got myself a jetbot but I'm having trouble with training the last layer of the AlexNet model. I moved the dataset over to my laptop and ran the code using my GPU, but it gave me this error:
CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
I tried running it on the cpu
hey! without seeing your code it will be hard to help you... do you have it in a github repo? i could take a look if so
@@ZacksLab Sorry, I should've edited the comment because I submitted it by accident without finishing, and then I forgot to do it X.X my bad. Anyway, the code is the exact same one that you showed in the video, the same one on the jetbot github. I copied and pasted it from github into a jupyter notebook on my laptop, but it doesn't run the last cell where you create the new model based on the AlexNet model and the dataset. If I run it on the GPU, I get the error message I wrote in the previous comment. If I run it on the CPU, I get another error message pointing to the line "loss = F.cross_entropy(outputs, labels)" in the last cell of the code, saying that target 2 is out of bounds. The code is the exact same one as on the jetbot github, which is kinda weird because everywhere I look on youtube everyone seems to have no issues with this collision avoidance program, meanwhile I'm having trouble running code that is supposed to be good as it is. By the way, thank you for replying!!!
Amazing job !!!!
thank you!
Thanks Zack! 😎
welcome :D
How long does your jetbot last with a fully charged battery? You said you are using a 10 amp battery.
average current draw is around 1A, so with a 10Ah battery you get close to 10 hours of run time. under full load the nano can draw 10W, so run time will be closer to 5 hours if you're doing a lot of compute.
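The back-of-the-envelope math, for anyone who wants to plug in their own battery numbers:

```python
# Rough runtime estimate: battery capacity divided by average current draw
battery_ah = 10.0           # 10 Ah pack
idle_current_a = 1.0        # ~1 A average draw
full_load_w, bus_v = 10.0, 5.0

print(battery_ah / idle_current_a)            # ~10 hours for light use
print(battery_ah / (full_load_w / bus_v))     # ~5 hours under full compute load
```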
Really appreciate this video
thanks! appreciate the comment :)
Great video !
What language would I have to learn to use the Nano? I always wanted to do something with face recognition.
You can do quite a bit with just Python and a familiarity with Linux. C++ is useful for OpenCV, but there is a python library that wraps the original opencv libraries into a library called opencv-python. You will sacrifice some run-time performance using this python wrapper instead of developing in cpp, but development in python is generally considered easier.
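To give a feel for opencv-python, here is a minimal face-detection sketch using the Haar cascade that ships with the package; the image path is a placeholder.

```python
import cv2

# Haar cascade bundled with opencv-python; a simple starting point for face detection
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("my_photo.jpg")                      # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                            # draw a box around each detected face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
```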
Nice video Zack! I genuinely fear the day when I see driverless cars everywhere, but I think AI is fascinating. I hope you make more videos on this subject.
BTW, I know Tesla is all-in with electric cars, but I am not convinced that our existing electrical infrastructure can safely and efficiently supply such an increase in demand if electric cars become popular. The brownouts in CA associated with the PG&E grid and the wildfires are just one example. Ohio just passed a law to subsidize the Perry and Davis-Besse nuclear power plants $150 million per year for 6 years because they cannot compete with gas-fired turbines (gas is abundant and cheap). These nuclear power plants are 40 years old and should be retired. And even though there is new technology for higher-efficiency nuclear power, I know of only one nuke plant (in SC) that has been significantly upgraded in the past 10 years, due to environmental concerns. I am not an advocate of nukes, but I seriously question whether this nation's electrical grid can handle such an increased demand. So, please convey my message to Elon the next time you see him! Take care. sj
Thanks Steve! AI is a tricky subject that carries a lot of social and ethical concerns. It also has a lot of promising benefits that are currently in use and improving quality of life for many people today. But it is a double edged sword.
I do want to do more projects with AI and hardware. I’ve been getting crushed at work so my time for TH-cam has diminished... but I look forward to jumping back into it when I free up!
Hi Zack, it is an amazing jetbot. Could you share the brand and model of your motor driver and DC motors?
Hi lan liu, here are the links for the motor driver and motors:
amzn.to/2FbcLmE (driver)
amzn.to/2Ri28mY (motors)
@@ZacksLab thanks
Can you use the Jetson Nano to build a drone using the battery that you have?
The Jetson Nano could be used onboard a drone for many different functions, however you’d still want to use LiPo batteries intended for use with motors, as the BLDC motors commonly found on drones can pull a lot more current than this battery is capable of providing safely.
I'm new to this. How do I use this model to recognize multiple classes of objects? Especially the code.
you're trying to use a NN to identify objects? if so, start here maybe: th-cam.com/video/2XMkPW_sIGg/w-d-xo.html
this is also good: th-cam.com/video/k5pXXmTkPNM/w-d-xo.html
Wow. Before I saw this video, I was like "man, I don't need a $250 wall-avoiding robot. I have an ArcBotics Sparki (a programmable robot with its own C++ IDE)." Seriously, big difference. You might be like, "oh, it just avoids walls, so what," but really, this is just something else. Anybody who is just scrolling through the comments, this is a MUST SEE. I hate the natural human inclination toward clickbait instead of valuable and worthwhile content like this. I wish more people would seek out what actually is fulfilling and benefits their career long-term. Instead, they look for trash like "Spongebob licking the marble for 10 hours".
But people are people, so here's a suggestion: make your titles more concise, and the thumbnail self-explaining (that is, not including terms a lot of people don't get, like 'jet bot'). Also, presentation is BIG. And I don't just mean presentation in the video, but also the thumbnail AND the title. TKOR (Grant Thompson) is really a pro at this, and that's how he gets so many views and followers. His content isn't inherently interesting; it's just the title and his thumbnail.
If you could make a video with
1. Good content,
2.A short, concise, self-explanatory thumbnail AND title that draws interest to someone new to your channel,
you'd be unstoppable. Even novice [engineers, technicians, chemists, etc.] like TKOR, NurdRage (I'll admit he's fairly advanced), and The Slow Mo Guys (really, I think those people hardly even know about the chemistry of the explosives they use) make good channels with basic information
And TONS of followers.
Dude, if you want to get even more money for better projects via youtube (ads, maybe sponsors), you've gotta get more relatable.
No, I'm not saying you need to get super basic like the above youtubers mentioned, I mean you have to Explain and Draw people in such a way that you can attract impatient newbies looking for clickbait, then when they least expect it, shove something that is rare and valuable down their throats (knowledge, skill, circuit science, computer programming). Then, they realize that youtube isn't just instant gratification and click-happy pleasure; there's MUCH, MUCH more to it than that.
Awesome work Zack! Your videos and those alike make TH-cam worth it. God bless!
Thank you FriedPickle, your comment and feedback means a lot to me. :)
;)
How long did it take for you to transfer the data to your PC?
I log into the Jetbot using WinSCP and transfer the files over SFTP. Took less than a minute... both my computer and Jetbot were near my router.
@@ZacksLab Aha! Today I tried transferring around 60 pictures. But when I tried to download the 'dataset.zip' file in the collision avoidance demo, it said that I had no permission!
I have no permission to download that zip file... Do you maybe have a clue?
Thanks in advance
What software are you using to transfer the file?
@@ZacksLab well all these things happen in jupyter notebook. I always download my zip files with WinRAR.
I heard it takes a while for the jetbot to process all the pictures, but 3 hours later it still didn't work...
Hmm, if you're trying to transfer to a Windows PC, download WinSCP and you can use SFTP to transfer files to and from your Jetbot (use the Jetbot username and pw to login). If you're having issues locally on the Jetbot, it could be a Linux permissions issue, which you can adjust for that file with the terminal command: sudo chmod 777 /path/to/file.zip
Nice work man! RIP beer.
TheWildJarvi thanks! Haha, yeah. Looking back at the video I came close to knocking it over a few other times :P
2:04 ~ 3:40 is the part of a data scientist's life that no one wants to take on :)
Can you do this on an OpenMV H7?
It looks like the OpenMV H7 is an ARM-based computer vision platform. I have seen people implement neural networks on microcontrollers, but I would imagine that you will quickly reach its limitations. Also, you cannot take advantage of CUDA or libraries and frameworks like TensorFlow, TensorRT, PyTorch, Keras, etc... unless you're developing for a system that can run Linux and has an NVIDIA GPU available to it (like in the case of the Jetson Nano).
Amazing content! I'm an aspiring electronics engineer and hope to be like you one day. Do you do this work professionally? Any tips for someone like me who wants to start working with software like PyTorch but only has a general understanding of statistics and calculus? Thanks.
Hey Louie, thanks for checking out my channel! Yes, I'm an electrical engineer working on collision avoidance systems for autonomous drones. At work my focus is mostly in hardware design but at home I like to explore other topics (like this). I'd recommend checking out some courses online, there's a course called "Practical Deep Learning with PyTorch" on Udemy that covers all the fundamentals (I'm not affiliated with the course author or Udemy in any way). Udemy usually has 90% off on their courses so look around for coupon codes -- don't ever pay the full price.
Awesome sir !
How can I get in contact with you ?
I am a beginner in electronics and Computer Vision.
Do you think it's possible to do the same with a drone?
absolutely. just need access to the autopilot. pixhawk (and similar drone APs) can take external commands that could be coming from an AI system such as this.
@@ZacksLab Is there any blog or site where I can read more about it ? I'm working on a project and would love to implement some of this.
I'm not sure if there is one specifically for what you're talking about doing, but there is plenty of documentation on pixhawk autopilots for drones if you search for it. what drone platform do you intend to work with?
@@ZacksLab I have an intel aero ready to fly drone, it comes with a pixhawk as the flight controller. I have seen some works with the same configuration but a raspberry pi is used as the companion computer. I would like to use the Jetson nano as the companion computer instead for the purposes of data collection & collision avoidance as you have shown here with the jetbot, but obviously in my drone.
ah, got it. you have to look into the documentation for that platform, however, since you said it uses pixhawk you can likely use the jetson nano to send maneuver commands to it via serial or CAN.
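As a sketch of what sending a maneuver command over a serial link could look like with pymavlink; the port name, baud rate, and velocity values are placeholders, and you'd want to study the MAVLink documentation (and test on the bench) before flying anything.

```python
from pymavlink import mavutil

# Placeholder serial port/baud; on a Jetson this is often a UART or USB adapter to the Pixhawk
master = mavutil.mavlink_connection("/dev/ttyTHS1", baud=57600)
master.wait_heartbeat()   # wait until the autopilot is talking to us

# Command a gentle forward velocity in the body frame (vx = 0.5 m/s), everything else zero
master.mav.set_position_target_local_ned_send(
    0,                                   # time_boot_ms (ignored)
    master.target_system,
    master.target_component,
    mavutil.mavlink.MAV_FRAME_BODY_OFFSET_NED,
    0b0000111111000111,                  # type_mask: use only the velocity fields
    0, 0, 0,                             # x, y, z position (ignored)
    0.5, 0, 0,                           # vx, vy, vz velocity
    0, 0, 0,                             # accelerations (ignored)
    0, 0)                                # yaw, yaw_rate (ignored)
```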
great video
What program did you use for data collection and tagging so fast with the on board camera?
Hey Jared! It was written in python and can be found here: github.com/NVIDIA-AI-IOT/jetbot/blob/master/notebooks/collision_avoidance/data_collection.ipynb
Can you list your pc's specs?
Sure, my PC is:
Intel i7-4790K
NVIDIA 1070Ti
32GB DDR3
500GB SSD
Just drive up and down local roads several times ..maybe throw in some deer models
incoming jetbot
I went to it as a how-to video and found it very entertaining
and probably because there aren't very many people doing Raspberry Pi or AlexNet repos
awesome, glad you liked it :)
NOT THE BEER
wow... it can play fetch the stick... just throw the stick and it will chase it and run over it... waiting for the next command, master
we have to be nice to our robots so they are nice to us once they become sentient ;)
"I'll consider this a feature"
typical programmer LOL, considering a bug as a feature
Cat gives 0 fucks.