Hey everyone! 2 things.
First of all, we have instructions on the written guide of how to BOTH decrease the resolution and convert to NCNN to get greatly improved FPS (thank you very much Philipcodes from the forums).
And talk about rough timing: YOLO11 launched the day after this video, but it will work perfectly fine with this guide. In our guide, we have the line:
model = YOLO("yolov8n.pt")
You will just need to change it to:
model = YOLO("yolo11n.pt")
to start using YOLO11.
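For anyone who wants to see where that line lives, here is a minimal sketch of the kind of detection loop the guide builds, assuming the Picamera2 library on a Pi with a desktop session (this is illustrative, not the guide's exact code):
import cv2
from picamera2 import Picamera2
from ultralytics import YOLO

# Set up the Pi camera to hand us 3-channel frames as numpy arrays
picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration(
    main={"format": "RGB888", "size": (640, 480)}))
picam2.start()

# Swap this for YOLO("yolo11n.pt") to use YOLO11 instead of YOLOv8
model = YOLO("yolov8n.pt")

while True:
    frame = picam2.capture_array()          # grab a frame from the camera
    results = model(frame)                  # run detection on the frame
    cv2.imshow("YOLO", results[0].plot())   # draw boxes and labels
    if cv2.waitKey(1) == ord("q"):          # press q to quit
        break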
Love this! Just picked up a pi5 and a camera, going to start here for sure. Your vids are always so easy to follow and super helpful. Keep it up!
Thank you! This is the best beginner tutorial I've come across. Can you please do a video on implementing the AI Kit to boost FPS as well?
The AI Kit is still quite fresh software-wise. Right now they have a fantastic set of instructions for getting it going, but it's not running out of a Thonny script like this:
www.raspberrypi.com/documentation/accessories/ai-kit.html#getting-started
Amazing video. Thank you so much sir, you deserve more views!
This is a very helpful tutorial!!!! Nice work ❤
I saw your videos using OpenCV and now with YOLO. Which one should I start with as a beginner? Appreciate your super advice.
These projects have progressed a lot since we made the old video. This one is easier, quicker to get going, and runs more than 10x faster! It also uses OpenCV as well!
great info & video - will definitely use some of this
Nice vid, can you make a tutorial working with the AI Kit or the Coral Edge TPU? I'm interested to see the performance gain on those
It's not a simple task to run this code on a dedicated AI chip; for the AI Kit you need to jump through a few hoops to convert the model to the specific format it needs. The AI Kit library does come with YOLOv8n ready to go, and we have seen reports of people getting FPS in the 50-60 range, which is incredible! Right now it is a little difficult to actually use the AI Kit in a project (it feels a little more like a tech demo), but software support for it is developing rapidly, so that shouldn't be a problem for too long. When the software support is mature enough you will definitely find a video here!
Thank you!!! I made it. I use VNC, not HDMI. FPS: 1.7
I tried the NCNN conversion, and the new FPS: 6
@@weihong8337 Brother, what do you mean? What is NCNN, and how do I use it?
How can I train with my own dataset?
Training with your own data is a little bit more involved. Ultralytics has some great documentation on it, but be warned, you will need some decent hardware. On a 4080 it usually takes 2 or so hours, with no GPU it may take days or a week, and on a Raspberry Pi it may take months.
docs.ultralytics.com/yolov5/tutorials/train_custom_data/#23-organize-directories
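If you do have the hardware, the training call itself is short. A minimal sketch with the Ultralytics API (data.yaml here is a hypothetical dataset config you would create by following the docs above):
from ultralytics import YOLO

# Start from a pretrained checkpoint rather than from scratch
model = YOLO("yolov8n.pt")

# data.yaml is a placeholder: it lists your train/val image folders
# and class names, in the format the Ultralytics docs describe
model.train(data="data.yaml", epochs=100, imgsz=640)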
Do you know of any alternative to connect the Raspberry Pi camera via USB instead of the (very short) flex cable?
Thanks for the video. What's the approximate max distance at which the detection will work? And how large on screen should an object be for us to detect it? Are these parameters affected by video resolution and model size?
A lower resolution will lower the distance at which it can detect things, and a smaller model will also lower the distance. We found that the medium model, when converted to NCNN (so at the standard 640x640), could recognise a cup at about 8-10 metres away.
Hello, thanks for the video. I have a question: is it possible to spin a motor when an animal is detected? I don't know how to do it.
You would first need to get YOLO to detect the animal. Here is a list of all the things in the COCO dataset that can be detected:
tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
Then you would need to connect up a motor driver and motor. We have a guide on how to do that here to get you started, and there's a rough sketch below of how the detection and the motor could be tied together!
th-cam.com/video/ea6tSppgZlY/w-d-xo.html
And if you need a hand with it we have a maker community forum where lots of makers can help out with your project!
forum.core-electronics.com.au/
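To give a feel for how the two halves join up, here is a minimal sketch assuming a gpiozero-compatible motor driver; the GPIO pins (17 and 18) and the "dog" class are just examples:
from gpiozero import Motor
from picamera2 import Picamera2
from ultralytics import YOLO

motor = Motor(forward=17, backward=18)   # example pins for a simple driver
model = YOLO("yolov8n.pt")

picam2 = Picamera2()
picam2.start()

while True:
    results = model(picam2.capture_array())
    # model.names maps class IDs to COCO labels like "dog" or "bird"
    labels = [model.names[int(c)] for c in results[0].boxes.cls]
    if "dog" in labels:
        motor.forward()   # spin while the animal is in frame
    else:
        motor.stop()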
Can it also recognize small flying animals such as wasps, flies or even mosquitoes?
I think you may have a hard time with that; they may be too small to be seen by the camera, and they may be too fast and blurry! On top of that, I don't think the model will be able to identify them, sorry.
I can't believe how simple, uncomplicated and pragmatic this video is... however, is there any way to have a pass-through to outsource that compute to an x86 system or a couple of RPi 5 clusters?
Also, this thing could run at full frame rate on that Odroid with that Rockchip monster with 16GB of RAM.
Will give it a try, but need to get me an RPi 5 first; have an RPi 4 already.
The Ultralytics implementation of YOLO is very cross-platform, so if you can get it set up on an x86 system, you should be able to use nearly the same Python code we cover here! In terms of the Odroid, it may come down to an issue of optimisation; even when we convert it to NCNN it still doesn't fully utilise all of the Pi's hardware, but we would need to test. And RAM isn't a big factor here; the biggest model is barely using 2GB of RAM, which is incredible!
Best of luck when you can give this a go!
@@Core-Electronics Wow! OK
Chris from Explaining Computers did a video yesterday on a new Radxa with an Intel N100 chip and a similar build to an RPi 5. This x86 thing could do it, just guessing.
I have a tiny ThinkCentre lying around with an i7 6700... Now all I need is to be able to connect the camera to the PCIe interface, or search for some USB module that can connect to the cam... may need to research more...
Are you able to use any USB camera for this type of integration?
We have some code in the written guide that uses a webcam instead. There can be some issues with the colour profile used by the camera, and we talk a little about it in there.
core-electronics.com.au/guides/raspberry-pi/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/#appendix-using-a-webcam
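For reference, the webcam variant is only a small change: frames come from OpenCV's capture API instead of Picamera2. A minimal sketch (device index 0 is an assumption; see the guide above for the colour-profile notes):
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)   # 0 = first USB webcam; adjust if needed

while True:
    ret, frame = cap.read()
    if not ret:
        break                # no frame available, bail out
    results = model(frame)   # webcams hand OpenCV BGR frames directly
    cv2.imshow("YOLO", results[0].plot())
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()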
@Core-Electronics I need your help. I'm using this for my quadcopter, and I want to use YOLOv5 on my Pi 5. Can you tell me which camera would be good? The objects I have to find are plastic and styrofoam. How do I train my YOLOv5 to do that?
Really any camera will do; the Pi Camera Module v2 and v3 might be a good pick (you can also use a webcam, and we have some code in the written guide linked below the video). Your issue would be in getting the model to detect styrofoam and plastic. Training a model is quite involved and, without a GPU, can take several days.
There are some pre-trained models that you can find here that might fit your needs, but if not you may need to deep-dive into training your own model, which we unfortunately don't cover 😭.
huggingface.co/models?other=yolo
Mate, really great, educational, and interesting video. Could you show this with the new AI HAT+ 26 TOPS from Raspberry Pi? It would be very interesting to learn how to extract the output in the form of a CSV file or something similar, to use the information to find out how many people pass by the camera and at what time, or how many cyclists, etc. Maybe even make graphs from this?
We definitely have some AI HAT videos in the pipeline (but the setup and usage is very different). I don't know about data logging though. Large language models like ChatGPT and Claude would be more than capable of helping you write the code you're looking for, and there's a rough sketch of the CSV-logging idea just below!
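To show roughly what that logging could look like, here is a minimal sketch that appends one row per sighting of a watched class with a timestamp (the file name and class list are just examples, and note it logs every frame an object appears in, not unique passers-by):
import csv
from datetime import datetime
from picamera2 import Picamera2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
picam2 = Picamera2()
picam2.start()

WATCHED = {"person", "bicycle"}   # classes we want to count

with open("detections.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        results = model(picam2.capture_array())
        labels = [model.names[int(c)] for c in results[0].boxes.cls]
        for label in labels:
            if label in WATCHED:
                writer.writerow([datetime.now().isoformat(), label])
        f.flush()   # make sure rows hit disk as we go
From there, the CSV opens straight into a spreadsheet for graphing.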
I'm trying to automate something based on object recognition, and I was wondering if you might be able to help me out. Specifically, I want it to play a noise whenever it detects certain objects; for example, when it sees a person, it would play a .wav file that correlates.
You can easily achieve this with the Pygame library. We don't have a specific tutorial on this, but you can find a million others online demonstrating how to use it. The important lines should be something along the lines of:
import pygame
pygame.mixer.init()
pygame.mixer.music.load("myFile.wav")
pygame.mixer.music.play()
You'll just need to whack the .wav file in the same folder as the object detection script.
If you get stuck or need a hand though, feel free to chuck a post on our community forums!
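And to tie it into the detection loop, a rough self-contained sketch (the label-to-file mapping is hypothetical; substitute your own .wav files):
import pygame
from picamera2 import Picamera2
from ultralytics import YOLO

pygame.mixer.init()
sounds = {"person": "person.wav", "dog": "dog.wav"}   # example mapping

model = YOLO("yolov8n.pt")
picam2 = Picamera2()
picam2.start()

while True:
    results = model(picam2.capture_array())
    labels = {model.names[int(c)] for c in results[0].boxes.cls}
    for label in labels & sounds.keys():
        if not pygame.mixer.music.get_busy():   # don't restart mid-play
            pygame.mixer.music.load(sounds[label])
            pygame.mixer.music.play()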
Can you help me in this:
pip install ultralytics[export]
These packages do not match the hashes from the requirements file.
I previously had this issue and it was caused by not running the first set of commands properly:
sudo apt update
sudo apt install python3-pip -y
pip install -U pip
If that doesn't work, a fresh installation of Bookworm OS might help. If all that fails feel free to post on our community forum topic for this video, we have lots of makers over there that can help!
forum.core-electronics.com.au/t/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/20923
Nice. From India
Can you use yolo world to control hardware as well, or does that only work with the base models?
The hardware control script can definitely be modified to use YOLO World. You should only need to change the line where we choose the model to use, and add in the line where we prompt it what to look for!
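For reference, those two changes look something like this with the Ultralytics API (the class prompts are just examples):
from ultralytics import YOLOWorld

# Swap the model line for a YOLO World checkpoint...
model = YOLOWorld("yolov8s-world.pt")

# ...and add the prompt line telling it what to look for
model.set_classes(["red apple", "coffee mug"])
The rest of the hardware control script should carry over unchanged.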
Sir, can you tell me the Raspberry Pi 4B setup from the basics please sir 😢
Can you create a that take things using object detection pls😁
Nice video! I just bought an AI Kit from you guys (today!), hoping this will boost FPS significantly?
There are a few steps between running the models that come with the AI kit, and getting YOLO to run on it.
(But we may be working on an AI HAT guide as we speak 😏)
The NCNN portion of the code doesn't work for me! I get the error "ModuleNotFoundError: No module named 'ncnn'". I have the exact lines of code running, and the main code works as well, so I'm unsure how to fix this.
Is this when running the conversion script or trying to run the object detection code after converting it? Make sure that your script is saved and is in the same folder as all your other code and models. If this still doesn't work, feel free to chuck a post on our community forum topic for this video, we have lots of makers over there that can help.
forum.core-electronics.com.au/t/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/20923
We are also in the process of updating the NCNN conversion section as we have found a better way so that should be up sometime today if you want to give it a try!
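For anyone landing on this thread, the conversion itself is only a couple of lines with the Ultralytics API; a minimal sketch (run the export once, then point the detection script at the folder it creates):
from ultralytics import YOLO

# One-off conversion: creates a "yolov8n_ncnn_model" folder on disk
model = YOLO("yolov8n.pt")
model.export(format="ncnn")

# Then, in the detection script, load the converted model instead
ncnn_model = YOLO("yolov8n_ncnn_model")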
@@Core-Electronics This is when running the conversion script. It tries to run update but spits out: AutoUpdate skipped (Offline)
I’ll post on the forum but thanks!
Can I use the Raspberry Pi 4B and the Raspberry Pi camera?
I'm working on a project that works with IoT and is connected to an ESP32
We haven't tested it, but it will most likely work on a Pi 4; Ultralytics says it has support. Just be prepared, as it may be very slow; the Pi 5 is about 2-3x faster than the Pi 4.
I keep getting a dependency error when installing ultralytics[export]. Has anyone encountered this before, and how can it be fixed?
Have you tried running the line multiple times? It installs quite a lot with that line and you may need to run it a few times to let it do its thing. If that doesn't fix it, feel free to post your issue on our dedicated community forum topic for this video. Try and include some information about the specific dependency issue. We have a lot of makers over there that are happy to help!
forum.core-electronics.com.au/t/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/20923/6
Can you use this for wildlife live streaming?
You most definitely could! The troubles may be in supplying power to it, and getting it an internet connection to send data back. You would also need to experiment to see which types of wildlife it will pick up. It may recognise everything 4-legged as a dog!
Can we implement this in rpi4b 4gb ram? (using external camera)
We haven't tested it, but it will most likely work on a Pi 4. Just be prepared as it may be very slow, the Pi 5 is about 2-3x faster than the Pi 4.
How slow will it be on pi 4?
Probably about 2-3 times slower 😞
Half of this is missing. (1) You don't say you need a sudo apt upgrade after the sudo apt update. (2) As far as I can tell, the ultralytics install does not install PyTorch so that is another step. (3) There seems to be a load of settings needed to make the camera work - although these may be out of date, I can't tell because I cannot make the install work. Given that you show setup from a new set of components all that stuff is necessary. All I get running your tutorial is a load of errors about torch>=1.7.0 (no - re-running does not magically fix the issue).
Sorry to hear you are having issues. This installation process was taken directly from Ultralytics who have made most of the modern YOLO models. Running apt upgrade won't hurt but it's not entirely needed here as we are mainly focused on ensuring that Python and pip are up to date.
You may have encountered an issue in your installation process, as it will most definitely install PyTorch. That, or you may have an issue with your virtual environments.
The camera settings can vary depending on the Pi and could be many things. Feel free to post your issue on our community forum post for this guide with a little bit of information about your setup and where the issue is, we have lots of makers over there that can help!
Hey, loved your content! I'm an intern at ISRO (Indian Space Research Organisation) and I'm working on deploying a YOLOv8 model on a Raspberry Pi. Can you help me deploy it with the Raspberry Pi AI Kit and improve the model for real-time inference?
What format would be best to deploy? I have seen a few videos that say to convert the model into ONNX, then convert it into the Hailo HEF format using the Hailo Dataflow Compiler or Model Zoo, then copy it over and run the code. Am I going about this right?? Your help is highly appreciated.
That sounds exactly right! The AI Kit only works with the Hailo .HEF model format, and the easiest way is to first convert the model to ONNX, then to HEF. Just be aware that when you convert it to ONNX you will often "bake in" a lot of the configuration. When it's in PyTorch format we can change the resolution, and for things like YOLO World we can change the prompts for it to look for, but converting to ONNX locks these in and we can't change them. So get the settings right, convert to ONNX, then to HEF, and run it on the HAT.
The usage is different from our script here, though; we are using a nice library which lets us run it with high-level Python code, and it's not as easy yet to do this with the kit.
Best of luck mate!
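The first hop of that pipeline is a single line in Ultralytics; a minimal sketch (the ONNX-to-HEF step needs Hailo's own tools, which aren't shown here):
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Settings like resolution get baked in at export, so pick imgsz now
model.export(format="onnx", imgsz=640)
# The resulting yolov8n.onnx then goes through the Hailo Dataflow
# Compiler / Model Zoo to produce the .hef file the AI Kit runs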
Another thing! If you run into issues with the AI Kit, check out the AI Camera that just launched; it uses the Sony IMX500. We have had a much easier time using it and writing custom scripts for it. It may not be as powerful, but it still runs well.
@@Core-Electronics Thanks a lot.
Can we do it on rpi4
We didn't test it on an RPi4, but it should work pretty much the same, Ultralytics says that it is supported. Just be ready for it to run about 2x slower :(
Yes, but I can't get above 2 FPS; it's more like 1 FPS.
If anyone has any ideas on how to use this for a night-vision camera that will turn lights on when a fox is detected, please let me know
First