Can you create new dataset annotations for a new object and use them with this COCO model? For example, I want to detect a soccer ball. Can I just create annotations with something like DataTorch and use those annotations in conjunction with the provided model and weights?
It will require some dedicated effort, but you can customise this object detection dataset using Edge Impulse. www.edgeimpulse.com/ That way you can add whatever object or creature you'd like 😊 I hope I understood correctly.
Hi, this tutorial helped a lot with my project. I successfully set up and ran the code on a Raspberry Pi 4 Model B from the terminal; I just couldn't figure out how to see the video output while the code is running in the terminal (not in Geany or Thonny). Maybe you could help me out :>>
Not quite sure why it wouldn't do that for you when you run the script in the terminal. Come write up a post on our forum and include some pictures of the situation - forum.core-electronics.com.au/ - mention me in the post and I'll be best placed to help 😊
All the processing is done on the edge, so you only need the hardware (no calculations happen over Wi-Fi or via the cloud). If you had a big enough battery you could definitely run this system on battery power without internet 😊.
Hey Tim! I seem to encounter a problem while following your instructions: on the make -j $(nproc) step it stops every time at 40%. I re-typed and entered the same line several times, but it didn't work. Is there any solution? Thanks for answering.
Hi, awesome video and great content. Can I also get this same code to identify FIRE? Can you guide me on how I can do that? Also, can I get a trained dataset for fire, and how do I get the library into the folder?
I've been learning more about this recently. A great way to create custom libraries that a Raspberry Pi can then implement is through Edge Impulse. With this you will be able to train and expand the set of animals that the default COCO library comes with. Tutorials on this hopefully coming soon. www.edgeimpulse.com/
Hi Core Electronics, I am looking for a lens for my Raspberry Pi HQ camera module. I want good image quality and a closer view for defect detection on my FFF 3D printed parts. Can you suggest some lenses? Thanks
There is a microscope lens that might be suitable for looking at 3D print defects; give it a look. core-electronics.com.au/microscope-lens-for-the-raspberry-pi-high-quality-camera-0-12-1-8x.html
I got this error: | Traceback (most recent call last): File "", line 35, cv2.putText(img,classNames[classId-1].upper(),(box[0] 10,box[1] 30), SyntaxError: invalid syntax | What does this error mean? I already installed cv2.
Absolutely! Here is a straightforward guide to sending an email through a Python script. If you merge those two together you'll be smooth sailing - raspberrypi-guide.github.io/programming/send-email-notifications#:~:text=Sending%20an%20email%20from%20Python,-Okay%20now%20we&text=import%20yagmail%20%23%20start%20a%20connection,(%22Email%20sent!%22)
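For anyone merging the two, here is a rough sketch. It assumes the guide's getObjects() returns objectInfo as a list of (box, className) pairs, and the email addresses are placeholders - adjust to your own script and account.

```python
def build_alert(object_info):
    """Build an email subject and body from a list of (box, className) pairs."""
    names = sorted({name for _box, name in object_info})
    subject = "Pi camera alert: " + ", ".join(names)
    body = "Detected {} object(s): {}".format(len(object_info), ", ".join(names))
    return subject, body

def send_alert(object_info, to_addr="you@example.com"):
    import yagmail  # deferred so build_alert() is usable without yagmail installed
    subject, body = build_alert(object_info)
    yag = yagmail.SMTP("your.account@gmail.com")  # placeholder sender account
    yag.send(to_addr, subject, body)
```

You would call send_alert(objectInfo) inside the detection loop, ideally with some rate limiting so you don't get an email per frame.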
Yours is the closest project to my idea; in fact it's practically it. But I would like to run it 24/7 during a 10-day period (my holiday). I would like it to press a button 10 minutes after each time it identifies a cat (mine) and nothing else. See a cat: wait 10 minutes, press the smart button (I'm looking for a way to flush the toilet each time after my cat has done its business). Is this possible/feasible with this?
Definitely possible and an excellent project to eliminate a chore 😊 or make for an even more independent kitty. The COCO library used in this guide has | Cat | as one of the animals it can identify, and Raspberry Pis are excellent at running 24/7, so I reckon you're in for a project winner. If you follow through the full write-up you'll be able to have a system that can identify cats (and only cats). That's the hard bit done. A solenoid is one way to trigger the button; check this guide for the process of getting one running with a Raspberry Pi - core-electronics.com.au/guides/solenoid-control-with-raspberry-pi-relay/
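A minimal sketch of the "wait 10 minutes after the cat, then press the button" logic. The timing class is plain Python and testable; the GPIO pin number and the use of gpiozero at the bottom are assumptions, not part of the guide.

```python
import time

class CatFlusher:
    """Fire once, a fixed delay after the last time the cat was seen."""

    def __init__(self, delay_s=600):
        self.delay_s = delay_s
        self.flush_due_at = None

    def update(self, cat_seen, now=None):
        """Call once per frame; returns True when the button should be pressed."""
        if now is None:
            now = time.monotonic()
        if cat_seen:
            # restart the 10-minute countdown every time the cat is in frame
            self.flush_due_at = now + self.delay_s
            return False
        if self.flush_due_at is not None and now >= self.flush_due_at:
            self.flush_due_at = None  # fire only once per visit
            return True
        return False

# Hypothetical wiring on the Pi (pin 17 and gpiozero are assumptions):
# from gpiozero import OutputDevice
# solenoid = OutputDevice(17)
# if flusher.update(cat_detected):
#     solenoid.on(); time.sleep(0.5); solenoid.off()
```

The countdown restarts while the cat stays in frame, so the flush only happens after the cat has actually left.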
Great video! I’m wondering if instead of the green rectangle with the name of the object, I can get the names of the objects into a string so I can print. I am trying to use a Text To Speech software so that whatever the object’s name, it is said out loud. Do you have any tips, help, or advice to give me?
Definitely something you can do! There are a lot of great text-to-speech packages that will work with a Raspberry Pi; Pico TTS is a great example of one. With a little bit of code adjustment you'll be off to the races. Come make a forum post (link in the description) on your idea and we can give you a much better hand than I can here 😊
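As a rough sketch of the idea, here is one way to speak the detected names using the espeak command line tool (installed with sudo apt install espeak) rather than Pico TTS; swap in whichever TTS you prefer. The (box, className) structure of objectInfo is an assumption from the guide's script.

```python
import subprocess

def speech_command(object_info):
    """Build an espeak command for the current detections (empty list if none)."""
    names = [name for _box, name in object_info]
    if not names:
        return []
    return ["espeak", "I can see " + " and ".join(names)]

def speak(object_info):
    cmd = speech_command(object_info)
    if cmd:
        subprocess.run(cmd)  # blocks briefly while espeak talks
```

In practice you would only call speak() when the set of detected objects changes, otherwise it will chatter on every frame.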
Hey Tim! Here's a question: is the model trained on your COCO dataset generated by the YOLO algorithm? This is related to the writing of my graduation thesis. I would be very grateful if you could provide more suggestions.
Sorry for getting to this so late. A lot can be learned here - cocodataset.org/ . Also there are a ton of research papers as people are unraveling this technology that are worth exploring (or adding to the bottom of a graduation thesis). Good luck mate!
Hi, thank you for the explanation and code. I tried the code with the V3 HD camera, but it didn't work. Additionally, can you tell me how to create an autostart for this design? The 5 ways to autostart don't work ("Output:957): Gtk-WARNING **: 19:31:41.632: cannot open display:"). I'm sending a relay with it to keep the chickens away from the terrace with a water jet. Beautiful design! Greetings, Luc.
Hey Luc, To start, you will need to install an updated driver for the V3 camera so it can work with the older 'Buster' Raspberry Pi OS. Check out how to do it here - forum.arducam.com/t/16mp-autofocus-raspbian-buster-no-camera-available/2464 - And if you want to autostart your system, come check out how here (I would use crontab) - www.tomshardware.com/how-to/run-script-at-boot-raspberry-pi Come pop into our forum if you need any more help 😊 forum.core-electronics.com.au/latest Kind regards, Tim
Hi Tim, thank you so much for this video demonstrating how to use OpenCV with the Raspberry Pi. I am planning to follow along with your process to install OpenCV and test it out. I am just wondering if OpenCV will run on the new Raspberry Pi OS.
At this current stage I would recommend using the older 'Buster' OS with this guide. If you want to use Bullseye with machine scripts come check this guide on the OAK-D Lite - core-electronics.com.au/guides/raspberry-pi/oak-d-lite-raspberry-pi/
100% any USB webcam can work with this script. You will just need to adjust some code. Likely you will just need to change | cap = cv2.VideoCapture(0) | to | cap = cv2.VideoCapture(1) |. Hope that helps 😊.
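If you aren't sure which index the webcam landed on, a small probing helper saves guessing. This is a sketch, not part of the guide's script; the opener parameter exists so the scanning logic can be exercised without a camera attached.

```python
def find_camera_index(candidates=(0, 1, 2, 3), opener=None):
    """Return the first capture index that opens, or None if none do."""
    if opener is None:
        import cv2  # deferred so the scanning logic works without OpenCV installed

        def opener(idx):
            cap = cv2.VideoCapture(idx)
            ok = cap.isOpened()
            cap.release()
            return ok

    for idx in candidates:
        if opener(idx):
            return idx
    return None

# In the guide's script, instead of hard-coding the index:
# cap = cv2.VideoCapture(find_camera_index())
```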
You may be able to run a Pi with other power supplies but it's recommended to use the official Raspberry Pi Power Supply. They actually provide 5.1V to prevent issues from voltage drop that you might run into with a generic power supply.
Will I be able to add an entire category to the list of objects to be displayed in real-time? So instead of saying ['horse'], could I possibly mention a broader category of ['animal'] in the objects parameter? If not, please do let me know the correct way to approach this.
The fastest way would be to just add a long list like ['horse', 'dog', 'elephant'] etc. If you check the full write-up, I do something very similar there.
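A sketch of treating "animal" as a list of COCO class names. The list below is a subset I've picked out by hand; check the coco.names file that ships with the guide for the full set, and note filter_detections() assumes the guide's (box, className) pair format.

```python
# A hand-picked subset of COCO's animal classes (assumption: see coco.names
# in the guide's files for the authoritative list).
ANIMALS = ["bird", "cat", "dog", "horse", "sheep", "cow",
           "elephant", "bear", "zebra", "giraffe"]

def filter_detections(object_info, wanted):
    """Keep only (box, className) detections whose class is in `wanted`."""
    return [(box, name) for box, name in object_info if name in wanted]
```

You can either pass ANIMALS in as the objects parameter of getObjects(), or run filter_detections(objectInfo, ANIMALS) over the output afterwards.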
Hello, I am trying to create a design that will recognise different trash types. Is this image recognition able to perceive things like cardboard, paper, tissue, or silver foil as trash items?
Hey Max, I'm currently working on a very similar project. My workshop can get a bit messy, so I am setting it up to scream at me when it gets untidy. I will report back on how it goes, or if you've had some luck I'd be more than interested. Cheers!
Also as inspiration check out what this man managed to do with a Pico and a thermal camera! (if only he shared his code) - th-cam.com/video/xO4RsO3nBZ8/w-d-xo.html
Give Edge Impulse a look. This library doesn't have that as an object, but you can use Edge Impulse to train/modify the standard COCO library to include new objects and things.
COCO was trained on 325k images of day-to-day environments and objects. They have the research paper here if you are interested! arxiv.org/abs/1405.0312 (loading the PDF may take a little while)
Hi, I am using a Raspberry Pi 3 Model B+ for this project. I uploaded the code and it was successful, but there is a delay of 8-10 seconds and it detects an object many times. You mentioned in the forum that we can reduce the latency by lowering the camera resolution. I can't find where to change this setting; can you help me? (I am using the Raspberry Pi Camera Module V2.)
Sure mate, lower the values in this line: | net.setInputSize(320,320) |. Make sure both numbers match, as most AI vision systems expect the input video data to be square. If you type | net.setInputSize(160,160) | it will yield faster responses.
@@Core-Electronics The image became faster, but object recognition got worse. It draws the boundaries on different parts of the object. Thanks for your reply.
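When tuning that speed/accuracy trade-off, it helps to put a number on what each input size buys you. A small FPS meter like this (my own sketch, not part of the guide's script) can be dropped into the detection loop while you try 320 vs 160:

```python
import time

class FpsMeter:
    """Average frames-per-second over the run, for comparing input sizes."""

    def __init__(self):
        self.t0 = None
        self.frames = 0

    def tick(self, now=None):
        """Call once per processed frame; returns the average fps so far."""
        now = time.monotonic() if now is None else now
        if self.t0 is None:
            self.t0 = now
            return 0.0
        self.frames += 1
        elapsed = now - self.t0
        return self.frames / elapsed if elapsed > 0 else 0.0

# meter = FpsMeter()
# while True:
#     ...run the detection on a frame...
#     print("fps:", round(meter.tick(), 1))
```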
If you want to keep those deer in frame the whole time perhaps an automatic Machine Learned tracking system would help 😊 something like this core-electronics.com.au/guides/Face-Tracking-Raspberry-Pi/
Very clever doing it through SSH 😊. It shouldn't be an issue doing it that way so long as you go through all of the setup process. If you write me a message on the Core Electronics forum under this topic I'll be best placed to help you; that way you can send through screen grabs of your terminal command errors.
Ah, I see now; it depends on the pest. If you're interested in large pests like possums, rats, skunks, baboons or the like, then this could be useful. For smaller critters like bugs, likely not, unless you had some doorway to the outside where you could watch the bugs come in and had a camera up really close.
3 hours is definitely too long for installation! Come jump into the full written article; at the bottom is a whole bunch of successful troubleshooting that you can utilise.
@@Core-Electronics Thanks for the reply, I did that successfully. One more thing: I want to connect multiple cameras to the Raspberry Pi via GPIO. Is that possible? Can you help me with that?
@@Core-Electronics I ordered mine the day it was announced and have been running it nonstop for a few days now, just with a demo detection running to see how it goes.
Hey everyone! We have a new updated version of this guide that uses a more advanced model and runs a bit smoother. You can check it out here: th-cam.com/video/XKIm_R_rIeQ/w-d-xo.html
Please note that we are keeping this old guide up for legacy reasons and that it requires the older Buster OS (the new one is running on the new Bookworm OS).
This was the fastest, cleanest comprehensive guide I have found on OpenCV for Pi.
The only thing that would make this better would be an install script, but even then I think it's good for some manual work to be left anyway. It gets people's hands dirty and forces them to explore and learn more.
So cool to have the power of machine learning and Computer Vision in our hands to explore and experiment with. What a time to be alive!
Very glad you have your system all up and running 🙂 and I absolutely agree. Something about a machine learned system that runs on a palm-sized computer that you have put together yourself really feels like magic ✨✨
Excellent. I came to this after seeing the facial recognition video, as it would help with a project I have in mind. However, after seeing this and how easy it is to set up and use, my project will be more ambitious. Thanks again and keep up the good work.
Trust me, I just found everything I was looking for about my Raspberry Pi 🌹
This was exactly the thing I was looking for. I will be buying things from their store as compensation!
Hey this is great, thanks for putting this together. Really easy to follow along as a beginner. Is there a tutorial that builds on this and allows you to connect a speaker to the raspi so that whenever a specific object is detected, it makes a specific noise? Would love to see it!
Such a good idea. I'm yet to find a project that talks about it directly, but where I added the extra code for the servo control, if you instead replace that with code to set up a speaker and activate it, you would be off to the races.
Here is a related guide on speakers - core-electronics.com.au/tutorials/how-to-use-speakers-and-amplifiers-with-your-project.html
@@Core-Electronics Superstar, thanks!
Your website, products and educational resources are amazing. I was wondering if you had any advice as to how to further train the machine to identify less common objects? I was hoping to use it for a drone video feed and train it to identify people, for basic search and rescue functions. I am a volunteer in my local community, hence my specific question :-)
Hi Great Video! I know this may be unrelated but how about recognition of objects on screen without a camera? Is there any projects you know of that use AI detection to control the cursor of the computer when it detects an object on screen? Cheers
Cheers mate and excellent ideas. You can definitely feed this system data that has been pre-recorded or streamed in from another location, would require some adjustments to the script. Also in regards to AI detection to control a cursor on a Raspberry Pi come have a look at this video - th-cam.com/video/hLMfcGKXhPM/w-d-xo.html
To use a USB cam, install fswebcam, then change cv2.VideoCapture(0) to cv2.VideoCapture(0, cv2.CAP_V4L2) in the script.
Where should I install this?
You're a lifesaver. Thank you so much ❤
Hi Tim, I would like to ask how I can increase the fps and speed up the recognition rate? Or do I need to use the Lite version to get more speed?
Thanks for this. I want to use my Pi to do custom recognition of trees from their bark in a portable field unit. I already tried TensorFlow Lite and an off-the-shelf database to do common object recognition.
If I had a small need to recognize say 50 trees, how many labelled images do I need of each tree for the training data?
Hi Charles, some Australian scientists concluded in a 2020 paper “How many images do I need?” (Saleh Shahinfar, et al) that the minimum number of data points for a class should be in the 150 - 500 range. So if you had 50 species of trees to identify from you'd need roughly between 7,500 - 25,000 images/data points.
@@Core-Electronics thanks so much for this info. I have to get to work! I’m checking out the paper.
Hey Tim! I successfully managed to run this project in about an hour. I didn't compile OpenCV from source though; I installed it through pip, but I still got it working and it's running pretty smoothly. I hope you can change the OpenCV compiling part, as it takes too long (it took me 3 days and was still unsuccessful) and is unnecessary. Thank you.
I used the Raspberry Pi 3B+.
If you use a Raspberry Pi 4, it could be much faster and smoother.
If you can provide some more information I'd happily update the guides 😊 (Perhaps jump onto our core electronics forum and do a quick write up on your process)
Amazing, easy-to-follow, comprehensive video on object detection. Gonna use this to turn my RC car into an autonomous vehicle.
Thanks Tim, Keep up the great work :D
Oh man that sounds like an amazing project 😊! Definitely keep me posted on how it goes. The Forum is a great place for a worklog - forum.core-electronics.com.au/
Brother, I too am working on this project. Can you leave any leads? I am sending you an email; if you have time, please reply.
Hi.
I wanted to ask, do you think the Raspberry Pi Zero cam could be used as a substitute? I'm currently working on a project that involves Raspberry Pis and cameras and have done a lot of research on what hardware to acquire. I haven't seen much benefit in using the V2 camera instead of the Zerocam; I actually think the Raspberry Pi Zero cam has better specs for its price when compared to the V2.
Should work perfectly fine 😊. If the video data is coming into the Raspberry Pi through the ribbon cable I don't think you would even need to change anything in the script.
Thank you, a great preview of how to get started!
And a really big thanks to you for explaining this so well😁😁
Hey man great video. Any chance you can cover how to use this same concept to detect anomalies instead? Rather than looking for specific objects expected to be there in the camera, the program learns the objects expected to be there and detects when an unusual object is found. Thanks.
This is amazing ! this is soo very cool! Thank you for introducing me to coco!
Lost two nights trying to run it on the latest OS! Use the previous one, it is mentioned in the article.
thank you, I was struggling with this and was utterly confused.
Great video, I just came up with an idea for a project using this. I have no experience with Pi's but basically it would be using a camera to detect a squirrel on a bird feeder and then playing some loud noise through a speaker. Would this be a difficult thing to do?
Sounds like an absolutely excellent idea that could definitely be implemented using this kind of Object Detection. We just had a new project posted on our website worth checking out all about using a Raspberry Pi to track Kangaroos and when it does it sends photos of them to a website server - core-electronics.com.au/projects/rooberry-pi
Thanks for sharing, this is really good and easy to follow
This video helped a lot! 👍
Sweet! 😊
Hi Tim, the video was great. BTW, do you know another dataset that I could use with this code, and can you explain how to train it to detect a new object?
Do you have any guides for using an ultra-low-light camera module such as the Arducam B0333 (Sony Starvis IMX462 sensor)?
This is a cool, clear, straightforward video. Well done.
Question: does selecting specific objects make the identification faster? For example, if I only want birds, cats, and people, to reduce load, would it work?
That's a really great question that I am not 100% sure on. My first guess is that you might see a bit of improvement, but I don't think it would be hugely significant. If you run some of these tests, let us know; we are very curious as well!
You are a legend, bro.
I have a question: what if, when it detects a particular image (in my case, garbage), it has to generate a GPS location, or send the location of that point to another vehicle, like you did with your servo motor?
Hi, this was a really great project and helped me a lot, but can you help with how we can change the size of the box drawn around our object?
The size of the boxes tends to be based on the size of the detected object, but the colour and width of the box can definitely be altered. Inside the code, look for the section | if (draw): |
Then below that, the line | cv2.rectangle(img,box,color=(0,255,0),thickness=2) |
By altering the (0,255,0) numbers you can change the colour of the box. By changing the thickness number you can have very thin or very bold lines. Font and other aesthetic changes can be made in the following lines.
@@Core-Electronics Thank you very much
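One gotcha when restyling the box from the answer above: OpenCV colours are BGR tuples, not RGB, which is easy to trip over. A small sketch that keeps the styling in one place (the helper and constant names are my own, not from the guide):

```python
def rgb(r, g, b):
    """Convert a familiar RGB colour to the BGR tuple OpenCV expects."""
    return (b, g, r)

BOX_COLOR = rgb(255, 0, 0)  # red boxes; OpenCV sees this as BGR (0, 0, 255)
BOX_THICKNESS = 3           # bolder outline than the default 2

# Inside the guide's `if (draw):` block, the rectangle line becomes:
# cv2.rectangle(img, box, color=BOX_COLOR, thickness=BOX_THICKNESS)
```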
Thank you very much for your work!
Great video! Good for beginners.
I want to get the names of the objects into a string and print them when an object is detected.
Can you give me any tips? Thank you so much.
Cheers mate! In the main script, underneath the line | result, objectInfo = getObjects(img,0.45,0.2) | is another line stating | #print(objectInfo) |. If you delete that | # |, then save and run again, you will be printing the name of the identified object to the shell.
Hope that helps 😊
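To go one step further and get the names into a single string (as asked), a small helper works. This assumes objectInfo is a list of (box, className) pairs, which is how the guide's getObjects() appears to return it; check your own script.

```python
def detection_summary(object_info):
    """Join detected class names into one printable string."""
    names = [name for _box, name in object_info]
    return ", ".join(names) if names else "nothing detected"

# In the main loop:
# result, objectInfo = getObjects(img, 0.45, 0.2)
# print(detection_summary(objectInfo))
```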
Hello, great video, but how do I get the coordinates of the tracked objects? I am trying to build a robot that can identify and pick up objects; how would I find the coordinates?
How can I add more detection objects, like a light bulb on a wall that turns a certain colour? And can I add code to play a sound on a speaker when a detection happens?
Just perfect, thanks a lot man!
Cool, but what would it take to make this work at 60 fps (doing the image recognition on every frame and not lagging behind when things move fast)?
Thank you VERY much!
I have the perfect application for this but the objects I need to identify are very similar and incredibly difficult for experienced humans to see accurately. Would this just mean supplying more training data to the system?
Hey, great video. May I know where to tinker if I will be using an ESP32 camera to stream the video? Thank you in advance!
Hey mate, cheers 🙂 the line to alter in the code is | cap = cv2.VideoCapture(0) |, changing that 0 to another index number (or stream address) that will represent your ESP32 camera stream. Come make a forum post if you need an extra hand.
@@Core-Electronics Hi, I would like some extra hands on this one. How can I implement the ESP32-CAM as my video stream for real-time object detection using this code? Thanks!
Definitely a great question for our Core Electronics Forum 😊
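As a starting point, here is a rough sketch of pointing cv2.VideoCapture at an ESP32-CAM's MJPEG stream instead of the Pi camera. The address, port 81, and /stream path are placeholders based on common ESP32-CAM web server sketches; use whatever yours actually serves.

```python
def stream_url(host, port=81, path="/stream"):
    """Build the MJPEG URL an ESP32-CAM web server typically serves."""
    return "http://{}:{}{}".format(host, port, path)

def open_stream(host):
    import cv2  # deferred so stream_url() works without OpenCV installed
    cap = cv2.VideoCapture(stream_url(host))
    if not cap.isOpened():
        raise RuntimeError("Could not open stream from " + host)
    return cap

# Replaces the guide's cap = cv2.VideoCapture(0):
# cap = open_stream("192.168.1.50")  # placeholder address for your ESP32-CAM
```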
Great video, a big help for my thesis. Can it also be used for pests?
Glad to be of help 🙂 not quite sure what you mean though.
Hello can it be possible if you can join the animal, object and person or facial recognition at the same time? I'm working that kind of project could you help me sir? Please...
Aww what an excellent idea! You will start wanting more powerful hardware very quickly going down this path. Come check out the Oak-D Lite (which is an excellent way to start stacking multiple AI system whilst still using a Raspberry Pi) - th-cam.com/video/7BkHcJu57Cg/w-d-xo.html
@@Core-Electronics How about just identifying whether it is an animal, a thing, or a person (or some kind of moving object), and at the same time capturing a preview picture of it? How can you make this? And also, how do I set it up so that if the Raspberry Pi detects a person it emails you, but if it is not a person it does not? Hoping you can help me with my research.
Hello, I'm very happy to see this tutorial. Thanks for the help!
Is it possible to detect drugs or pills ?
For sure but you will need to create a custom Machine Learnt Edge system. Come check out Edge Impulse, personally I think they are the best in the game for this kind of stuff (and totally free for a maker) - www.edgeimpulse.com/
Very good video, and the explanations are well detailed. I have a project that consists of detecting paper; your technique works with other objects but does not work with paper. I don't know if it's possible to teach the system to recognise paper. Thank you.
Edge Impulse is your friend here - www.edgeimpulse.com/
This will let you customise already-created AI systems like the COCO library. Stepping through this system you will be able to modify the COCO library to recognise paper 😊
Can I use a normal usb camera with this?
Hi, great video! Can I use a USB webcam instead of the Pi cam? Is it just a case of changing the code?
Ty for a great video
Where could I find a library for the specific stuff I need? I am looking for cans, bottles, glass bottles etc.
Hello, did you ever find a library of the things you needed? I also need a library for specific items and was wondering if you found a good resource.
I did not. I made myself a model by training it using Roboflow.
Can I run your project on a MacBook, and if so, what kind of modifications would the hardware need? Thanks.
Hi, great videos! How do I add to the dataset? Is there a file to add to, or is it an adjustment in the code? Thanks again.
Thank you man! This was really helpful.
Can you create new dataset annotations for a new object and use them with this COCO model? For example, I want to detect a soccer ball. Can I just create annotations with something like DataTorch and use those annotations in conjunction with the provided model and weights?
Amazing, sir! How can I add a speech module, so that when any object is detected it speaks the object's name?
Nicely explained. Can I apply this to a new dataset different from this one?
It will require some dedicated effort but you can customise this object detection dataset using edge impulse. www.edgeimpulse.com/
That way you can add whatever object or creature you'd like 😊 I hope I understood correctly.
How do you transfer a dataset to the Pi? Do you store it in a file, or does it need adding to the code?
Great video !!
Hi, this tutorial helped a lot with my project. I successfully set up and ran the code from a Raspberry Pi 4 Model B terminal; I just couldn't figure out how to see the video output while the code is running from the terminal (not from Geany or Thonny). Maybe you could help me out :>>
Not quite sure why it wouldn't do that for you when you run the script in the terminal. Come write up a post here at our forum and post some pictures of the situation - forum.core-electronics.com.au/. Reference in the post me and I'll best be able to help 😊
Great video! Can you run this portable on a battery not connected to the internet?
All the processing is done on the edge, thus you only need the hardware (no calculations happen over Wifi or via the Cloud). So if you had a big enough battery you could definitely run this system via a battery without Internet 😊.
Thanks for the quick reply!
Hey Tim! I seem to encounter a problem while following your instructions: the make -j $(nproc) step stops every time at 40%. I re-typed and entered the same line several times, but it didn't work. Is there any solution? Thanks for answering.
Check the description for the article page. Scroll down to the questions section and you'll find the answer
I have an IMX219; apparently it will not work with OpenCV. Is there a way to use GStreamer to make it work in OpenCV?
Hi, is there a way to then create a log of all recognised animals/humans so the data can be consumed?
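A log like the one asked about here can be sketched with Python's csv module; the filename and the per-frame hook are assumptions, not part of the guide's script:

```python
# Sketch: append each recognised object with a timestamp to a CSV so
# the data can be consumed later (spreadsheet, pandas, etc.).
import csv
import datetime
import io

def log_detections(writer, names):
    """Write one row (timestamp, name) per detected object."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    for name in names:
        writer.writerow([stamp, name])

# In the real script you would open("detections.csv", "a", newline="")
# once and call log_detections() every frame; StringIO just demos it.
buf = io.StringIO()
log_detections(csv.writer(buf), ["cat", "person"])
print(buf.getvalue())
```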
Great video, thank you for sharing.
Hi, awesome video and great content. Can I also get this same code to identify FIRE? Can you guide me on how to do that?
Also, can I get a trained dataset for fire, and how do I get the library into the folder?
Amazing , thank you !
Our pleasure!😊😊
Excuse me, I need help please: is Tiny-YOLO better for the Raspberry Pi, or can normal YOLO be used?
Thanks for the tutorial. Can you maybe show how to implement a new library? I want it to just detect if there is an animal; the kind doesn't matter.
I've been learning more about this recently. A great way to create custom libraries that a Raspberry Pi can then implement is through Edge Impulse. With this you will be able to train and expand the number of animals that the default COCO library comes with. Tutorials on this hopefully soon. www.edgeimpulse.com/
@@Core-Electronics Hi Do you have tutorials for Custom Object Detection using your own model?
Hi, is there any option for tracking a QR code with the pan and tilt module?
Hi core electronics, I am looking for a lens for my Raspberry Pi HQ camera module... I want good quality image and a closer view for defect detection for my FFF 3D printed parts...can you suggest some lenses. Thanks
There is a microscope lens that might be suitable for looking at 3D print defects. Give that a look. core-electronics.com.au/microscope-lens-for-the-raspberry-pi-high-quality-camera-0-12-1-8x.html
Hi,
Can I execute this project with a Raspberry Pi 3 A+ ?
You definitely can, it will just run a little bit slower.
thank youu veryy muchh🙇
❤😍
I got an error:
Traceback (most recent call last):
File "", line 35
cv2.putText(img,classNames[classId-1].upper(),(box[0] 10,box[1] 30),
SyntaxError: invalid syntax
What does this error mean? I already installed cv2.
it should be (box[0]+10,box[1]+30)
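For anyone hitting the same SyntaxError, the corrected coordinate maths can be checked in isolation. The names below follow the traceback above, but the values are stand-ins:

```python
# The SyntaxError came from the missing '+' signs in the origin tuple.
classNames = ["person", "bicycle", "car"]  # small stand-in for coco.names
classId = 3                                # 1-based class id, as in the script
box = (120, 60, 80, 80)                    # x, y, w, h of the bounding box

label = classNames[classId - 1].upper()
org = (box[0] + 10, box[1] + 30)  # text origin nudged inside the box

# The real call then reads (font/colour arguments as in the guide's script):
# cv2.putText(img, label, org, cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
print(label, org)  # CAR (130, 90)
```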
Would this program be able to email somebody about what object it is seeing? Like, instead of turning the servo, email somebody?
Absolutely! Here is a straightforward guide to sending an email through a Python script. If you merge those two together you'll be smooth sailing - raspberrypi-guide.github.io/programming/send-email-notifications#:~:text=Sending%20an%20email%20from%20Python,-Okay%20now%20we&text=import%20yagmail%20%23%20start%20a%20connection,(%22Email%20sent!%22)
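Merging the two could look roughly like this. The yagmail lines follow the linked article and are left commented out, and the cooldown helper is an assumption added so one sighting doesn't trigger a flood of emails:

```python
# Sketch: email when a person is detected, rate-limited by a cooldown.
import time

# import yagmail                       # per the linked guide: pip3 install yagmail
# yag = yagmail.SMTP("me@gmail.com")   # app-password login, see the link

EMAIL_COOLDOWN = 300  # seconds between alert emails
_last_email = 0.0

def maybe_email(detected_names, now=None):
    """Return True on the frames where an alert email would be sent."""
    global _last_email
    now = time.time() if now is None else now
    if "person" in detected_names and now - _last_email > EMAIL_COOLDOWN:
        _last_email = now
        # yag.send("you@example.com", "Pi alert", f"Saw: {detected_names}")
        return True
    return False

print(maybe_email(["person", "dog"], now=1000.0))  # True  (email sent)
print(maybe_email(["person"], now=1100.0))         # False (cooling down)
```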
Instead of a Raspberry Pi 4, can we use a Raspberry Pi Zero 2 W if speed doesn't matter to me?
Yours is the closest project to my idea; in fact it's practically it.
But I would like to run it 24/7 during a 10-day period (my holiday).
I would like it to press a button 10 minutes after each time it identifies a cat (mine), and nothing else:
Here is a cat:
wait 10 minutes
press the smart button (I'm looking for a way to flush the toilet each time after my cats have done their business)
Is this possible/feasible with this?
Definitely possible and an excellent project to eliminate a chore 😊 or make for an even more independent kitty. The COCO library used in this guide has | Cat | as one of the animals it can identify. And Raspberry Pis are excellent at running 24/7. So I reckon you're in for a project winner.
If you follow through the full write-up you'll be able to have a system that can identify cats (and only cats). That's the hard bit done. Solenoids are a way to trigger the button; check this guide for the process of getting one running with a Raspberry Pi - core-electronics.com.au/guides/solenoid-control-with-raspberry-pi-relay/
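The timing the commenter describes can be sketched like this; the per-frame hook and the solenoid stub are assumptions (the actual GPIO code is in the linked solenoid guide):

```python
# Sketch: when a cat is seen, start a 10-minute countdown, then fire
# the flush once. The solenoid trigger itself is left as a stub.
FLUSH_DELAY = 10 * 60  # seconds between sighting and flushing

_pending_at = None

def on_frame(names, now):
    """Call once per frame with the detected names; True when flushing."""
    global _pending_at
    if "cat" in names and _pending_at is None:
        _pending_at = now + FLUSH_DELAY      # start the countdown
    if _pending_at is not None and now >= _pending_at:
        _pending_at = None
        # pulse_solenoid()  # GPIO pulse, per the linked solenoid guide
        return True
    return False

print(on_frame(["cat"], now=0))  # False -- countdown started
print(on_frame([], now=300))     # False -- still waiting
print(on_frame([], now=601))     # True  -- flush!
```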
Great video! I’m wondering if instead of the green rectangle with the name of the object, I can get the names of the objects into a string so I can print. I am trying to use a Text To Speech software so that whatever the object’s name, it is said out loud. Do you have any tips, help, or advice to give me?
Definitely something you can do! There are a lot of great text-to-speech packages that will work with the Raspberry Pi; Pico TTS is a great example. With a little bit of code adjustment you'll be off to the races.
Come make a forum post (link in description) on your idea and then we can give you a much better hand than I can here 😊
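A rough sketch of the idea, using espeak as the TTS engine (one of several options on the Pi, installed with `sudo apt install espeak`); the function names are assumptions:

```python
# Sketch: announce newly detected objects out loud. Only names we
# haven't spoken yet are announced, so it doesn't chatter every frame.
import subprocess

def phrase_for(names, already_spoken):
    """Build the phrase for objects not announced yet (and remember them)."""
    new = [n for n in names if n not in already_spoken]
    already_spoken.update(new)
    return "I can see " + " and ".join(new) if new else ""

def speak(text):
    if text:
        subprocess.run(["espeak", text])  # blocks briefly while speaking

spoken = set()
print(phrase_for(["cup", "person"], spoken))  # I can see cup and person
print(phrase_for(["cup"], spoken))            # (empty -- nothing new)
```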
@Niseem Bhattacharya Did you figure out how to do that? I need help with it.
Hey Tim! Here's a question: is the COCO model you use generated by the YOLO algorithm? This is related to the writing of my graduation thesis. I would be very grateful if you could provide more suggestions.
Sorry for getting to this so late. A lot can be learned here - cocodataset.org/ . Also there are a ton of research papers as people are unraveling this technology that are worth exploring (or adding to the bottom of a graduation thesis). Good luck mate!
@@Core-Electronics Thanks so much! I believe that with your help I can get a high score. Best wishes!
Sir! Can you use a webcam instead of the official Raspberry Pi camera?
Yep :)
Hi, thank you for the explanation and code. I tried the code with the V3 HD camera, but it didn't work. Additionally, can you tell me how to create an autostart for this design? The 5 ways to autostart don't work ("Output:957): Gtk-WARNING **: 19:31:41.632: cannot open display:"). I'm sending a relay with it to keep the chickens away from the terrace with a water jet. Beautiful design! Greetings, Luc.
Hey Luc,
To start you will need to update a new driver for the V3 Camera so it can work with the older 'Buster' Raspberry Pi OS. Check out how to do it here - forum.arducam.com/t/16mp-autofocus-raspbian-buster-no-camera-available/2464 -
And if you want to autostart your system come check out how here (I would use CronTab) - www.tomshardware.com/how-to/run-script-at-boot-raspberry-pi
Come pop to our forum if you need any more help 😊 forum.core-electronics.com.au/latest
Kind regards,
Tim
Is there any way to make the OpenCV video capture run faster?
Hi Tim,
Thank you so much on this video for demonstrating how to use OpenCV with the Raspberry Pi.
I am willing to follow along your process to install OpenCV and test it out.
I am just wondering if OpenCV will run on the new Raspberry Pi OS
At this current stage I would recommend using the older 'Buster' OS with this guide. If you want to use Bullseye with machine scripts come check this guide on the OAK-D Lite - core-electronics.com.au/guides/raspberry-pi/oak-d-lite-raspberry-pi/
Hi Tim, can the coral accelerator be integrated in this project?
Absolutely
Is it possible to use any USB camera instead of an official pi camera for this project?
100% any USB webcam can work with this script. You will just need to adjust some code. Likely you will just need to change | cap = cv2.VideoCapture(0) | to | cap = cv2.VideoCapture(1) |. Hope that helps 😊.
@@Core-Electronics thank you! I’ll try this out tomorrow once I am able to and have setup my pi again
Can I use Bullseye?
Oops, last question, sirs: can you use any type of USB-C power supply cable for the Raspberry Pi?
You may be able to run a Pi with other power supplies but it's recommended to use the official Raspberry Pi Power Supply. They actually provide 5.1V to prevent issues from voltage drop that you might run into with a generic power supply.
Will I be able to add an entire category to the list of objects to be displayed in real-time? So instead of saying ['horse'], could I possibly mention a broader category of ['animal'] in the objects parameter? If not, please do let me know the correct way to approach this.
The fastest way would be to just add a long list like ['horse', 'dog', 'elephant'] etc. If you check the full write-up, I do something very similar there.
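One way to fake a broader category, sketched below: expand 'animal' into the concrete COCO class names before filtering. The dictionary and helper are assumptions; the ten names listed are the animal classes in the standard coco.names file:

```python
# Sketch: map a broad category onto the COCO classes the model knows.
CATEGORIES = {
    "animal": ["bird", "cat", "dog", "horse", "sheep", "cow",
               "elephant", "bear", "zebra", "giraffe"],
}

def expand_objects(objects):
    """Turn e.g. ['animal'] into the concrete class names to filter on."""
    out = []
    for name in objects:
        out.extend(CATEGORIES.get(name, [name]))
    return out

# Pass the expanded list wherever the script filters detections:
print(expand_objects(["animal"]))            # the ten animal classes
print(expand_objects(["person", "animal"])) # person plus the animals
```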
Hello, I am trying to create a design that will recognise different trash types. Is this image recognition able to perceive things like cardboard, paper, tissue, or silver foil as trash items?
Hey Max, I'm currently working on a very similar project. My workshop can get a bit messy, so I am setting it up to scream at me when it gets untidy. I will report back to you on how it goes; or if you've had some luck, I'd be more than interested.
Cheers!
Ohhh man, where were you? I spent a week trying to install the libraries. Thank you soooo much!
Hi Tim, how do I add gTTS to the program so it speaks when an object is detected?
This is really cool. I wonder how hard it would be to connect to a thermal imaging camera and identify things by body heat?
Sounds like an awesome project, let us know if you manage to get it working.
Also as inspiration check out what this man managed to do with a Pico and a thermal camera! (if only he shared his code) - th-cam.com/video/xO4RsO3nBZ8/w-d-xo.html
Hey Core Electronics! Can I make it detect pistols only?
Give Edge Impulse a look. This library doesn't have that as an object, but you can use Edge Impulse to train/modify the standard COCO library to include new objects and things.
How can I fuse this code with the face recognition one?
Will this work on an Orange Pi5? Raspberry Pi's are out of stock everywhere except from scalpers charging five times normal.
I'm not sure honestly mate, if you can get it to run 'Buster' Raspberry Pi OS then I'll give it a solid positive maybe.
What model is the COCO dataset trained on? I tried a custom one trained on TFlow Lite and it wouldn't work.
COCO was trained off 325k images of just day to day environments and objects. They have the research paper here if you are interested! arxiv.org/abs/1405.0312
(loading the PDF may take a little while)
I am currently trying to follow this on a pi 4b with Bullseye. I am really struggling to get the files to download and build properly, any tips?
Check the article G you'll see it in the description
Awesome vid, clear fast and accurate 🌟
Hi, I am using a Raspberry Pi 3 Model B+ for this project. I uploaded the code and it ran successfully, but there is a delay of 8-10 seconds and it detects an object many times. You mentioned in the forum that we can reduce the latency by lowering the camera resolution; I can't find where to change this setting. Can you help me? (I am using the Raspberry Pi Camera Module V2.)
Sure mate, lower the values you find in the line | net.setInputSize(320,320) |. Make sure both numbers match; most AI vision systems depend on the inputted video data being square. If you type | net.setInputSize(160,160) | it will yield faster responses.
@@Core-Electronics The image became faster, but object recognition worsened. It draws the boundaries in different parts of the object. Thanks for your reply.
I hadn't realised it would do that. Without a doubt there are some code lines in there that, when adjusted, would fix up the boundary boxes.
@@Core-Electronics any update with this issue?
Hi, I keep getting an error message at 41% on the make -j $(nproc) step. No matter how many times I re-enter the command, it won't progress. Any help?
After watching this, I have an urge to train one of these to identify the difference between male and female whitetail deer for a game camera....
That would be ultra rad!
If you want to keep those deer in frame the whole time perhaps an automatic Machine Learned tracking system would help 😊 something like this core-electronics.com.au/guides/Face-Tracking-Raspberry-Pi/
Can you do this with your own dataset and if so how? Need it for a school project. Thank you to whoever answers.
Yes you can but you have to train your dataset. Watch some other tutorials on how to train dataset.
@@arafatsiam4060 Will a dataset made from TensorFlow lite work? Or is there a different one compatible with the program?
Can you swap the SD card for cloud storage?
Please upload a video on motion tracking and tracing a moving object 👍
I am getting a cv2.imshow error while running object-ident.py in the Pi terminal; I connected to the Pi via SSH. What should I do?
Very clever doing it through SSH 😊. It shouldn't be an issue doing it that way, so long as you went through the whole setup process. If you write me a message on the Core Electronics forum under this topic I'll best be able to help you; that way you can send through screen grabs of your terminal errors.
hi is this usable for pest detection?
Ah, I see now; it depends on the pest. If you're interested in large pests like possums, rats, skunks, baboons or the like, then this could be useful. Smaller critters like bugs, likely not, unless you had some doorway to the outside where you could watch the bugs come in and had a camera up really close.
How much time should it take after make -j $(nproc)? On my side, after 3 hours my system reboots automatically. Help me out in this situation.
3 hours is definitely too long for installation! Jump into the full written-up article; at the bottom is a whole bunch of successful troubleshooting that you can utilise.
@@Core-Electronics Thanks for Reply. I did that successfully. Thanks for your help.
One more thing: I want to connect multiple cameras to the Raspberry Pi via GPIO. Is it possible? Can you help me with that?
Can I also use a normal webcam?
The Raspberry Pi 5 with the AI Kit is pretty slick; I just need to get better identification.
We are very excited over here for the AI kit as well! Not the most powerful chip, but performance per dollar and Watt is quite respectable.
@@Core-Electronics I ordered mine the day it was announced and have been running it nonstop for a few days now, just with a demo detection running to see how it goes.