Love all your videos, keep up the great content! I’ve had the AI HAT for a while now. I ran all the demo scripts from Hailo, but found them “difficult to adapt” to simple use cases like the ones you demonstrated. Finally I can start using the AI accelerator instead of having it collect dust in my parts bin! I’m looking forward to your next AI tutorial, and hoping that you can show a face recognition technique with the AI HAT, like in one of your recent Pi 4 OpenCV tutorials. Thanks again for sharing your knowledge!
Awesome! I just received my AI HAT+ & my Pi AI Camera from Core Electronics today. This is a good intro on how to install the required repositories to get started. Very useful!!
Great episode, new fan
I love you man 😭 literally saved my life
AWESOME. THANK U SIR
Thank you very much for all the information
Can't wait for when you show how to scan QR codes on the Raspberry Pi 5.
Does it work for multiple object detection together with facial recognition in projects like a real-time PPE monitoring system?
This one is SO much faster than the models in the last video. Do you know if there is a relatively easy way to pull in the YOLO-World model here instead of YOLOv8 or whatever it's using? Or a way to leverage this speed with the way you demonstrated YOLO-World in your last video?
Unfortunately we are out of luck here. There is no way to use the usual YOLO method with the HAT as it uses a completely different workflow and needs a model format called HEF. You can convert normal YOLO models to HEF with the Hailo Dataflow Compiler, but it is a bit of an involved process - not impossible though. You may have some issues converting YOLO-World as well, and once you convert it to HEF, you have locked in the prompt. To change the prompt you would need to re-convert it to HEF.
This may change though! The HAT is still relatively new, but lots of very clever people have access to it, so maybe in time!
Hello, excellent content on your YouTube channel.
Do you know of any option to adapt the 15-pin cable of the Raspberry Pi 5 camera to connect via USB and dispense with the flex cable (which is sometimes very short)?
Hmmmmm, I can't say we have seen one of those before. As a second best, you can get longer camera cables; here is a 0.5 metre one for the Pi 5:
core-electronics.com.au/catalog/product/view/sku/CE09777
Can 2 cameras be run simultaneously on 1 Pi with the AI HAT+?
We have seen examples of it being done, but we weren't able to successfully do it with the Python pipelines. There may be a way to dig around and modify the pipeline to do so, but out of the box and from what we set up in this video, unfortunately no 😔
Hi, do you know how to use a webcam and IP cam with the Pi 5 + Hailo?
When we run the --help option to see all the pipeline options, you should see an option to use a webcam instead. In the basic_pipelines folder, you will also find a script you can run which detects your webcam's device name to use with that pipeline option.
In terms of an IP cam, that might be a little more involved, and if the setup won't natively take an IP cam as an input, you can definitely find a library to do so. If you need a hand with this, we have a maker forum where you can post this question - we have a lot of makers over there who can help.
forum.core-electronics.com.au/
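As a rough sketch of what that looks like on the command line (flag names can differ between versions of the hailo-rpi5-examples repo, so trust the --help output on your own install over this):

```shell
# List every pipeline option supported by your version, including the input flag
python basic_pipelines/detection.py --help

# Run the detection pipeline against a USB webcam instead of the Pi camera.
# /dev/video0 is the usual device node for the first USB camera; the
# webcam-detection script in basic_pipelines can confirm yours.
python basic_pipelines/detection.py --input /dev/video0
```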
Is there pipeline code for facial recognition, so I can use the AI HAT for face recognition?
Right now we haven't seen anything. Hailo has unfortunately only provided pipelines for object detection, pose estimation and segmentation, but it's a good bet you'll see more from the community in the future!
@Core-Electronics Thank you for your response! :)
Can I count the number of green cars the camera sees? Or the number of red Skittles in a bag?
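You can, by post-processing the detections that come out of the pipeline's callback. Here is a hedged, self-contained sketch of the counting logic: the detection list and its `dominant_hue` field are made-up stand-ins for illustration - the real pipeline gives you labels and bounding boxes, and you would estimate each box's dominant hue yourself (e.g. the median hue of the cropped pixels in HSV space).

```python
# Toy post-processing: count detections of a given label whose crop is
# mostly a target colour. The detections below are hand-made stand-ins
# for what an object-detection pipeline would hand to your callback.

def is_green(hue):
    """Rough hue test for 'green' on the OpenCV-style 0-179 hue scale."""
    return 35 <= hue <= 85

def count_matches(detections, label, colour_test):
    """Count detections with the right label whose dominant hue passes the test."""
    return sum(
        1 for d in detections
        if d["label"] == label and colour_test(d["dominant_hue"])
    )

# Pretend the pipeline reported three cars and a person; two cars are green-ish.
detections = [
    {"label": "car", "dominant_hue": 60},    # green
    {"label": "car", "dominant_hue": 110},   # blue
    {"label": "car", "dominant_hue": 50},    # green
    {"label": "person", "dominant_hue": 60}, # wrong label, ignored
]

green_cars = count_matches(detections, "car", is_green)
print(green_cars)  # -> 2
```

Counting red Skittles would be the same pattern with a red hue test and a Skittle-detecting model (red wraps around the hue scale, so it needs two ranges).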
Can we use the new Pi AI Camera? If yes, can we even accelerate the processing with it?
Both the camera and the HAT need different model formats to run. We haven't tested it, but it's likely that you could run the AI Camera and the AI HAT at the same time on the Pi - they would just be running separate models. It is extremely unlikely that you could use both together to accelerate the processing even more.
We also don't have a solid figure, but I would guess the AI Camera is around 4 TOPS (a measurement of computing power). The AI HAT comes in a bigger 26 TOPS version, so 4 extra TOPS wouldn't do that much.
@Core-Electronics Yes we can, I tested it, but I wish they were somehow compatible to accelerate the process even more.
How can it be used to detect, let's say, Pikachus? We probably have to train the model, but can you help us with that?
You will need a custom model, but thankfully someone may have already trained one for you, as there are Pokemon detection models on Hugging Face:
huggingface.co/models?other=yolo&sort=trending&search=pokemon
The next step is to use the Hailo Dataflow Compiler to turn it into the HEF format that the HAT needs. This can be a bit of an involved process, but it's not impossible.
If you don't wish to go through converting the model, we also have a guide on running the YOLO model directly without the HAT. It is slower and doesn't use the processing power of the HAT, but it's very easy to use custom models:
th-cam.com/video/XKIm_R_rIeQ/w-d-xo.html
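If you take the no-HAT route, the Ultralytics tooling boils it down to a couple of commands. A hedged sketch - the model filename here is a placeholder, substitute whichever Pokemon .pt weights you actually download from Hugging Face:

```shell
# Install the Ultralytics YOLO package (CPU inference, no HAT involved)
pip install ultralytics

# Run inference with custom-trained weights on a test image.
# 'pokemon_yolov8.pt' is a hypothetical name for the downloaded weights file.
yolo predict model=pokemon_yolov8.pt source=test_image.jpg
```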
Hide your Pis, this guy's coming after them. LUL
Are we talking meat pies or Raspberry Pis? Cause I'll take both.