Export Custom Trained Ultralytics YOLOv8 Model and Run Live Inference on Webcam | Episode 4
- Published Jul 2, 2024
- Welcome to the fourth video in our new series! Join Nicolai Nielsen as he shows you how to export your custom-trained Ultralytics YOLOv8 model and run live inference on a webcam.
In this episode, you'll learn the straightforward process of exporting a trained YOLOv8 Model on your custom dataset in Google Colab. We will then take the exported YOLOv8 model for object detection and see how to use it for inference in a Python script for custom applications and projects.
Stay tuned for practical insights and step-by-step instructions. Let's get started!
For more information, please visit:
Ultralytics ⚡ resources
- About Us - ultralytics.com/about
- Join Our Team - ultralytics.com/work
- Ultralytics License - ultralytics.com/license
- Contact Us - ultralytics.com/contact
- Discord - / discord
YOLOv8 🚀 resources
- GitHub - github.com/ultralytics/ultral...
- Docs - docs.ultralytics.com/
Did you also import OpenCV for the webcam functionality, or just the code in the video? Thanks
Hi there! Ultralytics has got you covered when it comes to OpenCV - there's no need to import it separately.
Hi :) Thanks for the great video. Just one question: when I run the code, a video (mp4) from the webcam stream is saved in an additional folder detect/predict, but unfortunately it gets corrupted every time and can't be opened. Is there another easy way to save a video from the webcam stream?
It appears there might be a codec issue. Please ensure you are using the latest version of OpenCV. Hopefully, updating to the latest version will resolve this issue.
Thank you for this excellent content. After training the model and obtaining the best.pt file (YOLOv8), how can I convert this model to a TFLite model?
You can use the export feature in the Ultralytics package to export the model to TFLite format with a single command, shown below. For more information, explore our docs: docs.ultralytics.com/modes/export/#usage-examples

```
yolo export model=best.pt format=tflite  # export to TFLite
```
Thanks
Ultralytics Team!
Thanks for your interesting videos. The documentation says ONNX and OpenVINO are 3x faster on CPU. I tested with my trained YOLOv8x model at imgsz 4032, and it executes in the same time as the original .pt format. Can you confirm it's faster in your tests?
A value of 4032 for imgsz is quite high; our testing has been conducted at imgsz 1280 and 640. At these image sizes, the model exhibits a speed that is 3 times faster.
Thanks
Ultralytics Team!
After installing Python, OpenCV, and Ultralytics on a Raspberry Pi 4 and then downloading my trained model from Google Colab, can I directly run it on the Raspberry Pi for object detection?
Yes, you can run inference on a Raspberry Pi directly using the exported model. For more information, you can explore: docs.ultralytics.com/guides/raspberry-pi/
Thanks
Ultralytics Team!
If I use this approach to retrain the model with 1 or 2 new classes, does it mean that the resulting model can detect the pre-trained objects in addition to the new objects?
Retraining the model with 1 or 2 new classes will result in the model exclusively detecting those added classes. The original classes on which it was pretrained will no longer be detected.
Thanks,
Ultralytics Team!
Hi, I don't seem to see the link for the Colab for this episode, has it been taken down?
The Colab notebook for this module is currently unavailable. However, we do have plans to provide it in the future. In the meantime, you can export the model by following the instructions outlined in our documentation, accessible at: docs.ultralytics.com/modes/export/
@@Ultralytics okay, thanks very much :)
If I train a model inside the Ultralytics website, which format should I choose for export (PyTorch, ONNX, etc.)?
PyTorch is optimal when focusing solely on performing inference with a trained model. However, if deploying the model on edge or embedded devices is a consideration, choosing the ONNX format would be preferable.
Thanks
Ultralytics Team!
Hello good Sir! Is it possible to use this model and integrate it on a flutter environment?
Yes, it's possible. You can export the model in TensorFlow Lite or ONNX format and use it in a Flutter environment.
For more information, you can check export docs: docs.ultralytics.com/modes/export/
@@Ultralytics Thank you very much for replying! Your videos are so informative ☺☺
Hey. I want to download the model I trained on my dataset to detect ingredients on Google Colab, but the weights are stored in my Google Drive, which I mounted with the notebook. I have to make an Android application using this YOLOv8 model. How can I do that? Integration with the app in Android Studio. Is there a tutorial or something? Any guidelines would help. I have a project due next week.
To integrate the YOLOv8 model into an Android app, you'll need to export the YOLOv8 PyTorch model to TensorFlow Lite or NCNN. Detailed information on this process is available in our documentation: docs.ultralytics.com/modes/export/#export-formats
@@Ultralytics For the download in "TensorFlow Lite format" it gives an error: YOLOv8 TensorFlow export support is still under development. Please consider contributing to the effort if you have TF expertise. Thank you!
And for the "ncnn" it gives the error: ERROR: Invalid format=ncnn, valid formats are ('torchscript', 'onnx', 'openvino', 'engine', 'coreml', 'saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs', 'paddle')
Try exporting the YOLOv8 model in TFLite format with the latest TensorFlow version installed.
@@Ultralytics I tried exporting to ONNX and then converting to TFLite format, following something I found on the internet. Is that okay? Will the model still work? I obtained a "model.pb" directory and a "converted_model.tflite" file in the files section after running the code.
Yes the model should work!
How to use an exported engine file for inference of images in a directory?
You can use the command below to run inference on an image directory using an engine file.

```
yolo detect predict model='path/to/engine/file' source='path/to/images/directory'
```
Thanks
Hello, Sir! Why can't I open my laptop's camera when I type { result = model(source = 1, .......) } like you do? Could you teach me how? 😭
If you're using a webcam, set the source parameter to 0. However, if you have an external camera connected to your laptop, use source=1.
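As a minimal sketch (assuming the `ultralytics` package is installed and a trained `best.pt` checkpoint is available; the path is hypothetical), webcam inference typically looks like this, where `source=0` selects the built-in webcam and `source=1` the first external camera:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # hypothetical path to your trained weights

# source=0 -> built-in webcam; source=1 -> external camera.
# show=True opens a window with the live annotated stream;
# stream=True yields results frame by frame instead of buffering them.
results = model(source=0, show=True, stream=True)

for result in results:
    # Each result holds the detections for one frame
    print(len(result.boxes), "detections")
```

If the window still doesn't open, it usually means another application is holding the camera or the index doesn't match the device.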
Thanks
Hello, I want to convert this model to model.pb (TensorFlow). How can I do that?
The provided command allows you to convert the model to a TensorFlow saved model (.pb) format.
```
yolo export model='path/to/yolov8s.pt' format=saved_model  # Official YOLOv8 model
yolo export model='path/to/yolov8_best.pt' format=saved_model  # Your Custom Model
```
@@Ultralytics hi thanks
but i have this issue
ImportError: cannot import name 'tensor' from 'tensorflow.python.framework' (C:\Users\aubin\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\framework\__init__.py)
TensorFlow SavedModel: export failure 6.1s: SavedModel file does not exist at: G:\yolov8\yolov8_face_detection.v1i.yolov8\supect_mouvement.v1i.yolov8\runs\detect\train\weights\best_saved_model\{saved_model.pbtxt|saved_model.pb}
How can I resolve it?
You can ask code-related questions in our GitHub Issues section, accessible at: github.com/ultralytics/ultralytics
@@Ultralytics This command is showing an issue
While converting yolov8 model, I am getting this error:
ImportError: generic_type: cannot initialize type "StatusCode": an object with that name is already defined
I searched over the internet, but unable to figure this out.
How can I train my custom model on top of the COCO dataset? So the model detects all COCO objects and my custom object.
You should integrate your custom annotated classes into the COCO dataset and subsequently perform fine-tuning on the model. This process will enable the model to detect both COCO classes and your custom annotated classes.
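As a sketch, a combined dataset YAML might look like the following (paths and the extra class name are hypothetical). The custom class is appended after the 80 COCO classes, so your annotation files must use the new index (80 in this example):

```
# combined_coco_custom.yaml (hypothetical)
path: datasets/coco_plus_custom
train: images/train
val: images/val

names:
  0: person
  1: bicycle
  # ... remaining COCO classes 2-79 ...
  80: my_custom_object  # new class appended after COCO's 80 classes
```

Fine-tuning on this merged dataset keeps the COCO classes in the label space while teaching the model the new one.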
Once I run it, the new window with the video doesn't appear in PyCharm, but in VSCode it's all right (((
It appears that the PyCharm modules may not have been installed correctly. Please ensure that you are using the latest version of PyCharm. Additionally, if your system does not have sufficient memory, PyCharm may not be able to run live inference.
How can I get the Python files and source code?
You can simply follow our documentation; the code and details are available there: docs.ultralytics.com/
Thanks
Ultralytics Team!
Thanks, it works. But on my webcam the FPS is very low; it runs very slowly. Any idea? @@Ultralytics
How do I use the model with a webcam?
To utilize the webcam for processing, you can specify source=0. This setting allows you to perform inference using the webcam.
@@Ultralytics I want to use multiple webcams at the same time. How can I do that?
@artofanonymous2144, feel free to explore our example of multi-stream object tracking; it can assist you in running multiple streams with object detection. docs.ultralytics.com/modes/track/#multithreaded-tracking
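The pattern from that docs page can be sketched roughly as follows (assuming the `ultralytics` package is installed and two cameras are available at indices 0 and 1; the model path is hypothetical):

```python
import threading

from ultralytics import YOLO

def run_tracker(source):
    # Each thread loads its own model instance and tracks one stream
    model = YOLO("yolov8n.pt")
    model.track(source=source, show=True)

# One thread per camera index
threads = [threading.Thread(target=run_tracker, args=(i,), daemon=True) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Giving each thread its own model instance avoids sharing state between streams; see the docs link above for the full example.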
@@Ultralytics Thank you so much, I was in need of this. I am making an advanced surveillance system for my competition. Thank you a lot... ❤️❤️😊🤗
@@artofanonymous2144 Thanks