I get this error: "...too many values to unpack (expected 4)" pointing inside the labels array. Why am I getting this error this error? What should I do to fix it? :) pip list: ...numpy 1.24.2, torch 2.0.0, torchvision 0.15.1, ultralytics 8.0.82, supervision 0.6.0
It's so cool, thank you for showing basic function to work with YOLO, i'm having a small task that required learning object detection and I was frustrated to find some tutorial, this was really a big help
Hi, your video is great. I've been using your code with my own model to count bacteria inside the zone and it works perfectly! Thank you for sharing the knowledge.
When the number of objects increases, my model does not display the label format that I specified and only displays the object code, how can I display the specified label in any case?
hi, the pre trained model that u used how can we use if i want to train a custom dataset and then use it? like how to use it for model trained on custom dataset on collab but i want to detect in real time how to get that? can i download the already trained model on collab on my pc?
We have a tutorial where I show how to train YOLOv8 model on custom dataset. At the end of the video I show how to use custom model for inference. Among other things I show where your custom model is saved. You need to to download that file to your local.
Thanks for your complete step by step coding video. Kinna like it. I followed yolo since v3, but never know we can filter out hand or other unrelated object to be detected
@@Roboflow , Hey, i followed every step, in my jetson nano, unfortunately i got an error that "Illegal instruction (core dumped) related to core system incompatibility. Do you have a way i can tweak and handle this. Looking forward to your response. I would really appreciate. Thank you again !
Hi, thank you for the tutorial. I'm working on my project right now and it helps me a lot. But this project require me to know the fps, is there a way to show it?
Thank you for this great tutorial! May be you can help me with this error I am getting? When I run the main with the "results = model(frame)" line added, it is throwing the following error (see below). If I use YOLOv5, it works perfect, but with YOLOv8 it throws this error. I have created a virtual environment and followed the tutorial step by step. Any ideas? Thanks! OSError Traceback (most recent call last) Cell In [16], line 22 19 break; 21 if __name__ == "__main__": ---> 22 main(model) Cell In [16], line 10, in main(model) 7 ret, frame = cap.read() 8 assert ret ---> 10 result = model(frame) 12 cv2.imshow("yolov8", frame) 14 k = cv2.waitKey(1) File c:\Python38\lib\site-packages\ultralytics\yolo\engine\model.py:58, in YOLO.__call__(self, source, **kwargs) 57 def __call__(self, source, **kwargs): ---> 58 return self.predict(source, **kwargs) File c:\Python38\lib\site-packages\torch\autograd\grad_mode.py:27, in _DecoratorContextManager.__call__..decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File c:\Python38\lib\site-packages\ultralytics\yolo\engine\model.py:130, in YOLO.predict(self, source, **kwargs) ... --> 205 s = self._ext_to_normal(_getfinalpathname(s)) 206 except FileNotFoundError: 207 previous_s = s OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '[[[ 69 76 103] [ 67 75 102] [ 65 75 103] ... [ 56 71 96] [ 59 73 98] [ 60 73 99]]
@@SkalskiP Thanks for the reply. I found the issue: I was running the code on python3.8.5. I upgraded it to 3.10 and now it works. May be it works since 3.9 . In case you find someone else with a similar issue, now you know the solution, Have a great day!
Is the object inside the same class or not? If the same class it will be hard if different I think we can we would need to experiment a bit with model parameters.
@roboflow thanks for the answer, its not the same class, so we only counting spoon class if the spoon inside the cup bounding box and not counting the spoon outside the bounding box
Hello, I have a problem with LineZone in supervision 0.7.0, and it not working. I've tried to follow the same way with your previous video about track and count object, any idea?
I don't have NVidia cards (nor can I use CUDA for that matter). How can I make use of the GPU when running the "yolo detect predict..." local inference on processors with UHD Graphics 600 & 630 and Intel N100 & N200 processors?
bonjour merci pour le travail ! tu utilise la version 0.2.0 de supervision mais il n'y a que la versin 0.16.0 de dispo qui ne contient pas la fonction detections.from_yolov8 ! comment puis je faire ? merci
thank you i have a question i have a trained instance segmentation yolov8 model and also detectron2 model on custom dataset and what i need is to run inference on new data images and use the output to make annotation on the new images and add them to my train dataset by uploading them to my roboflow dataset later so is there any way i can do that ?
@@body1024 we could start by using YOLOv8 CLI to run prediction on your images pyt pass save_txt=True. That should save your predictions in YOLO txt annotation files. You should be able to upload those annotations and images to Roboflow. Let me know if that worked ;)
Thank you for video! I'm curious about how to write the code to load an ONNX model. I would like to load an ONNX model, but I'm not sure how to do it. When using the code from the video, I encounter an input size error. Are there any helpful videos or resources that I can refer to in this case?
@@Roboflow th-cam.com/play/PLZCA39VpuaZZ1cjH4vEIdXIb0dCpZs3Y5.html It appears that there are no videos for ONNX models. Not much information available... Thank you
We use numpy notation. So you can chain logical conditions using single & and putting each condition into separate brackets. Here is solution to your specific example: detections[(detections.class_id !=0) & (detections.confidence >= 0.7)]
I keep getting 2 detections for the same class even after writing agnostic_nms=True is there a way to say: Just detect 1 of each class in the whole window, the one with higher confidence?
I've been trying to run this code but its failing in the supervision/detection/core.py file... xyxy=yolov8_results.boxes.xyxy.cpu().numpy(), AttributeError: 'list' object has no attribute 'boxes' I put in a bug report.... Any idea what this might be? I'd love to be able to finish your suite of tutorials on this! I am using supervision 0.2.0 and I tried it with the latest 0.2.1.. same thing.... Thanks!
@@Roboflow Yes that fixed it! Thanks for the quick response... and after continuing on I see you had the same problem in the video! So I just needed to continue watching... ugh....
Is it possible I change the "zone_annotator = sv.PolygonZoneAnnotator(zone=zone,color=sv.Color.white(),thickness=2,text_thickness=4,text_scale=2)" position. For example, display the red box with the object number on the left/bottom side on the screen?
Hi Peter. I have a question regarding filtering detections. Filtering sv.Detections is the same as passing class ids as additional argument in the predict method or there are some performance issue with the second alternative?
Hey, can you please guide me on overlapping object detection, as im willing to use YOLO for peach🍑 fruit detection in my project, the problem is, fruits are very dense and are overlapping with each other as well as occluded by the leaves, so can you please help me with that?
@@jirivchi hii, bro, would like to stay in touch with u, so in case any problem occurs to any of us, we may discuss it along....! If u don't mind share ur details where we can connect 🙏
I use Linux/Ubuntu as it looks like you're doing.. :) Would you say that within the ML/Vision industry the Linux platform is the most common? I rarely see Windows being used as the platform of choice for this application.
Hi 👋! It's Peter from the video. Yup, I used my Linux PC for this video, as I needed access to my GPU locally. Usually, I record stuff in colab on my Mac. All in all I'd say that most of the people I worked with use Mac or Linux. Windows is for sure the least frequently used.
why i can't do install ultralystic the command say ERROR: Could not find a version that satisfies the requirement ultralystic (from versions: none) ERROR: No matching distribution found for ultralystic even though I have upgraded pip to the latest version
Thank you for the video Piotr! How handle multiple camera detections and counting in zone and save results in database. Can you make a video tutorial about it
Hi 👋🏻 we are thinking about showing how to save YOLOv8 detections to CSV. Would that be interesting for you? As for multi stream setup, you think about having single model and multiple streams?
Is it possible to use the YoloV8 model in .onnx format rather than .pt, for real-time object detection? I only ask thinking the detection/prediction should take less time. And if so, would you be able to make such a video? :)
Yeah YOLOv8 can be converted to ONNX. I was even thinking about video like that. Not strictly about ONNX but… optimization. Pruning and quantization… does it sound interesting?
Hi Friend, great tutorial! Cheers for that.. how can I know the class number of a given item? for example, you know the class number of apple and person.. You got it somewhere probably :) where can i see this list so i can filter other items from the list? thanks!
Hi sorry for the late response. I was a bit busy with a new video. Take a look here: github.com/ultralytics/ultralytics/blob/9e58c32c15835e54e57f7b8c925367a64cb94951/ultralytics/datasets/coco128.yaml
@@abdshomad I'll keep that in mind next time when I'll make some spectacular mistake. Given that you are frequent viewer what do you think about the format of this video? I code in editor instead notebook. And I write the code instead of just explaining what I did?
@@SkalskiP Thank you. Firstly, better explain in it vs code. Cleaner. But... please also show it runs on Colab. Roboflow Notebooks are very helpful. Colab helpful for quick POC. We dont have to prepare venv, conda, pip install huge packages (pytorch, detectron, etc).
@@abdshomad thanks a lot for your opinion. I also need to balance it all out not to make suuuuper long videos. But I see your point. VS code is a lot cleaner when it comes to explaining the code. On the other hand notebooks are super convenient. This time no notebook, as you need to have access to GPU. But there is repo with example ;)
i wanna clear all my installed dataset and args to make another yolo. how can i do that???? Its a long time problem. PLEASE HEEELP TO CLEAR ALL before making another version!
is there any way of taking only 4 labels, for example, truck, car, bus and motorcycle instead of only one or all? thanks a lot for sharing ur knowledge!
It‘s almost the same however I don‘t reccomand it bc the Raspberyy Pi is way to slow and only archieve 1,5fps at best compared to 120 fps+ on my desktop gpu. The raspberry cam is really enoyikng as well took me many Hours to get work proberly
Yup it can. It will be just a bit mor of work. Because YOLOv7 does not have pip package. But I made stuff like that in the past. It is very much possible.
@@Roboflow i am going to do real time object detection (Hand gesture detection) using yolov7 model, but i haven't been able to find a way to do it in using webcam. Hope you can help me :)
@@Roboflow I tried running it on Jetson Nano using Linux OS, but I failed, it showed the error "illegal instruction (core dumped)". Could you tell me how to fix it?
Hi dude, your video is very good. I have a question. I trained a four-label model. I want to show the count of each label in this model separately on the screen. Do you have a suggestion?
That's a fantastic tutorial! Thank you so much! I have a question: if it's possible for you to guide me on how to implement re-identification (or maybe re-tracking) of the same object with YOLOv8?
Create a new thread here: github.com/roboflow/notebooks/discussions/categories/q-a I'll try to help you out. I'm really busy but I'll try to do my best.
Thank you so much for this amazing tutorial! I have a question: I'm interested in extracting the results from a frame like the result, specifically the count of objects and their corresponding types, and then outputting them in JSON format. Do you have any suggestions or ideas on how I can accomplish this?
Hello , thank you for great video. I would like to know if I can save the results as a time-stamped data in csv format. If you respond, it makes me pleasure. Best regards
Great video, unfortunately, several times, you have typed the code exactly where the youtube progress bar is, so when I wanted to follow you, I had to look for a better shot when the code was scrolled. Please type the code a little higher on the screen (if possible) next time. Thanks
Love this video, thank you for sharing/teaching. Sadly I'm getting an error around the 9:20 mark when following along, which I can't seem to resolve. 0: 384x640 1 person, 2 chairs, 1 tv, 1 book, 199.4ms Speed: 2.0ms preprocess, 199.4ms inference, 1.0ms postprocess per image at shape (1, 3, 384, 640) Traceback (most recent call last): File "d:\coding\Python\Supervision\main.py", line 56, in main() File "d:\coding\Python\Supervision\main.py", line 50, in main cv2.imshow("yolov8", frame) File "D:\Users\xyz\anaconda3\Lib\site-packages\ultralytics\utils\patches.py", line 55, in imshow _imshow(winname.encode('unicode_escape').decode(), mat) cv2.error: OpenCV(4.8.1) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:1272: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'
@@Roboflow It is possible, because I already use a script made in python and yolov5, which detects objects on the monitor screen. But there were profound changes in yolov8 and my script stopped working with the new version of yolov. Thanks for the quick response buddy.
I don't know if anyone has requested but, would it be possible for you to do a video using YoloV8 ONNX Object Detection Counting in Real-time with OpenVINO ? That would be one really interesting video to watch!! :)
@@Roboflow Yeah! Reason is, there are many low cost miniPC's with Intel processors and, OpenVINO can make use of their integrated GPU's. OpenVINO can be installed with one 'pip' command and that's it. So you doing such a video would be superb!
Hi @SkalskiP , that was a wonderful explanation. Is it possible to track the objects that comes in or goes out using a polygon zone like you did using a line in your earlier video? If yes, how can i get the count of objects (in/out) separately for each class.
We do not have that feature yet, but it sounds useful. Would you be kind enough and create an issue in the supervision repo: github.com/roboflow/supervision/issues ?
👋🏻 hello! Supervision is not a model like YOLO but rather a set of computer vision tools, that aim to help you build video analytics apps. So to do something useful with your detections
Hello 👋! It is Peter from the video. Not yet, but I have that on our roadmap. Feel free to add your feature here: github.com/roboflow/supervision/issues. That will help us to prioritise work better.
I absolutely love your videos! YOLO is indeed amazing. But I do have a question: How do I do it so that I can only detect and count people (whether it's a webcam feed or a video)?
@@Roboflow you didn't included requirement.txt in supervision github. i already installed all of python package requirement.txt that you uploaded. i am using viusal studio code with python.
and the result is this. PS C:\Users\kyutae\yolov8> & C:/Users/kyutae/AppData/Local/Programs/Python/Python310/python.exe c:/Users/kyutae/yolov8/main.py 0: 384x640 (no detections), 476.6ms Speed: 6.0ms preprocess, 476.6ms inference, 1.0ms postprocess per image at shape (1, 3, 384, 640) Traceback (most recent call last): File "c:\Users\kyutae\yolov8\main.py", line 46, in main() File "c:\Users\kyutae\yolov8\main.py", line 35, in main detections = sv.Detections.from_yolov8(result) AttributeError: type object 'Detections' has no attribute 'from_yolov8'. Did you mean: 'from_yolov5'?
That's what I thought however when I try to pass in a numpy array directly I get the following error: RuntimeError: Given groups=1, weight of size [16, 3, 3, 3], expected input[1, 4, 384, 640] to have 3 channels, but got 4 channels instead Here's my script: #def onCook(scriptOp): cap = op('null1').numpyArray(delayed=True) model = YOLO("yolov8n.pt") box_annotator = sv.BoxAnnotator(thickness = 2, text_thickness=2, text_scale=1) while True: result = model(cap, agnostic_nms=True)[0] detections = sv.Detections.from_yolov8(result) labels =[ f"{model.model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections ] frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels) scriptOp.copyNumpyArray(cap) return
hey bro please help me if want to make the output label in another language how can i do that in mainpy file i want to make if yolo detect class 0 show label in another language 🙏
@@Roboflow i want make output detection in arabic language or if yolo detection class 0 show text "any language" in frame window how can i use this function ?
@@hassenmaged5989 that's super easy! problem is that it is quite hard to give you the code snipet here. Could you start a thread here: github.com/roboflow/supervision/discussions/categories/q-a I'll help you out :)
I'm doing this because of my research that will help our local community, I've trained my own data that I can use and right now I'm studying implementation in building an app for the user to use.
I get this error: "...too many values to unpack (expected 4)" pointing inside the labels array. Why am I getting this error this error? What should I do to fix it? :)
pip list: ...numpy 1.24.2, torch 2.0.0, torchvision 0.15.1, ultralytics 8.0.82, supervision 0.6.0
Please downgrade supervision to version 0.3.0
@@Roboflow This comment needs to be pinned. Thank you!!
@@NoName-un2qr is it possible to pin 📌 comment?
@@Roboflow Yes. You should have the option when click the 3 dots near his comment.
Once again thank you very much for the video. It was great!
@@NoName-un2qr awesome! Done! ✅
Your content has become the best content I've watched on YT in a while and I love the Supervision package, it's making my work easier. Thanks
Hi 👋! It's Peter from the video. You have no idea how happy I am to read things like that. Thanks a lot for saying that. It is really motivating.
omg thank you so much, everyone is using yolo on it's own and not with OpenCV and that's exactly what I needed
it was great to see how easy is it to remove a class from detections! Great job @roboflow!😁
Thanks a lot! Glad you liked Supervision utilities;)
Hello sir,
How can I develop real-time webcam functionality using a dataset I've created?
It's so cool, thank you for showing basic function to work with YOLO, i'm having a small task that required learning object detection and I was frustrated to find some tutorial, this was really a big help
My pleasure!
Hi, your video is great. I've been using your code with my own model to count bacteria inside the zone and it works perfectly! Thank you for sharing the knowledge.
You’ve been using Supervision to count bacteria? This is awesome! I’d love to take a look.
how can i use my model insted of importing yolov8 from ultralytics ?
@@fcgfgfgh is that model a custom YOLOv8 or any other? If other, than what's the model??
If it's a custom yolov8 model then how can i do it?
i am getting this error
AttributeError: module 'supervision' has no attribute 'BoxAnnotator'
Well explained @Peter. Useful and informative video that can cover multiple use-cases. Thanks a lot!
Thanks a lot for kind words! 🔥
how do you create a virtual environment at 00:45? on my end it says invalid syntax
When the number of objects increases, my model does not display the label format that I specified and only displays the object code, how can I display the specified label in any case?
What is the hardware (perhaps Jetson Nano?) you are using for this video?
I was using Linux PC in this tutorial; but we have dedicated Jetson tutorial on this channel.
Great stuff Piotr!!!
Thanks a lot! :))
God bless you, you have made my life much easier. Keep up the good work
hi, the pre trained model that u used how can we use if i want to train a custom dataset and then use it? like how to use it for model trained on custom dataset on collab but i want to detect in real time how to get that? can i download the already trained model on collab on my pc?
We have a tutorial where I show how to train YOLOv8 model on custom dataset. At the end of the video I show how to use custom model for inference. Among other things I show where your custom model is saved. You need to to download that file to your local.
@@Roboflow thanks i will check it out
Is it possible to do multiple polygon zones inside 1 frame?
Great video sir....!. Mass respect from India ...!
thank you for this video! very helpful.
but i am a problem. the line "import cv2" in not identified what should i do?
Thanks for your complete step by step coding video. Kinna like it. I followed yolo since v3, but never know we can filter out hand or other unrelated object to be detected
This was a life saver...and a job saver lol.
I love your voice.
Haha thanks a lot! Glad your job is safe ;)
Hey @Roboflow can this same model work in Raspberry pi 5 or Nvidia Jetson Nano without any optimization or quantization ?
It will 100% run on Jetson Nano
@@Roboflow , Hey, i followed every step, in my jetson nano, unfortunately i got an error that "Illegal instruction (core dumped) related to core system incompatibility. Do you have a way i can tweak and handle this. Looking forward to your response. I would really appreciate.
Thank you again !
Awesome tutorial. Very clear. Thanks for your time.
In this if we want to detect objects on my current computer screen instead of a webcam, how could we do that, any idea?
Did they remove yolov8 compatibility from supervision? Mine insists there isn't v8 version only v5
Now it is called from_ultralytics
Hi, thank you for the tutorial. I'm working on my project right now and it helps me a lot. But this project require me to know the fps, is there a way to show it?
Thank you for this great tutorial! May be you can help me with this error I am getting? When I run the main with the "results = model(frame)" line added, it is throwing the following error (see below). If I use YOLOv5, it works perfect, but with YOLOv8 it throws this error. I have created a virtual environment and followed the tutorial step by step. Any ideas? Thanks!
OSError Traceback (most recent call last)
Cell In [16], line 22
19 break;
21 if __name__ == "__main__":
---> 22 main(model)
Cell In [16], line 10, in main(model)
7 ret, frame = cap.read()
8 assert ret
---> 10 result = model(frame)
12 cv2.imshow("yolov8", frame)
14 k = cv2.waitKey(1)
File c:\Python38\lib\site-packages\ultralytics\yolo\engine\model.py:58, in YOLO.__call__(self, source, **kwargs)
57 def __call__(self, source, **kwargs):
---> 58 return self.predict(source, **kwargs)
File c:\Python38\lib\site-packages\torch\autograd\grad_mode.py:27, in _DecoratorContextManager.__call__..decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File c:\Python38\lib\site-packages\ultralytics\yolo\engine\model.py:130, in YOLO.predict(self, source, **kwargs)
...
--> 205 s = self._ext_to_normal(_getfinalpathname(s))
206 except FileNotFoundError:
207 previous_s = s
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '[[[ 69 76 103]
[ 67 75 102]
[ 65 75 103]
...
[ 56 71 96]
[ 59 73 98]
[ 60 73 99]]
[[ 70 77 103]
[ 70 77 104]
[ 67 76 103]
...
[ 59 73 98]
[ 59 73 98]
[ 60 73 99]]
[[ 72 80 104]
[ 71 78 103]
[ 70 78 105]
...
[ 62 74 98]
[ 62 74 98]
[ 61 73 98]]
...
[[ 59 95 138]
[ 61 97 139]
[ 61 97 137]
...
[ 35 48 49]
[ 33 48 50]
[ 33 50 51]]
[[ 59 97 138]
[ 59 97 138]
[ 61 99 138]
...
[ 37 49 51]
[ 37 50 52]
[ 36 51 53]]
[[ 60 99 138]
[ 60 99 138]
[ 59 99 137]
...
[ 40 50 52]
[ 39 51 53]
[ 38 51 53]]]'
Hello 👋! It's Peter from the video. Could you give it another try? I just updated the code.
Unfortunately it was a bit outdated...
@@SkalskiP Thanks for the reply. I found the issue: I was running the code on python3.8.5. I upgraded it to 3.10 and now it works. May be it works since 3.9 . In case you find someone else with a similar issue, now you know the solution, Have a great day!
@@juanolano2818 oh. Interesting I think it should run on 3.8 too. Regardless. I’m happy that you managed to solve the problem. ;)
I have an error: i caanot use boxannotator and detections it says unused reference initpy
can we detect object if the object inside the object, example we only detect spoon if the spoon inside the cup bounding box
Is the object inside the same class or not? If the same class it will be hard if different I think we can we would need to experiment a bit with model parameters.
@roboflow thanks for the answer, its not the same class, so we only counting spoon class if the spoon inside the cup bounding box and not counting the spoon outside the bounding box
Hello, I have a problem with LineZone in supervision 0.7.0, and it not working. I've tried to follow the same way with your previous video about track and count object, any idea?
Could you describe your problems here: github.com/roboflow/supervision/discussions/categories/q-a? I'll try to help you :)
I don't have NVidia cards (nor can I use CUDA for that matter). How can I make use of the GPU when running the "yolo detect predict..." local inference on processors with UHD Graphics 600 & 630 and Intel N100 & N200 processors?
I’m afraid those GPUs are not supported by PyTorch
Hi! can I use Yolov8 via a live stream link, not via a connected webcam, only via the player link?
Sure! Take a look here: docs.ultralytics.com/modes/predict/?h=rtsp#inference-sources
Exlente video he aprendido mucho
bonjour merci pour le travail ! tu utilise la version 0.2.0 de supervision mais il n'y a que la versin 0.16.0 de dispo qui ne contient pas la fonction detections.from_yolov8 ! comment puis je faire
? merci
Hi! All versions are available. You just need to install it like this: pip install supervision==0.2.0
Awesome!!
Hi! It is Peter from the video! I'm super happy you liked it :))
any idea how to freeze the other classes and take only one class
thank you
i have a question
i have a trained instance segmentation yolov8 model and also detectron2 model on custom dataset and what i need is to run inference on new data images and use the output to make annotation on the new images and add them to my train dataset by uploading them to my roboflow dataset later
so is there any way i can do that ?
Hi 👋! Do you need fully automated solution or are okey with manual steps?
@@Roboflow Yes, anything helpful 🤩
@@body1024 we could start by using YOLOv8 CLI to run prediction on your images pyt pass save_txt=True. That should save your predictions in YOLO txt annotation files. You should be able to upload those annotations and images to Roboflow. Let me know if that worked ;)
What version of Ubuntu and ros are using in your video ?❤
Thank you for video!
I'm curious about how to write the code to load an ONNX model. I would like to load an ONNX model, but I'm not sure how to do it. When using the code from the video, I encounter an input size error. Are there any helpful videos or resources that I can refer to in this case?
Maybe we should do some ONNX tutorials... 🤔
@@Roboflow th-cam.com/play/PLZCA39VpuaZZ1cjH4vEIdXIb0dCpZs3Y5.html
It appears that there are no videos for ONNX models.
Not much information available...
Thank you
What would be the best way to have multiple conditions for the detections so, for example:
[detections.class_id !=0 && detections.confidence >= 0.7] ?
We use numpy notation. So you can chain logical conditions using single & and putting each condition into separate brackets. Here is solution to your specific example: detections[(detections.class_id !=0) & (detections.confidence >= 0.7)]
I am using a windows 11 and it doesn't show me the webcam only the terminal in vscode
Thank you so much bro.... Very helpful 🙂👍
I'm super happy to hear that!
I keep getting 2 detections for the same class even after writing agnostic_nms=True is there a way to say: Just detect 1 of each class in the whole window, the one with higher confidence?
how about IP carmera? opencv read via RSTP is very slow.
I've been trying to run this code but its failing in the supervision/detection/core.py file...
xyxy=yolov8_results.boxes.xyxy.cpu().numpy(),
AttributeError: 'list' object has no attribute 'boxes'
I put in a bug report.... Any idea what this might be? I'd love to be able to finish your suite of tutorials on this!
I am using supervision 0.2.0 and I tried it with the latest 0.2.1.. same thing....
Thanks!
I just responded to the issue. Let me know if that fixed your problem.
@@Roboflow Yes that fixed it! Thanks for the quick response... and after continuing on I see you had the same problem in the video! So I just needed to continue watching... ugh....
@@hchattaway yes 🙌 looks like it is not intuitive
Is it possible I change the "zone_annotator = sv.PolygonZoneAnnotator(zone=zone,color=sv.Color.white(),thickness=2,text_thickness=4,text_scale=2)" position. For example, display the red box with the object number on the left/bottom side on the screen?
Master piece
Thanks a lot!
these videos are amazing
Thanks a lot!
Great, thanks for this. Keep doing.
Thanks a lot! 🙏🏻
Impressive mate!
Is there a way to get the coordinates of the bounding box in real time from Supervision or YOLOV8 itself?
`detections.xyxy` - it is `numpy` array with coordinates
@@Roboflow Thanks☺️ I wanna use yolov8 for my project! Thanks to you. Maybe I will do my best!
dont know what version of python you are using. im using 3.9 and i cant install specific version of ultralytics
@SkalskiP Hi Peter, what operating system are you using?
I usually use MacOS, but I used Ubuntu for this video.
Hi Peter. I have a question regarding filtering detections. Filtering sv.Detections is the same as passing class ids as additional argument in the predict method or there are some performance issue with the second alternative?
thank you !
You welcome ;)
Hey, can you please guide me on overlapping object detection, as im willing to use YOLO for peach🍑 fruit detection in my project, the problem is, fruits are very dense and are overlapping with each other as well as occluded by the leaves, so can you please help me with that?
Do you have some image/video sable that we could discuss?
@Milindn Chaudhari. I am also working in the same project, but in my case are totamotes and with leaves is quite difficult to count.
@@jirivchi hii, bro, would like to stay in touch with u, so in case any problem occurs to any of us, we may discuss it along....! If u don't mind share ur details where we can connect 🙏
@@milindchaudhari1676 haha awesome :) to see people find friends in our comment section
I use Linux/Ubuntu as it looks like you're doing.. :) Would you say that within the ML/Vision industry the Linux platform is the most common? I rarely see Windows being used as the platform of choice for this application.
Hi 👋! It's Peter from the video. Yup, I used my Linux PC for this video, as I needed access to my GPU locally. Usually, I record stuff in colab on my Mac. All in all I'd say that most of the people I worked with use Mac or Linux. Windows is for sure the least frequently used.
why i can't do install ultralystic
the command say
ERROR: Could not find a version that satisfies the requirement ultralystic (from versions: none)
ERROR: No matching distribution found for ultralystic
even though I have upgraded pip to the latest version
What’s your Python version? What’s your OS?
Thank you for the video Piotr! How handle multiple camera detections and counting in zone and save results in database. Can you make a video tutorial about it
Hi 👋🏻 we are thinking about showing how to save YOLOv8 detections to CSV. Would that be interesting for you? As for multi stream setup, you think about having single model and multiple streams?
@@Roboflow hi! thanks for ur reply. Saving to csv is great also. I trained custom dataset with 1 class only and need to count it in multiple streams.
@@zy.r.4323 sure but I need to ask if you plan to run only one one model for inference, or have one model per one stream?
@@Roboflow only one model for inference.
@@zy.r.4323 let me think about it. I want to add some utilities for supervision.
Is it possible to use the YoloV8 model in .onnx format rather than .pt, for real-time object detection? I only ask thinking the detection/prediction should take less time. And if so, would you be able to make such a video? :)
Yeah YOLOv8 can be converted to ONNX. I was even thinking about video like that. Not strictly about ONNX but… optimization. Pruning and quantization… does it sound interesting?
@@Roboflow Yes! Absolutely! Thank you!! 👍
Hi can you make a video on explaining the code of YoloV8 a little bit
Hi 👋! It's Peter from the vdeo. Anything spcyfic that is interesting to you?
@@SkalskiP thanks peter after watching your latest video ,all my doubts are now clear , more power to u bro
@@hammad2147 thanks a lot! Stay tuned I already have great ideas for next videos ;)
Hi Friend, great tutorial! Cheers for that..
how can I know the class number of a given item? for example, you know the class number of apple and person.. You got it somewhere probably :)
where can i see this list so i can filter other items from the list? thanks!
Hi sorry for the late response. I was a bit busy with a new video. Take a look here: github.com/ultralytics/ultralytics/blob/9e58c32c15835e54e57f7b8c925367a64cb94951/ultralytics/datasets/coco128.yaml
@@Roboflow thanks 🙏🏼
Always love your content and its small jokes when things go wrong. Thank you!
👍++
Hello it is Peter from the video :) Uf... I was worried that I'm the only one who find those jokes funny hahaha
@@SkalskiP the delay, pause and its silence/cricket sounds make the mistakes (and its solutions) long lasting in our memory. Like it alot! 😁
@@abdshomad I'll keep that in mind next time when I'll make some spectacular mistake. Given that you are frequent viewer what do you think about the format of this video? I code in editor instead notebook. And I write the code instead of just explaining what I did?
@@SkalskiP Thank you. Firstly, better explain in it vs code. Cleaner. But... please also show it runs on Colab. Roboflow Notebooks are very helpful. Colab helpful for quick POC. We dont have to prepare venv, conda, pip install huge packages (pytorch, detectron, etc).
@@abdshomad thanks a lot for your opinion. I also need to balance it all out not to make suuuuper long videos. But I see your point. VS code is a lot cleaner when it comes to explaining the code. On the other hand notebooks are super convenient. This time no notebook, as you need to have access to GPU. But there is repo with example ;)
i wanna clear all my installed dataset and args to make another yolo. how can i do that???? Its a long time problem. PLEASE HEEELP TO CLEAR ALL before making another version!
is there any way of taking only 4 labels, for example, truck, car, bus and motorcycle instead of only one or all? thanks a lot for sharing ur knowledge!
That should work:
class_ids = np.array[1, 2, 3] #
what changes do i need to make if I were to use Raspberry Pi 4 + Raspberry Camera?
It‘s almost the same however I don‘t reccomand it bc the Raspberyy Pi is way to slow and only archieve 1,5fps at best compared to 120 fps+ on my desktop gpu. The raspberry cam is really enoyikng as well took me many Hours to get work proberly
how i can use my trained yolov8 model ?
You can just provide the path to your pt file. Something like that: model = YOLO("path/to/your/model.pt")
How can I write the same code in .NET Core C#? 🙂
I've never done anything like that :/
Hello, thank you for your great video! May i ask you a question, can it be use on yolov7 model?
Yup it can. It will be just a bit mor of work. Because YOLOv7 does not have pip package. But I made stuff like that in the past. It is very much possible.
@@Roboflow OK, Thank you for the quick response. It means, i need to change several parts on the code to suit yolov7 model, right?
@@mollynaia First of all what are you planning to do? Is it going to be really time processing? Do you plan to use zones?
@@Roboflow i am going to do real time object detection (Hand gesture detection) using yolov7 model, but i haven't been able to find a way to do it in using webcam. Hope you can help me :)
@@mollynaia do you want to runn it as stand alone app or is it part of some larger system?
Can it be used to count humans on real if yes can u tell the possible changes
Can I build the program on Windows OS as you did on Linux?
You probably can. :)
@@Roboflow I tried running it on Jetson Nano using Linux OS, but I failed, it showed the error "illegal instruction (core dumped)". Could you tell me how to fix it?
Bro how to store the object count value in the variable?
I'm not sure I understand your question but count = len(detections) should work.
That was great, thanks!
Sir how can i filter with respect to confidence
Can you make a video to run with onnx model? I really appreciate that
Webcam + onnx model?
@@Roboflow yes please
@@hoangtuhuynh5416 sounds cool! I'm not sure but I think we don't have any ONNX tutorial. I'll pass the idea to the team :)
How to use it with custom class?
Hi dude, your video is very good. I have a question. I trained a four-label model. I want to show the count of each label in this model separately on the screen. Do you have a suggestion?
Thanks a lot for the kind word 🙏🏻 are we talking about per class current count in zone?
@@Roboflow Yes, that's right
That's a fantastic tutorial! Thank you so much!
I have a question: if it's possible for you to guide me on how to implement re-identification (or maybe re-tracking) of the same object with YOLOv8?
Create a new thread here: github.com/roboflow/notebooks/discussions/categories/q-a I'll try to help you out. I'm really busy but I'll try to do my best.
Can you make a video on person re identification.
Thank you so much for this amazing tutorial! I have a question: I'm interested in extracting the results from a frame like the result, specifically the count of objects and their corresponding types, and then outputting them in JSON format. Do you have any suggestions or ideas on how I can accomplish this?
Hi, did you find a way?
Hello , thank you for great video. I would like to know if I can save the results as a time-stamped data in csv format. If you respond, it makes me pleasure. Best regards
Great video, unfortunately, several times, you have typed the code exactly where the youtube progress bar is, so when I wanted to follow you, I had to look for a better shot when the code was scrolled. Please type the code a little higher on the screen (if possible) next time. Thanks
Awesome feedback! Thanks a lot for that. I’ll keep that in mind next time.
Love this video, thank you for sharing/teaching. Sadly I'm getting an error around the 9:20 mark when following along, which I can't seem to resolve.
0: 384x640 1 person, 2 chairs, 1 tv, 1 book, 199.4ms
Speed: 2.0ms preprocess, 199.4ms inference, 1.0ms postprocess per image at shape (1, 3, 384, 640)
Traceback (most recent call last):
File "d:\coding\Python\Supervision\main.py", line 56, in
main()
File "d:\coding\Python\Supervision\main.py", line 50, in main
cv2.imshow("yolov8", frame)
File "D:\Users\xyz\anaconda3\Lib\site-packages\ultralytics\utils\patches.py", line 55, in imshow
_imshow(winname.encode('unicode_escape').decode(), mat)
cv2.error: OpenCV(4.8.1) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:1272: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'
Issue solved, if anyone else has this problem:
pip uninstall opencv-python-headless; then
pip uninstall opencv-python; then
pip install opencv-python
How can I use another webcam like on jetson nano , or custom
Do you face any problems with Webcams on Jetson Nano? Those should work the same way.
Congratulations on the job. Can you make a video of yolov8, mss and numpay, capturing the image directly from the monitor screen?
You ask if we can do it or if that is possible?
@@Roboflow It is possible, because I already use a script made in python and yolov5, which detects objects on the monitor screen. But there were profound changes in yolov8 and my script stopped working with the new version of yolov. Thanks for the quick response buddy.
So I ask, do you know how to do it?
@@maiquelkappel7745 I can see us making video about it but I think we won’t do it soon. We have a lot on our TODO list.
@@Roboflow Okay, thanks anyway for your attention!
I have a problem with the resolution, I can't change it, I tried everything I know but still shows 640,480,3. Any solution?
minute 6:45 , on line 21, write this instead cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
I don't know if anyone has requested but, would it be possible for you to do a video using YoloV8 ONNX Object Detection Counting in Real-time with OpenVINO ? That would be one really interesting video to watch!! :)
Both ideas sound awesome! Added to long list of ideas :)
@@Roboflow Yeah! Reason is, there are many low cost miniPC's with Intel processors and, OpenVINO can make use of their integrated GPU's. OpenVINO can be installed with one 'pip' command and that's it. So you doing such a video would be superb!
It looks awesome. May I use your script, then where can I get it?
Of course, you can! This is open source :) Here it is: github.com/SkalskiP/yolov8-live
Hi @SkalskiP , that was a wonderful explanation. Is it possible to track the objects that comes in or goes out using a polygon zone like you did using a line in your earlier video? If yes, how can i get the count of objects (in/out) separately for each class.
We do not have that feature yet, but it sounds useful. Would you be kind enough and create an issue in the supervision repo: github.com/roboflow/supervision/issues ?
Done. Thanks for the response!
@@jayarajdhanapriya5938 thanks a lot 🙏🏻
I know yolo5 and 7 supports multiple streams, does Supervision support multiple streams?
👋🏻 hello! Supervision is not a model like YOLO but rather a set of computer vision tools, that aim to help you build video analytics apps. So to do something useful with your detections
Hello can we count number of objects when one of them enters the zone, just using supervision libary?
Hello 👋! It is Peter from the video. Not yet, but I have that on our roadmap. Feel free to add your feature here: github.com/roboflow/supervision/issues. That will help us to prioritise work better.
how to count object in image by yolo8?
IndexError: index 738 is out of bounds for axis 0 with size 720
I absolutely love your videos! YOLO is indeed amazing. But I do have a question: How do I do it so that I can only detect and count people (whether it's a webcam feed or a video)?
You can try adding detections = detections[detections.class_id == 0] ?
@@Roboflow I guess that makes sense, silly me 😅 Thx for the reply, btw! Help is always appreciated 😊
i got big error. the msg is
AttributeError: type object 'Detections' has no attribute 'from_yolov8'. Did you mean: 'from_yolov5'?
Use requirements.txt to install packages
@@Roboflow you didn't included requirement.txt in supervision github.
i already installed all of python package requirement.txt that you uploaded.
i am using viusal studio code with python.
my code is this.
import cv2
import argparse
from ultralytics import YOLO
import supervision as sv
def parse_argument() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="YOLOv8 LIVE")
parser.add_argument("--webcam-resolution", default=[1920, 1080], nargs=2, type=int)
args = parser.parse_args()
return args
def main():
args = parse_argument()
frame_width, frame_height = args.webcam_resolution
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, frame_width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_height)
model = YOLO("yolov8l.pt")
box_annotator = sv.BoxAnnotator(
thickness=2,
text_thickness=2,
text_scale=1
)
while True:
ret, frame = cap.read()
result = model(frame)[0]
detections = sv.Detections.from_yolov8(result)
frame = box_annotator.annotate(scene=frame, detections=detections)
cv2.imshow("yolov8l", frame)
if (cv2.waitKey(30) == 27):
break
if __name__== "__main__":
main()
and the result is this.
PS C:\Users\kyutae\yolov8> & C:/Users/kyutae/AppData/Local/Programs/Python/Python310/python.exe c:/Users/kyutae/yolov8/main.py
0: 384x640 (no detections), 476.6ms
Speed: 6.0ms preprocess, 476.6ms inference, 1.0ms postprocess per image at shape (1, 3, 384, 640)
Traceback (most recent call last):
File "c:\Users\kyutae\yolov8\main.py", line 46, in
main()
File "c:\Users\kyutae\yolov8\main.py", line 35, in main
detections = sv.Detections.from_yolov8(result)
AttributeError: type object 'Detections' has no attribute 'from_yolov8'. Did you mean: 'from_yolov5'?
Is there a way to pass yolov8 a numpyArray instead of cv2.VideoCapture?
Hi you actually pass frame to YOLOv8 model. If you take a loo kat our code we actually do ret, frame = cap.read(). That frame is numpy array.
That's what I thought however when I try to pass in a numpy array directly I get the following error: RuntimeError: Given groups=1, weight of size [16, 3, 3, 3], expected input[1, 4, 384, 640] to have 3 channels, but got 4 channels instead
Here's my script:
#def onCook(scriptOp):
cap = op('null1').numpyArray(delayed=True)
model = YOLO("yolov8n.pt")
box_annotator = sv.BoxAnnotator(thickness = 2, text_thickness=2, text_scale=1)
while True:
result = model(cap, agnostic_nms=True)[0]
detections = sv.Detections.from_yolov8(result)
labels =[ f"{model.model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections ]
frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
scriptOp.copyNumpyArray(cap)
return
@@taylorgonzalez9406 could you set up a thread here github.com/roboflow/supervision/discussions/categories/q-a ?
@@Roboflow yes!
How do I quit running the feed without killing my terminal? Dumb question but im very new to this
press ESC
Thanks @DrFatBear
How to turn off the webcam 1:45
thanks but it wont work for mac os 10.15
What’s the problem you face on MacOS?
hey bro please help me if want to make the output label in another language how can i do that in mainpy file i want to make if yolo detect class 0 show label in another language 🙏
Hi 👋🏻! What do you mean another language? Programming language?
@@Roboflow i want make output detection in arabic language
or if yolo detection class 0 show text "any language" in frame window how can i use this function ?
@@hassenmaged5989 that's super easy! problem is that it is quite hard to give you the code snipet here. Could you start a thread here: github.com/roboflow/supervision/discussions/categories/q-a I'll help you out :)
@@Roboflow ok thank you, i make the post
@@hassenmaged5989 perfect! Thanks a lot ;)
I don't know how to deploy this in web framework with tracking and counting
I'm doing this because of my research that will help our local community, I've trained my own data that I can use and right now I'm studying implementation in building an app for the user to use.
@@rolandojrhernandez4905 you only want to do object detection?