Dude thanks for speaking in a way that teaches me instead of just assuming I already know this stuff
Certainly - thanks for tuning in.
I was about to follow a YOLOv3 tutorial, then I found the YOLOv4 tutorial, and then the YOLOv5 tutorial. OMG, thanks Roboflow!
Thanks! Let us know what you want to see next.
this guy is still a legend to this day
Brilliantly done! You know how to keep the viewers glued to your video.
I have watched over 20 videos for this purpose, but I mean it, this one is much more simplified: easy steps, well explained, good quality, not too fast, not too slow or boring. Just amazing!
This worked GREAT for me! The best part was that it showed my training images and their labels, and it made me realize my training set was mis-labeled :)
Great to hear! If you're interested, we'd be happy to share your success on our blog. Write us in-app or here: docs.roboflow.ai/support
@@Roboflow Great, I just did!
This was the best video on detection with YOLO. Believe me, I tried a bunch of videos and none of them worked like this one. Thanks, truly.
Best YOLO tutorial I've seen on YouTube so far! Thank you!
Thanks so much!
Great tutorial, maybe the best I have seen on Yolo. Thanks for sharing the Colab notebook
this video is so transparent and really easy to understand and follow
Thanks a ton, I finished my project easily by referring to this video
We love to hear that! What was the project that you worked on? :)
@@Roboflow UAV Detection in Real Time
@@sainandhan1108 sweet! Is that something you can share? Seems very interesting
@Sai Nandhan Bro, I can't quite get it to work. Can I contact you?
Hats off..what a teacher
I love your facial expressions. Thanks. I don't know anything yet, but I'm planning to get into this this summer.
I liked the shaving part and the selfie part. By the way, this was so much fun, learning that hard skill in the funniest way. Thank you so much, sir.
18:11 - I don't get the "replace data.yaml" step. Any suggestions?
wow, great! thanks for sharing!
Best tutorial ever!! Thankss Joseph!! ;D
Thanks!
Amazing work! Congrats! keep up
Roboflow is pretty awesome
Thank you!
Can you explain the meaning of "head", "backbone", and "anchors" in the YOLOv5 models?
He is funny af. And obviously the tutorial is awesome, that's very universal of him.
" define input image size " 19:47
I don't know what it mean ?
can anyone help me
explain what is number 416
Minute 18:20 is a bit confusing: the command downloads the files into /content, but you open the /content/yolov5 folder? Where should the files go?
Exactly. Did you find the solution?
Apart from the nits, noted below, THANKS for a great video, delivered in such an upbeat way!
Thanks for the feedback!
amazing work thank you so much !
Thank you! for the clear explanation :)
Sir, you are amazing, thanks. You make me want to get Roboflow Pro! Keep it up!
!!! awesome! going to test out now
Thanks for your awesome sharing. Helps us a lot!
Great job 👏
amazing video !
Is it possible to detect or recognize different people as different classes? Thanks, great job!
perrrfectttt
Amazing! Congratulations!!!
Thanks!
Great content! Thank you for this.
Thank you so much!
How do I test the model on a video? What should we change in the 'detect.py' line?
Note per the Ultralytics repo, when calling detect.py, source should be updated to "0" github.com/ultralytics/yolov5#inference
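If it helps, a minimal sketch of both cases, assuming a trained checkpoint at runs/train/exp/weights/best.pt (hypothetical path) and the standard detect.py flags:

# inference on a video file
!python detect.py --weights runs/train/exp/weights/best.pt --img 416 --conf 0.4 --source my_video.mp4
# inference on a live webcam stream, per the note above
!python detect.py --weights runs/train/exp/weights/best.pt --img 416 --conf 0.4 --source 0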
How many trainable parameters does YOLOv5 have? Does your training start from a pretrained model that used the COCO dataset? If you use existing weights, won't you need to re-code the network head to change the classes to your own?
I'm very confused: once you mount the weights to your Drive, how can you use them in your code?
Should we use CUDA for training??
Will the *weights file* and *config file* generated by Roboflow be accepted by the "dnn" module of OpenCV, i.e. cv2.dnn.readNetFromDarknet??
At 18:35:
1) If we want to combine our custom set with the default sets that YOLOv5 provides, is that possible? And how?
2) Or can we delete some classes that we don't need?
I would like to make a custom set with the person class, but if I make a custom set with the method in your video, I think I would have to make the person dataset again. (I'd like to add my custom data to the existing default datasets.)
I followed all the steps you provided, but there are no predictions on the test dataset. I don't know why!!!!
I need a full explanation of the TensorBoard metrics and how to understand them, please.
Can you explain how to change the model architecture, like the layers? I would appreciate that. If you have already made a video for that, please leave a link to it. Thanks.
Can you please tell me about how you labelled your dataset?
Hi, I'm working on a project and I need help with something. How do I find the correct epoch size for my project? Different tutorial videos on your channel have different epoch sizes. How do you determine this?
It's very nice. How can I download this model to my PC? Are the weights sufficient? I need the architecture too, right? I want to make an Android app for object detection; is there any tutorial available on that? Pardon any mistakes in the question, I am a beginner.
Thanks Sir !
I am a student. I used the Roboflow free plan for annotating data, but I couldn't download the dataset; Export was not there. So what should I do?
Hi, may I ask how to use the "Try this model, drop an image" box on a model that already exists? Because when I drop an image, nothing is detected and it just shows this response:
{
"predictions": []
}
from utils.utils import plot_results; plot_results() # plot results.txt as results.png
Image(filename='./results.png', width=1000) # view results.png
This is not working. I just copied your code and it's giving me the error "ModuleNotFoundError: No module named 'utils.utils'". Could anybody help me ASAP? Thanks.
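In case others hit the same ModuleNotFoundError: newer yolov5 checkouts moved the helper from utils.utils to utils.plots, and training now writes a results.csv. A minimal sketch, assuming you run from the yolov5 repo root and the run folder is runs/train/exp (hypothetical path):

from utils.plots import plot_results       # moved here in newer yolov5 versions
from IPython.display import Image

plot_results('runs/train/exp/results.csv')  # saves results.png next to the csv
Image(filename='runs/train/exp/results.png', width=1000)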
I have my own GPU I can train on; how would I go about doing that? I don't want to use external websites with the datasets I have, as they are very limited.
Hi Joseph. A great tutorial. The dataset you used was heavily unbalanced, and the augmentation is performed on all images combined, not on the under-represented ones (WBC and platelets).
Don't you think the trained model will be biased towards RBC?
Could you give some suggestions on how to cope with an unbalanced dataset?
Or should I perform augmentation on the least represented classes?
I honestly don't understand why this model works in Colab but not on my PC. It always displays NaN loss, yet my PyTorch setup works well with other network architectures. What could the problem be?
Hi Joseph
Is it possible to save the trained model, instead of just the weights?
Like TensorFlow's "model.save"?
Dear sir, where is the video you used in the lesson located?
Hi, my 'weights' folder does not contain 'last_yolov5s_results.pt'; the only file in it after running all the code is 'download_weights.sh'. Does anyone know how to fix this issue? Thanks so much in advance.
I'm having an issue with my dataset. My dataset is video based and its annotation file for the bounding boxes is also given; how can I convert it for YOLOv5?
Trying to do this but getting the following error:
Traceback (most recent call last):
File "train.py", line 531, in
train(hyp, opt, device, tb_writer, wandb)
File "train.py", line 191, in train
assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)
AssertionError: Label class 1 exceeds nc=1 in ../data.yaml. Possible class labels are 0-0
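In case it helps: that assertion fires when a label .txt file contains a class index that data.yaml doesn't declare (here nc=1, so only class 0 is allowed). Either raise nc and extend the names list in data.yaml, or fix the stray labels. A minimal sketch for finding the highest class index, assuming a Roboflow-style train/labels folder (hypothetical path):

import glob

max_cls = -1
for path in glob.glob('train/labels/*.txt'):  # hypothetical label folder from the export
    with open(path) as f:
        for line in f:
            if line.strip():
                max_cls = max(max_cls, int(line.split()[0]))
print('highest class index:', max_cls, '-> data.yaml needs nc >=', max_cls + 1)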
When I try to run the train.py script I get the following issue:
Traceback (most recent call last):
File "train.py", line 492, in
train(hyp, opt, device, tb_writer, wandb)
File "train.py", line 91, in train
model = Model(opt.cfg, ch=3, nc=nc).to(device) # create
File "/content/yolov5/models/yolo.py", line 95, in __init__
self._initialize_biases() # only run once
File "/content/yolov5/models/yolo.py", line 150, in _initialize_biases
b[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
Does anyone know How to Fix This?
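If anyone else hits this: it looks like a PyTorch-version incompatibility in _initialize_biases rather than a dataset problem, and pulling the latest yolov5 usually resolves it. A tiny self-contained sketch of what the error means and the commonly suggested workaround (wrapping the in-place edit in torch.no_grad(); this is an assumption, not the author's fix):

import torch

bias = torch.nn.Parameter(torch.zeros(6))  # a leaf tensor that requires grad
view = bias.view(2, 3)
# view[:, 0] += 1.0   # would raise: a view of a leaf Variable that requires grad ... in-place operation
with torch.no_grad():  # disabling autograd for the edit avoids the error
    view[:, 0] += 1.0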
Good day!
We have some issues with object detection on custom datasets and would appreciate any advice. Is it possible to have a 1-hour consultation?
Thanks for the tutorial, but when I train on my data it takes up to 5 minutes to complete one epoch on Google Colab. Any help, please?
I have a JSON file and a zip file in YOLO format; the zip contains txt files for all the images. Which one should I use?
Hey! One question.
I have my own images and XML files downloaded on my PC.
How should I import those into Colab?
How can I download the .cfg and weights files to use locally on my system with Python and OpenCV code? Please help.
Please upload a video on auto-annotation for a custom dataset using Roboflow.
How do you change the bounding box thickness and the label? I mean, if you have multiple objects you can't see the details in the image.
This is a great video. I have a question, what if I want to use TensorFlow instead of PyTorch for YOLOv5 or 4?
With this method, does the model learn entirely end to end on our dataset, or does it use pre-trained models with pre-configured weights?
Also, thank you for this clear tutorial.
I believe it is supposed to be training from scratch. He explains it at 20:21; you can do both, but the default is to train from scratch.
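For anyone unsure, a minimal sketch of both options with the standard train.py flags (the paths and values are placeholders, not the exact notebook cell):

# train from scratch: empty --weights, architecture taken from --cfg
!python train.py --img 416 --batch 16 --epochs 100 --data data.yaml --cfg models/yolov5s.yaml --weights ''
# or fine-tune from the COCO-pretrained checkpoint instead
!python train.py --img 416 --batch 16 --epochs 100 --data data.yaml --weights yolov5s.pt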
Sadly the Colab notebook doesn't work anymore :c
I've been trying to use Yolov5 with PyTorch for the past few days... but there are some massive memory leaks during training. I'm constantly having to reboot.
thanks a lot!
Please tell me how these saved weights can be used for prediction
Great video! Do you plan to upload a video with OpenCV GPU C++ and YOLOv5?
Is it possible to just add to the objects that YOLOv5 can already detect? For example, I have a dataset of street-view images and I want to detect trees in addition to cars and motorcycles. Do I have to retrain YOLOv5 to detect cars and motorcycles when they are already included in the original coco128.yaml file?
My training always stops at:
train: Scanning 'MyDataset/train/labels.cache' for images and labels... 26559 found, 0 missing, 0 empty, 0 corrupted: 100% 26559/26559 [00:00
I am facing the same problem.
How did you solve it ?
@@melissarizkallah3110 Me too
Hello sir, can you please tell me how to find the accuracy???
How can it be used to detect a person running in a stadium? I mean, how can I use YOLOv5 for activity recognition?
How can I use Yolov5 to track objects? I see videos with Yolov3 + deep sort but don't know if I can just swap Yolo versions there.
Hello, did you get an answer to your question? Same here, I want to use YOLOv5 to track custom objects using DeepSORT.
@@sarahch8878 Unfortunately I don't have an answer to that question. I remember ending up using YOLOv4 and finishing the project. Maybe something changed, but taking your question into account, it seems like it didn't.
@@Arhan3l Thank you for your answer. Maybe I'll try to add some code from yolov3+deepSort, but it doesn't seem to be easy... Did you find yolov4 with deepSort? Thank you.
@@sarahch8878 Yes, there should be a video on YouTube on "The AI Guy" channel, if I remember correctly. I was also searching some forums and the "Computer Vision" subreddit.
@@Arhan3l Thank you
Dude, thanks, this video was really helpful.
Just asking: when I wanted to use Roboflow to create my own data, I couldn't find the place to generate the data. Is that because it's the new version?
Btw, thanks again.
How can I save the model and use it, so I don't need to train again before using it?
Hi, it's a great project. Can YOLOv5 work on mobile devices or run a live webcam in Google Colab?
Thanks for sharing. As they'd say in my beautiful Mexico: you're the best!!!!
When I trained the same way, my model with the ".pt" extension came out in compressed archive format. How do I get it back to a normal ".pt" file so I can use it anywhere else? When I try to use the compressed archive directly, it gives a core dumped error.
How can I use the weights?
You can download the weights (as our notebook does at its conclusion) and reuse them elsewhere.
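A minimal sketch of reusing the downloaded checkpoint elsewhere, assuming a newer yolov5 release and a local best.pt (hypothetical filename); torch.hub pulls the yolov5 code the first time it runs:

import torch

# load the custom-trained checkpoint through the ultralytics/yolov5 hub entry point
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
results = model('test.jpg')  # run inference on one image (hypothetical file)
results.print()              # print a summary of the detections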
@Roboflow How can I convert it to weights?
Do I need a GPU?
Only the result images didn't show bounding boxes in my case! The model ran well, but the cell with "#display inference on ALL test images" didn't show any bounding boxes.
Yeah, me too. I have a big problem; can someone tell me what I should do?
Same here :/
It says "ls: cannot access 'runs/train/exp0_yolov5s_results/weights': No such file or directory".
As per the video, I believe these weights are from Ultralytics itself.
Hello sir, I have followed the same steps you gave to train the model, but my mAP is decreasing every epoch. Please let me know how to solve this. Thanks in advance.
Hey, nice video!
Is there a way to export the weights and such to use in a custom Python project?
I already trained on a custom dataset; now I need to combine it with my Python code that counts cars as they pass by.
There seem to be no videos explaining how to use YOLOv5 other than on Google Colab.
@Roboflow "RuntimeError: non-positive stride is not supported" I get this error when I run the training command. My architecture is the same as the video. Does anyone else get this?
# this is the YAML file Roboflow wrote for us that we're loading into this notebook with our data
%cat data.yaml
I do not see the file and get the error "cat: data.yaml: No such file or directory".
same problem
Could you please share how to run a YOLOv5 model trained with Python on iOS and Android using Core ML or PyTorch Mobile?
Thank you for the video. I want to ask: how can I use this trained model on my local GPU in PyCharm or Jupyter?
Does the test data also need XML files added with it?
I get this error: cat: /content/yolov5/models/yolov5s.yaml: No such file or directory
same
Did you find the workaround?
What are the next steps to creating a usable model?
I'm looking to convert to .tflite, or quantised .tflite, so I can use it on a Raspberry Pi or Android phone.
For this implementation, if you want to run the model on mobile, the recommendation is compiling to ONNX pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html, and going from ONNX to your desired output for serving github.com/onnx/tutorials
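A minimal sketch of that ONNX step, assuming a recent yolov5 checkout (older releases keep the script at models/export.py, and the flags differ between versions; the weights path is hypothetical):

!pip install onnx
!python export.py --weights runs/train/exp/weights/best.pt --img 416 --include onnx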
Hello, thanks for the amazing tutorial. I'm just wondering: if we apply resizing to the training data but use the original images at inference, does this have an impact on the results?
According to Glenn Jocher, you should use the same size at inference, but I think you can use the resized size for both.
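For example, a minimal sketch of keeping the sizes consistent, assuming the notebook's 416 training size and a hypothetical weights path:

# run inference at the same --img used for training (416 in this notebook)
!python detect.py --weights runs/train/exp/weights/best.pt --img 416 --source ../test/images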
How can I use these saved weights for future predictions?
Hi, thanks for the tutorial. I actually did the same steps and used the model on a real-time video, but I got only 4 FPS, which is too slow. Do you have any idea about that?