Another great tutorial. This channel deserves far more recognition: other tutorials take longer and still don't work, and they skip steps, while this one is compact and complete. Thanks!
I am glad it's helping.
AssertionError: train: No labels in data/train/labels.cache. Can not train without labels.
Can you please help me with this error
Hey, I have the same problem. Have you solved it?
@@fangirlpurpose4859 Please remove the cache file in google colab folder
@@TECHNEWSUNIVERSE thank u so much
Even after removing the cache files it's coming back after running. Any other solution?
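In case it helps anyone else in this thread: the stale cache can be deleted with a one-liner run from the yolov7 folder on Colab (the paths below assume the data/train and data/val layout from the video; adjust them to wherever your .cache files actually sit). If the cache immediately comes back reporting 0 labels, the label .txt files are most likely not in the folder train.py is scanning, so check their location first.
# delete any stale label caches before re-running train.py (paths are assumptions)
!rm -f data/train/*.cache data/val/*.cache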
Hi, I have an error:
torch.cuda.OutOfMemoryError: CUDA out of memory
I reduced the batch size to 16, 8, 4, and 2 and got the same error every time.
Can you help me, please?
Hey, thank you so much. I tried the methods from both videos, and this one worked flawlessly.
Hi, can you help me? I followed both videos and both had problems, but I think this Colab one should work because it uses Google's GPU. Here's my output:
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
Thank you so much. You are a lifesaver!!!
Hello
I followed your tutorial but I'm getting an error that says 'No labels in data/train. Can not train without labels.' I have followed every step, but it can't find the labelled images.
Hello sir, please make a video about connecting YOLOv7 models to the web. My FYP model is ready; only the web app integration is remaining.
Thanks
So nicely explained!
Thank you for the tutorial. Can you help me figure out why I have this problem?
AssertionError: train: No labels in data/train/human.cache. Can not train without labels
even though I've uploaded the labels to the data folder.
amazing course. Helps a lot😉
Thanks a lot, this worked great, but I didn't have to update the yaml on my Colab. It just worked :D
Training takes a while, but I'm used to worse after playing around with DreamBooth / Stable Diffusion.
Thank you so much for the great tutorials!
I wanted to know whether we can detect vehicle speed and capture speeding vehicles in real time, along with their license plates. I'm assuming we'd have to use YOLOv7 with OpenCV, but I have no idea whether that is possible or how to do it.
I have a question about the flag values --conf-thres and --iou-thres. How should they be set in order to optimize our training?
These are not for training; they are used at testing time. Use 0.5 and 0.5.
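Just to make that concrete, a hedged example of passing those values at inference time (the weights path and test image name are my assumptions, not from the video):
!python detect.py --weights runs/train/yolov7-custom/weights/best.pt --conf-thres 0.5 --iou-thres 0.5 --source data/test/image1.jpg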
@@TheCodingBug I'm sorry, my question wasn't well defined. I was referring to optimizing inference (setting those parameters when running detect.py) based on the plots.
@@albertrg9166 In the training graphs, IoU and confidence should be as close to 1 as possible. If a value is greater than 0.5, we say the model is able to learn something; below that, the model isn't learning anything. If it's closer to 0 (something like 0.1), you won't get any bounding boxes during inference.
Hey, can I detect many images at a time? You wrote a command for each image you detect, so if I have to detect many images at once, do I have to write a command for each one of them in Google Colab and the Anaconda prompt?
I used 100 epochs for training. It took 3 stops to complete, but in the end only the last.pt file was generated and I couldn't see best.pt.
I'm getting a GPU usage limit error after some epochs complete, and if I set the epochs to fewer than 100, like 40, it won't detect any objects. How can I solve this problem for free? Is there any platform available for free?
What if we want to detect extra classes in addition to COCO? For example, I want to detect everything YOLOv7 is trained on, just remove cars from the existing class names, and add 3 more classes such as, let's say, sedan, SUV, etc. So basically my model should be able to detect 79 + 2 = 81 classes. Is that possible?
You need to retrain a custom model on COCO as well as your own data (basically, relabel the cars in the COCO dataset as your car models).
@@TheCodingBug Thank you. Since we are using a pre-trained model, what do you think is the approximate number of images per class one should consider, given there are almost 80 labelled classes in COCO… 300 images per category?
@@AmitKumar-hm4gx yes. That'll suffice.... Also, see if negative sampling is possible.
@@TheCodingBug Thank you. I've been studying the detection code and had a question: is it possible to print the bounding box coordinates for each detected object? Also, are these bounding box details stored somewhere in the repo once execution is done, or do we have to store them in a folder manually?
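On printing the box coordinates: detect.py has --save-txt (and --save-conf) flags that write one .txt file of predicted boxes per image; in the runs I've seen they end up under runs/detect/exp*/labels/, so nothing has to be stored manually. A sketch, with the weights path assumed:
!python detect.py --weights runs/train/yolov7-custom/weights/best.pt --conf-thres 0.5 --source data/test --save-txt --save-conf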
Hello. It's a very good video, well done. While testing on the test photo, I encountered the error "/bin/bash: -c: line 0: syntax error near unexpected token `('". What could be the reason? Do you know?
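That bash error usually means the file name or path contains parentheses (or another character bash treats specially), so the shell chokes before Python ever runs. Quoting the path normally fixes it; a sketch with a made-up file name:
!python detect.py --weights best.pt --source "data/test/photo (1).jpg"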
Great tutorial! Thanks!
Great session by you sir..
Can we also do active learning here? For example, out of 20 test images my system detects 19 correctly but gets 1 detection wrong. How can I tell my system about that 1 incorrect detection so it doesn't repeat the same mistake next time, and how can I do that without training the entire model again?
Hi, how can I continue training YOLOv7 after fully completing the first training?
Just use "last.pt" as model file.
Hi, can you tell me how to show the bounding box and its values for an output image?
Sir, can you make a video about calculating the accuracy of this model in Google Colab?
Accuracy on validation data is stored at the end of training in the train folder.
Bro, please make a video on webcam detection using YOLOv7
on Colab
Thanks, this might be a game changer for my toolbox.
I implemented it on Colab. But the program is not generating bounding boxes while detect.py is executed. Please help.
Check precision recall curves in training folder. Model must not be learning anything if the values are too small.
If so, increase the dataset.
@@TheCodingBug Thank you sir.
Sir, could you please give me an idea of how to test multiple images from the test folder or a path with the YOLOv7 model on Colab?
I have already followed this video and the code.
yes sir you can😁 follow my instruction🙃
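For the "many images at once" questions: detect.py accepts a whole folder as --source, so one command covers every image in it; no per-image command needed. The weights path below is an assumption:
!python detect.py --weights runs/train/yolov7-custom/weights/best.pt --conf-thres 0.5 --source data/test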
Hello Sir, great Tutorial.
I want to run YOLOv7 with my CSI camera instead of a USB webcam, but I don't know how to achieve this.
Do I need to make changes in the detect.py file? If yes, what changes and where?
I'm fairly new to this field and would be happy for any help.
A CSI camera on a Raspberry Pi, I'd assume. It has the same 0 identifier, which you can use as --source.
I'm using a Jetson Nano, and if I select 0 or 1 as the source, I either get nothing or the USB cam...
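Not covered in the video, but on a Jetson Nano the CSI camera usually isn't exposed as a plain /dev/video index the way a USB cam is; the common route is a GStreamer pipeline through nvarguscamerasrc. A standalone sketch to confirm the camera works with OpenCV first (the resolution and framerate values are my assumptions; the stock JetPack OpenCV is built with GStreamer). Once this works, the same pipeline string is what you'd feed into the cv2.VideoCapture call that detect.py's stream loader uses.
import cv2

# CSI camera via nvarguscamerasrc; tweak width/height/framerate to your sensor mode
pipeline = (
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
print("camera opened:", cap.isOpened(), "frame grabbed:", ok)
cap.release()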
It works with pictures but didn't work with video. I don't know what to do.
I am getting the same image in the exp folder that I give as input.
Increase number of training images or reduce --conf to 0.1
Sir, can we increase the number of epochs in Google Colab, to something like 200 or 300?
Yes. As long as it doesn't get disconnected
Hi, thank you. Is there any way to use the .pt file with OpenCV?
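Not directly, as far as I know: OpenCV's dnn module can't load a PyTorch .pt, but it can load an ONNX export of it. A rough sketch assuming the model has already been exported to best.onnx (decoding the raw output and running NMS still has to be added on top):
import cv2

net = cv2.dnn.readNetFromONNX("best.onnx")   # exported weights; file name is an assumption
img = cv2.imread("test.jpg")                 # any test image
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640), swapRB=True, crop=False)
net.setInput(blob)
pred = net.forward()                         # raw predictions, before NMS/decoding
print(pred.shape)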
Sir, what is the learning rate used in the training? thank you sir
Awesome work, worth watching. Thank you to the whole TheCodingBug team for your efforts.
Can you please guide us on detecting at least 2 or 3 custom classes, not just the single object in this video ('jack sparrow')? What changes do we need to make in the data.yaml file? Please make a video on this, or at least answer me here; I will be very thankful.
In case of 3 classes,
nc=3
And then 3 names in the class names list.
Also, nc=3 in respective model cfg file.
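For reference, a minimal data yaml sketch for the 3-class case, assuming the data/train and data/val folders from the video (the class names are placeholders for your own):
train: data/train
val: data/val

nc: 3
names: ["class_a", "class_b", "class_c"]  # same order as the ids used in the label .txt files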
@@TheCodingBug Thanks a lot, I got it now. One more, last question: I have length and width data for 3 cars, and I want to show it along with the car's name on the bounding box. For example, it should display 'VW ID.3 Length = 4 Width = 2'. Where in the code can I do this?
@@TheCodingBug and do we keep images of all the 3 classes in one folder ?
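On the length/width question: in the copies of detect.py I've looked at, the label string is built right before plot_one_box is called inside the detection loop, so that's where I'd append the extra text. A hedged sketch of that edit (names, cls, conf, xyxy, im0 and plot_one_box all come from detect.py's own loop; the dictionary is a placeholder and only 'VW ID.3' comes from your example):
# hypothetical lookup table: class name -> extra text to draw on the box
car_dims = {
    "VW ID.3": "Length = 4 Width = 2",
    # add your other two car classes and their values here
}
extra = car_dims.get(names[int(cls)], "")
label = f"{names[int(cls)]} {extra} {conf:.2f}"
plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=1)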
Hi, can you help me with this one? I tried to follow the instructions and did everything, but after I run the code I get:
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
this is my error
hey did u find a solution
Is it possible to obtain a saliency map from this model?
How can I see the loss or accuracy per epoch? Is there any kind of graph?
You'll find all the graphs in the training folder, where the final .pt file is.
Can you please also make a video on multi-label object detection in a single bounding box? I have a dataset of images with bounding boxes that have more than one label.
For example, I have 5 classes (0-4); an image might have a bounding box with label "2" and another might have a bounding box with two labels, "2, 4", i.e. it belongs to both categories. So how can we show two labels or classes on a single bounding box? Please make a video on this, or at least provide a solution in your answer here; I will be very thankful.
Hi, did you find a solution for this? Thanks in advance
thanks bro nice vid. Have you got linkedin to share?
www.linkedin.com/in/haroon-shakeel/
@@TheCodingBug thanks, all the best.
How can the YOLOv7 architecture be modified?
How do I convert the .pt to .onnx for a custom model? And after converting, how do I use the .onnx file for object detection? Please help.
Have you resolved this issue?
@@gemshunt9637 no. Plz help.
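The yolov7 repo ships an export.py for this. The flags below are the ones I've seen used for an ONNX export, but double-check them against the script in your checkout; the weights path is an assumption:
!python export.py --weights runs/train/yolov7-custom/weights/best.pt --grid --simplify --img-size 640 640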
Hey, I have a problem. When I try to use yolov7-w6 with batch size 16, I get this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 14.76 GiB total capacity; 13.27 GiB already allocated; 41.88 MiB free; 13.45 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
But when I reduce the batch size to 8, I get this error:
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
I haven't found a solution. Do you know what I can do?
Use batch size of 4 or 2.
@@TheCodingBug
This is my command:
!python train_aux.py --device 0 --batch-size 2 --epochs 100 --img 1280 1280 --data data/custom_data.yaml --hyp data/hyp.scratch.custom.yaml --cfg cfg/training/yolov7-w6-custom.yaml --weights yolov7-w6.pt --name yolov7-w6.pt
I tried, but using train.py I get this:
Traceback (most recent call last):
File "train.py", line 616, in
train(hyp, opt, device, tb_writer)
File "train.py", line 363, in train
loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs) # loss scaled by batch_size
File "/content/gdrive/MyDrive/IA-python/yolov7/utils/loss.py", line 585, in __call__
bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs)
File "/content/gdrive/MyDrive/IA-python/yolov7/utils/loss.py", line 677, in build_targets
b, a, gj, gi = indices[i]
IndexError: list index out of range
And if I use train_aux.py I get this:
Traceback (most recent call last):
File "train_aux.py", line 612, in
train(hyp, opt, device, tb_writer)
File "train_aux.py", line 362, in train
loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs) # loss scaled by batch_size
File "/content/gdrive/MyDrive/IA-python/yolov7/utils/loss.py", line 1206, in __call__
bs_aux, as_aux_, gjs_aux, gis_aux, targets_aux, anchors_aux = self.build_targets2(p[:self.nl], targets, imgs)
File "/content/gdrive/MyDrive/IA-python/yolov7/utils/loss.py", line 1558, in build_targets2
from_which_layer = from_which_layer[fg_mask_inboxes]
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
I don't understand the problem. Help me, please!
@@jptoaster Make sure you've downloaded the weights file and placed it alongside the code. Use train.py, not train_aux.py, and remove the --device flag.
All the code runs, but in the end there is no detection on the image or video; it gets saved as-is and nothing is detected. Can someone please help by pointing out the error?
Reduce the --conf value
@@101-sridhar.r2 What value should I put? Any suggestions on that?
@@101-sridhar.r2 I tried doing that; it's still not working. Is it a problem with the dataset?
@Varun Raj I reduced --conf to 0.1, but my model is not predicting correctly. I used 230 images labelled as 'accident', but it detects a normal car as an accident too.
@@101-sridhar.r2 The same is happening with me. I reduced it to 0.1, but the detection still isn't happening.
7:25 I'm getting this error: RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
PLEASE HELP
hey did u find a solution
@@kagansenkeser4357 No, i simply tried this tutorial: th-cam.com/video/-QWxJ0j9EY8/w-d-xo.html
It's important to use anaconda prompt
@@ardumaniak thank you
What does this error mean, and how do I solve it?
Traceback (most recent call last):
File "train.py", line 616, in
train(hyp, opt, device, tb_writer)
File "train.py", line 251, in train
assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)
AssertionError: Label class 22 exceeds nc=2 in data/custom_data.yaml. Possible class labels are 0-1
Same problem for me.
@@farhadhossen4548 Hey! I have resolved this problem.
Check all your .txt files for incorrect class ids.
Each line has this format:
21 0.601124 0.555046 0.797753 0.880734
Here, the value in place of 21 needs to be changed.
Suppose we have 2 classes, namely Man and Woman; then we assign them the ids 0 and 1, respectively.
So the 21 needs to be changed to the correct class id.
Try it; it will be resolved.
@@chandanasai225 can you provide me your mail? I need some help regarding this problem.
@@chandanasai225
Hello, I have this problem too. I am trying to train on images that contain 5 numbers, and the same error happens to me.
My txt file looks like this:
15 0.077500 0.385000 0.085000 0.370000
16 0.232500 0.415000 0.125000 0.350000
17 0.372500 0.235000 0.125000 0.370000
16 0.530000 0.455000 0.120000 0.390000
16 0.672500 0.280000 0.115000 0.400000
17 0.825000 0.380000 0.120000 0.420000
Any kind of help please?
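For both of you: the assertion fires because some label line starts with a class id that is greater than or equal to the nc in your data yaml (ids must run from 0 to nc-1). A small sketch of my own (not from the video) to list the offending lines; set NC and the glob pattern to match your setup:
import glob

NC = 2  # set this to the nc value from your data yaml
for path in glob.glob("data/train/**/*.txt", recursive=True):  # label folder is an assumption
    with open(path) as f:
        for n, line in enumerate(f, 1):
            parts = line.split()
            if parts and int(float(parts[0])) >= NC:
                print(f"{path}:{n} has class id {parts[0]} (valid ids are 0..{NC - 1})")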
Can we convert these to .tflite? I'm an Android developer; I can create models with Google's Model Maker, but I'm looking for a more accurate model.
Thanks for the channel, very informative. I binged on a few videos last night and hit the subscribe button today. Things have definitely moved on in this field and got easier since I first tackled TensorFlow 3 years ago.
Indeed.
We should be able to convert it to tflite or onnx at least.
@@TheCodingBug Thanks for the quick reply. Should we use the same methods as converting from YoloV4 that you used?
@@GlentoranMark No. It'd be different. I'll make a tutorial by the end of this month.
@@TheCodingBug. Thank you so much.
One other question: when I create a model in Google's Model Maker, I'm limited to 25 detections at a time. Is there anywhere in the YOLOv7 model where we can change this number to 100 or 1,000 detections? I'm creating a money-counting app and am limited by the 25 detections.
Can you please share the .ipynb notebook?
The code is available to our Patreon supporters.
Please upload YOLO-World and YOLOv9 videos.
Hello sir.
I get this error when I try to train on images that contain 5 random numbers:
Transferred 630/644 items from yolov7x.pt
Scaled weight_decay = 0.0005
Optimizer groups: 108 .bias, 108 conv.weight, 111 other
train: Scanning 'data/train/labels.cache' images and labels... 10 found, 0 missing, 0 empty, 0 corrupted: 100% 10/10 [00:00
"Official YOLO v7 Custom Object Detection on Colab" is the title...in the middle you change to v7x... only to find that is worst than v7! what a waste of time and GPU! Next time please spoil it at the start so we don't train a useless model.
Do the steps for yolov7 instead of yolov7x and it will work just fine
Hi sir, I'm getting the message below:
Traceback (most recent call last):
File "train.py", line 587, in
opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files
File "/content/gdrive/MyDrive/test/yolov7/utils/general.py", line 151, in check_file
assert len(files), f'File Not Found: {file}' # assert file was found
AssertionError: File Not Found: cfg/training/yolov7x-custom.yaml
You do not have the yolov7x-custom.yaml file.
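If you skipped that step, a sketch of creating it (run from the yolov7 folder): copy the stock config, then open the copy and change its nc line to your number of classes.
# make the custom cfg the training command expects, then edit nc inside it
!cp cfg/training/yolov7x.yaml cfg/training/yolov7x-custom.yaml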
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
That's weird, something with Colab.
Everything was working fine a couple of days ago,
but I recently started encountering this error while training:
Traceback (most recent call last):
File "train.py", line 616, in
train(hyp, opt, device, tb_writer)
File "train.py", line 363, in train
loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs) # loss scaled by batch_size
File "/content/gdrive/MyDrive/Yolo7/yolov7/utils/loss.py", line 585, in __call__
bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs)
File "/content/gdrive/MyDrive/Yolo7/yolov7/utils/loss.py", line 759, in build_targets
from_which_layer = from_which_layer[fg_mask_inboxes]
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
Hi, did you find the answer? I have a similar problem; I followed the video step by step but got the same error.
Hi, were you able to solve this problem?
@@omoklamok hey did u find a solution
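For everyone hitting this device-mismatch error on Colab: it seems to come from a newer PyTorch version than the repo was written for. The workaround I've seen circulated (not from the video, so treat it as an assumption) is to edit utils/loss.py so the boolean index sits on the same device as the tensor it indexes, at the line the traceback points to (and at the matching line in build_targets2 if you use train_aux.py):
# in utils/loss.py, just before the failing line inside build_targets
fg_mask_inboxes = fg_mask_inboxes.to(from_which_layer.device)
from_which_layer = from_which_layer[fg_mask_inboxes]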
WARNING: Dataset not found, nonexistent paths: ['/content/gdrive/MyDrive/TheCodingBug/yolov7/coco/val2017.txt']
Traceback (most recent call last):
File "train.py", line 616, in
train(hyp, opt, device, tb_writer)
File "train.py", line 97, in train
check_dataset(data_dict) # check
File "/content/gdrive/MyDrive/TheCodingBug/yolov7/utils/general.py", line 173, in check_dataset
raise Exception('Dataset not found.')
Exception: Dataset not found.
I'm getting this warning even though I have copied the training and validation files.
You didn't follow the instructions; this path is supposed to be removed from the dataset yaml file.
@@TheCodingBug That error is cleared, but... the weights file I'm using for custom detection isn't detecting any objects in the video I'm providing.
Hey, how did you solve this "Dataset not found" error?
@@Jobsonu Change the paths of the images and labels in the coco.yaml file.
In the custom YOLOv7 data yaml where you give the train and val paths, give them in the form of a list. For instance:
train: [ training image path, training label path]
val: [val image path, val label path]
this worked for me
WARNING: Dataset not found, nonexistent paths: ['/content/drive/MyDrive/TheCodingBug/yolov7/content/drive/MyDrive/TheCodingBug/yolov7/data/val']
Traceback (most recent call last):
File "/content/drive/MyDrive/TheCodingBug/yolov7/train.py", line 616, in
train(hyp, opt, device, tb_writer)
File "/content/drive/MyDrive/TheCodingBug/yolov7/train.py", line 97, in train
check_dataset(data_dict) # check
File "/content/drive/MyDrive/TheCodingBug/yolov7/utils/general.py", line 173, in check_dataset
raise Exception('Dataset not found.')
Exception: Dataset not found.
Can anyone help me resolve this error?
Traceback (most recent call last):
File "/content/drive/MyDrive/The Codding Bug/yolov7/train.py", line 616, in
train(hyp, opt, device, tb_writer)
File "/content/drive/MyDrive/The Codding Bug/yolov7/train.py", line 245, in train
dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
File "/content/drive/MyDrive/The Codding Bug/yolov7/utils/datasets.py", line 69, in create_dataloader
dataset = LoadImagesAndLabels(path, imgsz, batch_size,
File "/content/drive/MyDrive/The Codding Bug/yolov7/utils/datasets.py", line 392, in __init__
cache, exists = torch.load(cache_path), True # load
File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 815, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.10/dist-packages/torch/serialization.py", line 1033, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: STACK_GLOBAL requires str
I ran into this problem; can you help me solve it?
AssertionError: train: No labels in data/train/labels.cache. Can not train without labels.
Can you please help me with this error
Is your issue solved, I am getting the same error?