Hi @AIforLife! I trained a custom object model. The program counts the objects passing the virtual line, but detection is not accurate due to sunlight. For this reason I am going to draw an ROI instead, but I don't know how to count objects passing through the area. Another problem: when 2 objects cross the line in parallel, they are detected as 1 object. How can I fix these problems? Please help me.
Hi sir, I have another question. Is it possible to implement other tracking algorithms here, e.g. SORT or the centroid tracker method? Thanks in advance
@@ai4life6 OK, so how do I avoid giving it a feature-extraction network? Should I just leave the MODEL_TYPE field blank in the deep_sort.yaml file, or should I do something more?
@@maurizioloschiavo9758 You need to remove the deepsort folder; we don't need it if you use SORT. Create a class implementing the SORT algorithm; you need to understand SORT's input & output: www.researchgate.net/publication/352498971/figure/fig1/AS:1035885332160524@1623985717827/Object-tracking-procedure-of-SORT-14.png
Hi, great tutorial. I'm trying to set it up on my own dataset, but the system returns the following error:
Traceback (most recent call last):
  File "/blue/nsboyd/re.herrigfurlane/Teste/Yolov5_DeepSort_Pytorch/track.py", line 283, in <module>
    detect(opt)
  File "/blue/nsboyd/re.herrigfurlane/Teste/Yolov5_DeepSort_Pytorch/track.py", line 74, in detect
    model = DetectMultiBackend(yolo_model, device=device, dnn=opt.dnn)
  File "/blue/nsboyd/re.herrigfurlane/Teste/Yolov5_DeepSort_Pytorch/yolov5/models/common.py", line 309, in __init__
    model = attempt_load(weights if isinstance(weights, list) else w, map_location=device)
  File "./yolov5/models/experimental.py", line 96, in attempt_load
    ckpt = torch.load(attempt_download(w), map_location=map_location)  # load
  File "/home/re.herrigfurlane/.local/lib/python3.9/site-packages/torch/serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/re.herrigfurlane/.local/lib/python3.9/site-packages/torch/serialization.py", line 1131, in _load
    result = unpickler.load()
  File "/home/re.herrigfurlane/.local/lib/python3.9/site-packages/torch/serialization.py", line 1124, in find_class
    return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'DetectionModel' on
Could you help me? Thanks
Hello sir. I hope you are healthy and safe. I want to count per class: for vehicles, how many buses, cars, and motorcycles pass the line. Is it possible? Example: bus: 1, car: 3, motorcycle: 4
Hello sir, I hope you will answer. When I change the video, counting doesn't work perfectly: some cars are detected early, but when they come closer to the line they are no longer detected, so they aren't counted after crossing the line. How can I fix this? Thank you for sharing your code and creating this video.
Thanks. 1. You can use the yolov5s or yolov5m model, or set conf-thres/iou-thres higher (e.g. 0.65). 2. You can store a status for each object. For example, obj_id_1 goes from bottom to top and is detected first; initialize a dict data = { obj_1: {"counted": False, "status": "going up"}, ..., obj_n: {"counted": False, "status": "going down"} }, with status = "going up" if y of obj_1 < line and "going down" if y of obj_1 > line. When obj_1 crosses the line, update data to { obj_1: {"counted": True, "status": "going up"} }; counting all dict entries with counted == True and status == "going up" gives the total "going up".
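A minimal sketch of the status-dict idea above. The names (`update_counts`, `line_y`) and the side-of-line convention are assumptions, not from the repo: here an object first seen below the line (larger y in image coordinates) is treated as heading up; adjust to your camera geometry.

```python
# Hypothetical sketch of per-direction counting with a status dict.
# Assumes each tracker output row is (x1, y1, x2, y2, id, cls) as in track.py.

line_y = 400          # y coordinate of the virtual counting line (assumed)
data = {}             # id -> {"counted": bool, "status": "going up"/"going down"}

def update_counts(outputs):
    """Update crossing counts for one frame of tracker outputs."""
    for x1, y1, x2, y2, obj_id, cls in outputs:
        cy = int(y1 + (y2 - y1) / 2)          # box center y
        if obj_id not in data:
            # First sighting: remember which side of the line it started on.
            data[obj_id] = {"counted": False,
                            "status": "going up" if cy > line_y else "going down"}
        track = data[obj_id]
        # Mark as counted once the center crosses to the other side.
        if not track["counted"]:
            if track["status"] == "going up" and cy < line_y:
                track["counted"] = True
            elif track["status"] == "going down" and cy > line_y:
                track["counted"] = True
    up = sum(1 for t in data.values() if t["counted"] and t["status"] == "going up")
    down = sum(1 for t in data.values() if t["counted"] and t["status"] == "going down")
    return up, down
```

Calling it once per frame keeps each ID counted at most once per direction.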
Hi Sir, I'm trying to implement the same on an Nvidia Jetson Nano. I have managed to run YOLOv5, but DeepSORT needs Python 3.8, and the Jetson Nano is compatible with 3.6! Is there any way to implement DeepSORT on 3.6?
Thank you for showing and explaining the code; this helps me a lot with my project. I'm tracking two classes of objects trained on my own dataset. Besides the counter, I'm also extracting coordinates from each tracked object, and here are my questions: 1. How can I implement a kind of heat map for the occurrence of detected/tracked objects in the video? I don't know what to look for and can't find anything. 2. Can you recommend a tutorial or any source with information on what else I could add to this code (to display output)? Thank you in advance
Thanks for the great video. I have an error: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:68.) Can you help me?
@@ai4life6 Can you help me with this error?
Fusing layers...
Model Summary: 213 layers, 1867405 parameters, 0 gradients
Traceback (most recent call last):
  File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\track.py", line 278, in <module>
    detect(opt)
  File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\track.py", line 122, in detect
    pred = model(img, augment=opt.augment, visualize=visualize)
  File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\yolov5\models\common.py", line 384, in forward
    y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)
  File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 126, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 149, in _forward_once
    x = m(x)  # run
  File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\upsampling.py", line 154, in forward
    recompute_scale_factor=self.recompute_scale_factor)
  File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
@@faizelkhan3951 1. License plate detection: train one class (license plate) with yolov5. 2. License plate recognition: determine the license-plate area using YOLO, then do character recognition, e.g. WPOD, LPRNet, ...
Good day! Thanks for your tutorial video; it definitely helped me a lot in finishing my final-year project. But I have a question about initializing the GPU. I understand that to use the GPU I have to change the command, such that parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') ====> parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu'). However, when I edit it that way, it says I don't have any GPU, when in fact my device has one. Do you know how to solve this problem? The displayed video using the CPU is kind of laggy. Thanks 😊
1. Check your computer's CUDA version (NVIDIA GPU); if CUDA is not installed, install it.
2. pytorch.org/get-started/locally/ — install PyTorch matching your CUDA version.
3. Check in the console:
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
1
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
>>> torch.cuda.get_device_name(0)
'GeForce GTX 3090TI'
😊
1. Train on custom data: th-cam.com/video/nnSCYdraVrA/w-d-xo.html — you get a weight file, e.g. yolov5.pt. 2. In github.com/dongdv95/yolov5/blob/master/Yolov5_DeepSort_Pytorch/track.py, replace line 249 with the path to your weight model.
Hello, thanks for the amazing video, brother. How do I create virtual ground boxes and check if a car is inside them? Also, how do I give an ID to every box so that I can use it in a database to tell whether that place is occupied or free? Thank you so much, keep up the amazing work :)
@@yashaswibaskarla6351 Yes, I did. You need DeepSORT and any YOLO version: th-cam.com/users/results?search_query=yolov5+deepsort — turn subtitles on and watch the whole video explaining how to count vehicles. You just need to understand the concept and replace the line with a rectangle, like in this video.
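A minimal sketch of the rectangle idea above: replace the counting line with named slot rectangles and mark a slot occupied when a detected car's center falls inside it. The slot IDs and coordinates below are made-up examples, not from the repo.

```python
# Hypothetical virtual ground boxes (parking slots); each slot id can be a
# database key whose value is "occupied" / "free".
slots = {
    "A1": (100, 300, 200, 400),   # x1, y1, x2, y2 of the slot rectangle
    "A2": (220, 300, 320, 400),
}

def slot_status(detections):
    """detections: list of (x1, y1, x2, y2) car boxes for one frame.
    Return {slot_id: True if any car center lies inside the slot}."""
    status = {sid: False for sid in slots}
    for x1, y1, x2, y2 in detections:
        cx = (x1 + x2) / 2
        cy = (y1 + y2) / 2
        for sid, (sx1, sy1, sx2, sy2) in slots.items():
            if sx1 <= cx <= sx2 and sy1 <= cy <= sy2:
                status[sid] = True
    return status
```

The returned dict can be written to the database once per frame (or only on changes).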
Hi sir, I hope you reply. I am implementing the code as mentioned, but when I run track.py it shows me the following error:
video 1/1 (1/335) C:\Users\Admin\OneDrive\Desktop\yolov5-master\Yolov5_DeepSort_Pytorch\videos\Traffic.mp4:
Traceback (most recent call last):
  File "track.py", line 281, in <module>
    detect(opt)
  File "track.py", line 112, in detect
    for frame_idx, (path, img, im0s, vid_cap, s) in enumerate(dataset):
ValueError: not enough values to unpack (expected 5, got 4)
I hope you reply, please help
Can I ask something? I wrote code to estimate vehicle speed when detecting with the Haar cascade car XML, but now I want to estimate vehicle speed with yolov5. Could you help me?
Measuring true speed is hard. 1. For a relative speed estimate for a university project, you can draw 2 lines with a known distance s between them, save t1 as the time the box center touches line 1 and t2 as the time it touches line 2, then compute the relative speed v = s / (t2 - t1). 2. Alternatively, compute the distance moved between consecutive points across consecutive frames. 3. Because the camera image differs from the road plane, the result is only approximate.
Hello sir, thanks for the video. The only issue I have is saving a cropped image from a video. I tried to get it working with save-one-box but couldn't figure it out. Thanks in advance.
Hello sir, I have the line parser.add_argument('--device', default="0", help='cuda device, i.e. 0 or 0,1,2,3 or cpu') and I get the error: AssertionError: Invalid CUDA '--device 0' requested, use '--device cpu' or pass valid CUDA device(s)
@@ai4life6 Now it recognizes it! But I have the following error:
C:\Users\meboc\AppData\Roaming\SPB_16.6\Yolov5_DeepSort_Pytorch\deep_sort/deep/reid\torchreid\utils\tools.py:43: UserWarning: No file found at "ultimo_osnet_x0_25"
  warnings.warn('No file found at "{}"'.format(fpath))
Successfully loaded imagenet pretrained weights from "C:\Users\meboc/.cache\torch\checkpoints\osnet_x0_25_imagenet.pth"
** The following layers are discarded due to unmatched keys or layer size: ['classifier.weight', 'classifier.bias']
Model: osnet_x0_25
It still detects the objects very well, but I didn't have that error before :(
It was great, thank you, but I have a question, sir. How can I save the results with the VideoWriter? There seems to be code that does that (in track.py, line 210 # save results), but it does not work. I would appreciate it if you could help me.
Thanks. Edit line 259:
parser.add_argument('--save-vid', action='store_true', help='save video tracking results') ====> parser.add_argument('--save-vid', action='store_false', help='save video tracking results')
or line 214:
if save_vid: ===> if True:
Thanks a lot, also one last question :) I am not able to track specific classes. I changed '--classes' with '--class 0' but it gives me an error. What can I do to track only humans?
@@ozgurkaplanturgut6315 line 263 in track.py: for only car : parser.add_argument('--classes', default=[2], type=int, help='filter by class') for only person: parser.add_argument('--classes', default=[0], type=int, help='filter by class')
Draw 2 lines whose real-world distance you know, e.g. s = 10 m. Save the time the box touches the 1st line (time_1) and the time the box touches the 2nd line (time_2); then velocity = s / (time_2 - time_1).
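The two-line speed estimate above can be sketched as follows. The function name `update_speed`, the line positions, and the 10 m distance are assumptions for illustration; in the tracker you would call it once per frame with each object's center y.

```python
import time

# Hypothetical sketch of the two-line speed estimate.
s = 10.0                 # metres between the two lines (assumed)
line_1_y, line_2_y = 300, 500   # pixel rows of the lines (assumed)
touch_times = {}         # id -> time the box center touched line 1

def update_speed(obj_id, cy, now=None):
    """Return speed in m/s once the object has touched both lines, else None."""
    now = time.time() if now is None else now
    if obj_id not in touch_times and cy >= line_1_y:
        touch_times[obj_id] = now          # time_1
        return None
    if obj_id in touch_times and cy >= line_2_y:
        t1 = touch_times.pop(obj_id)
        return s / (now - t1)              # v = s / (time_2 - time_1)
    return None
```

As the original reply notes, this is only a relative estimate, since the image plane differs from the road plane.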
I'm getting this error when I'm trying to run track.py. Can anyone help me find out what the problem is?
$ python track.py
Traceback (most recent call last):
  File "H:\vc\yolov5\Yolov5_DeepSort_Pytorch\track.py", line 2, in <module>
    from yolov5.utils.general import (LOGGER, check_img_size, non_max_suppression, scale_coords,
  File "H:\vc\yolov5\Yolov5_DeepSort_Pytorch\yolov5\utils\general.py", line 35, in <module>
    from utils.downloads import gsutil_getsize
ModuleNotFoundError: No module named 'utils.downloads'
I'm getting this error when I'm trying to run track.py. Can you help me find what the problem is?
D:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\deep_sort/deep/reid\torchreid\metrics\rank.py:11: UserWarning: Cython evaluation (very fast so highly recommended) is unavailable, now use python evaluation.
  warnings.warn(
Successfully loaded imagenet pretrained weights from "C:\Users\HP/.cache\torch\checkpoints\osnet_x0_25_imagenet.pth"
Selected model type: osnet_x0_25
YOLOv5 2022-9-1 torch 1.12.1+cpu CPU
YOLOv5 2022-9-1 torch 1.12.1+cpu CPU
Fusing layers...
Model Summary: 213 layers, 1867405 parameters, 0 gradients
Traceback (most recent call last):
  File "d:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\track.py", line 278, in <module>
    detect(opt)
  File "d:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\track.py", line 122, in detect
    pred = model(img, augment=opt.augment, visualize=visualize)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "d:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\yolov5\models\common.py", line 384, in forward
    y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 126, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "D:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 149, in _forward_once
    x = m(x)  # run
  File "C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\upsampling.py", line 154, in forward
    recompute_scale_factor=self.recompute_scale_factor)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
PS D:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch>
Thanks for a great video! I was able to run it before, but now I encounter a problem with import torch. The error is:
from torch._C import *  # noqa: F403
ImportError: /home/.../lib/python3.8/site-packages/torch/lib/libc10_cuda.so: undefined symbol: _ZN3c107Warning4warnERKNS_14SourceLocationERKSsb
Do you know why I am having this issue? Thank you!
Hello sir, I have tested the GPU: torch.cuda.is_available() = True. But I still get AssertionError: Invalid CUDA '--device 1' requested, use '--device cpu' or pass valid CUDA device(s). Please help me.
@@davidlin2337 Based on the error message, CUDA is not available: either you've installed the wrong binary, you have multiple binaries installed and are using the wrong one, or your system has a driver issue and cannot communicate with the GPU, etc. Create a new, empty virtual environment, install the PyTorch binary with the desired CUDA runtime, and make sure you can use the GPU.
Can you advise what is the version of yolov5 you used for this video?
Hello Sir, I used your video to track vehicles in a congested traffic situation. It is very helpful. But the problem is that the tracking ID keeps changing to a new ID for the same vehicle if there is any kind of occlusion for 4–5 frames. Could you give me any suggestion regarding this?
Hi, is there any way we can skip some frames of the video track, i.e. process every 5th frame of the video, to speed up processing and for better resource management?
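One possible way to do the frame skipping asked about above is to filter the frame stream before it reaches the model. The helper name `every_nth` is an assumption; it would wrap whatever iterator yields frames (e.g. the dataset loop in track.py).

```python
# Hypothetical sketch: let only every nth frame through to detection/tracking.
def every_nth(frames, n=5):
    """Yield (index, frame) for every nth frame of any frame iterator,
    e.g. the per-frame tuples produced by the yolov5 dataset loader."""
    for i, frame in enumerate(frames):
        if i % n == 0:
            yield i, frame
```

Note that skipping frames makes occlusion gaps look longer to DeepSORT, so its max-age setting may need to be raised accordingly.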
If I want to change the model, where should I change it?
Thanks for the video.
I have a task: pedestrian detection and congestion measurement.
I need to build a model that can detect pedestrians and also say whether the frame is congested.
For example, more than 20 people in a frame means congested.
For your problem you can look into density estimation: plot each object as one point, then plot a density histogram.
And if you just need to count the number of objects in the frame, the output of yolov5 already has it, line 189.
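The counting rule from the question above can be sketched directly on the yolov5 output. The function name, the threshold of 20, and the detection-row layout `(x1, y1, x2, y2, conf, cls)` are assumptions matching the usual yolov5 output format.

```python
# Hypothetical sketch: count person detections per frame and flag congestion.
CONGESTION_THRESHOLD = 20
PERSON_CLASS = 0        # class 0 is person in the COCO class list

def is_congested(detections):
    """detections: list of (x1, y1, x2, y2, conf, cls) rows for one frame.
    Return (person_count, congested_flag)."""
    people = sum(1 for *_box, _conf, cls in detections if cls == PERSON_CLASS)
    return people, people > CONGESTION_THRESHOLD
```

The flag could then be drawn on the frame next to the existing counter.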
@@ai4life6 thank you very much
How do I display the output (in the top left corner)?
Hi, I appreciate your work. I did it the same way in Colab, but I couldn't get the same result: there was no line and no count number. How can I solve this problem? Thanks.
How did you get the reid folder, sir?
Can I ask how to run 2 webcams at the same time? I tried changing the default in "parser.add_argument('--source', type=str, default='0', help='source') # file/folder, 0 for webcam" to 1, but the program reports an error. Please help me.
I have a custom-trained model, and I am a bit confused about adding DeepSORT to a custom-trained yolov5. Can you please help me understand how it is done? I also want to run this DeepSORT on Google Colab, since my laptop has a low configuration.
I ran it exactly as downloaded, and it says:
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
What could be happening?
I'm facing a similar error; did you solve it?
Ty man. I have a question, if you can reply I'd be glad. Do you know how I can count different classes like this?
1. If you use a pretrained model (yolov5s.pt, yolov5n.pt, ...): lines 262–263 in track.py:
```code
# class 0 is person, 1 is bicycle, 2 is car... 79 is oven
parser.add_argument('--classes', default='2', type=int, help='filter by class')
```
The COCO dataset has 80 classes; default='2' is the car class.
2. If you train a custom model, you need to change the model path: line 249 in track.py:
parser.add_argument('--yolo_model', nargs='+', type=str, default='yolov5n.pt', help='model.pt path(s)')
--> change default='yolov5n.pt' to default='path/to/your/model'
@@ai4life6 Ty again bro. 1 more please :). Can i count by class ? for example cars and persons at the same time like in minute 38:56 ?
One counting for each in the same video.
@@bernardcollin340 You edit line 263 in track.py:
for only cars: parser.add_argument('--classes', default=[2], type=int, help='filter by class')
for only persons: parser.add_argument('--classes', default=[0], type=int, help='filter by class')
for both: parser.add_argument('--classes', default=[0,2], type=int, help='filter by class')
and edit the counting function, for example:
def count_obj(box, w, h, id, cls):   # 'class' is a reserved word in Python, so use cls
    global count_car, data_car, count_person, data_person
    center_y = int(box[1] + (box[3] - box[1]) / 2)   # y of the box center
    if cls == 2:                                     # car
        if center_y > (h - 350) and id not in data_car:
            count_car += 1
            data_car.append(id)
    if cls == 0:                                     # person
        if center_y > (h - 350) and id not in data_person:
            count_person += 1
            data_person.append(id)
cls is output[5], line 173
@@ai4life6 thanks bro !
it runs in you without error?
Great. Glad to find this video. It really works.Thank you very much.
Can you help me with this error?
Traceback (most recent call last):
  File "C:\Users\Admin\Downloads\yolov5-master\Yolov5_DeepSort_Pytorch\track.py", line 278, in <module>
    detect(opt)
  File "C:\Users\Admin\Downloads\yolov5-master\Yolov5_DeepSort_Pytorch\track.py", line 122, in detect
    pred = model(img, augment=opt.augment, visualize=visualize)
  File "C:\Users\Admin\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Admin\Downloads\yolov5-master\Yolov5_DeepSort_Pytorch\yolov5\models\common.py", line 384, in forward
    y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)
  File "C:\Users\Admin\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Admin\Downloads\yolov5-master\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 126, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "C:\Users\Admin\Downloads\yolov5-master\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 149, in _forward_once
    x = m(x)  # run
  File "C:\Users\Admin\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Admin\anaconda3\lib\site-packages\torch\nn\modules\upsampling.py", line 154, in forward
    recompute_scale_factor=self.recompute_scale_factor)
  File "C:\Users\Admin\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
Hello friend, I want to track one class with multiple targets. For example, for vehicles of the same type, each vehicle should have its own ID. How do you divide the dataset?
Hi sir. Thanks for the video, it has been really helpful. I have a question: is it possible to train the yolo network on a custom dataset? Thanks in advance
sure, th-cam.com/video/nnSCYdraVrA/w-d-xo.html
Hello sir, great video and thank you so much. Can you help me: how can I add the name of each detected object to the database? Hope you reply, and thanks in advance.
How do I draw the trajectory of an object, or save the centroid positions of each object to a CSV file? Thank you.
For determining the centroid of the box, see my earlier explanation; for saving, here is an example:
```code
import csv

header = ['bboxes', 'id', 'cls']
# open in append mode ('a'); mode 'w' would overwrite the file on every frame
with open('data.csv', 'a', encoding='UTF8', newline='') as f:
    writer = csv.writer(f)
    if f.tell() == 0:
        writer.writerow(header)      # write the header only once
    if len(outputs) > 0:
        for j, (output, conf) in enumerate(zip(outputs, confs)):
            bboxes = output[0:4]
            id = output[4]
            cls = output[5]
            writer.writerow([bboxes, id, cls])   # one row per tracked object
```
Hello, when I run the code, the video display window is larger than my screen, so only part of the video is visible and the rest is cut off. How do I adjust the size of the displayed video window?
Hi sir, thanks for the tutorial. I managed to run the model, but the video is not saved in the detect folder. Any changes needed in the code?
parser.add_argument('--save-vid', action='store_true', help='save video tracking results') ---> 'store_false'
or line 214:
if save_vid: ---> if True:
@@ai4life6 I'll try. Thank you, sir.
@@ai4life6 Sir, how do I adjust the virtual line height, and from which line can I take the printed output to send it to a PostgreSQL database?
Hi! Thanks for sharing the video. May I ask: if I want to crop images of the objects when they cross the line, what is the idea? Could you give me a hint? Thank you.
When we count an object crossing the line, we have the object's box coordinates; based on those we draw the box and its center, and we can also crop the box from the frame.
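A minimal sketch of cropping a counted object's box from the current frame. The helper name `crop_box` is an assumption; the frame is the HxWxC image array the tracker already has, and the crop could be saved with e.g. cv2.imwrite afterwards.

```python
import numpy as np

# Hypothetical sketch: crop the tracked object's box out of the frame.
def crop_box(frame, box):
    """frame: HxWxC numpy array; box: (x1, y1, x2, y2) in pixels.
    Returns the cropped region; save it afterwards with e.g.
    cv2.imwrite(f"crops/{obj_id}.jpg", crop)."""
    x1, y1, x2, y2 = (int(v) for v in box)
    return frame[y1:y2, x1:x2].copy()
```

Calling this at the moment the crossing counter fires gives one image per counted object.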
Hello sir, I wrote some code in plots, but when I run track the result is not what I want. I use a PyTorch env. Please help me.
Can I ask how much the camera cost?
Hi, I use the Iriun Webcam app to use my phone's camera; you could also consider buying a C270 or C310 webcam, around 500–800k VND.
@@ai4life6 Thank you. Do you have Facebook or Zalo? If possible, I'll leave my Facebook link so you can add me.
Can I ask where to edit if I want to track 2 different classes, e.g. tracking both cars and motorbikes?
See my reply to another comment below.
Hello sir, I would like to ask how I could use another camera.
I have already tried this code:
parser.add_argument('--source', type=str, default='1', help='source') # file/folder, 0 for webcam
Please help me.
Hi and thanks for the video sir, it taught me a lot. When, I integrated Deep Sort in my project, I noticed that, while assigning the unique IDs it is skipping some numbers in between like 1,2,3,4,5,6,7,8,9,10 and then suddenly it starts naming with 34,35,36,37,38, what would be the probable reason for that? as I want a sequence in my project.
Deep SORT manages a track's lifecycle with a state variable that has 3 values (tentative, confirmed, deleted).
A new track starts in the tentative state.
If it keeps being matched over the next 3 frames, the state changes from tentative to confirmed.
Confirmed tracks are kept alive even when lost: Deep SORT maintains them for up to 30 more frames.
Conversely, if a tentative track is lost within those first 3 frames, it is deleted from the tracker.
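The lifecycle above can be sketched roughly like this (a simplified illustration, not Deep SORT's actual classes; n_init=3 and max_age=30 mirror the numbers mentioned). It also suggests why displayed IDs skip numbers: tentative tracks consume an ID, then get deleted before they are ever drawn.

```python
# Simplified sketch of Deep SORT's track lifecycle (illustrative names,
# not the library's real API).
TENTATIVE, CONFIRMED, DELETED = 1, 2, 3

class Track:
    def __init__(self, n_init=3, max_age=30):
        self.state = TENTATIVE
        self.hits = 1                 # consecutive successful matches
        self.time_since_update = 0
        self.n_init = n_init          # hits needed to confirm
        self.max_age = max_age        # frames a confirmed track survives unmatched

    def mark_hit(self):
        """Called when the track is matched to a detection this frame."""
        self.hits += 1
        self.time_since_update = 0
        if self.state == TENTATIVE and self.hits >= self.n_init:
            self.state = CONFIRMED

    def mark_missed(self):
        """Called when no detection matched this frame."""
        self.time_since_update += 1
        if self.state == TENTATIVE:
            # lost before confirmation: the track (and its ID) is dropped
            self.state = DELETED
        elif self.time_since_update > self.max_age:
            self.state = DELETED
```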
@@ai4life6 So, that means there are some track IDs of objects which are getting deleted in between. So, IDs like 11,12,13,14,15,..............33 are not visible in the generated video?
By the way, thanks for such a fast reply.
@@anubhavdixit3135 Have u solve this problem, avoid 11,12,13,....,33,34. Can you count all the person appeared during running code?
@@ai4life6 Could you share the code with me?
Greetings, sir! Thank you very much for your video! Can you tell us please is it possible to count objects of 2 different classes separately ? Thanks in advance
You can see comments below
Bernard Collin
@@ai4life6 thank you sir ! You literally saved my university major project)
Hi sir, I hope you reply to me. I am running this project with your code, and when I try new footage it counts every detection. I just want it to count only when an object passes through the line. I hope you can help me with this.
Excuse me, how can I set the display size when testing on an .mp4 file?
Hi @AIforLife! I trained a custom object. The program counts objects passing the virtual line, but detection is not accurate due to sunlight, so I am going to draw an ROI. I don't know how to count objects passing through the area, though. Another problem: when 2 objects cross the line in parallel, they are detected as 1 object. How can I fix these problems? Please help me.
1. With an ROI: manage each object with a state (up, down); when it is detected inside the ROI, change its state.
2. Or use an ROI instead of the line.
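A minimal sketch of idea 2 (an ROI instead of a line), with made-up ROI coordinates and helper names: each track ID is counted at most once, which also avoids double counts when two objects cross side by side.

```python
# Count each tracked object once when its box center enters a rectangular ROI.
# ROI coordinates and function names are illustrative.
ROI = (100, 200, 400, 350)          # x1, y1, x2, y2

def in_roi(cx, cy, roi):
    x1, y1, x2, y2 = roi
    return x1 <= cx <= x2 and y1 <= cy <= y2

counted = set()                     # track IDs already counted

def update(track_id, bbox):
    """bbox = (x1, y1, x2, y2) from the tracker output; returns total count."""
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    if in_roi(cx, cy, ROI) and track_id not in counted:
        counted.add(track_id)
    return len(counted)
```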
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
I have this error while trying this code, anyone fixed this error?
Hi sir, I have another question. Is it possible to implement other tracking algorithms here, e.g. SORT or the centroid tracker method? Thanks in advance.
Let's start with the centroid tracker: create a class for it whose input is the centroid coordinate of each box. SORT is the same as Deep SORT but without the feature-extraction network.
@@ai4life6 OK, so how do I avoid giving it an extraction network? Should I just leave the MODEL_TYPE field blank in the deep_sort.yaml file, or should I do something more?
@@maurizioloschiavo9758 You need to remove the deepsort folder; we don't need it if you use SORT. Create a class implementing the SORT algorithm; you need to understand SORT's input & output. www.researchgate.net/publication/352498971/figure/fig1/AS:1035885332160524@1623985717827/Object-tracking-procedure-of-SORT-14.png
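A minimal centroid-tracker sketch along these lines (illustrative only, not the repo's code): nearest-neighbour matching on box centers, with `max_dist` as an assumed gating threshold.

```python
import math

class CentroidTracker:
    """Assign IDs by matching each new centroid to the nearest known one."""

    def __init__(self, max_dist=50):
        self.next_id = 0
        self.objects = {}            # id -> (cx, cy)
        self.max_dist = max_dist     # reject matches farther than this

    def update(self, centroids):
        """centroids: list of (cx, cy); returns a list of IDs, one per input."""
        ids, used = [], set()
        for c in centroids:
            best_id, best_d = None, self.max_dist
            for oid, oc in self.objects.items():
                d = math.dist(c, oc)
                if oid not in used and d < best_d:
                    best_id, best_d = oid, d
            if best_id is None:      # no close match: register a new object
                best_id = self.next_id
                self.next_id += 1
            self.objects[best_id] = c
            used.add(best_id)
            ids.append(best_id)
        return ids
```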
Hi, great tutorial. I'm trying to set it up on my own dataset but my system returns the following error:
Traceback (most recent call last):
File "/blue/nsboyd/re.herrigfurlane/Teste/Yolov5_DeepSort_Pytorch/track.py", line 283, in
detect(opt)
File "/blue/nsboyd/re.herrigfurlane/Teste/Yolov5_DeepSort_Pytorch/track.py", line 74, in detect
model = DetectMultiBackend(yolo_model, device=device, dnn=opt.dnn)
File "/blue/nsboyd/re.herrigfurlane/Teste/Yolov5_DeepSort_Pytorch/yolov5/models/common.py", line 309, in __init__
model = attempt_load(weights if isinstance(weights, list) else w, map_location=device)
File "./yolov5/models/experimental.py", line 96, in attempt_load
ckpt = torch.load(attempt_download(w), map_location=map_location) # load
File "/home/re.herrigfurlane/.local/lib/python3.9/site-packages/torch/serialization.py", line 789, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/re.herrigfurlane/.local/lib/python3.9/site-packages/torch/serialization.py", line 1131, in _load
result = unpickler.load()
File "/home/re.herrigfurlane/.local/lib/python3.9/site-packages/torch/serialization.py", line 1124, in find_class
return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'DetectionModel' on
Could you help me?
Thanks
Hello sir. I hope you are healthy and safe. I want to count per class, e.g. for vehicles: how many buses, cars and motorcycles pass the line. Is it possible?
example
bus: 1
car: 3
motorcycle: 4
See the answer below where @Bernard Collin asked me.
@@ai4life6 it says errors on line 173
Sir, please help me. I want to display the number per class in the video.
Try cv2.putText
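A hedged sketch of per-class counting plus the cv2.putText overlay (class names follow COCO indices; the function names are made up for illustration):

```python
from collections import Counter

# Map COCO class indices to names for the vehicles we care about.
NAMES = {2: "car", 3: "motorcycle", 5: "bus"}
counts = Counter()
counted_ids = set()

def on_cross(track_id, cls):
    """Call once when a tracked object crosses the counting line."""
    if track_id not in counted_ids:
        counted_ids.add(track_id)
        counts[int(cls)] += 1

def overlay_lines():
    """Text lines to draw on the frame, one per class."""
    return ["{}: {}".format(NAMES.get(c, str(c)), n)
            for c, n in sorted(counts.items())]

# In the drawing loop, each line would go to cv2.putText, e.g.:
# for i, text in enumerate(overlay_lines()):
#     cv2.putText(im0, text, (10, 30 + 25 * i),
#                 cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
```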
Hello sir, I hope you will answer. When I change the video, the counting doesn't work perfectly: some cars are detected early, but as they come closer to the line they stop being detected, so they aren't counted. How can I fix this? Thank you for sharing your code and creating this video.
Thanks.
1. You can use the yolov5s or yolov5m model, or set conf-thres / iou-thres higher (e.g. 0.65).
2. You can track each object's status, e.g. obj_id_1 going from bottom to top:
When obj_id_1 is first detected, initialize a dict: data = { obj_1: {"counted": False, "status": "going up"}, ..., obj_n: {"counted": False, "status": "going down"} }, with status = "going up" if the object's y < line, "going down" if y > line.
When obj_1 crosses the line, update the dict: { obj_1: {"counted": True, "status": "going up"} }, then count all entries with counted = True and status = "going up" to get the total going up.
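The dict-based scheme above could be sketched like this (LINE_Y and helper names are illustrative; the direction labels here assume image y grows downward, so adjust the comparisons for your camera):

```python
LINE_Y = 300                  # illustrative counting-line position
data = {}                     # track_id -> {"counted": bool, "status": str}

def update(track_id, cy):
    """cy = y coordinate of the box center this frame."""
    if track_id not in data:
        # an object first seen below the line can only cross it upward
        status = "going up" if cy > LINE_Y else "going down"
        data[track_id] = {"counted": False, "status": status}
    d = data[track_id]
    if not d["counted"]:
        crossed_up = d["status"] == "going up" and cy <= LINE_Y
        crossed_down = d["status"] == "going down" and cy >= LINE_Y
        if crossed_up or crossed_down:
            d["counted"] = True

def totals():
    """Return (count going up, count going down)."""
    up = sum(1 for d in data.values()
             if d["counted"] and d["status"] == "going up")
    down = sum(1 for d in data.values()
               if d["counted"] and d["status"] == "going down")
    return up, down
```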
Hi Sir,
I'm trying to implement the same on Nvidia Jetson Nano, I have managed to run YoloV5, but for deepsort we need python 3.8, and Jetson nano is compatible with 3.6! Is there anyway to implement Deepsort on 3.6?
Try installing motpy (similar to deepsort), and use yolov5n or an ONNX model for the Jetson Nano. Good luck.
@@ai4life6 Thank you sir! I'll try that
@@ai4life6 Hi Sir, How do we reduce frame size? this helps to increase fps
Change it on line 253 of track.py.
@@ai4life6 Thank you sir! line 233 right!
Thank you for showing and explaining the code. This helps me a lot with implementation to my project.
I'm doing tracking on two classes of objects trained on my own dataset.
Beside counter I'm also extracting coordinates from each of the tracked objects and here are my questions:
1. How can I implement kind of a heat map for occurrence of detected/tracked objects on the video? I don't know what to look for and can't find anything.
2. Can you recommend some tutorial or any source that I could check to find some information that what else I could implement to this code (to display output)?
Thank you in advance
How can I track multiple classes of objects trained on my own dataset?
Thanks for the great video. I have an error: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:68.)
Can you help me?
try: pip install numpy --upgrade
@@ai4life6 Fusing layers...
Model Summary: 213 layers, 1867405 parameters, 0 gradients
Traceback (most recent call last):
File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\track.py", line 278, in
detect(opt)
File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\track.py", line 122, in detect
pred = model(img, augment=opt.augment, visualize=visualize)
File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\yolov5\models\common.py", line 384, in forward
y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)
File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 126, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "E:\Vidu\Xu Ly Anh\yolov5-master\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 149, in _forward_once
x = m(x) # run
File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\upsampling.py", line 154, in forward
recompute_scale_factor=self.recompute_scale_factor)
File "C:\ProgramData\Anaconda3\envs\Yolov5_DeepSort_Pytorch\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
Could you help me with this error?
Could you do YOLOv7 with Deep SORT?
How can I edit the current classes to include licence plate detection? So for every car we detect, we also try to detect its licence plate?
anyone who can help with this?
@@faizelkhan3951
1. Licence plate detection: train 1 class for licence plates using yolov5.
2. Licence plate recognition: determine the plate region using YOLO, then do character recognition, e.g. WPOD, LPRNet, ...
Hello sir. May I ask how to use gpu in this? Thank you so much.
yes, if you have 1 gpu : line 257 : parser.add_argument('--device', default="0", help='cuda device, i.e. 0 or 0,1,2,3 or cpu') ,
@@ai4life6 thank you so much sir. 😀
Good day! Thanks for you tutotial video. You definitely helped me a lot in finishing my final year project.
But I have a question regarding how to initialize the GPU. I understand that to use GPU, i have to change the command, such that
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') ====> parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
However, when I edit it that way, it says I don't have any GPU, but in fact my device has one. Do you know how to solve this problem? The displayed video is kind of laggy when using the CPU.
Thanks 😊
1. Check your computer's CUDA version (NVIDIA GPU); if CUDA is not installed, install it.
2. pytorch.org/get-started/locally/ : install PyTorch matching your CUDA version.
3. Check in the console:
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
1
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
>>> torch.cuda.get_device_name(0)
'GeForce GTX 3090TI'
😊
@@ai4life6 Thanks for your help!! I truly appreciate that 🙌🙌
Hello sir, I'm trying to do a custom Deep SORT project. How should I do it? Thank you.
Do you want to train on custom data?
@@ai4life6 Yes,and I need to use deepsort to detect and track.
1. Train your custom data: th-cam.com/video/nnSCYdraVrA/w-d-xo.html
You will get a weight file, e.g. yolov5.pt.
2. In github.com/dongdv95/yolov5/blob/master/Yolov5_DeepSort_Pytorch/track.py, line 249, replace the default weight path with your model's path.
@@ai4life6 Thank you very much! That's very helpful for me.
Hello, thanks for the amazing video, brother. How do I create virtual ground boxes and check if a car is inside these boxes? Also, how do I give an ID to every box so that I can use it in a database to tell if that place is occupied or free? Thank you so much, keep up the amazing work :)
Did you find a way bruh?
@@yashaswibaskarla6351 Yes I did, you need Deep SORT and any YOLO version.
th-cam.com/users/results?search_query=yolov5+deepsort
Turn subtitles on and watch the whole video explaining how to count vehicles.
You just need to understand the concept and replace the line with a rectangle, like in this video.
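A rough sketch of the ground-box idea (the spot coordinates and names are invented): give each parking spot a fixed rectangle and an ID, then mark a spot occupied when a detected car's box center falls inside it.

```python
# Fixed "ground boxes" (parking spots) keyed by an ID you could store
# in a database. Coordinates are made up for illustration.
SPOTS = {
    "A1": (50, 400, 150, 500),     # x1, y1, x2, y2
    "A2": (160, 400, 260, 500),
}

def spot_status(car_boxes):
    """car_boxes: list of (x1, y1, x2, y2) detections.
    Returns {spot_id: occupied?} for every spot."""
    status = {sid: False for sid in SPOTS}
    for (x1, y1, x2, y2) in car_boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        for sid, (sx1, sy1, sx2, sy2) in SPOTS.items():
            if sx1 <= cx <= sx2 and sy1 <= cy <= sy2:
                status[sid] = True
    return status
```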
Hi sir, I hope you reply. I am implementing the code as described, but when I run track.py it shows me the following error:
video 1/1 (1/335) C:\Users\Admin\OneDrive\Desktop\yolov5-master\Yolov5_DeepSort_Pytorch\videos\Traffic.mp4: Traceback (most recent call last):
File "track.py", line 281, in
detect(opt)
File "track.py", line 112, in detect
for frame_idx, (path, img, im0s, vid_cap, s) in enumerate(dataset):
ValueError: not enough values to unpack (expected 5, got 4)
I hope you reply, please help.
In addition to the above: I don't have a GPU, so it is running on the CPU.
Hi, I wrote code that estimates vehicle speed when detecting with a Haar cascade car XML, but now I want to estimate vehicle speed with yolov5. Could you help me?
Measuring true speed is hard.
1. For a relative estimate good enough for a school project, you can draw 2 lines with a known distance s between them; save t1 as the time the box center touches line 1 and t2 as the time it touches line 2, then compute the relative speed v = s / (t2 - t1).
2. Or compute the displacement between consecutive points across consecutive frames.
3. Since the camera image differs from the road plane, this is only approximate.
I'm also working on this topic. It's pretty tough.
@@phanvankhai4354 How far have you gotten?
Hello sir, thanks for the video. The only issue I have is saving a cropped image from a video. I tried to get it working with save-one-box but couldn't figure it out. Thanks in advance.
Here's an example:
```code
import cv2

img = cv2.imread("img.png")
# x, y: top-left corner of the box; w, h: its width and height
crop_img = img[y:y+h, x:x+w]
```
If you can draw a rectangle for the box, you crop with the same coordinates.
Hello sir, I have the line parser.add_argument('--device', default="0", help='cuda device, i.e. 0 or 0,1,2,3 or cpu') and I get: AssertionError: Invalid CUDA '--device 0' requested, use '--device cpu' or pass valid CUDA device(s)
Use default="" for CPU if you don't have a GPU.
@@ai4life6 I have GPU but looks like it doesn't recognize it
@@ai4life6 Now it recognizes it! But I have the following error:
C:\Users\meboc\AppData\Roaming\SPB_16.6\Yolov5_DeepSort_Pytorch\deep_sort/deep/reid\torchreid\utils\tools.py:43: UserWarning: No file found at "ultimo_osnet_x0_25"
warnings.warn('No file found at "{}"'.format(fpath))
Successfully loaded imagenet pretrained weights from "C:\Users\meboc/.cache\torch\checkpoints\osnet_x0_25_imagenet.pth"
** The following layers are discarded due to unmatched keys or layer size: ['classifier.weight', 'classifier.bias']
Model: osnet_x0_25
It still detects the objects very well, but I didn't have that error before :(
@@ai4life6 forrtl: error (200): program aborting due to control-C event
Image PC Routine Line Source
libifcoremd.dll 00007FF9F37F3B58 Unknown Unknown Unknown
KERNELBASE.dll 00007FFAB72E6273 Unknown Unknown Unknown
KERNEL32.DLL 00007FFAB7BC7C24 Unknown Unknown Unknown
ntdll.dll 00007FFAB990D721 Unknown Unknown Unknown
@@ai4life6 pls help
In addition, I would like to ask how to make the image smoother. Thank you.
Run with a GPU (and a larger imgsz) and use a better camera.
@@ai4life6 thank you
It was great, thank you, but I have a question, sir. How can I save the results with the VideoWriter? There seems to be code that does this (in track.py, line 210, # save results) but it does not work. I would appreciate your help.
Thanks, you edit line 259: parser.add_argument('--save-vid', action='store_true', help='save video tracking results') ====> parser.add_argument('--save-vid', action='store_false', help='save video tracking results')
or line 214 if save_vid: ===> if True:
Thanks a lot, also one last question :) I am not able to track specific classes. I changed '--classes' to '--class 0' but it gives me an error. What can I do to track only humans?
@@ozgurkaplanturgut6315
line 263 in track.py:
for only car : parser.add_argument('--classes', default=[2], type=int, help='filter by class')
for only person: parser.add_argument('--classes', default=[0], type=int, help='filter by class')
Hi, may I know where to find the saved results? I don't find any in my project folder. Is it in the folder named "inference/output"?
I need to run track.py on my own picture, but it gives an error. Please help me.
AI4LIFE, excuse me, I also wanted to ask how I should modify the code to get the coordinates of the bbox on video? Thanks in advance!
bboxes = output[0:4], in im0 (original frame) coordinates.
Hi, I want to implement speed estimation in this project. Can you give me some pointers?
Draw 2 lines with a known distance between them, e.g. s = 10 m.
Save the time the box touches the 1st line: time_1.
Save the time the box touches the 2nd line: time_2.
velocity = s / (time_2 - time_1)
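Sketched in code (the line positions, distance, and helper names are assumptions; this gives only a relative estimate, since the image and road planes differ):

```python
LINE1_Y, LINE2_Y = 300, 500    # illustrative line positions in the frame
DIST_M = 10.0                  # assumed real-world distance between the lines

touch = {}                     # track_id -> {line_index: timestamp}

def update(track_id, cy, t):
    """cy = box-center y this frame, t = frame timestamp in seconds.
    Returns speed in m/s once both lines have been touched, else None."""
    times = touch.setdefault(track_id, {})
    if 1 not in times and cy >= LINE1_Y:
        times[1] = t                       # first touch of line 1
    if 1 in times and 2 not in times and cy >= LINE2_Y:
        times[2] = t                       # first touch of line 2
    if 1 in times and 2 in times:
        return DIST_M / (times[2] - times[1])
    return None
```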
Oh, I thought you were some foreigner :)))
Could you share your machine specs?
I use an i5 10th gen + a 2060; even without the GPU it still runs smoothly.
@@ai4life6 Could you show me how? Mine runs very laggy.
@@phanvankhai4354 Try the yolov5n model and reduce the image size.
why your video title is in english if you make your video in japanese? to increase click count?
I'm getting this error when I'm trying to run track.py. Can Anyone help me to find out what the problem is?
$ python track.py
Traceback (most recent call last):
File "H:\vc\yolov5\Yolov5_DeepSort_Pytorch\track.py", line 2, in
from yolov5.utils.general import (LOGGER, check_img_size, non_max_suppression, scale_coords,
File "H:\vc\yolov5\Yolov5_DeepSort_Pytorch\yolov5\utils\general.py", line 35, in
from utils.downloads import gsutil_getsize
ModuleNotFoundError: No module named 'utils.downloads'
I'm getting this error when I'm trying to run track.py. Can you help me find the problem?
D:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\deep_sort/deep/reid\torchreid\metrics\rank.py:11: UserWarning: Cython evaluation (very fast so highly recommended) is unavailable, now use python evaluation.
warnings.warn(
Successfully loaded imagenet pretrained weights from "C:\Users\HP/.cache\torch\checkpoints\osnet_x0_25_imagenet.pth"
Selected model type: osnet_x0_25
YOLOv5 2022-9-1 torch 1.12.1+cpu CPU
YOLOv5 2022-9-1 torch 1.12.1+cpu CPU
Fusing layers...
Model Summary: 213 layers, 1867405 parameters, 0 gradients
Traceback (most recent call last):
File "d:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\track.py", line 278, in
detect(opt)
File "d:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\track.py", line 122, in detect
pred = model(img, augment=opt.augment, visualize=visualize)
File "C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "d:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\yolov5\models\common.py", line 384, in forward
y = self.model(im) if self.jit else self.model(im, augment=augment, visualize=visualize)
File "D:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 126, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "D:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch\./yolov5\models\yolo.py", line 149, in _forward_once
x = m(x) # run
File "C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\upsampling.py", line 154, in forward
recompute_scale_factor=self.recompute_scale_factor)
File "C:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
PS D:\Vehicle count\yolov5\Yolov5_DeepSort_Pytorch>
What torch version did you use?
Thanks for a great video! I was able to run it before, but now I have a problem importing torch. The error is:
from torch._C import * # noqa: F403
ImportError: /home/.../lib/python3.8/site-packages/torch/lib/libc10_cuda.so: undefined symbol: _ZN3c107Warning4warnERKNS_14SourceLocationERKSsb
Do you know why I am having this issue?
Thank you!
I don't know; create a venv for each project, check your CUDA version, and reinstall torch with GPU support.
Hello sir,
I have test GPU.
torch.cuda.is_available()=True
But I still get AssertionError: Invalid CUDA '--device 1' requested, use '--device cpu' or pass valid CUDA device(s)
Please help me.
I already tried devices 0, 1, 2, 3, 4;
all of the numbers fail.
@@davidlin2337 Based on the error message CUDA is not available either because you’ve installed the wrong binary, have multiple binaries installed and are using the wrong one, your system has any driver issue and cannot communicate with the GPU etc.
Create a new and empty virtual environment, install the PyTorch binary with the desired CUDA runtime, and make sure you are able to use the GPU.
pytorch.org/get-started/locally/