YOLOv3 from Scratch

  • Published on Jun 28, 2024
  • ❤️ Support the channel ❤️
    / @aladdinpersson
    How to implement YOLOv3 from scratch using PyTorch.
    If you prefer to read instead of watch there is also a written article:
    / yolov3-implementation-...
    In this video we'll:
    ✔️ Recap of YOLO
    ✔️ YOLOv1 vs YOLOv3 differences
    ✔️ Implementing the architecture
    ✔️ Dataset loading of MSCOCO and Pascal VOC
    ✔️ Loss function
    ✔️ Setting up training
    Paid Courses I recommend for learning (affiliate links, no extra cost for you):
    ⭐ Machine Learning Specialization bit.ly/3hjTBBt
    ⭐ Deep Learning Specialization bit.ly/3YcUkoI
    📘 MLOps Specialization bit.ly/3wibaWy
    📘 GAN Specialization bit.ly/3FmnZDl
    📘 NLP Specialization bit.ly/3GXoQuP
    ✨ Free Resources that are great:
    NLP: web.stanford.edu/class/cs224n/
    CV: cs231n.stanford.edu/
    Deployment: fullstackdeeplearning.com/
    FastAI: www.fast.ai/
    💻 My Deep Learning Setup and Recording Setup:
    www.amazon.com/shop/aladdinpe...
    GitHub Repository:
    github.com/aladdinpersson/Mac...
    ✅ One-Time Donations:
    Paypal: bit.ly/3buoRYH
    ▶️ You Can Connect with me on:
    Twitter - / aladdinpersson
    LinkedIn - / aladdin-persson-a95384153
    Github - github.com/aladdinpersson
    Original paper:
    arxiv.org/abs/1804.02767
    YOLOv3 repository:
    bit.ly/3pIIXT8
    ⌚️ Timestamps:
    0:00 - Introduction
    0:50 - Recap of YOLO
    6:10 - YOLOv3 vs YOLOv1
    14:25 - Model implementation
    47:20 - Dataset class
    1:14:30 - Loss implementation
    1:29:07 - Config file
    1:34:24 - Training
    1:51:05 - Ending

Comments • 225

  • @AladdinPersson
    @AladdinPersson  3 years ago +104

    These from scratch videos & paper implementations take a lot of time for me to do, if you want to see me make more of these types of videos: please crush that like button and subscribe and I'll do it :) Btw was awesome chatting with you all during the premiere!
    Github repository (including link to dataset & pretrained weights):
    bit.ly/3pIIXT8
    There is an amazing written article if you prefer to read instead of watching that I recommend:
    sannaperzon.medium.com/yolov3-implementation-with-training-setup-from-scratch-30ecb9751cb0
    Consider becoming a channel supporter ❤️:
    th-cam.com/channels/kzW5JSFwvKRjXABI-UTAkQ.htmljoin
    Original paper:
    arxiv.org/abs/1804.02767
    ⌚️ Timestamps:
    0:00 - Introduction
    0:50 - Recap of YOLO
    6:10 - YOLOv3 vs YOLOv1
    14:25 - Model implementation
    47:20 - Dataset class
    1:14:30 - Loss implementation
    1:29:07 - Config file
    1:34:24 - Training
    1:51:05 - Ending

    • @torstenschindler1965
      @torstenschindler1965 3 years ago +2

      This 'from scratch' series is awesome. Please make a "Scaled-YOLOv4 from Scratch" one. It is claimed to be faster and better than EfficientDet.

    • @radheshyamverma9263
      @radheshyamverma9263 3 years ago

      Great video. Instant sub. I didn't get why you multiplied by the IoU when calculating the object loss, and I can't find the corresponding mathematical equation either. Can someone please help?

    • @rajkumarayyalsamy1971
      @rajkumarayyalsamy1971 3 years ago

      I continuously watch all your videos. Please continue your great work. Looking forward to more YOLO from scratch videos. Thank you :))

    • @rampanda2361
      @rampanda2361 3 years ago +1

      Please make videos on other Yolo versions as well

    • @nguyenpham8447
      @nguyenpham8447 3 years ago

      Thanks so much for the video. Looking forward to seeing other videos, e.g. for YOLOv4.

  • @mrigankanath7337
    @mrigankanath7337 3 years ago +55

    I have also been trying to implement research papers / popular algorithms but keep failing at it.
    Can I suggest you make a video on how you approach a research paper, what your first steps are in implementing the code, and some tips or tricks?
    It would be really good. Please!!!!!!

  • @yutongyang845
    @yutongyang845 2 years ago +1

    This series of object detection is just AMAZING! Really like it!

  • @kirtipandya4618
    @kirtipandya4618 3 years ago +39

    Aladdin, dude you are doing awesome projects. Don’t work for anyone. Start your own company.

  • @wolfisraging
    @wolfisraging 3 years ago +7

    This is the bomb yo, really appreciate it.
    I'm also trying to make another video... just too busy with my undergrad examinations and lab stuff... hope to upload it really soon.

  • @glowingenigma
    @glowingenigma 3 years ago +5

    Man, you motivate me with such good videos, thank you

  • @pdrcouto
    @pdrcouto 2 years ago +1

    A lot of hard work and knowledge in this video. It was amazing to watch, thank you.

  • @konataizumi5829
    @konataizumi5829 3 years ago +1

    Amazing job, dude. One of the best channels.

  • @bajrangchapola6748
    @bajrangchapola6748 3 years ago

    I watched all of your videos. You are doing fabulous work.

  • @prabhavkaula9697
    @prabhavkaula9697 3 years ago

    Thank you for documenting and sharing your application and understanding of the resources like the YOLO algorithm

  • @Information_Stats
    @Information_Stats 1 year ago

    Thank you very much. I was struggling with transfer learning for months and got so frustrated that I decided to make a model myself. I hope that after this tutorial I will be able to do it.

  • @klrshak776
    @klrshak776 3 years ago +1

    Thanks a lot, the video helped me a lot to understand each and every part of the YOLO algorithm.

  • @emilhovad9694
    @emilhovad9694 1 year ago +4

    Great video, it is nice to have these videos with great detail regarding the implementation in PyTorch. It really helps me learn PyTorch🙂.
    Some minor details:
    1) The objectness score is typically at index 4 (the fifth element, after x, y, w, h), based on the original YOLOv3 paper.
    # start of loss function
    obj = target[..., 4] == 1 # in the paper this is Iobj_i
    noobj = target[..., 4] == 0 # in the paper this is Inoobj_i
    2) The target should also have all the class predictions (20 in VOC or 80 in COCO).
    # In the training loop, when preparing the target: the target should also have a 1 in the correct position of the class predictions
    import torch.nn.functional as fun
    targets[scale_idx][anchor_on_scale, i, j, 5:] = fun.one_hot(torch.tensor(int(class_label)), num_classes)
    I hope to make a pull request; although YOLOv3 is great, the paper is hard to read ;-)

  • @arrayt8480
    @arrayt8480 3 years ago

    Dude you are just awesome ❤️... This video guide has helped me a lot in understanding yolo model 😌 thanks man 🤞

  • @wangyenting6539
    @wangyenting6539 11 months ago

    Awesome work!!

  • @pritamjathar8037
    @pritamjathar8037 3 years ago +8

    Can't wait for the solution, as I got stuck while implementing the paper myself. Really really excited !!!!!!!!!!

    • @AladdinPersson
      @AladdinPersson  3 years ago +1

      Which part did you find difficult?

    • @pritamjathar8037
      @pritamjathar8037 3 years ago

      @@AladdinPersson Anchor Boxes and Detection layers part.

  • @user-kt8nc4xd1u
    @user-kt8nc4xd1u 1 year ago

    Thanks for creating the video!

  • @dajuric
    @dajuric 2 years ago +1

    Thank you very much! I wish there were a larger amount I could select.

  • @qiguosun129
    @qiguosun129 3 years ago

    This is really an awesome video, I decided to follow you to learn more.

  • @boyufan7373
    @boyufan7373 3 years ago

    Many thanks, quite a clear explanation!

  • @zeyutang2084
    @zeyutang2084 2 years ago +1

    Very clear explanation! It would be also great if you could make a video on Detectron in the future!

  • @baohuynh5462
    @baohuynh5462 3 years ago

    This is so awesome!

  • @bharath5666
    @bharath5666 2 years ago

    Thanks bro, it was extremely useful! Will become a member soon!

  • @frankrobert6867
    @frankrobert6867 2 years ago

    Great series for machine learning.

  • @vishalgoklani
    @vishalgoklani 3 years ago +3

    This was awesome, I especially enjoyed the write-up! When are you guys doing a video on DETR from Scratch?

  • @adityabodkhe914
    @adityabodkhe914 3 years ago

    Really appreciate the effort bro. Keep up the good work . I will also consider donating to your channel

  • @ahxmeds
    @ahxmeds 3 years ago +4

    Amazing tutorial, thanks for making this. I just had a basic question before I start implementing it: for my specific problem statement, I want to use negative images (images with no objects). Should I just use empty .txt files for the bounding box coordinates of these images in the training set?

  • @gmlssns5859
    @gmlssns5859 2 years ago

    You are my teacher.
    I'm living in Korea.
    Thank you, sir

  • @radoslavstavrev5636
    @radoslavstavrev5636 2 years ago +3

    You are amazing Aladdin, this really helps me with my thesis. Is it possible to run the demo on a video for demonstration purposes?

  • @talha_anwar
    @talha_anwar 3 years ago

    reminder set. waiting

  • @rogiervdw
    @rogiervdw 3 years ago

    very good! thanks!

  • @davidportilla4377
    @davidportilla4377 3 years ago

    really really helpful!

  • @romanserebrennikov6115
    @romanserebrennikov6115 2 years ago +12

    Really good implementation!
    It would be interesting to see implementation of YOLOv4

  • @bboybast
    @bboybast 2 years ago

    Hi Aladdin! Thanks for your great tutorial. In your opinion which framework (PyTorch or TF) is better to develop an object detection app?

  • @divyesh5247
    @divyesh5247 3 years ago +2

    Great work

  • @thefirstoct
    @thefirstoct 3 years ago

    Love this video.
    Can you tell me how to plot performance metrics?
    Thank you!

  • @ajitkumar15
    @ajitkumar15 3 years ago

    great video !!!! thanks

  • @adrienloridan1764
    @adrienloridan1764 3 years ago

    young genius, awesome videos

  • @roomo7time
    @roomo7time 3 years ago +2

    Hi, I've been watching your PyTorch series, and it has been immensely helpful. I have one question: is it possible to train a detection model from scratch with two GPUs (12 GB each)? Since I have only two GPUs, I need to use a small batch, and I'm a bit worried that a small batch size might not produce a well-trained model.

  • @hervenikue6437
    @hervenikue6437 1 year ago +5

    Hi. Thank you for this interesting video on the YOLO series. It would be very interesting if you could do the same for YOLOX, the version without anchor boxes of YOLO. :)

  • @bhavyashah8674
    @bhavyashah8674 1 year ago

    Hi. Great video. Just had a small doubt: what is the range of tx, ty, tw, th output by the model? Also, do we apply a sigmoid to tw and th before exponentiating them?
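    For reference, in the YOLOv3 paper a sigmoid is applied only to tx, ty (and the objectness score), while tw, th stay unbounded and are exponentiated at decode time. A minimal sketch of that decoding (the variable names are illustrative, not taken from the video's code):

    import torch

    def decode_box(t, cell_xy, anchor_wh):
        # t holds the raw network outputs (tx, ty, tw, th) for one anchor in one cell
        bxy = torch.sigmoid(t[..., 0:2]) + cell_xy   # center: sigmoid keeps the offset inside the cell
        bwh = anchor_wh * torch.exp(t[..., 2:4])     # width/height: anchor prior scaled by exp(tw), exp(th)
        return torch.cat([bxy, bwh], dim=-1)

    print(decode_box(torch.tensor([0.2, -0.5, 0.1, -0.3]), torch.tensor([4.0, 7.0]), torch.tensor([3.0, 2.0])))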

  • @vasylcf
    @vasylcf 2 years ago

    Thanks, it was interesting

  • @maroueneoueslati5563
    @maroueneoueslati5563 1 year ago +1

    Excellent job. Is there any similar video for YOLOv5 or YOLOv7, end to end?

  • @luiscao7241
    @luiscao7241 2 years ago

    you did a good job!

  • @GradientDude
    @GradientDude 3 years ago

    Must be very good for beginners! Good job!

  • @Anonymous-ms9ut
    @Anonymous-ms9ut 2 years ago

    Thank You Aladdin :-)

  • @nhutminh1552
    @nhutminh1552 1 year ago

    thank you

  • @tranquangkhai8329
    @tranquangkhai8329 3 years ago

    Thanks for the very nice tutorial. Can you let me know what program you use to make the presentations?

  • @joaobarreira5221
    @joaobarreira5221 2 years ago

    Thanks for the great job.
    I have a question:
    - I noticed that there are a few differences between the video code and the GitHub code, for example in the config file.
    I would rather not check line by line, so which version gave mAP 0.78 on Pascal VOC: the video code or the GitHub code?

  • @datnguyentan4093
    @datnguyentan4093 3 years ago +3

    I really love the video! I have a question. In the YoloLoss, instead of applying an inverse sigmoid to the target, you applied a sigmoid to the predictions, which is quite different from what you mentioned. Is this a mistake, or can we do it both ways?

    • @lotze_cinema
      @lotze_cinema 2 years ago

      I guess that's just a mistake in his wording. Why would you need to apply it to the target? There is no point.

  • @choonghonglee
    @choonghonglee 2 years ago

    Awesome videos for both YOLO and YOLOv3. Wondering if you will be doing a video for YOLOv5?

  • @FanFanlanguageworld1707
    @FanFanlanguageworld1707 2 years ago

    Hi, thank you for doing this. But it lacks the data augmentation part, which is quite necessary for this problem. Or did you cover it in another video?

  • @abdelhakimlamnaouar9527
    @abdelhakimlamnaouar9527 1 year ago

    Great! I will try to convert your code to Keras and TensorFlow myself.

  • @abhinav9058
    @abhinav9058 3 years ago

    very easy to follow :)

  • @fletcherpearmaine4524
    @fletcherpearmaine4524 2 years ago

    I'd love to see you do YOLOv5 with PyTorch!!

  • @adipradhan7339
    @adipradhan7339 2 years ago +1

    Can you do a YOLOv5 code explanation, and how to change the architecture and loss function according to our needs?

  • @NahidAlam
    @NahidAlam 1 year ago

    This is amazing, can you do a session on yolov7?

  • @codeninja3158
    @codeninja3158 3 years ago

    Very informative video... Great work
    Is there any lecture for yolov4 as well?

  • @manishkumarmishra194
    @manishkumarmishra194 3 years ago +1

    Great work Aladdin bro..
    Can you also make a video for Yolov5 from scratch. Thank you..

  • @user-bb3ok3pz7s
    @user-bb3ok3pz7s 3 years ago +1

    According to the YOLO detection idea, the cell in which the midpoint (center_x, center_y) falls is responsible for detecting the object, but the code above does not consider the adjoining grid cells: if they also have an IoU greater than ignore_iou_thresh, the adjoining grid cells will also contribute to the loss, because the code does not set their targets[scale_idx][anchor_on_scale, i, j, 0] = -1? I am looking forward to your answer. Thank you in advance.
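    For context on the ignore threshold being discussed, here is a minimal, simplified sketch of that assignment idea (not the video's exact dataset code; the per-scale and per-cell bookkeeping is omitted): the best-matching anchor is marked positive, other anchors whose width/height IoU exceeds the threshold are marked -1 so the loss skips them, and the rest stay negative.

    import torch

    def assign_anchors(box_wh, anchors, ignore_iou_thresh=0.5):
        # IoU from width/height only (boxes assumed to share a center)
        inter = torch.min(box_wh, anchors).prod(dim=-1)
        union = box_wh.prod(-1) + anchors.prod(-1) - inter
        ious = inter / union
        flags = torch.zeros(len(anchors))                       # 0 = negative
        flags[ious.argmax()] = 1                                 # 1 = positive (best anchor)
        flags[(ious > ignore_iou_thresh) & (flags != 1)] = -1    # -1 = ignored by the loss
        return flags

    print(assign_anchors(torch.tensor([0.3, 0.4]),
                         torch.tensor([[0.28, 0.38], [0.1, 0.1], [0.6, 0.9]]),
                         ignore_iou_thresh=0.2))                 # tensor([ 1.,  0., -1.])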

  • @dshahrokhian
    @dshahrokhian 2 years ago +2

    Just a small note: On the original YOLO, there were 2 bounding boxes per cell, not one. Great video!

  • @yabindong1754
    @yabindong1754 3 years ago

    Dear, this channel is just great. SUBSCRIBE!!!
    I basically learned everything about transferring ML theory to code from this channel. Really appreciate it! Keep going, dude!

  • @jeffrimurrugarra7562
    @jeffrimurrugarra7562 4 months ago

    Great effort! But I have some questions. Are you assigning the corresponding anchor for the test set too? (If that is the case, the code will require some changes to work in the real world, where you do not have information about the targets.) I think an inference/prediction part is needed in the video. Good work!

  • @ShadowD2C
    @ShadowD2C 8 months ago

    Good video for understanding the inner workings of YOLO, but we need one for YOLOv5 through v8.

  • @ashwinjayaprakash7991
    @ashwinjayaprakash7991 3 years ago +1

    This video series is so good; the only thing is I feel like I am at too much of a beginner level to understand it.
    Can you maybe simplify it further by correlating what you are coding with what is written in the paper? I mean, make it more explicit for noobs like me to understand. Thanks

  • @hazemabolholl705
    @hazemabolholl705 3 years ago

    Nice video, thank you. Is there a YOLOv3 version using TensorFlow?

  • @w3w3w3
    @w3w3w3 1 year ago

    great video

  • @markgazol5404
    @markgazol5404 3 years ago +1

    Very clear and helpful! Thanks for the videos. I've got one question, though: can you please explain what the label is for images with no objects? During training should it be like [0, 0, 0, 0, 0] or something?

    • @AladdinPersson
      @AladdinPersson  3 years ago +1

      Since YOLO predicts for each cell in the image (and for each scale) if there is no object in the cell we label it [0,0,0,0,0] for each anchor box

    • @ahxmeds
      @ahxmeds 3 years ago

      Actually, I have a question very similar to this. Say I have an image file "001.jpg" with the corresponding label file "001.txt", but the image doesn't contain any of the objects I want to detect. Should I leave the file "001.txt" empty, or should I put [0 0 0 0 0] in it? Doesn't using 0 as the first index show that this image belongs to class 0 (which in reality is just background)? In my problem statement, I want to detect only one class (tumors), but I have several negative images (images with no tumors) which I also want to train the network on, so I was wondering how to prepare the annotation files for such images. Thanks in advance.

    • @dengzhonghan5125
      @dengzhonghan5125 2 years ago

      @@ahxmeds You only have to label the objects you want to detect. If there is no object, the contents of target will be all 0, because of the for box in bboxes loop (this for loop will not be entered if bboxes is empty).
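      A minimal sketch of why an empty annotation file works out to an all-zero ("no object") target under this scheme (the shapes here are just an example; the real dataset class builds one such tensor per scale):

      import torch

      num_anchors, S = 3, 13
      # per anchor/cell layout: [objectness, x, y, w, h, class]
      target = torch.zeros((num_anchors, S, S, 6))

      bboxes = []              # empty .txt file -> no boxes parsed
      for box in bboxes:       # never runs, so every cell keeps objectness = 0
          pass                 # (normally the matching anchor/cell would be filled in here)

      print(target.abs().sum())  # tensor(0.)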

  • @frankrobert9199
    @frankrobert9199 1 year ago

    great. great. great

  • @vinithegiste5026
    @vinithegiste5026 3 years ago

    If I want to train on a custom dataset or a subset of VOC, which weights should I use for pretraining?

  • @vikramreddy5631
    @vikramreddy5631 3 years ago

    Waiting

  • @kevinjivani3536
    @kevinjivani3536 3 years ago

    How do we load the weights of the backbone for a custom dataset?

  • @jgomezrossi2497
    @jgomezrossi2497 1 year ago

    Hi Aladdin! Thank you for the video. But I tried to follow your repo, and where you say to pip install the requirements there is no file with that name in that folder or nearby.

  • @taeheejeong2801
    @taeheejeong2801 2 years ago

    Can you share the preprocessing steps for the COCO dataset?
    In the case of VOC, you posted a get_data file.
    It would be great if you could share another get_data file for COCO.
    Thanks

  • @prajotkuvalekar2348
    @prajotkuvalekar2348 3 years ago

    I'm also waiting for an SSD implementation... if you can, please make a video.

  • @norbertzabinski8166
    @norbertzabinski8166 3 years ago

    Hi, can you do a video explaining data loading for different formats of data, e.g. different CSV types and folders of specific data, and how they come together? What makes a mask different from a bounding box in the way you put it into a network? Anyways, great videos man, can't wait for YOLOv5.

  • @Njoudalnamlah
    @Njoudalnamlah 10 months ago

    We need a detailed explanation of the architecture for YOLOv3

  • @adittadasnishad7321
    @adittadasnishad7321 3 years ago

    Btw love this channel

  • @eduardomartin136
    @eduardomartin136 1 year ago

    Maybe there is a bug in the calculation of the object loss, because the paper says it should be calculated for all the anchor boxes except those below the IoU threshold (which you mark with -1). I think you are not using this threshold. BTW, great work!

  • @nguyenviet3177
    @nguyenviet3177 3 years ago

    Can you do a YOLACT from scratch tutorial? Much appreciated 💖

  • @theanatomyofcars222
    @theanatomyofcars222 2 years ago +1

    Sir, I need to give multiple labels to a bounding box. For example, if a car is detected, the same bounding box has to display the car, its weight, and its type; a single bounding box has to give multiple values. Can you please tell me what modifications I have to make to achieve this? Awaiting your reply. Thank you.

  • @joshanishweb
    @joshanishweb 2 years ago

    Can someone explain the "anchor taken" and "anchor not taken" logic in the dataset part?

  • @luiscao7241
    @luiscao7241 2 years ago

    Great tutorial! I would be grateful if you could do it for Mask R-CNN. Thank you

  • @phungaoxuan1839
    @phungaoxuan1839 3 years ago +1

    Hey Aladdin
    Can you please make this but using TensorFlow?

  • @deepeshmhatre4291
    @deepeshmhatre4291 2 years ago +1

    Dude, exactly how many days did you spend learning this stuff yourself before creating this video? Good work

  • @TheFotbollen10
    @TheFotbollen10 3 years ago

    Do you have an idea of how to translate from English to Python code (with a custom train & test dataset) using a transformer?

  • @ilikeBrothers
    @ilikeBrothers 3 years ago

    Damn, this is a bomb! Well done :)

  • @user-go6od7hm1c
    @user-go6od7hm1c 11 months ago

    Hello @AladdinPersson!
    Maybe I missed something, but it seems that early feature maps are responsible for detecting small objects (due to their small receptive field), while feature maps produced by deeper layers detect big objects. What is the logic, then, of first applying the 13x13 grid to early feature maps (13x13 is for detecting big objects) and then the 52x52 one (for small objects)?

    • @mincraftgeek123
      @mincraftgeek123 11 months ago

      you sort of have the right idea. Early feature maps contain less semantic information but greater resolution, usually what modern architectures do is use these high resolution shallow layers to supplement deeper layers to aid with small object detection. It would make sense that the 13x13 grid is applied at the very beginning to detect objects that are larger because these objects require less semantic information to detect. Conversely deeper layers contain more "information" about what the object "is" and so you'd want greater resolution to make the detection on smaller objects.

  • @rabiasarfrazrai4919
    @rabiasarfrazrai4919 3 years ago +1

    Which tool are you using for coding?

  • @saurrav3801
    @saurrav3801 3 years ago

    Bro, please make a video about reinforcement learning, basic to advanced, in PyTorch

  • @dengzhonghan5125
    @dengzhonghan5125 2 years ago

    I still have a question about dataset.py. You have sorted the IoU between the bbox and all the anchors. The highest IoU will be picked first; let us assume it is at the first scale. Then say we have another anchor in the second scale: will it also be assigned 1 for the objectness score? Any help will be appreciated!

  • @nils7089
    @nils7089 3 years ago

    Thanks for the great video. I have one question about the box coordinate loss, though. The torch.log() step would mean that you can get negative box coordinates for width and height if target[..., 3:5] / anchors has values smaller than 1. It doesn't make sense to have negative width and height, and I couldn't find this part in the paper.

    • @AladdinPersson
      @AladdinPersson  3 years ago

      Remember that we're asking the network to output the inverse of the exponential, so this might be a negative number. When we later convert back using the exponential it's still going to be positive.

    • @nils7089
      @nils7089 3 years ago

      @@AladdinPersson Thanks for your reply. I was referring to the following line: target[..., 3:5] = torch.log(1e-16 + target[..., 3:5] / anchors) # convert target width and height (lines 54-55 in your loss function), and there the targets become negative for all values smaller than 1. So are you saying that we apply log to the targets because the network also predicts logs, and then pass potentially negative widths and heights into the MSE loss function?
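      To make the round trip concrete, a small sketch of the width/height encoding being discussed (the numbers are illustrative): the log-space target is negative whenever the box is smaller than its anchor, but applying exp at decode time always gives back a positive width/height.

      import torch

      anchors = torch.tensor([0.28, 0.22])        # anchor w, h (relative units)
      box_wh  = torch.tensor([0.10, 0.30])        # ground-truth w, h

      t_wh = torch.log(1e-16 + box_wh / anchors)  # regression target; first entry is negative
      decoded = anchors * torch.exp(t_wh)         # what the exp-decoding recovers

      print(t_wh)     # roughly tensor([-1.0296, 0.3102])
      print(decoded)  # tensor([0.1000, 0.3000]) -> positive again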

  • @TheKwongcheong
    @TheKwongcheong 3 years ago

    Aladdin, will you be open to start a video of YOLOv4 / YOLOv5 implementation from scratch?

  • @yasminebaroud3965
    @yasminebaroud3965 1 year ago

    Great job, can you please do the same video for yolov5??

  • @knowledgewithiqra9765
    @knowledgewithiqra9765 2 years ago +1

    If we use only one class, how can we modify the code and the other parameters?

  • @_fellow_
    @_fellow_ 5 days ago

    Hey, I was curious where you are getting this loss function from, especially the object loss term where the IoU score is multiplied by target[..., 0]. I have seen this same scheme appear in all YOLOv3 implementations, and each time it is stated that this is "what is done in the paper", but it is not mentioned in the paper.
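    For what it's worth, this IoU scaling appears to be an implementation choice rather than something spelled out in the paper: many open-source YOLOv3 codebases use IoU(predicted box, ground-truth box) as a soft objectness target instead of a hard 1. A minimal sketch of that idea (the names are illustrative):

    import torch
    import torch.nn as nn

    mse = nn.MSELoss()

    def objectness_loss(pred_obj_logits, ious):
        # pred_obj_logits: raw objectness scores for cells that contain an object
        # ious: IoU between the decoded predicted boxes and the ground-truth boxes,
        #       computed elsewhere and detached, used as a soft target in place of 1.0
        return mse(torch.sigmoid(pred_obj_logits), ious.detach())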

  • @minhazulabedin9968
    @minhazulabedin9968 3 years ago +1

    Is it possible to make a video on YOLOv4-tiny?

  • @ycsang3421
    @ycsang3421 3 years ago +1

    First of all thank you very much!!
    What is the specific role of the function "iou_width_height" in utils.py? Why not use the function "intersection_over_union" to calculate IOU?

    • @AladdinPersson
      @AladdinPersson  3 years ago +1

      The anchor boxes only have a width and height; if you wanted to use the normal intersection_over_union algorithm you'd have to copy the x, y positions, but that's an unnecessary computation. I instead made a function that computes IoU based only on width and height.
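      A minimal sketch of what such a width/height-only IoU can look like (the two boxes are imagined as sharing a center, so only the overlap of their widths and heights matters):

      import torch

      def iou_width_height(boxes1, boxes2):
          # boxes1, boxes2: tensors of shape (..., 2) holding (width, height)
          intersection = torch.min(boxes1[..., 0], boxes2[..., 0]) * torch.min(boxes1[..., 1], boxes2[..., 1])
          union = boxes1[..., 0] * boxes1[..., 1] + boxes2[..., 0] * boxes2[..., 1] - intersection
          return intersection / union

      print(iou_width_height(torch.tensor([0.4, 0.3]), torch.tensor([0.2, 0.3])))  # tensor(0.5000)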

    • @ycsang3421
      @ycsang3421 3 years ago +1

      @@AladdinPersson Thank you!!
      There is another problem. Bboxes end up with coordinate values larger than 1 when using 'albumentations' for data augmentation. Why? What should be done?
      The error is as follows:
      ValueError: Expected x_max for bbox (0.284, 0.15315315315315314, 1.002, 0.993993993993994, 6.0) to be in the range [0.0, 1.0], got 1.002.

    • @radheshyamverma9263
      @radheshyamverma9263 3 years ago

      @@ycsang3421 I am also facing the same issue. For now I fixed it by multiplying the bbox values by 0.999, but I don't know the root cause.
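      One possible workaround (an assumption on my part, not something shown in the video): clamp the box so its corners stay inside the image before handing it to albumentations, which is essentially what the 0.999 trick approximates.

      def clip_yolo_box(x, y, w, h, eps=1e-6):
          # shrink w/h so that x - w/2 >= 0, x + w/2 <= 1 (and likewise for y, h)
          w = max(min(w, 2 * x - eps, 2 * (1 - x) - eps), 0.0)
          h = max(min(h, 2 * y - eps, 2 * (1 - y) - eps), 0.0)
          return x, y, w, h

      # e.g. a box whose right edge spills to 1.002 gets trimmed back inside [0, 1]
      print(clip_yolo_box(0.643, 0.574, 0.718, 0.841))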

  • @Willow.448
    @Willow.448 4 months ago

    Do I need a good GPU for implementing this?
    Should I use Google Colab instead?

  • @mrpants7925
    @mrpants7925 1 year ago +1

    The residual connection should happen at the end of the block, not in between layers, right? The way it's coded in the video has you add x after each convolution in the block, but I don't know how that could work. Assuming use_residual=True, and the input x to the block is of size (64, 32, 128, 128), then layer1(x) would have shape (64, 16, 128, 128), but you cannot add this to x, which is (64, 32, 128, 128). Am I missing something?
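    For comparison, a minimal sketch of the Darknet-53 style residual pattern (an illustration, not the video's exact classes): each repeat squeezes the channels with a 1x1 convolution and restores them with a 3x3 convolution, so the skip addition only happens where the shapes already match.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels, use_residual=True, num_repeats=1):
            super().__init__()
            self.layers = nn.ModuleList()
            for _ in range(num_repeats):
                self.layers.append(nn.Sequential(
                    nn.Conv2d(channels, channels // 2, kernel_size=1),              # squeeze: C -> C/2
                    nn.Conv2d(channels // 2, channels, kernel_size=3, padding=1),   # restore: C/2 -> C
                ))
            self.use_residual = use_residual

        def forward(self, x):
            for layer in self.layers:
                # the add happens after BOTH convolutions, so layer(x) and x have the
                # same (B, C, H, W) shape and can be summed element-wise
                x = x + layer(x) if self.use_residual else layer(x)
            return x

    print(ResidualBlock(64)(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])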