Understanding the mAP (mean Average Precision) Evaluation Metric for Object Detection

  • Published on Jan 23, 2025

Comments • 35

  • @KCDRofficialprojects • 2 years ago

    How do we calculate the accuracy of YOLOv4 for the social distancing video without using CUDA, cuDNN, PyTorch, TensorFlow, etc.? Is it possible to calculate mAP? If possible, please ping me. How do I write code for it?
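
mAP itself needs no GPU stack at all; it is just sorting and counting, so plain Python is enough once you have matched detections against ground truths. A minimal single-class Average Precision sketch, assuming detections are (confidence, is_true_positive) pairs produced by IoU matching; note that real evaluators usually also apply the precision envelope before integrating:

```python
# Single-class Average Precision in pure Python: no CUDA, cuDNN, PyTorch,
# or TensorFlow required. `detections` is a list of (confidence, is_tp)
# pairs produced by IoU-matching predictions against ground truths.

def average_precision(detections, num_ground_truths):
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_ground_truths
        ap += precision * (recall - prev_recall)  # area under the raw PR curve
        prev_recall = recall
    return ap

# Toy example: three detections, two labeled objects.
print(average_precision([(0.9, True), (0.8, False), (0.7, True)], 2))
```

mAP is then simply the mean of this value over all classes.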

  • @manavmadan793 • 4 years ago

    Why do you say that the higher the threshold value, the lower the mAP will be, at 7:02 in the video?

  • @samida5568 • 3 years ago

    Hello sir, do we need to do this on the COCO val set? Can't we do it on the same MNIST test sets?
    What is the difference? Can you explain it to me?

  • @cinnamoncider5242 • 4 years ago

    What mAP values should we aim to achieve in our detection models? What does it depend upon?
    For example, if I have 2 classes, should I expect higher mAP values than for detection with 20 classes (given the same amount of data for each class)?

    • @PyLessons • 4 years ago

      Yes, with fewer classes mAP will usually be higher, because it's easier for our model to learn. So it depends on how many classes you have, the training images, object details, and so on; there are a lot of dependencies. Higher is better.

    • @cinnamoncider5242 • 4 years ago

      @@PyLessons thanks for the quick response! Much appreciated! Keep up the good work! :D

  • @nasimthander1296 • 3 years ago

    Hey, how can I visualise ROC and AUC curves?
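
One common way to plot these (an assumption here, not something from the video) is scikit-learn's roc_curve together with matplotlib; y_true and y_score below are toy placeholder values:

```python
# Plot a ROC curve and its AUC from ground-truth labels and model scores.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

y_true = [0, 0, 1, 1, 1, 0, 1]                    # toy ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9]    # toy confidence scores

fpr, tpr, _ = roc_curve(y_true, y_score)
plt.plot(fpr, tpr, label="ROC (AUC = %.2f)" % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```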

  • @don8duarte • 3 years ago

    I have a question. Say, for example, that at a confidence of 0.5 there's a successful prediction (its IoU with the ground truth is higher than the threshold). In this scenario there's a TRUE POSITIVE.
    Now, at 0.6 confidence the IoU is no longer bigger than the threshold, so it becomes an unsuccessful prediction. Do we add a FALSE POSITIVE (because we have a wrong prediction) and a FALSE NEGATIVE (because we have a ground truth without a successful prediction)? Or do we just add a FALSE POSITIVE?
    It made more sense to me to add both a false positive and a false negative. The issue is that at a super high confidence, say 0.9, there are almost no TRUE POSITIVES but a lot of FALSE POSITIVES and FALSE NEGATIVES, making both precision and recall tend to 0. Shouldn't the two metrics grow in opposite directions with the confidence value (when precision is approx. 0, recall should be approx. 1 and vice versa)? I've seen this happen in pretty much every mAP tutorial, but I can't figure out what I am doing wrong and I have nobody to ask for advice.

    • @brunospasta • 1 year ago

      You should add it to both FN and FP in this case.
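
To spell that out: a kept detection that fails the IoU match is a false positive, and the ground truth it failed to match stays a false negative. The precision/recall puzzle above has a separate cause: detections below the confidence threshold are normally discarded entirely, not counted as false positives, so at confidence 0.9 precision tends toward 1 (few but reliable detections) while recall tends toward 0. A minimal counting sketch, assuming each detection is a dict with "conf" and "box" keys and a user-supplied iou_fn:

```python
# Sketch of TP/FP/FN counting at one confidence threshold. A kept detection
# that fails the IoU match is an FP and its unmatched ground truth an FN;
# detections below the confidence threshold are simply discarded.

def precision_recall(detections, ground_truths, conf_thresh, iou_thresh, iou_fn):
    kept = [d for d in detections if d["conf"] >= conf_thresh]
    matched = set()                      # ground truths already claimed
    tp = 0
    for det in sorted(kept, key=lambda d: d["conf"], reverse=True):
        best_iou, best_gt = 0.0, None
        for i, gt in enumerate(ground_truths):
            if i not in matched:
                iou = iou_fn(det["box"], gt["box"])
                if iou > best_iou:
                    best_iou, best_gt = iou, i
        if best_iou >= iou_thresh:
            tp += 1
            matched.add(best_gt)
    fp = len(kept) - tp                  # kept but unmatched detections
    fn = len(ground_truths) - tp         # ground truths with no match
    precision = tp / (tp + fp) if kept else 1.0
    recall = tp / (tp + fn) if ground_truths else 1.0
    return precision, recall
```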

  • @nasimthander1296 • 3 years ago

    There is an array of numbers for precision and recall in result.txt. What does it mean? Among them, which one is for the particular custom class the user is looking for?

  • @barathm18 • 4 years ago

    How do I evaluate on a license plate detection dataset?

  • @Takoyaki-hi4cj • 4 years ago

    Thank you for your video. I am not clear about the concept of a positive.
    Does it mean:
    1. bounding boxes with objectness confidence score > objectness threshold
    or
    2. bounding boxes with objectness confidence score > objectness threshold and class confidence score > class threshold?
    For YOLOv3, could we simply regard the class with the highest predicted class probability as the predicted class, instead of comparing each predicted class probability with the class threshold?
    If we can, does it mean we regard the predicted bounding box as negative when its predicted class is different from the class of its related ground truth?

    • @Takoyaki-hi4cj • 4 years ago

      I have found out the answer: a wrong classification should be regarded as a negative.

    • @PyLessons • 4 years ago

      Great (y)
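
Put as code, a detection only counts as a true positive when it survives the confidence cut, passes the IoU test, and predicts the right class. This is a hypothetical sketch, not the video's exact implementation:

```python
# A detection is a true positive only if it clears the confidence threshold,
# overlaps the ground truth enough (IoU), and predicts the same class.
# A well-localized box with the wrong class is still a false positive.

def is_true_positive(det, gt, iou_fn, conf_thresh=0.5, iou_thresh=0.5):
    if det["conf"] < conf_thresh:
        return False                                # discarded outright
    if iou_fn(det["box"], gt["box"]) < iou_thresh:
        return False                                # localization failed
    return det["class_id"] == gt["class_id"]        # wrong class -> negative
```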

    • @Takoyaki-hi4cj • 4 years ago

      @@PyLessons For YOLOv3 using PyTorch, should we calculate the per-class mAP over all images instead of the per-class mAP per image? However, I am going to train my YOLOv3 using 110,000 images. I have tried to load all images into one tensor of shape [111000, 3, 416, 416], but in vain, owing to CUDA out of memory.
      Currently, I save the paths of the images into a list of datasets and load a batch-size number of images per batch to avoid CUDA out of memory. Nevertheless, I cannot compute mAP over all images, as I cannot store and sort all bounding boxes in one tensor.
      Hence, do we only need to compute per-class mAP for a small number of images?
      or
      compute mAP per class for each image and average them?

    • @PyLessons • 4 years ago

      @@Takoyaki-hi4cj did you try loading these images to the hard disk instead of RAM?

    • @Takoyaki-hi4cj • 4 years ago

      @@PyLessons I did not. I have only tried loading it using .cpu() and .cuda() [with an RTX 3070]. Both run out of memory (RAM or CUDA). How do I load the images to the hard disk instead of RAM?
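
For what it's worth, the usual way around this is to never hold the images themselves: run inference batch by batch, move only the (tiny) box predictions off the GPU, and compute one dataset-wide mAP per class at the end rather than averaging per-image mAPs. A rough sketch, where model, dataset, and compute_map are placeholders for the user's own code:

```python
# Accumulate detections batch by batch so only small box tensors stay in
# memory; the 110k images are loaded lazily by the DataLoader.
import torch
from torch.utils.data import DataLoader

all_detections, all_ground_truths = [], []

loader = DataLoader(dataset, batch_size=8)   # `dataset` yields (image, targets)
model.eval()
with torch.no_grad():
    for images, targets in loader:
        preds = model(images.cuda())
        # Move results off the GPU immediately; lists of per-image box
        # tensors fit easily in RAM, unlike the full image tensor.
        all_detections.extend(p.cpu() for p in preds)
        all_ground_truths.extend(targets)

# One mAP over the whole set (the standard convention), not a per-image average.
map_value = compute_map(all_detections, all_ground_truths)
```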

  • @loocyug7859 • 4 years ago

    Why is mAP the preferred metric in object detection over something like an F1 score or recall?

    • @brunospasta • 1 year ago

      It takes multiple score thresholds into account, and some mAP variants also multiple IoU thresholds. F1 score and recall are computed at just one specific value of these thresholds.
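
A toy comparison makes the difference concrete: AP integrates over the whole precision-recall curve, while F1 is computed at a single operating point on it (the numbers below are made up):

```python
# AP summarizes the whole precision-recall curve; F1 is one point on it.
precisions = [1.00, 1.00, 0.67, 0.75, 0.60]   # toy values, one per threshold
recalls    = [0.25, 0.50, 0.50, 0.75, 0.75]

ap, prev_r = 0.0, 0.0
for p, r in zip(precisions, recalls):
    ap += p * (r - prev_r)                    # area under the PR curve
    prev_r = r

p, r = precisions[2], recalls[2]              # F1 at a single threshold
f1 = 2 * p * r / (p + r)

print("AP = %.3f, F1 at one threshold = %.3f" % (ap, f1))
```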

  • @grlg6910 • 4 years ago

    Thank you so much. I fixed all the errors; thank you for sharing your knowledge.
    Do you have a plan to do cross-validation on this project?

    • @PyLessons • 4 years ago

      I am glad I can help others with my knowledge. I haven't had any experience with cross-validation; is it something with a random split into test and train data? Right now I would like to release tutorials for Raspberry Pi and Android object detection. I have a lot of plans for the future, but I lack the time to do all of them :/

    • @grlg6910 • 4 years ago

      @@PyLessons Thank you for your reply. I am not sure about cross-validation either; I have no experience with it, but I think your understanding is right. I want to learn cross-validation, and I asked you because your videos and tutorials are easy to understand. Yes, I agree with you, you need a lot of time. Thank you for this project :)

    • @PyLessons • 4 years ago

      @@grlg6910 You're welcome. It's sad I can't do these tutorials as my full-time job, but that day will come someday :D Then all of you are going to receive even more great stuff!

    • @grlg6910 • 4 years ago

      @@PyLessons Wooooooow, Good news :). I can't wait for that time.

  • @valeriiakulakova7852 • 3 years ago

    Hello, thank you for your lessons. Do you know how I can plot a recall-precision curve? Is it better to use TensorBoard or maybe sklearn? Somehow I'm a bit confused about what the classifier and the X and y values are. I found an example: "disp = plot_precision_recall_curve(classifier, X_test, y_test)". Sorry for asking, hope you can give me tips :))))

    • @thefirstoct • 3 years ago

      Please share if you found a solution to plot them.
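
One workaround (an assumption, not a confirmed fix): plot_precision_recall_curve expects a fitted sklearn classifier, which an object detector is not; but the mAP computation already yields precision and recall arrays, and those can be plotted directly with matplotlib (toy values below):

```python
# Plot a precision-recall curve from the arrays the mAP computation produces.
import matplotlib.pyplot as plt

recalls    = [0.0, 0.25, 0.50, 0.50, 0.75, 1.00]   # toy values
precisions = [1.0, 1.00, 1.00, 0.67, 0.75, 0.60]

plt.plot(recalls, precisions, marker="o")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-Recall curve")
plt.show()
```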

  • @302ionwan2go • 4 years ago

    Hello Python Lessons, I love your captcha solver. How can I contact you? I just started with Python and I know almost nothing, so I probably shouldn't start with your captcha solver, but I really need it. I had some problems, so I wanted to contact you; otherwise, I can give you the details about my problem in this comment section. Also, sorry for writing about my problem under a roughly two-year-old video, but I don't really know how to message you.

    • @PyLessons • 4 years ago

      Write to me at pythonlessons0@gmail.com