TP, FP, TN, FN, Accuracy, Precision, Recall, F1-Score, Sensitivity, Specificity, ROC, AUC

  • Published on 7 Jun 2024
  • In this video, we cover the definitions that revolve around classification evaluation - True Positive, False Positive, True Negative, False Negative, Accuracy, Precision, Recall, F1-Score, Sensitivity, Specificity, ROC, AUC
    These metrics are widely used in machine learning, data science, and statistical analysis.
    #machinelearning #datascience #statistics #explanation #explained
    VIDEO CHAPTERS
    0:00 Introduction
    1:15 True Positive, False Positive, True Negative, False Negative
    6:08 Accuracy, Precision, Recall, F1-Score
    8:59 Sensitivity, Specificity
    10:30 ROC, AUC
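
    As a quick reference, here is a minimal Python sketch of the metric definitions listed in the chapters above; the TP/FP/TN/FN counts are made-up example values, not taken from the video.

      # Classification metrics computed from example confusion-matrix counts.
      # The counts below are made-up values for illustration only.
      tp, fp, tn, fn = 40, 10, 35, 15

      accuracy    = (tp + tn) / (tp + fp + tn + fn)   # fraction of all predictions that are correct
      precision   = tp / (tp + fp)                    # of predicted positives, how many are truly positive
      recall      = tp / (tp + fn)                    # of true positives, how many are found (= sensitivity)
      sensitivity = recall
      specificity = tn / (tn + fp)                    # of true negatives, how many are found
      f1_score    = 2 * precision * recall / (precision + recall)

      print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} "
            f"f1={f1_score:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")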

Comments • 30

  • @aristotlesocrates8409 · several months ago (+1)

    Excellent explanation

  • @snugalmond1659 · a year ago (+1)

    I am so grateful.
    Thank you.

  • @dhandrat · a year ago (+2)

    Thank you so much. Brilliant video!

  • @thewisearchitect · 11 months ago

    Very well explained. Thank you very much. I just pressed the Subscribe button :)

  • @mrsmurf911 · a year ago (+1)

    Good Content
    Subscribed right away!!!

  • @andrefurlan · 3 months ago

    Thanks! More videos please!

  • @user-go9er5hv7b · 5 months ago

    great explanation

  • @jhmrem · 6 months ago

    Great intro

  • @ofirkish5571 · a year ago (+1)

    Awesome!

  • @moatzmaloo · several months ago

    Thank you

  • @OgulcanYardmc-vy7im · 2 months ago

    thanks sir.

  • @user-mc5pl7zg6k · a year ago (+1)

    Great video

  • @dhualshammaa2062 · a year ago (+2)

    It's a great, simple video. It would be great to do more videos covering overfitting and underfitting, and other questions that commonly come up in interviews.

    • @raneisenberg2155 · a year ago (+1)

      Thank you for the positive feedback. I'll do my best.

  • @SummerAutumn923 · a year ago (+1)

    Thank you so much 💙💙💙💙💙🌌🌌🌌.

  • @my_master55 · a year ago (+2)

    Thank you for the vid 👍
    But what do you mean by "thresholds" at 11:10?
    Like, what are the thresholds in terms of neural networks, and how can we change them?
    Thank you :)

    • @raneisenberg2155 · a year ago (+1)

      Thank you for the positive feedback :)
      The simplest way to think of a threshold is through a simple model with only 1 feature as input and two possible classes as output (say 0 and 1). When the model is trained, it finds the "best" threshold for the input feature. So, for example, if we denote the input feature as a, the model may learn that if a >= 0.5 then the label is 0, otherwise it's 1. In this example the threshold is 0.5.
      Neural networks work differently (in most cases), so thinking of a threshold may be confusing. For our example from the paragraph above, the output of a neural network will be a vector/list of size 2, where each entry is the probability that the input belongs to that specific class. For example, if the output is [0.14, 0.85], the model "thinks" there is a 14% chance the input is from label 0 and an 85% chance it's from label 1. If our neural network had only 1 neuron, then the 0.5 value from the example above could be incorporated into it.
      "How do we change it?" - This really depends on what you want to achieve. If false positives are more important than false negatives, or the other way around, then you can change your loss function and the incorporated threshold will change accordingly.
      Hope this helps :)
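
      A minimal Python sketch of the idea in this reply, assuming a binary model that outputs one probability per class; the numbers are the illustrative values from the example above.

        # Sketch of the thresholding idea described above: the model outputs class
        # probabilities, and a threshold on the class-1 probability decides the label.
        def predict_label(probs, threshold=0.5):
            """probs = [p_class0, p_class1]; return 1 if p_class1 >= threshold, else 0."""
            return 1 if probs[1] >= threshold else 0

        output = [0.14, 0.85]              # model "thinks": 14% class 0, 85% class 1
        print(predict_label(output))       # -> 1 with the default 0.5 threshold
        print(predict_label(output, 0.9))  # -> 0 once the threshold is raised above 0.85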

    • @my_master55 · a year ago (+1)

      @@raneisenberg2155 cool, thanks 👍
      Tbh, I didn't really get how we can "vary the thresholds" to further plot ROC or AUC for a neural network.
      I mean, when a model is trained we have only a single point on the ROC plot (the current state of the model).
      But then how can we "change the thresholds" to have multiple points on the plot?
      Thank you 😊

    • @raneisenberg2155 · a year ago (+1)

      You're welcome :)
      When we train the model, it outputs probabilities. We can change the threshold on those probabilities to get different labels - for example, if the model's output is [0.34, 0.66] and the threshold for class 1 is set to 50%, then the output is labeled 1. If we set the threshold to, say, 70%, then the output changes to 0.
      So playing around with the threshold gives you different points on the ROC curve.
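
      A small Python sketch of that threshold sweep, assuming we already have the model's class-1 probabilities and the true labels (all values below are made up for illustration): each threshold yields one (FPR, TPR) point, and together the points trace the ROC curve.

        # Sweep the decision threshold over the model's predicted class-1 probabilities;
        # each threshold yields one (FPR, TPR) point, and together they trace the ROC curve.
        y_true = [0, 0, 1, 1, 0, 1, 1, 0]                            # made-up true labels
        y_prob = [0.10, 0.40, 0.35, 0.80, 0.66, 0.72, 0.55, 0.20]    # made-up P(class 1)

        for threshold in (0.2, 0.5, 0.7):
            y_pred = [1 if p >= threshold else 0 for p in y_prob]
            tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
            fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
            fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
            tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
            tpr = tp / (tp + fn)   # true positive rate (sensitivity / recall)
            fpr = fp / (fp + tn)   # false positive rate (1 - specificity)
            print(f"threshold={threshold:.1f} -> TPR={tpr:.2f}, FPR={fpr:.2f}")

      In practice, scikit-learn's roc_curve performs this sweep over every observed score automatically.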

    • @my_master55 · a year ago (+1)

      @@raneisenberg2155 oh, okay, so ROC and AUC are mostly used for binary classification?

    • @raneisenberg2155 · a year ago (+1)

      Yes, exactly.

  • @volktrololo6528 · 8 months ago

    nice

  • @ahmedal-baghdadi3946 · a year ago (+1)

    well explained

  • @muhammadanasali7631 · 3 months ago

    Can I please have these slides? 🙏💓