Probability Calibration : Data Science Concepts

  • Published Sep 12, 2024
  • The probabilities you get back from your models are ... usually very wrong. How do we fix that?
    My Patreon : www.patreon.co...
    Link to Code : github.com/rit...

Comments • 54

  • @danielwiczew
    @danielwiczew 3 years ago +16

    Great video, it's a very interesting concept that I had never heard about, but mathematically speaking it makes sense. It's also interesting that a linear model was able to correct the error so profoundly. Nevertheless, isn't this a kind of meta-learning?
    Also, I think you shouldn't use the name "test" dataset for training the "calibration model", but rather something like a meta-dataset. The test dataset is reserved only for the final, cross-validated model.

  • @a00954926
    @a00954926 3 years ago +4

    This is super amazing!! It's such an important concept that, like you said, doesn't get all the credit it deserves. And sometimes we forget this step.

  • @MuammarElKhatib
    @MuammarElKhatib months ago

    I would have called your "test" set the "calibration" set. Nice video.

  • @zijiali8349
    @zijiali8349 2 years ago +1

    I got asked about this in an interview. Thank you so much for posting this!!!

  • @chineloemeiacoding
    @chineloemeiacoding 3 years ago +2

    Awesome video!!
    I was trying to figure out how this concept works using the scikit-learn documentation, but I found the material too theoretical. In your video you put things in a much friendlier way!!
    Many thanks :)

  • @nishadseeraj7034
    @nishadseeraj7034 3 years ago +4

    Great material as usual!! Always look forward to learning from you.
    Question: Are you planning on doing any material covering xgboost in the future?

  • @accountname1047
    @accountname1047 3 years ago +12

    Does it generalize, or is it just overfitting with more steps?

    • @buramjakajil5232
      @buramjakajil5232 1 year ago

      I also have some trouble with the second phase of calibrating; I'm curious what happens to the out-of-sample performance after the calibration. I don't claim to understand the background here, but I easily get the feeling that "the model fit did not produce performance that matches the observed distribution, so let's wrap the random forest in a logistic function and fit it to the empirical distribution". Naturally this would perform better in-sample, but does the out-of-sample performance also improve? Sorry for my confusion; this is a pretty new concept for me as well.
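
    One way to check the out-of-sample question empirically is to fit the calibrator on one held-out split and compare raw versus calibrated probabilities on a second split that neither model has seen. A minimal Python sketch with synthetic data (all names here are illustrative, not from the video's notebook); on real data the comparison may go either way, which is exactly what this check is for:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import brier_score_loss, log_loss
        from sklearn.model_selection import train_test_split

        # Three-way split: train the classifier, fit the calibrator, evaluate both.
        X, y = make_classification(n_samples=5000, random_state=0)
        X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
        X_cal, X_hold, y_cal, y_hold = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

        clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # Platt-style calibrator: logistic regression on the raw probabilities.
        calibrator = LogisticRegression().fit(clf.predict_proba(X_cal)[:, [1]], y_cal)

        p_raw = clf.predict_proba(X_hold)[:, 1]
        p_cal = calibrator.predict_proba(p_raw.reshape(-1, 1))[:, 1]

        print("Brier score raw / calibrated:", brier_score_loss(y_hold, p_raw), brier_score_loss(y_hold, p_cal))
        print("Log loss    raw / calibrated:", log_loss(y_hold, p_raw), log_loss(y_hold, p_cal))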

  • @AnonymPlatypus
    @AnonymPlatypus 1 year ago +2

    I'm late to the party, but surely since the random forest is not performing optimally in your example, you need to tweak its hyperparameters (tweak data, tune model) to fit a better curve. What if you create a badly performing model and try to calibrate it further with logistic regression, when you could've gotten a better-performing model just using the random forest?

  • @yangwang9688
    @yangwang9688 3 years ago +2

    I thought we don't touch the test dataset until we have decided which model we are going to use?

  • @tompease95
    @tompease95 8 months ago

    The notebook section of this video is quite misleading - it is basically just plotting a line of best fit on a calibration curve. To actually calibrate the predictions, the trained logistic regression model should make predictions on a set of model outputs, and those 'calibrated' outputs can then be used to plot a newly calibrated calibration curve.
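
    To make that step concrete, a rough sketch of plotting the calibration curve before and after pushing the raw probabilities through the fitted logistic regression (synthetic data; in practice the calibrator should be fit on a different split from the one being plotted):

        import matplotlib.pyplot as plt
        from sklearn.calibration import calibration_curve
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=4000, random_state=1)
        X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, random_state=1)

        clf = RandomForestClassifier(random_state=1).fit(X_train, y_train)
        p_raw = clf.predict_proba(X_val)[:, 1]

        # Fit the calibrator on the raw probabilities, then push them through it.
        cal = LogisticRegression().fit(p_raw.reshape(-1, 1), y_val)
        p_cal = cal.predict_proba(p_raw.reshape(-1, 1))[:, 1]

        # Empirical vs. predicted probability, before and after calibration.
        frac_raw, mean_raw = calibration_curve(y_val, p_raw, n_bins=10)
        frac_cal, mean_cal = calibration_curve(y_val, p_cal, n_bins=10)

        plt.plot([0, 1], [0, 1], "k--", label="perfectly calibrated")
        plt.plot(mean_raw, frac_raw, "o-", label="uncalibrated")
        plt.plot(mean_cal, frac_cal, "o-", label="calibrated")
        plt.xlabel("mean predicted probability")
        plt.ylabel("empirical probability")
        plt.legend()
        plt.show()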

  • @ramanadeepsingh
    @ramanadeepsingh 2 months ago

    Shouldn't we first do min-max scaling on the original probabilities we get from the models?
    Let's say I have three models and I run them on the same training data to get the below distributions of probabilities:
    1) Naive Bayes: all predicted values between 0.1 and 0.8
    2) Random Forest: all predicted values between 0.2 and 0.7
    3) XGBoost: all predicted values between 0.1 and 0.9
    If I want to take an average prediction, I am giving an undue advantage to XGBoost. So we should scale all of them to be between 0 and 1.
    The second step is then to feed these scaled probabilities into the logistic regression model to get the calibrated probabilities.
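
    Mechanically, the commenter's proposal might look like the sketch below. This is the comment's idea, not something shown in the video, and whether rescaling is appropriate depends on whether the models' original probability scale should be preserved; the data here is made up:

        import numpy as np
        from sklearn.preprocessing import minmax_scale

        rng = np.random.default_rng(0)

        # Stand-in raw probabilities from three models on the same samples,
        # each occupying a different sub-range of [0, 1] as in the comment.
        p_nb  = rng.uniform(0.1, 0.8, size=1000)   # "Naive Bayes"
        p_rf  = rng.uniform(0.2, 0.7, size=1000)   # "Random Forest"
        p_xgb = rng.uniform(0.1, 0.9, size=1000)   # "XGBoost"

        # Rescale each model's outputs to [0, 1] before averaging, so no model
        # dominates the ensemble just because of a wider output range.
        scaled = np.column_stack([minmax_scale(p) for p in (p_nb, p_rf, p_xgb)])
        p_avg = scaled.mean(axis=1)

        # p_avg could then be fed to a logistic-regression calibrator as in the
        # video, e.g. LogisticRegression().fit(p_avg.reshape(-1, 1), y).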

  • @rohanchess8332
    @rohanchess8332 1 year ago

    Wow, that is an amazing video. I might be wrong, but don't we generally use the validation set first for calibration, with the test set kept for unseen data? It's like that in hyperparameter tuning, so I assumed it should be the same here. Correct me if I'm wrong.

  • @IgorKuts
    @IgorKuts 3 months ago

    Thank you! Brilliant video on such an important applied-ML topic.
    Though I haven't seen, in the top section of comments, any mention of isotonic regression (which can also be found in the scikit-learn package). More often than not, it performs much better on this task than logistic regression, due to its inherent monotonicity constraint and piecewise nature.
    Personally, I found the part about using different sets (test / val) for calibration and calibration validation the most useful. Right now I am in the process of developing a production classification ML model, and I think I have made the mistake of performing calibration using the training set. Oops.
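
    For reference, a minimal sketch of the isotonic option mentioned here, fitting scikit-learn's IsotonicRegression on held-out raw probabilities (synthetic data, illustrative names):

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.isotonic import IsotonicRegression
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=4000, random_state=2)
        X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=2)

        clf = RandomForestClassifier(random_state=2).fit(X_train, y_train)
        p_raw = clf.predict_proba(X_cal)[:, 1]

        # Monotone, piecewise-constant map from raw to calibrated probability.
        iso = IsotonicRegression(out_of_bounds="clip").fit(p_raw, y_cal)
        p_calibrated = iso.predict(p_raw)

        # sklearn.calibration.CalibratedClassifierCV with method="isotonic"
        # wraps the same idea together with cross-validated splitting.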

  • @felixmorales3713
    @felixmorales3713 11 months ago

    You could solve the calibration issue more easily by tuning hyperparameters. Specifically, you tune the hyperparameters to optimize a cost function that is a "proper scoring rule", such as logistic loss / cross-entropy (the cost function of logistic regression, actually). At least in my RF implementations, that has resulted in calibrated probabilities right off the bat, without any post-processing. That being said, you'll probably notice that scikit-learn's LogisticRegression() class doesn't return calibrated probabilities all of the time. You can blame that on the class using regularization by default. Just turn it off, and you'll likely get calibrated probabilities again :)
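
    In scikit-learn terms, the two points in this comment might look roughly like the following sketch (whether this removes the need for post-hoc calibration depends on the data):

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV

        # 1) Logistic regression without the default L2 regularization
        #    (penalty=None in scikit-learn >= 1.2; penalty="none" in older versions).
        lr_unregularized = LogisticRegression(penalty=None, max_iter=1000)

        # 2) Tune the random forest against a proper scoring rule (log loss)
        #    instead of accuracy, so well-calibrated probabilities are rewarded.
        search = GridSearchCV(
            RandomForestClassifier(random_state=0),
            param_grid={"max_depth": [None, 5, 10], "min_samples_leaf": [1, 5, 20]},
            scoring="neg_log_loss",
            cv=5,
        )
        # search.fit(X_train, y_train)  # X_train / y_train: your training split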

  • @payam-bagheri
    @payam-bagheri 11 months ago

    Some people are wondering whether the initial calibration shouldn't be done on the calibration set rather than the test set. I'd say the presenter in this video has the right concepts, but he's calling what's usually called the validation set the test set, and vice versa. Usually, the set that's kept out for the final assessment of the model's performance is called the test set, and the validation set is used before that for whatever adjustments and tuning we want to do.

  • @Corpsecreate
    @Corpsecreate 2 years ago +1

    Why do you assume the blue line is not correct?

  • @MuhammadAlfiansyah
    @MuhammadAlfiansyah 2 years ago +2

    If I already use log loss as the loss function, do I need to calibrate again? Thank you

  • @martinkunev9911
    @martinkunev9911 1 year ago +1

    Isn't it weird that the empirical probability is not monotonically increasing as a function of the uncalibrated probability? This would mean that the calibration model needs to learn to transform, e.g. 0.4 to 0.3 but 0.5 to 0.2.

  • @aparnamahalingam1595
    @aparnamahalingam1595 1 year ago

    This was FABULOUS, thank you.

    • @ritvikmath
      @ritvikmath  1 year ago

      Glad you enjoyed it!

  • @tusharmadaan5480
    @tusharmadaan5480 1 year ago

    This is such an important concept. I feel guilty of deploying models without a calibration layer.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago +2

    I think I know how you computed empirical probability. For me, it would have helped to see an explicit calculation, just to be sure.
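
    For anyone else wanting that explicit calculation: the empirical probability on these plots is typically a binned average of the 0/1 labels. A tiny worked sketch with made-up numbers:

        import numpy as np

        # Toy predicted probabilities and the true 0/1 labels.
        p_pred = np.array([0.05, 0.12, 0.18, 0.33, 0.41, 0.47, 0.62, 0.71, 0.88, 0.93])
        y_true = np.array([0,    0,    1,    0,    1,    0,    1,    1,    1,    1])

        # Group predictions into bins; the empirical probability for a bin is the
        # fraction of positives among the samples that fall into it.
        edges = np.linspace(0.0, 1.0, 6)            # five bins: [0, 0.2), [0.2, 0.4), ...
        bin_idx = np.digitize(p_pred, edges[1:-1])

        for b in range(len(edges) - 1):
            mask = bin_idx == b
            if mask.any():
                print(f"bin [{edges[b]:.1f}, {edges[b + 1]:.1f}): "
                      f"mean predicted = {p_pred[mask].mean():.2f}, "
                      f"empirical = {y_true[mask].mean():.2f}")

        # sklearn.calibration.calibration_curve(y_true, p_pred, n_bins=5) computes
        # the same binned quantities.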

  • @nihirpriram69
    @nihirpriram69 11 months ago

    I get that it works, but ultimately I can't help but feel like this is a band-aid fix for a deeper underlying issue, namely that something is fundamentally wrong with the model (in this case the random forest). It feels like throwing in a fudge factor and hoping for the best.

  • @houyao2147
    @houyao2147 2 years ago +3

    It looks to me like it's already calibrated during the training phase, because we minimize the error between the predicted and empirical probabilities. I don't quite understand why calibration is necessary.

  • @mattsamelson4975
    @mattsamelson4975 2 years ago +1

    You linked the code but not the data. Please add that link.

  • @mohsenvazirizade6334
    @mohsenvazirizade6334 8 months ago

    Thank you very much for such an amazing video. I like that your videos explain the reasons behind something and then show the math. Could you please do the same for probability calibration? It is not clear to me why this happens, or whether changing the loss function in the classifier would change anything.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago +3

    On the surface, it looks like you are using ML twice, with the second iteration correcting the error from the first run. I can't see why that second iteration is a legitimate step. It's like you made a bad prediction, and now we give you another chance and coach you to adjust your prediction to arrive at a more accurate one. I know you used test data, but I still can't see how you won't be overfitting.

  • @duynguyen4154
    @duynguyen4154 3 years ago +3

    Very good tutorial. I have one question: is this concept based on any background theory/algorithm? If so, could you please give the specific name? Thanks

    • @jackleerson
      @jackleerson 2 years ago

      It is called Platt scaling.
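
    For the curious, Platt scaling just fits a sigmoid p = 1 / (1 + exp(-(A*s + B))) to the classifier's scores s on held-out data, which is exactly a one-feature logistic regression. A tiny sketch with synthetic, deliberately miscalibrated scores:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        scores = rng.uniform(0.0, 1.0, size=500)                    # stand-in raw scores
        labels = (rng.uniform(size=500) < scores ** 2).astype(int)  # true rate lower than the score

        # Fitting logistic regression on the scores recovers A and B.
        platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
        A, B = platt.coef_[0, 0], platt.intercept_[0]

        calibrated = 1.0 / (1.0 + np.exp(-(A * scores + B)))        # same as platt.predict_proba(...)[:, 1]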

  • @abhishek50393
    @abhishek50393 3 years ago +1

    great video, keep it up!

  • @FahadAlkarshmi
    @FahadAlkarshmi 2 years ago

    I like the explanation; it is very clear. But one thing I've noticed is data snooping. Regarding the training setup that you proposed: why not train both the classifier and the calibrator on the training set and optimise them using a validation set, since we may not (and should not) have access to the test set?
    Thanks.

  • @yogevsharaby45
    @yogevsharaby45 1 year ago

    Hey, thanks for the great video! I have a question regarding the predicted probability versus empirical probability plot. I'm a bit confused because, if I understand correctly, the empirical observations are either 0 or 1 (or, in this plot, are you grouping multiple observations together to obtain empirical observations that represent a probability?). Could you clarify this to help me understand it better? Thanks very much again :)

  • @Ad_bazinga
    @Ad_bazinga 4 months ago

    Can you do a video on calibrating scorecards? Like doubling of odds?

  • @jasdeepsinghgrover2470
    @jasdeepsinghgrover2470 3 years ago +3

    But I find it difficult to understand why non-probabilistic models aren't calibrated by default... The probability is derived from the dataset itself... So if the dataset is large enough, it should already be calibrated.

  • @thechen6985
    @thechen6985 6 months ago

    If you calibrate it on the test set, wouldn't that introduce bias? Shouldn't it be the validation set?

  • @MsgrTeves
    @MsgrTeves 1 year ago

    I am confused about why you train the logistic regression with the input being predicted probabilities and the output being the targets themselves. It seems you would train it with the input being predicted probabilities and the outputs being empirical probabilities. The probabilities should have nothing to do with the actual targets, only with how likely the prediction is to match the actual target, which is what we compute when we calculate the empirical probabilities. What am I missing?

  • @The_Jarico1
    @The_Jarico1 8 months ago

    You're right, I've seen this exact phenomenon happen in the wild, and the model needed adjusting accordingly. Does anyone know why this happens?

  • @aparnamahalingam1595
    @aparnamahalingam1595 1 year ago

    Is this the same way we implement calibration for a multi-class problem?

  • @laxmanbisht2638
    @laxmanbisht2638 2 years ago

    Thanks. So calibration is basically done to reduce error, right?

  • @hameddadgour
    @hameddadgour 2 years ago

    Great video!

  • @junhanouyang6593
    @junhanouyang6593 2 years ago

    How do you calculate the empirical probability if all the data in the dataset is unique? If every data point is unique, the empirical probability will be either 0 or 1.

  • @mohammadrahmaty521
    @mohammadrahmaty521 2 years ago

    Thank you!

  • @davidwang8971
    @davidwang8971 1 year ago

    awesome!

  • @raise7935
    @raise7935 1 year ago

    thanks

  • @bonnyphilip8022
    @bonnyphilip8022 2 years ago

    Despite the looks, you simply are a great teacher... (by looks I mean your attitude and appearance are more like a freaky artist's than a studious person's) :D :D

  • @petroskoulouris3225
    @petroskoulouris3225 2 years ago

    Great vid. I can't find the data on your GitHub account.

  • @Ziemecki
    @Ziemecki 1 year ago

    Thank you for this video! I didn't understand why we introduce bias if we train the calibration on the training set and not on the test set. Could you give us an example please? +Subscribe

    • @Ziemecki
      @Ziemecki 1 year ago

      I know you gave an example later in the notebook, but what if the data were the other way around? I mean, if the training set were the test set and the test set were the training set, would we still see this behavior?

  • @EdiPrifti
    @EdiPrifti 6 months ago

    Thank you. This makes sense in a regression task. How about a binary classification task? What would the real empirical probability be for fitting the calibration task?