ML Interpretability: SHAP/LIME

  • Published 22 May 2024
  • First in our series on ML interpretability, working through Christoph Molnar's interpretability book.
    When we're applying machine learning models, we often want to understand what is really going on in the world; we don't just want to get a prediction.
    Sometimes we want an intuitive understanding of how the overall model works. But often we want to explain an individual prediction: maybe your application for a credit card was denied and you want to know why. Maybe you want to understand the uncertainty associated with your prediction. Maybe you're going to make a real-world decision based on your model.
    That's where Shapley values come in!
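    As a minimal taste of what this looks like in code, here is a sketch (not from the video; it assumes the `shap` Python package, scikit-learn, and the California housing data as an illustrative stand-in for the housing data used in the video):

```python
# Sketch: explaining a single prediction with SHAP values.
# Assumes the `shap` and `scikit-learn` packages; the dataset is illustrative.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

# One additive per-feature contribution to this single prediction.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

    Each printed value is that feature's additive contribution to this one prediction, relative to the model's average output.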
    With Connor Tann and Dr. Tim Scarfe
    References:
    Whimsical canvas we were using:
    whimsical.com/12th-march-chri...
    We were using Christoph's book as a guide:
    christophm.github.io/interpre...
    christophm.github.io/interpre...
    christophm.github.io/interpre...
    SHAPLEY VALUES
    Shapley, Lloyd S. "A value for n-person games." Contributions to the Theory of Games 2.28 (1953): 307-317.
    www.rand.org/content/dam/rand...
    SHAP
    Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in Neural Information Processing Systems. 2017.
    papers.nips.cc/paper/2017/has...
    LIME
    Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why should I trust you?: Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM (2016).
    arxiv.org/abs/1602.04938

Comments • 25

  • @michaelallen1966
    @michaelallen1966 3 years ago +4

    That was by far the best introduction to Shapley values and SHAP I have seen. Thank you all!!! Perfect timing as well, as we're just looking to use Shapley values and SHAP.

  • @EtienneTremblay
    @EtienneTremblay 3 years ago +3

    You should do a podcast series about interpretable ML techniques.

  • @aniruddhaghosh9823
    @aniruddhaghosh9823 1 year ago

    Really awesome explanation!!

  • @juliocardenas4485
    @juliocardenas4485 1 year ago

    I really like this format (just discovered this channel); it goes deeper but it is not intractable.
    Thank you for editing the conversation =)

  • @jasdeepsinghgrover2470
    @jasdeepsinghgrover2470 3 years ago +1

    I'm in for every interpretability method!!!

  • @scottmiller2591
    @scottmiller2591 3 years ago +1

    I suspect the 0.75 isn't empirical or arbitrary, but is the 3/4 scaling of the Epanechnikov kernel - the optimal (in the sense of requiring the fewest samples for a given accuracy) kernel for nonparametric density estimation: en.wikipedia.org/wiki/Kernel_%28statistics%29
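    For reference, the Epanechnikov kernel is K(u) = 0.75 * (1 - u^2) for |u| <= 1 and 0 otherwise, which is where a 0.75 factor would come from. A quick sketch (assuming only NumPy) confirming it peaks at 0.75 and integrates to 1:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: K(u) = 0.75 * (1 - u**2) for |u| <= 1, else 0."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

u = np.linspace(-1, 1, 100_001)
print(epanechnikov(0.0))                        # 0.75 -- the peak, the factor in question
print((epanechnikov(u) * (u[1] - u[0])).sum())  # ~1.0 -- it integrates to 1, so it is a density
```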

  • @francescolucantoni3243
    @francescolucantoni3243 3 years ago +1

    First! But I'll watch tomorrow! Love from Italy

    • @francescolucantoni3243
      @francescolucantoni3243 3 years ago +1

      Great video! At 7:30 I think you meant that the house price went *down* due to the RM value; in fact, blue bars are negative contributions while red bars are positive, correct? Thanks

  • @williammcnulty8408
    @williammcnulty8408 3 years ago +2

    Incredible work by Scott Lundberg and Su-In Lee. Why are they not sourced?

  • @thomaskurian9025
    @thomaskurian9025 2 years ago

    This feels like therapy

  • @araldjean-charles3924
    @araldjean-charles3924 9 months ago

    Does this result hold for non-linear models?

  • @jerbear97
    @jerbear97 1 year ago

    that intro go hard tho

  • @sreevidyaswathi4069
    @sreevidyaswathi4069 4 months ago

    Does the sum of SHAP values always equal the difference between the model prediction and the explainer's mean (expected) value?
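    For what it's worth, that is exactly SHAP's "local accuracy" (efficiency) property: the SHAP values for an instance sum to the model's prediction minus the explainer's expected (base) value. A quick sanity check, reusing the hypothetical setup from the sketch in the description:

```python
# Sanity-checking SHAP's local accuracy property; same assumed setup as above.
import numpy as np
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:10])

# Local accuracy: prediction == base value + sum of SHAP values, per instance.
# (TreeExplainer is exact for tree models; sampling-based explainers only approximate this.)
reconstructed = explainer.expected_value + shap_values.sum(axis=1)
print(np.allclose(reconstructed, model.predict(X.iloc[:10])))  # True
```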

  • @satishvavilapalli24
    @satishvavilapalli24 2 years ago

    Can we say SHAP value = sum of squared residuals?

  • @AICoffeeBreak
    @AICoffeeBreak 3 years ago +9

    Haha, when Tim says "bite-sized" he means 40 minutes. When Ms. Coffee Bean says bite-sized, it's about 10x less. 🤣
    Now, fun aside: I really appreciate the shorter format.

    • @machinelearningdojowithtim2898
      @machinelearningdojowithtim2898 3 years ago +1

      Thanks Letitia! 40 mins is "bite sized" for me! Haha, I need to get good at making shorter videos

    • @afafssaf925
      @afafssaf925 3 years ago +1

      @@machinelearningdojowithtim2898 Don't make them too short!

  • @scottmiller2591
    @scottmiller2591 3 years ago

    I find it interesting that Rob Tibshirani, co-author with Jerome Friedman on many of the LASSO papers, pronounces LASSO as la-so, while Ryan Friedman (Jerome Friedman's son) says la-su, as it is said here. I'm going to continue to say it the correct way, which you will simply have to guess.

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk 3 years ago +1

    First 😎😎

  • @flaskapp9885
    @flaskapp9885 3 years ago

    second