The Science Behind InterpretML: SHAP

  • Published May 15, 2020
  • Learn more about the research that powers InterpretML from SHAP creator Scott Lundberg of Microsoft Research
    Learn More:
    Azure Blog aka.ms/AiShow/AzureBlog
    Responsible ML aka.ms/AiShow/ResponsibleML
    Azure ML aka.ms/AiShow/AzureMLResponsi...
    The AI Show's Favorite links:
    Don't miss new episodes, subscribe to the AI Show aka.ms/aishowsubscribe
    Create a Free account (Azure) aka.ms/aishow-seth-azurefree
  • Science & Technology

Comments • 23

  • @robertoooooooooo • 1 year ago • +1

    Insightful, thank you

  • @user-xs9rm6hd5c • 3 years ago • +2

    Great introduction to SHAP

  • @chandrimad5776 • 2 years ago • +5

    Like SHAP, there is also LIME for model interpretability. It would be great if you posted a video comparing the two (a rough sketch of such a comparison appears after this thread). I built a credit risk model using a lightgbm regressor and used SHAP to present its interpretability; I would like to use LIME for the same purpose. If you are interested, I will post my kaggle link to get your feedback on my work.

    • @sehaconsulting • 1 year ago

      Would love to see the work you did for SHAP vs. LIME

    • @missfatmissfat1929 • 1 year ago

      Hello, I'm also interested 🙂
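
    A minimal sketch of such a comparison, assuming the shap and lime packages and a hypothetical LightGBM regressor on scikit-learn's bundled California housing data (any tabular regression setup would do):

      import lightgbm as lgb
      import shap
      from lime.lime_tabular import LimeTabularExplainer
      from sklearn.datasets import fetch_california_housing

      data = fetch_california_housing()
      X, y = data.data, data.target
      model = lgb.LGBMRegressor(n_estimators=100).fit(X, y)

      # SHAP: TreeExplainer is exact and fast for tree ensembles; local
      # attributions can also be aggregated into global importances.
      shap_values = shap.TreeExplainer(model).shap_values(X[:100])

      # LIME: fits a local surrogate model around one instance at a time.
      lime_explainer = LimeTabularExplainer(
          X, feature_names=data.feature_names, mode="regression")
      print(lime_explainer.explain_instance(
          X[0], model.predict, num_features=5).as_list())

    The practical difference: SHAP values sum exactly to the prediction minus the average prediction, while LIME's weights depend on the sampling and kernel width chosen around the instance.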

  • @muhammadusman7466 • 11 months ago

    Does SHAP work well with categorical features?
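
    It can; one hedged sketch, assuming LightGBM's native handling of pandas 'category' columns and a hypothetical toy frame:

      import lightgbm as lgb
      import numpy as np
      import pandas as pd
      import shap

      # With a native 'category' column, the whole categorical feature gets
      # one SHAP attribution, instead of the credit being scattered across
      # many one-hot columns.
      rng = np.random.default_rng(0)
      n = 500
      region = rng.choice(["north", "south", "east"], size=n)
      income = rng.normal(60, 15, size=n)
      y = (income + 20 * (region == "east") > 70).astype(int)

      df = pd.DataFrame({"income": income,
                         "region": pd.Categorical(region)})
      model = lgb.LGBMClassifier().fit(df, y)
      print(shap.TreeExplainer(model).shap_values(df))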

  • @caiyu538 • 1 year ago • +1

    It looks like SHAP is a brute-force search over all features, considering all possible combinations. Is my understanding correct?
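
    That is the definition, not the implementation: the Shapley formula does average over all feature coalitions, which is exponential, but TreeSHAP exploits tree structure to get the exact values in polynomial time, and KernelSHAP approximates them by sampling coalitions. A minimal brute-force sketch, with a hypothetical additive value function:

      from itertools import combinations
      from math import factorial

      import numpy as np

      def exact_shapley(value_fn, n):
          # Enumerate every coalition of the other features for each feature i
          # and average the weighted marginal contributions -- the textbook
          # Shapley formula, exponential in the number of features.
          phi = np.zeros(n)
          for i in range(n):
              others = [j for j in range(n) if j != i]
              for size in range(n):
                  for s in combinations(others, size):
                      w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                      phi[i] += w * (value_fn(s + (i,)) - value_fn(s))
          return phi

      # Hypothetical additive game: the exact Shapley value of feature i is
      # just its own contribution, which the brute force recovers.
      contrib = np.array([1.0, 2.0, 3.0])
      print(exact_shapley(lambda s: contrib[list(s)].sum(), 3))  # [1. 2. 3.]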

  • @juanete69 • 1 year ago • +1

    In a linear regression, what is the difference (in interpretation) between the SHAP value and the partial R²?

    • @Lucky10279 • 1 year ago • +1

      In linear regression, R² is literally just the proportion of the variance of the dependent variable that can be predicted/explained by the model. In other words, if I understand correctly, it's the ratio of the variance of the fitted values to the variance of the actual variable we're trying to model. So if y is what we're trying to predict and the model is y_hat = mx + b, then
      R² = var(y_hat)/var(y) = 1 - var(y - y_hat)/var(y). It's a measure of how well the model matches the actual relationship between the independent and dependent variables.
      SHAP values, OTOH, if I'm understanding the video correctly, don't necessarily say anything in themselves about how well the model fits the data overall; instead they tell us how much each independent variable is affecting the predictions/classifications the model spits out.
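
    A sketch of that distinction, assuming shap's LinearExplainer with its default independent-features assumption, where the SHAP value has the closed form phi_i = coef_i * (x_i - mean(X_i)) and involves no R² at all:

      import numpy as np
      import shap
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 3))
      y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)
      model = LinearRegression().fit(X, y)

      # SHAP explains one prediction's deviation from the average prediction;
      # for a linear model this reduces to coef_i * (x_i - mean(X_i)).
      phi = shap.LinearExplainer(model, X).shap_values(X[:1])
      manual = model.coef_ * (X[0] - X.mean(axis=0))
      print(np.allclose(phi, manual))  # True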

  • @igoriakubovskii1958 • 3 years ago

    How can we make a decision based on SHAP when it's not causal?

  • @manuelillanes1635 • 3 years ago • +1

    Can SHAP values be used to interpret unsupervised models too?

    • @mananthakral9977 • 2 years ago

      No, I don't think we can use it for unsupervised learning, since it requires looking at the output value after each feature is added to the model.

    • @cassidymentus7450 • 2 years ago

      For unsupervised learning there might not be a one-dimensional numeric output (e.g. credit risk), but it still might be possible to define a useful one. Take PCA for example: you can define the output as |x - x_mean| (|·| = Euclidean distance, a.k.a. the Pythagorean formula).
      SHAP will then tell you how much each principal component contributes to the distance from the mean... essentially the variance along that axis (depending on how you look at it).
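
    A minimal sketch of that idea, assuming shap's model-agnostic KernelExplainer wrapped around a hypothetical distance-from-the-mean output in PCA space:

      import numpy as np
      import shap
      from sklearn.datasets import load_iris
      from sklearn.decomposition import PCA

      # Project onto principal components; the scores are centered, so the
      # distance from the data mean is just the norm of the score vector.
      Z = PCA().fit_transform(load_iris().data)
      f = lambda Z_: np.linalg.norm(Z_, axis=1)

      # KernelExplainer is model-agnostic: any function mapping rows to a
      # scalar can be explained, which is what makes this trick work.
      explainer = shap.KernelExplainer(f, shap.sample(Z, 50))
      print(explainer.shap_values(Z[:3]))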

  • @SandeepPawar1 • 3 years ago • +2

    Scott - should the SHAP values for feature importance be based on training data or test data?

    • @lipei7704 • 3 years ago • +4

      I believe the SHAP values for feature importance should be based on training data, because using test data would cause data leakage.

    • @claustrost614 • 3 years ago • +1

      @@lipei7704 I agree, though it depends on what you want to do with it. If you want to use it for more-or-less global feature importance/feature selection, I would only use it on the training set.
      If by "feature importance" you mean the local Shapley values for one example from your test data, you can check whether that example is an outlier or something like that compared to your training set.
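
    A sketch of that training-set convention, assuming LightGBM and scikit-learn's bundled California housing data (the point is only which split the explainer sees):

      import numpy as np
      import lightgbm as lgb
      import shap
      from sklearn.datasets import fetch_california_housing
      from sklearn.model_selection import train_test_split

      data = fetch_california_housing()
      X_train, X_test, y_train, y_test = train_test_split(
          data.data, data.target, random_state=0)
      model = lgb.LGBMRegressor().fit(X_train, y_train)

      # Explain the *training* split, per the thread above, so feature
      # selection based on these scores never sees the test data.
      shap_values = shap.TreeExplainer(model).shap_values(X_train)

      # One common global-importance convention: mean |SHAP| per feature.
      importance = np.abs(shap_values).mean(axis=0)
      for name, imp in sorted(zip(data.feature_names, importance),
                              key=lambda t: -t[1]):
          print(f"{name}: {imp:.3f}")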

  • @FirstNameLastName-fv4eu • 4 days ago

    Since when did XGBoost become an AI model?

  • @berniethejet • 2 years ago • +2

    Scott seems to conflate 'linear models' with 'models with no interactions'. Linear regression models can still have cross-products of variables, e.g. y = a + b1 x1 + b2 x2 + b3 x1 x2.

    • @sebastienban6074 • 2 years ago

      I think he meant that they don't interact the same way trees do.
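
    A hedged sketch of both points: feed the cross-product in as its own column and the model is still linear in its inputs, so shap's LinearExplainer applies, with the interaction surfacing as attribution to the x1*x2 column rather than as an interaction between the first two columns:

      import numpy as np
      import shap
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      x1, x2 = rng.normal(size=(2, 1000))
      X = np.column_stack([x1, x2, x1 * x2])  # interaction as its own feature
      y = 1.0 + 2.0 * x1 + 3.0 * x2 + 4.0 * x1 * x2

      model = LinearRegression().fit(X, y)
      # Three attributions come back: the third is the x1*x2 term's credit.
      print(shap.LinearExplainer(model, X).shap_values(X[:1]))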

  • @Burnwarcrimetemple • 1 year ago

    Soldiers have to be black

  • @zhanli9121 • 1 year ago • +1

    This is one of the most opaque presentations of SHAP. The presentation style is also bad: it's rare to see a presenter use formulas when the formulas aren't really needed, but this guy manages to be one of them.

    • @samirelzein1095 • 8 months ago

      After I wrote a similarly sharp comment above, I found yours. Actually, everyone here agrees. A good case study in Microsoft staffing.

  • @samirelzein1095 • 8 months ago

    The dude managed to get many questions but not many compliments, and I won't give one either. Don't make videos just so Microsoft has something to upload; make videos when you feel the idea is complete, the intuition is clear, and you can speak for 10 minutes out of the hours you could spend presenting your idea. Wasted my time.