05e Machine Learning: Shapley Value

  • Published Sep 27, 2024

Comments • 26

  • @aziron8955 · 3 years ago +7

    You are a hero! Never saw a clearer explanation in this detail. Thank you Michael!

  • @biswajyotikar4007 · 2 years ago +1

    Thanks a lot for detailing with examples. Very few people have such a gift for imparting knowledge.

  • @kishoresurana · 2 years ago

    Thank you for recording and sharing this explanation. It was very helpful!

  • @JishanAhmed-dr8wh · 5 months ago

    Excellent video, Professor. May I get access to your PPT slides? Thank you so much for making such amazing lectures open source and accessible.

  • @raymondlion314 · 3 years ago +1

    You saved my life after searching so many introductions on Google.

  • @xuesongwang5877 · 3 years ago +1

    Thanks for the amazing tutorial. Just want to point out the typo at 16:20 in case someone like me gets confused there.
    The 3rd ordering should be: x3, f(x3) - E(Y); x4, f(x3, x4) - f(x3); x1, f(x3, x4, x1) - f(x3, x4).
    Besides, at 20:56 there is a missing factorial symbol (!) after (|F| - |S| - 1). It should be easy to tell from the later slides.

    • @xuesongwang5877 · 3 years ago +1

      Also, I was a bit confused about why there isn't an f(X2 = x2, X3 = x3) - f(X2 = x2) term at 24:25?
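
A note on the formula discussed in the thread above: the standard Shapley attribution for feature i is a weighted average of marginal contributions f(S ∪ {i}) - f(S) over all subsets S of the remaining features, with weight |S|!(|F| - |S| - 1)!/|F|!. Below is a minimal brute-force sketch of that sum; the value function and the toy coalition values are hypothetical stand-ins, not the model or numbers from the lecture.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all subsets.
    `value` maps a frozenset of players (or features) to a payoff, e.g. a
    model's expected prediction when only that feature subset is known."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Hypothetical 3-player coalition values (not the lecture's numbers):
v = {frozenset(): 0, frozenset({1}): 10, frozenset({2}): 20, frozenset({3}): 30,
     frozenset({1, 2}): 40, frozenset({1, 3}): 50, frozenset({2, 3}): 60,
     frozenset({1, 2, 3}): 90}
print(shapley_values([1, 2, 3], v.__getitem__))  # {1: 20.0, 2: 30.0, 3: 40.0}
```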

  • @hockshemchong4263 · 3 years ago

    Thank you for making this video. It helped me a lot in understanding this!

  • @vi5hnupradeep · 3 years ago +5

    Best lecture on the Shapley value for ML in the whole of YouTube, thank you so much, sir. All the OR folks just made it look so complicated. Your explanation is 100!

  • @Anon-vo9qg · 2 years ago +1

    Thanks for your great lecture! I'm wondering how we cope with multicollinearity here. That is, if we determine the weights on the training set, a potentially "valuable" feature may have its weight shrunk to 0 due to the presence of other, slightly more valuable features. Out of sample, the marginal contribution of this feature will then be based on the shrunken training weight and hence won't reflect its true potential for predictive performance 🤔

  • @EshwarNorthEast · 1 year ago

    Why is the case where no one plays at all included in player 1's value?

  • @dhruvnigam7488 · 2 years ago

    The hero we need.

  • @Ricardo-pz4zq · 2 years ago +1

    Great video class!! I have one doubt: it seems to me that at 19:05, f(x1, x2, x3, x4 = E[x4]) works only if f(x) is linear. I think a more general approximation of f(x1, x2, x3) is to fix x1, x2, x3 at the local input values and average f(x) over x4.

    • @manuiitm · 2 years ago

      Yes, you are right. I have seen this simplification many times and was confused by it. It works for linear models but not for all models.
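
The point raised in this thread is the difference between plugging in the mean of an "absent" feature and averaging the prediction over that feature's distribution; the two coincide only when the model is linear (more precisely, additive) in that feature. A small sketch of the two approximations, with a hypothetical model and background sample rather than anything from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(X):
    """Hypothetical model, nonlinear in x4, so the two approximations differ."""
    return X[:, 0] * X[:, 3] ** 2 + X[:, 1] + X[:, 2]

background = rng.normal(size=(10_000, 4))  # sample standing in for the data distribution
x = np.array([1.0, 2.0, 3.0, 4.0])         # local input being explained

# Plug-in approximation: replace x4 by its mean (exact only if f is linear in x4).
x_plug = x.copy()
x_plug[3] = background[:, 3].mean()
plug_in = f(x_plug[None, :])[0]

# Marginal-expectation approximation: fix x1..x3 at the local input, average over x4.
X_mix = np.tile(x, (len(background), 1))
X_mix[:, 3] = background[:, 3]
marginal = f(X_mix).mean()

print(plug_in, marginal)  # ~5.0 vs ~6.0 here: the plug-in shortcut misses E[x4^2]
```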

  • @juanete69 · 1 year ago

    If applied to a linear regression model, how is the interpretation of the SHAP values different from the coefficients of the standardized variables or from the partial R^2?
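
One way to frame an answer, under the assumption of independent features: for a linear model f(x) = β0 + Σ βj·xj, the Shapley value of feature j at a point x works out to βj·(xj - E[xj]), i.e. a local, per-prediction attribution in units of the response, whereas a standardized coefficient or a partial R^2 is a single global summary of the feature. A brute-force check of that identity with hypothetical coefficients and data:

```python
import numpy as np
from itertools import combinations
from math import factorial

rng = np.random.default_rng(1)
beta0, beta = 0.5, np.array([1.0, -2.0, 3.0])           # hypothetical linear model
X_bg = rng.normal(loc=[0.0, 1.0, 2.0], size=(5000, 3))  # hypothetical background data
x = np.array([0.7, 0.2, 2.5])                           # point to explain
mu = X_bg.mean(axis=0)

def value(S):
    """E[f(X) | X_S = x_S] assuming independent features: present features take
    their local values, absent features their background means (exact here
    because f is linear)."""
    z = mu.copy()
    z[list(S)] = x[list(S)]
    return beta0 + beta @ z

n = len(x)
phi = np.zeros(n)
for j in range(n):
    others = [k for k in range(n) if k != j]
    for r in range(n):
        for S in combinations(others, r):
            w = factorial(r) * factorial(n - r - 1) / factorial(n)
            phi[j] += w * (value(S + (j,)) - value(S))

print(phi)              # exact Shapley values for this prediction
print(beta * (x - mu))  # matches beta_j * (x_j - E[x_j])
```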

  • @juanete69 · 1 year ago

    How is the interpretation of the ALE plots different from the SHAP dependence plot?

  • @pierriva9196 · 5 months ago

    Super!

  • @bebla8381 · 3 years ago

    very helpful! thanks

  • @PA_hunter · 2 years ago

    Thanks for this! Do you think that the Shapley approach largely solves the ML interpretability challenge?

    • @PA_hunter · 2 years ago

      Also, if we generally say that ML models are superior to linear and logistic regressions in prediction accuracy... would it be appropriate to say that ML models coupled with Shapley values are superior to linear/logistic regression coefficients for understanding causal inference?

  • @AakarshNair · 2 years ago

    amazing

  • @ahp-6785 · 1 year ago

    At 9:18, why can't we just split the $120k in the ratio 30:70? If we do that, Player 1 gets $36k. What's wrong with that approach?
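
One way to see the difference, using assumed coalition values (the lecture's exact numbers may differ): suppose Player 1 alone earns $30k, Player 2 alone earns $70k, and together they earn $120k. Both a proportional 30:70 split and the Shapley value give each player at least their standalone amount; they differ in how the $20k of synergy is divided. The proportional split hands out the synergy in proportion to standalone earnings (so Player 1 ends up with $36k), while the Shapley value splits the synergy equally, since both players are needed to create it:

```python
# Hypothetical two-player coalition values in $k (not necessarily the lecture's numbers).
v = {(): 0, (1,): 30, (2,): 70, (1, 2): 120}

# Average each player's marginal contribution over the two possible orderings.
phi1 = 0.5 * (v[(1,)] - v[()]) + 0.5 * (v[(1, 2)] - v[(2,)])  # 0.5*30 + 0.5*50 = 40
phi2 = 0.5 * (v[(2,)] - v[()]) + 0.5 * (v[(1, 2)] - v[(1,)])  # 0.5*70 + 0.5*90 = 80

print(phi1, phi2)  # 40.0 80.0, versus 36 / 84 under the proportional split
```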

  • @sfcsarah · 1 year ago

    For 17:28: do the sequences refer to combinations or permutations, i.e., is x1, x2, x4 the same as x1, x4, x2? It seems like you are referring to combinations based on 16:43.
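
On the combinations-versus-permutations question above: the Shapley value is defined as an average over all |F|! orderings, but a feature's marginal contribution depends only on the set of features added before it, so the permutation average collapses to a sum over subsets with the |S|!(|F| - |S| - 1)!/|F|! weights. A quick numerical check of that equivalence with a hypothetical value function:

```python
from itertools import permutations, combinations
from math import factorial

players = (1, 2, 3, 4)

def v(S):
    """Hypothetical superadditive coalition value, not the lecture's model."""
    return sum(S) + 0.5 * len(S) ** 2

def shapley_by_permutations(i):
    total = 0.0
    for order in permutations(players):
        k = order.index(i)
        before = frozenset(order[:k])          # features added before i
        total += v(before | {i}) - v(before)
    return total / factorial(len(players))

def shapley_by_subsets(i):
    n = len(players)
    others = [p for p in players if p != i]
    total = 0.0
    for r in range(n):
        for S in combinations(others, r):
            w = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
    return total

print(shapley_by_permutations(1), shapley_by_subsets(1))  # agree up to rounding
```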

  • @SimonAJager · 3 years ago +1

    Brilliant video! Really helped me to grasp the concept. Many thanks! Isn't there a missing factorial in the weighted average expression?

  • @aozyhuang5847 · 9 months ago

    It's a clear explanation and very helpful. Much appreciated.

  • @manafalsheyyab9104 · 3 years ago

    Very clear. Thanks