4 Significant Limitations of SHAP

  • Published Apr 9, 2023
  • SHAP is the most powerful Python package for understanding and debugging your machine learning models. Yet it still has its limitations, and understanding these is critical to avoid incorrect conclusions when using the package. We explore the 4 most significant limitations of SHAP: issues with the package, feature dependencies, causal inference and human error. (A minimal usage sketch follows the links below.)
    *NOTE*: You will now get the XAI course for free if you sign up (not the SHAP course)
    SHAP course: adataodyssey.com/courses/shap-with-python/
    XAI course: adataodyssey.com/courses/xai-with-python/
    Newsletter signup: mailchi.mp/40909011987b/signup
    Read the companion article (no-paywall link): towardsdatascience.com/the-li...
    Medium: conorosullyds
    Twitter: conorosullyds
    Mastodon: sigmoid.social/@conorosully
    Website: adataodyssey.com/
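
To make the subject of the video concrete, here is a minimal sketch of typical SHAP usage. The dataset and model choices are illustrative assumptions, not the video's own code:

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Illustrative setup (not from the video): fit a tree-based model,
# then attribute its predictions to the input features with SHAP.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer is the fast, exact explainer for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Each row attributes one prediction to the features; the summary plot
# shows which features matter most across the sample.
shap.summary_plot(shap_values, X.iloc[:100])
```

The limitations discussed in the video apply on top of this workflow: the SHAP values describe the model's behaviour, not causal effects in the data.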

Comments • 26

  • @adataodyssey
    @adataodyssey  4 months ago

    *NOTE*: You will now get the XAI course for free if you sign up (not the SHAP course)
    SHAP course: adataodyssey.com/courses/shap-with-python/
    XAI course: adataodyssey.com/courses/xai-with-python/
    Newsletter signup: mailchi.mp/40909011987b/signup

  • @saremish
    @saremish 7 months ago +2

    I really enjoyed such a deep discussion about the clear distinction between correlation and causation!

    • @adataodyssey
      @adataodyssey  7 months ago +1

      Thanks Sarem! A very important concept when it comes to XAI. I am definitely guilty of jumping to causality conclusions without enough evidence.

  • @Hoxle-87
    @Hoxle-87 11 months ago +6

    Great video series. Don’t stop making them. Maybe take another app/tool/methodology and break it into parts like you did with SHAP. Very digestible.

    • @adataodyssey
      @adataodyssey  11 months ago

      Thanks! Planning some more videos soon

  • @shrishchandrapandey801
    @shrishchandrapandey801 7 months ago

    Amazing work, Conor! Keep them coming. These 6 mins have helped clarify so many topics!

    • @adataodyssey
      @adataodyssey  7 months ago

      Great to hear! I’m glad I could help.

  • @yijunfu6808
    @yijunfu6808 11 months ago

    Best YouTuber explaining SHAP I have found!

    • @adataodyssey
      @adataodyssey  11 months ago

      Thank you! I am here to help :)

  • @azizjedidi1180
    @azizjedidi1180 1 year ago

    Great video man. Thank you very much.

    • @adataodyssey
      @adataodyssey  1 year ago +1

      I’m glad you enjoyed it Aziz!

  • @jenilsaliya3769
    @jenilsaliya3769 1 year ago +1

    Good explanation on the topic, thank you sir.

  • @cesarepiscopo2549
    @cesarepiscopo2549 5 months ago

    AMAZING WORK!

    • @adataodyssey
      @adataodyssey  5 months ago

      I really appreciate that!

  • @AZ-ph7gg
    @AZ-ph7gg 11 months ago

    Great explanation!

    • @adataodyssey
      @adataodyssey  11 months ago

      Thank you :)

  • @NA-ug5eq
    @NA-ug5eq 8 days ago

    Amazing video. Thank you so much.
    I have one question, please: when explaining KernelSHAP, what do you mean by permuting values? What do the grey circles in the graph at 2:28 mean? Does permuting refer to changing the feature order (this is not clear in the graph at 2:28), or does it refer to replacing some feature values with random values?
    Thanks in advance for your response.

    • @adataodyssey
      @adataodyssey  3 days ago

      Take a look at the theory videos in this playlist. They should help :)
      th-cam.com/video/MQ6fFDwjuco/w-d-xo.html&pp=gAQBiAQB
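
For readers with the same question: in KernelSHAP, "permuting" does not mean reordering the features. It means forming coalitions of present and absent features, and filling the absent ones (the grey circles) with values drawn from a background dataset. A minimal sketch, assuming a toy linear model and random background data:

```python
import numpy as np

# A toy sketch of KernelSHAP-style "permuting" (hypothetical model and
# data, for illustration only).
rng = np.random.default_rng(0)

def model(X):
    # Hypothetical model: a simple linear function of three features.
    return X @ np.array([2.0, -1.0, 0.5])

background = rng.normal(size=(100, 3))  # background/reference dataset
x = np.array([1.0, 2.0, 3.0])           # the instance being explained

# One "coalition": features 0 and 2 are present, feature 1 is absent.
present = np.array([True, False, True])

# "Permuting" = keeping present features at the instance's values while
# the absent feature keeps its background values -- not reordering.
perturbed = background.copy()
perturbed[:, present] = x[present]

# The coalition's value is the average prediction over these samples.
print(model(perturbed).mean())
```

KernelSHAP evaluates many such coalitions and fits a weighted linear model to their values to recover the SHAP values.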

  • @sasaglamocak2846
    @sasaglamocak2846 1 year ago

    Great video. You mentioned that KernelSHAP suffers from extrapolation if features are correlated, like other permutation-based methods. What about TreeSHAP with e.g. XGBoost?

    • @adataodyssey
      @adataodyssey  1 year ago +3

      Hi Sasa, this is a great question. To be honest, I don't completely understand the TreeSHAP algorithm.
      Looking into some other literature, it seems that TreeSHAP is not affected by correlations in the same way as KernelSHAP: "KernelSHAP ignores feature dependence. ... TreeSHAP solves this problem by explicitly modeling the conditional expected prediction." The authors go on to say: "While TreeSHAP solves the problem of extrapolating to unlikely data points, it does so by changing the value function and therefore slightly changes the game. TreeSHAP changes the value function by relying on the conditional expected prediction. With the change in the value function, features that have no influence on the prediction can get a TreeSHAP value different from zero." You can read more here: christophm.github.io/interpretable-ml-book/shap.html
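
To see the difference in practice, here is a minimal sketch comparing the two explainers on deliberately correlated features. The data and model are toy assumptions, not from the video:

```python
import numpy as np
import shap
import xgboost

# Toy data with strongly correlated features (illustrative only).
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + 0.1 * rng.normal(size=500)   # x2 is nearly a copy of x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=500)

model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

# TreeSHAP (path-dependent): uses the tree structure and cover counts,
# so it never has to evaluate the model on filled-in samples.
tree_values = shap.TreeExplainer(model).shap_values(X[:5])

# KernelSHAP: model-agnostic; fills absent features from a background
# sample, which here creates unlikely (x1, x2) pairs -- the
# extrapolation problem mentioned above.
kernel_explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
kernel_values = kernel_explainer.shap_values(X[:5])

print(tree_values)
print(kernel_values)
```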

    • @sasaglamocak2846
      @sasaglamocak2846 1 year ago +1

      @@adataodyssey great, thanks for the answer

  • @tschess7
    @tschess7 5 days ago

    I am confused. You said that machine learning only cares about correlations, not association, but shouldn't it be "only cares about correlations, not causation"?

    • @adataodyssey
      @adataodyssey  3 days ago +1

      Yes, "causation" is correct. Thanks for pointing out the mistake

  • @mdabubakarchowdhurysunny2846
    @mdabubakarchowdhurysunny2846 3 days ago

    Can you show some code for LIME?

    • @adataodyssey
      @adataodyssey  3 days ago +1

      Keep an eye out for the next video on Monday ;)
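
Until that video is out, here is a minimal LIME sketch on toy tabular data showing the basic API. The data, model and feature names are hypothetical, not the channel's code:

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

# Toy data and model (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500)
model = RandomForestRegressor(n_estimators=50).fit(X, y)

# LIME fits a local linear surrogate around one instance by sampling
# perturbed points and weighting them by proximity.
explainer = LimeTabularExplainer(
    X, feature_names=["f0", "f1", "f2"], mode="regression"
)
explanation = explainer.explain_instance(X[0], model.predict, num_features=3)
print(explanation.as_list())  # local feature weights
```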