Interpretable vs Explainable Machine Learning

  • Published Jan 24, 2025

Comments • 33

  • @adataodyssey
    @adataodyssey  11 months ago +1

    *NOTE*: You will now get the XAI course for free if you sign up (not the SHAP course)
    SHAP course: adataodyssey.com/courses/shap-with-python/
    XAI course: adataodyssey.com/courses/xai-with-python/
    Newsletter signup: mailchi.mp/40909011987b/signup

  • @AllRounder-vc1yl
    @AllRounder-vc1yl 1 year ago +1

    Great explanation. Thanks 👍

    • @adataodyssey
      @adataodyssey  1 year ago +1

      No problem! I'm glad you found it useful

  • @banytoshbot
    @banytoshbot 1 year ago +2

    Great video, thank you!
    Thoughts on SHAP vs Explainable boosting classifier?

    • @adataodyssey
      @adataodyssey  1 year ago

      I don't know much about EBC. Will look into it!
      The major benefit of SHAP and other model-agnostic methods is that they can be used with any model. This gives you flexibility in model choice, which can lead to higher accuracy (see the sketch below).
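
      A minimal sketch of that point, assuming the shap package and scikit-learn (the model and data here are illustrative): SHAP's KernelExplainer only needs a prediction function, so the model underneath can be swapped for any other.

      ```python
      # KernelExplainer is model-agnostic: it only needs a predict function,
      # so you can change the model without changing the explanation code.
      import shap
      from sklearn.datasets import make_regression
      from sklearn.ensemble import GradientBoostingRegressor

      X, y = make_regression(n_samples=200, n_features=5, random_state=0)
      model = GradientBoostingRegressor().fit(X, y)  # any fitted model works

      # A small background sample summarises the data for the explainer
      explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
      shap_values = explainer.shap_values(X[:5])  # explain five predictions
      print(shap_values.shape)  # (5, 5): one contribution per feature
      ```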

  • @jadonthomson8234
    @jadonthomson8234 1 year ago +3

    Great video 🙌

  • @rizzbod
    @rizzbod 1 year ago

    Great! Thank you

    • @adataodyssey
      @adataodyssey  1 year ago

      No problem! I’m glad you found the video useful :)

  • @clazo37
    @clazo37 7 months ago

    Thank you for your clear and structured explanations; I have learned a lot from your videos. One question: are you aware of causal inference? Are those techniques something you would be willing to tackle in your videos? Thank you, and keep up the excellent work.

    • @adataodyssey
      @adataodyssey  7 months ago

      Hi Cesar, thanks for the kind words! Causal analysis is something I am aware of. For now, I will probably not go into that topic on this channel. My focus is more on computer vision, so I will be explaining methods in that field.
      However, it is something I hope to learn more about in the future :)

  • @ahmadalis1517
    @ahmadalis1517 1 year ago +2

    Great video, but I prefer to stick with the white-box / black-box categories. I use interpretable and explainable interchangeably.

    • @adataodyssey
      @adataodyssey  1 year ago

      That's fair! It is less confusing terminology. Interpretable and explainable really mean the same thing to a layperson.

    • @RyanMcCoppin
      @RyanMcCoppin 1 year ago

      @@adataodyssey Sick burn.

    • @ojaspatil2094
      @ojaspatil2094 5 months ago

      @@adataodyssey goddam

  • @constantineketskalo5203
    @constantineketskalo5203 1 year ago

    Thanks.
    Just some thoughts from me:
    It seems to me that every model could be considered explainable, because nothing stops you from running your analysis tool on a simple algorithm that a human could understand on their own, without additional tools, just by looking at the tree. The question is whether it is mandatory or optional to use these additional tools to explain the AI's logic to us. So if there were a term like "mandatory to be explained", but shorter, then I'd rather go with that. If not, then let it be as it is.
    Also, I don't think you need a gray area there. It's rather a line, but not a clear one. Just like there is no clear definition of a junior/middle/senior software developer: the same person could be given different grades in this classification system at different companies. It's very subjective.

    • @adataodyssey
      @adataodyssey  1 year ago +1

      Some good points, Constantine! It goes to show that these definitions are still very debatable. Hopefully there are some models where everyone would agree on the definition.

  • @MarharytaMars
    @MarharytaMars 5 months ago

    Thank you!

    • @adataodyssey
      @adataodyssey  5 months ago

      @@MarharytaMars no problem!

  • @gemini_537
    @gemini_537 6 months ago

    Gemini 1.5 Pro: The video is about the difference between interpretable and explainable machine learning models.
    The video starts by acknowledging that the field of interpretable machine learning (IML) is new and there is no consensus on the definitions. There are two possible definitions for interpretable vs. explainable models discussed in the video.
    One definition is that an interpretable model is a model that is simple enough for a human to understand directly by looking at the model itself. For example, a decision tree is interpretable because you can follow the tree to see how it makes a decision. Another example is a linear regression model, where you can see the coefficients of the model to understand how each feature affects the prediction.
    On the other hand, an explainable model is a model that is too complex to understand directly. For these models, you need additional techniques to understand how they work. For example, a random forest is an ensemble of many decision trees, and it is too complex to understand how each tree contributes to the final prediction. Similarly, a neural network is a complex model with many layers and weights, and it is impossible to understand how it works by looking at the weights alone. To understand these models, you would need to use techniques like LIME or SHAP values.
    The video then argues that interpretability is on a spectrum, rather than a binary classification. There is a gray area where it might be difficult to decide whether a model is interpretable or explainable. For example, a random forest with a few trees might be interpretable, but a random forest with many trees might not be. Additionally, even a simple model can become difficult to understand if it has many features or parameters.
    Another issue with this definition is that it is subjective and depends on the person's understanding of the model. There is no formal way to measure interpretability, and what one person finds interpretable, another person might find difficult to understand.
    The video concludes by arguing that the goal of IML is to understand and explain models, rather than to classify them as interpretable or explainable. The best way to understand a model depends on the specific model and the questions you are trying to answer. There is no single best way to classify models, and the field is still under development.
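
    A minimal sketch of that distinction, assuming scikit-learn and the shap package (the data and models are illustrative): the linear model can be read directly from its coefficients, while the random forest needs a post-hoc technique such as SHAP.

    ```python
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    X, y = make_regression(n_samples=300, n_features=4, random_state=0)

    # Interpretable: the fitted coefficients show each feature's effect directly
    linear = LinearRegression().fit(X, y)
    print(linear.coef_)

    # Explainable: 100 trees are too complex to read directly, so SHAP's
    # TreeExplainer attributes each prediction to the features post hoc
    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    shap_values = shap.TreeExplainer(forest).shap_values(X[:5])
    print(shap_values.shape)  # (5, 4): per-feature contributions per prediction
    ```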

  • @ojaspatil2094
    @ojaspatil2094 5 months ago

    thank you!

  • @dance__break4155
    @dance__break4155 10 months ago

    whats the color of your eyes?

    • @adataodyssey
      @adataodyssey  10 months ago

      Blue :)

    • @Nowobserving
      @Nowobserving 6 months ago

      Hahahaha, that is also part of the parameters. That's why I asked lol 😂

  • @filoautomata
    @filoautomata 1 year ago

    This is where human logic fails, because from a young age we are trained in the Boolean logic paradigm, not the probabilistic / fuzzy logic paradigm. Things are either true or false, while in reality certain things can have a degree of truth or falsehood (see the sketch after this thread).

    • @adataodyssey
      @adataodyssey  1 year ago

      So true! So you're saying the definition is often a false dichotomy?
      We probably make the same mistake when providing explanations for model predictions: usually they are only one of many potential explanations for how the model works.
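
      A toy illustration of the degrees-of-truth point from the comment above (the 180 cm threshold and the linear ramp are hypothetical choices):

      ```python
      # Boolean logic forces a hard yes/no; fuzzy logic allows degrees of truth.
      def is_tall_boolean(height_cm: float) -> bool:
          return height_cm >= 180  # true or false, nothing in between

      def is_tall_fuzzy(height_cm: float) -> float:
          # Membership ramps linearly from 0 at 160 cm to 1 at 190 cm
          return min(1.0, max(0.0, (height_cm - 160) / 30))

      print(is_tall_boolean(179))  # False: one centimetre flips the answer
      print(is_tall_fuzzy(179))    # ~0.63: "fairly tall" has a degree
      ```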

  • @JosebaGonzálezdeAlaiza
    @JosebaGonzálezdeAlaiza 2 months ago

    The difference would be: explainable or unexplainable model 😅

  • @slimeminem7402
    @slimeminem7402 9 months ago

    Personally, I don't think the distinction is necessary.

    • @adataodyssey
      @adataodyssey  9 months ago +1

      I agree :) But I did think it was important when I first got into XAI.

  • @camerashysd7165
    @camerashysd7165 6 months ago +2

    You said nothing in 7 minutes, bro, and then asked what we think? 😂 Wow

    • @adataodyssey
      @adataodyssey  6 months ago

      Oh my first hater :)

    • @mikzinjakada8835
      @mikzinjakada8835 5 months ago

      My thoughts exactly 😭