Explainable AI explained! | #2 By-design interpretable models with Microsoft's InterpretML

  • Published on 10 Nov 2024

Comments • 30

  • @nintishia
    @nintishia 3 years ago +8

    The notion of interpretable models is explained well. Thanks for demonstrating InterpretML as well as for introducing EBMs.

    • @DeepFindr
      @DeepFindr  3 years ago

      Thanks for the feedback!

  • @leksandradubovik6291
    @leksandradubovik6291 1 year ago +2

    This series is genius; I hope you become a professor one day.

  • @jeffnc
    @jeffnc 2 years ago +1

    Thanks so much for this series, it has been very informative and I like all of the examples and code you go through. I've now subscribed :)

    • @DeepFindr
      @DeepFindr  2 years ago

      Happy that you liked it! Thanks for the sub!

  • @qiguosun129
    @qiguosun129 2 years ago +7

    Great lectures providing intuition for my research methodology. Thank you so much.

  • @nimsarafernando8084
    @nimsarafernando8084 1 year ago

    🎯 Key Takeaways for quick navigation:
    00:00 🎥 Introduction to Explainable AI (XAI) and its significance in understanding machine learning models.
    01:12 📈 Growing interest in Explainable AI due to the need for transparency, validation, and user experience.
    02:09 ⚖️ Trade-off between complex non-linear models and interpretable models; the challenge of understanding neural networks.
    03:33 🧠 Model-agnostic vs. model-specific methods; distinction between global and local explanations.
    04:56 📊 Different types of explanations: feature importance, correlation plots, data points, surrogates.
    05:37 📚 Upcoming topics in the series: interpretable machine learning models, LIME, SHAP, counterfactual explanations, and layer-wise relevance propagation.
    06:46 📺 Independence of video topics, introduction to a practical example, and mention of causal reasoning in the context of counterfactuals.
    Made with HARPA AI

  • @allaalzoy2010a
    @allaalzoy2010a 2 years ago +1

    Thanks for your video. Can we assume that the final prediction (0 or 1) is the sum of the values of the orange and blue bars? Also, what does it mean if the values of the negative bars are equal to those of the positive bars? My guess is that the model is confused for that instance and its prediction is random. Does that sound correct? Thanks
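
    A note on this question: EBMs are additive models, so the bars are signed per-feature contributions that sum (together with an intercept) to a logit, which a sigmoid then turns into a probability; the prediction is not a plain sum of 0s and 1s. A minimal numpy sketch with made-up bar values, showing how roughly cancelling bars leave the model near 50/50 (uncertain rather than random):

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical contributions read off a local-explanation plot
    # (positive = orange bars, negative = blue bars); values are made up.
    intercept = -0.2
    contributions = np.array([0.8, 0.5, -0.6, -0.7])  # bars roughly cancel

    logit = intercept + contributions.sum()  # additive score on the log-odds scale
    probability = sigmoid(logit)             # final class-1 probability

    print(round(logit, 2))        # -0.2
    print(round(probability, 2))  # close to 0.5 -> the model is genuinely uncertain
    ```

    When the positive and negative bars cancel, the logit falls back to the intercept, so the predicted probability sits near the base rate rather than being random.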

  • @mfaramarzi8333
    @mfaramarzi8333 1 year ago

    Thanks for this very useful series on explainable AI. Is there a way to extract the scores as an array?

  • @felixkraus1219
    @felixkraus1219 2 years ago

    In the logistic regression example, the dummy variable trap must be avoided. One would have to put k-1, not k, dummies per categorical feature (with k category values) into the model. Or is this handled automatically in interpret.glassbox?

  • @sRahsa
    @sRahsa 1 year ago

    If we increase x_j by one unit, f(y) will be multiplied by exp(beta_j), if I'm not wrong.
    My impression from the video was that it would be added; then I consulted the book, did the math, and saw the multiplication.
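
    The multiplicative effect is right, with one refinement: it is the odds p/(1-p) = exp(beta_0 + beta_1 x), not the probability itself, that get multiplied by exp(beta_1) when x increases by one unit. A quick numeric check with made-up coefficients:

    ```python
    import math

    beta0, beta1 = -1.0, 0.7  # made-up logistic regression coefficients

    def prob(x):
        # Predicted probability from the logistic model
        return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

    def odds(x):
        p = prob(x)
        return p / (1.0 - p)

    ratio = odds(3.0) / odds(2.0)  # effect of a one-unit increase in x
    print(round(ratio, 4))          # equals exp(beta1)
    print(round(math.exp(beta1), 4))
    ```

    The ratio matches exp(beta_1) exactly because the odds are exp(beta_0 + beta_1 x), so adding 1 to x multiplies them by exp(beta_1).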

  • @zeno.x-edit02
    @zeno.x-edit02 1 year ago

    Can we use EBM for multi-class classification?

  • @heejuneAhn
    @heejuneAhn 1 year ago

    The logistic regression function is 1/(1 + exp(-y)), not 1/(1 + exp(+y)), and the impact df(y)/dx is not simply exp(beta).
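
    Both points check out numerically: the sigmoid needs the minus sign, and its derivative is f(y)(1 - f(y)) (so df/dx_j = f(1-f) beta_j by the chain rule), not exp(beta). A short sketch comparing the analytic derivative against a finite difference:

    ```python
    import math

    def sigmoid(y):
        return 1.0 / (1.0 + math.exp(-y))  # note the minus sign

    # Derivative of the sigmoid is f(y) * (1 - f(y)); check it against a
    # central finite difference at an arbitrary point y = 0.3.
    y, h = 0.3, 1e-6
    numeric = (sigmoid(y + h) - sigmoid(y - h)) / (2 * h)
    analytic = sigmoid(y) * (1 - sigmoid(y))

    print(round(sigmoid(0.0), 2))          # 0.5
    print(abs(numeric - analytic) < 1e-8)  # True
    ```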

  • @dinushachathuranga7657
    @dinushachathuranga7657 8 months ago

    Thanks a lot❤

  • @beautyisinmind2163
    @beautyisinmind2163 1 year ago

    What is the mean absolute score in the overall feature importance?
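
    As far as I understand InterpretML's global plots, a feature's overall importance is the mean of the absolute values of its per-sample contribution scores; the absolute value keeps positive and negative contributions from cancelling out. A sketch with hypothetical scores for one feature:

    ```python
    import numpy as np

    # Hypothetical local contribution scores of one feature across 6 samples
    # (the signed values behind the per-sample bars; made up for illustration).
    scores = np.array([0.4, -0.9, 0.1, -0.3, 0.7, -0.2])

    mean_abs = np.abs(scores).mean()  # importance: average magnitude, sign ignored
    plain_mean = scores.mean()        # would let + and - cancel each other

    print(round(mean_abs, 3))    # 0.433
    print(round(plain_mean, 3))  # -0.033
    ```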

  • @雷奇-w7b
    @雷奇-w7b 2 years ago +1

    The sigmoid function is wrong. It should be 1/(1 + e^(-y)). Maybe the "-" was lost in this video.

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Yes, you are right! It got lost :) Thanks for pointing it out!

  • @البداية-ذ1ذ
    @البداية-ذ1ذ 3 years ago

    Hello, I was wondering about the overall mean feature importance: couldn't I get the same thing by using a feature correlation matrix to figure out the relation between the output and the inputs? I found both to be equal; please correct me if I am wrong.

    • @DeepFindr
      @DeepFindr  3 years ago +1

      Hi :) Yes, that's true in some cases. But classical correlations only capture linear relationships; what if there is a non-linear relationship between an input and the output?
      Still, you are right: correlation can also be seen as a simple form of explainability.
      You also have to distinguish between what is in the data (we use correlation for that) and what the model learns (XAI). So we use explainable AI primarily for model evaluation.
      And finally, for more complex inputs such as images, text, or graphs, calculating correlations doesn't necessarily make sense (for example, each pixel value with the output).
      I hope that answers the question :)
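
      The linear-vs-non-linear point can be checked numerically: for y = x^2 with x symmetric around zero, the Pearson correlation is near zero even though y is fully determined by x. A short sketch:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(-1, 1, 10_000)
      y = x ** 2  # perfectly determined by x, but not linearly

      r = np.corrcoef(x, y)[0, 1]  # Pearson correlation only sees linear structure
      print(abs(r) < 0.05)         # True: correlation is near zero anyway
      ```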

    • @البداية-ذ1ذ
      @البداية-ذ1ذ 3 years ago

      It's amazing what you highlight, thanks.

  • @НикитаБуров-ъ6р
    @НикитаБуров-ъ6р 7 months ago

    What a magic trick that '# %%' cell marker is, thank you!
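
    For readers who missed it: '# %%' is a cell marker recognized by editors such as VS Code's Python extension and Spyder; it splits a plain .py script into individually runnable cells, giving a notebook-like workflow without an .ipynb file. A minimal sketch:

    ```python
    # %% Load data  (cell 1: "# %%" starts a runnable cell in VS Code / Spyder)
    values = [1, 2, 3, 4]

    # %% Compute  (cell 2: can be re-run on its own without rerunning cell 1)
    total = sum(values)
    print(total)  # 10
    ```

    To plain Python these markers are ordinary comments, so the file still runs top to bottom as a normal script.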

  • @ehsannamjoo8837
    @ehsannamjoo8837 10 months ago

    Thanks for the great videos. Please add a requirements.txt or a README file to the repository; I need the Python and package versions.

  • @marya_haji
    @marya_haji 3 years ago

    Can we access the code shown in the videos?

    • @DeepFindr
      @DeepFindr  3 years ago +3

      Yes :) I've uploaded it to GitHub.

  • @danli1702
    @danli1702 1 year ago

    How is logistic regression machine learning? Isn't it just old-school statistical fitting? 😁

    • @DeepFindr
      @DeepFindr  1 year ago

      Well, a neural network is also just a bunch of logistic regressions ;-) But yeah, you're right.

  • @Aften_ved
    @Aften_ved 3 months ago

    1:00

    • @Aften_ved
      @Aften_ved 3 months ago

      7:00

  • @wexwexexort
    @wexwexexort 7 months ago +1

    Job with children, lol.