The notion of interpretable models is explained well. Thanks for demonstrating InterpretML as well as for introducing EBMs.
Thanks for the feedback!
these series are genius, hope you land as professor one day
Thanks so much for this series, it has been very informative and I like all of the examples and code you go through. I've now subscribed :)
Happy that you liked it! Thanks for the sub!
Great lectures providing intuition for my research methodology. Thank you so much!
🎯 Key Takeaways for quick navigation:
00:00 🎥 Introduction to Explainable AI (XAI) and its significance in understanding machine learning models.
01:12 📈 Growing interest in Explainable AI due to the need for transparency, validation, and user experience.
02:09 ⚖️ Trade-off between complex non-linear models and interpretable models; the challenge of understanding neural networks.
03:33 🧠 Model-agnostic vs. model-specific methods; distinction between global and local explanations.
04:56 📊 Different types of explanations: feature importance, correlation plots, data points, surrogates.
05:37 📚 Upcoming topics in the series: interpretable machine learning models, LIME, CHAP, counterfactual explanations, and layer-wise relevance propagation.
06:46 📺 Independence of video topics, introduction to a practical example, and mention of causal reasoning in the context of counterfactuals.
Made with HARPA AI
Thanks for your video. Can we assume that the final prediction 0 or 1 is the sum of the values of the orange and blue bars? Also, what does it mean if the values of the negative bars are equal to those of the blue bars? My guess is that the model is confused for that instance and its prediction is random. Does that sound correct? Thanks
Thanks for this very useful series on explainable AI. Is there a way to extract the scores as an array?
In the logistic regression example, the dummy variable trap must be avoided. One would have to take k-1, not k dummies per category (with k categorical values) into the model. Or is this handled automatically in interpret.glassbox?
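On the manual side of that question: in pandas the k-1 encoding is exactly the `drop_first` flag. Whether interpret.glassbox drops a level internally I can't say for sure, but the encoding itself looks like this:

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# k dummies per category: the columns sum to 1, so with an intercept
# they are linearly dependent (the dummy variable trap)
full = pd.get_dummies(df["color"])

# k-1 dummies: drop one level so the intercept absorbs the baseline category
reduced = pd.get_dummies(df["color"], drop_first=True)

print(list(full.columns))     # ['blue', 'green', 'red']
print(list(reduced.columns))  # ['green', 'red']  (baseline 'blue' dropped)
```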
If we increase xJ by one unit, f(y) will be multiplied by exp(betaJ), if I'm not wrong.
My impression from the video was that it would be added; then I consulted the book, did the math, and saw the multiplication.
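A quick numeric check of that multiplicative effect (illustrative coefficients, standard library only): in logistic regression the odds p/(1-p) equal exp(beta0 + beta1*x), so a one-unit increase in x multiplies the odds by exp(beta1); the probability itself does not shift additively.

```python
import math

def odds(x, beta0=-1.0, beta1=0.8):
    """Odds p/(1-p) for a one-feature logistic model (illustrative coefficients)."""
    p = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))
    return p / (1.0 - p)

# Increasing x from 2 to 3 multiplies the odds by exp(beta1)
ratio = odds(3.0) / odds(2.0)
print(ratio, math.exp(0.8))  # both ~2.2255
```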
Can we use EBM for multi-class classification?
The logistic regression function is 1/(1 + exp(-y)), not 1/(1 + exp(+y)), and the impact df(y)/dx is not simply exp(beta).
Thanks a lot❤
What is the mean absolute score in overall feature importance?
The sigmoid function is wrong. It should be 1/(1 + e^(-y)). Maybe the "-" got lost in this video.
Yes you are right! It got lost :) thanks for pointing out!
Hello, I was wondering about the overall mean feature importance. Could I get the same thing by using a feature correlation matrix to figure out the relation between the results and the inputs? I found both to be equal; please correct me if I'm wrong.
Hi :) Yes, that's true in some cases. But classical correlations only capture linear relationships; what if there is a non-linear relationship between an input and the output?
But you are right: correlation can also be seen as a simple sort of explainability.
Also, you have to distinguish between what is in the data (we use correlation for that) and what our model learns (XAI). So explainable AI is used more for model evaluation.
And finally, if you have more complex inputs such as images, text, or graphs, calculating correlations doesn't necessarily make sense (for example, each pixel value with the output).
I hope that answers the question :)
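To make the non-linear point above concrete, here is a small counterexample: y is fully determined by x, yet the Pearson correlation between them is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
y = x ** 2                       # fully determined by x, but not linearly

corr = np.corrcoef(x, y)[0, 1]
print(round(corr, 3))            # close to 0 despite the perfect dependence
```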
Amazing what you highlight!
what a magic '# %% cell' thank you!
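For anyone puzzled by the comment above: a line starting with `# %%` marks a cell boundary that editors such as VS Code, Spyder, and PyCharm recognize in a plain .py file, giving notebook-style cell-by-cell execution without a .ipynb. A tiny illustrative file:

```python
# %% Load some data (this line starts an interactive cell)
import numpy as np

data = np.arange(10)

# %% Inspect it (each '# %%' line starts a new cell; run cells individually in the editor)
print(data.mean())  # 4.5
```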
Thanks for the great videos. Please add a requirements.txt or a README file to the repository; I need the Python and package versions.
Can we access the code shown in the videos?
Yes :) I've uploaded it on GitHub
How is logistic regression machine learning? Isn't it just old-school statistical fitting? 😁
Well a neural network is also just a bunch of logistic regressions ;-) but yeah you're right
1:00
7:00
Job with children, lol.