You are such an intuitive explainer supported by mathematical explanations as well. Truly a gem of a teacher! Thank you!
Thanks a lot for this lucid presentation on counterfactual explanations and the DiCE python toolbox. Kudos to the presenter.
I'm happy that you liked it!
One of the grey areas in AI/ML is well explained. Great work. Thank you so much.
Thank you :)
You are amazing, great intuitive explanations. Thanks a lot for your effort and time!
I enjoyed your lecture, it was very nice. How can this be applied to images like X-ray or CT scans?
Excellent presentation. I have a question: can DiCE be applied to a model that has 3 classes instead of 2?
That's a very good explanation. Good stuff. Thank you.
Thank you for such a nice, simple explanation.
Thanks for the great series of videos! Quick question, do you know of any studies where they have actually tested these counterfactuals on real data to see whether changing those features would indeed change the output? In other words, have they generated the counterfactuals and then actually been able to look at real life data to see if those counterfactuals actually do change the output. Thanks again. Your videos are great!
Hi, thanks for your feedback :)
Yes, that would be interesting to see. Generally, all counterfactual statements (and how valid they are) depend on two things:
1. How good is the model on which the Counterfactuals are calculated
2. How similar is the train/test data to the real world data
If the model has very good performance and the distribution of the real-world data is the same as that of the data the model was trained on, I would say there is no doubt that real-world evidence can be found for the counterfactuals.
Many of the datasets used in publications on counterfactuals are based on real-world data, for instance:
- Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification (Zephyr dataset)
- On Counterfactual Explanations under Predictive Multiplicity (HELOC dataset)
- Model-Agnostic CFs for consequential decisions (look for coverage)
- Counterfactual Explanations for Machine Learning - A review (they also talk about causality here)
Hope this is what you were looking for :)
@DeepFindr thanks for the response. That helps. I will check out those papers. Cheers.
Thanks for the videos and good explanations. I just started my Master's thesis, and the process of reading through all these papers is quite tedious for me. I just can't concentrate on papers as well as on videos.
And by the way, your English is quite good ;) When I started watching your videos I wasn't even sure that you're German :)
Where do you / did you study? :)
Subbed
Thanks! I studied at the KIT. :)
@DeepFindr haha so cool, I study there too :)
Awesome! I really liked it there. Good luck with your studies!
Great job!
Is it okay to not scale the numerical data? Can we just proceed with the analysis as is?
Brilliant!!
I think you have misunderstood the generated CF. It was told to change BMI by 0.9, i.e. make it 30.9 instead of 30, to change stroke from 0 (no stroke) to 1 (stroke). This would also be a more feasible CF. Please let me know if I am wrong.
Hi, no, the value is actually 0.9. That's why I put in the "permitted range" afterwards, to guarantee more feasible values.
The CFs returned are always new data points, so not just the changes :)
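To illustrate this reply: a counterfactual is a complete new data point, so feasibility checks apply to its absolute feature values, not to deltas. Here is a toy sketch (the feature names follow the stroke example above, but the values and the `is_feasible` helper are made up for illustration; DiCE's actual mechanism is its `permitted_range` argument):

```python
# A counterfactual is a full new data point. Here bmi=0.9 is the new
# absolute value, not a change of 0.9 applied to the original.
original = {"age": 67, "bmi": 30.0, "avg_glucose_level": 228.69}
counterfactual = {"age": 67, "bmi": 0.9, "avg_glucose_level": 228.69}

# A permitted range (conceptually what DiCE's permitted_range does)
# rules out such implausible absolute values.
permitted_range = {"bmi": (15.0, 45.0)}

def is_feasible(cf, ranges):
    """Return True if every constrained feature lies inside its range."""
    return all(lo <= cf[f] <= hi for f, (lo, hi) in ranges.items())

print(is_feasible(counterfactual, permitted_range))  # bmi=0.9 is out of range
```

A CF with bmi=30.9 would pass the same check, which is exactly why constraining the range yields more feasible counterfactuals.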
Very nice presentation. Quick question: how do you get the options to run a cell in a .py file? It's not a notebook, right?
Hi! It's VS Code Cell magic :)
You simply put a #%% in the file to create cells. Here you can find more information: code.visualstudio.com/docs/python/jupyter-support-py
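For reference, a minimal .py file using these cell markers could look like this (the contents are just a made-up example):

```python
# %%  <- each "# %%" comment starts a runnable cell in VS Code's Python extension
import math
radius = 2.0

# %%  <- a second cell; it can be run on its own with "Run Cell"
area = math.pi * radius ** 2
print(f"{area:.2f}")
```

VS Code shows "Run Cell" / "Run Below" links above each marker and sends the cell to the Interactive window, so the file behaves like a notebook while staying a plain script.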
Thank you for this fantastic series of XAI videos! I was wondering if there is the possibility of adding a minimum probability for the counterfactual decision class. So let's say we have a person with 80% stroke probability and we want to provide a counterfactual not only for this person to be classified as no_stroke (which could be at 51%, so with a lot of uncertainty) but with a 90% probability of no_stroke. Is that possible?
Hi! Thanks, I'm happy that you liked it!
When it comes to counterfactuals you can be very creative, so yes, that is possible. However, I don't think this is supported out of the box by any of the libraries during CF generation.
Some time ago I built a simple genetic algorithm that creates counterfactuals (similar to the CertifAI paper) - there I could include all the constraints I wanted to add. I also used the probabilities as confidence scores for the generated counterfactuals.
In my experiments I also realized that sometimes no counterfactual above a target probability can be found and the maximum probability is, for instance, only 60%.
However, you can always ask your model how certain it is about a counterfactual. That means you could generate a couple of CFs and then simply discard the ones that fall below your threshold.
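The generate-then-filter idea from this reply can be sketched like so (the `predict_proba` stand-in and the candidate values are made up; in practice you would score each candidate with your trained model):

```python
# Toy stand-in for a trained model's probability of the desired class
# (here: pretend lower BMI means higher no-stroke probability).
def predict_proba(cf):
    return max(0.0, min(1.0, 1.0 - cf["bmi"] / 60.0))

# Candidate counterfactuals, e.g. from DiCE with total_CFs > 1
candidates = [{"bmi": 31.0}, {"bmi": 22.0}, {"bmi": 5.0}]
threshold = 0.9

# Keep only counterfactuals the model is sufficiently confident about
confident_cfs = [cf for cf in candidates if predict_proba(cf) >= threshold]
print(confident_cfs)
```

If the list comes back empty, that matches the situation described above where no counterfactual reaches the target probability.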
Best regards!
Amazing, thanks for your clear explanations. Are there any tools for calculating counterfactuals for neural networks?
Hi! Yes, you can try out the CEML or DiCE Python libraries :)
github.com/interpretml/DiCE