Andrey will review approaches and tools for explaining ML models, along with a retail use case.
Some of these explainability methods, like LIME, rely on random sampling and suffer from instability, giving different explanations for the same prediction. Have you encountered this issue in your practice, and if so, how did you address it?
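To make the instability concrete, here is a minimal pure-Python sketch of a LIME-style local surrogate. It is not the actual LIME library: it explains a hypothetical 1-D black box by fitting a proximity-weighted linear model to random perturbations around the point of interest. Because the perturbations are sampled randomly, two runs on the identical input produce two different local slopes, i.e. two different "explanations".

```python
import math
import random

def blackbox(x):
    # hypothetical non-linear "black box" model we want to explain locally
    return math.sin(3 * x) + 0.5 * x

def lime_style_slope(x0, n_samples=50, width=0.5, seed=None):
    """Fit a proximity-weighted linear surrogate around x0 (1-D sketch).

    Returns the local slope (the 'explanation'). Repeated runs with
    different seeds give different slopes -- the instability in question.
    """
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]
    ys = [blackbox(x) for x in xs]
    # proximity kernel: perturbations nearer to x0 count more
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    # closed-form weighted least squares for the slope of y ~ a + b*x
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return num / den

# Two runs on the same point yield two different "explanations":
s1 = lime_style_slope(1.0, seed=1)
s2 = lime_style_slope(1.0, seed=2)
print(s1, s2)  # different slopes for the identical input
```

Common mitigations are along the same lines as in the sketch: increase the number of perturbation samples, fix the random seed for reproducibility, or average explanations over many runs.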
You showed us a number of tools. Do you typically run all of them, or just a few? Which top three tools have you found most useful?
If you run multiple explainer tools and get different or contradictory explanations, what is your typical next step?
Have you had cases in which an explanation prompted you to go back and make changes to the model? If so, is there an example you could share?
• Machine learning algorithms are increasingly being used in high-stakes decisions
• These algorithms can be black boxes, giving us prediction and decision recommendations without explaining why
• To mitigate the risk of these black-box algorithms, a number of explainability techniques have been developed
• Andrey shared with us a great collection of resources showing explainability methods applied in a practical setting
• He also helped us gain an appreciation of some of the challenges of applying these methods in the field