[DLIME] Deterministic Local Interpretable Model-Agnostic Explanations - Let's dig into the code

Time: Thursday 9-Jul-2020 16:00 (This is a past event.)

Discussion Facilitator:

Artifacts

Motivation / Abstract
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for increasing the interpretability and explainability of black-box Machine Learning (ML) algorithms. DLIME (Deterministic LIME) is a novel method that improves on the stability of LIME (see the sketch below). This will be a hands-on demo of the code, giving a deep dive into the experiment and results.
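
For orientation before the walkthrough: the DLIME paper replaces LIME's random perturbation step with agglomerative hierarchical clustering plus a KNN lookup, so the surrogate model is always fit on the same neighborhood. Below is a minimal sketch of that idea using scikit-learn; the function name and parameters are illustrative, not the actual API of the DLIME code.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge

def dlime_style_explanation(X_train, black_box_predict, instance, n_clusters=2):
    """Sketch of DLIME's deterministic neighborhood idea: cluster the
    training data, assign the instance to a cluster via KNN, and fit a
    simple linear surrogate on that fixed neighborhood.
    `instance` is a 1-D NumPy array; `black_box_predict` returns a
    1-D array of scores (e.g. one class's predicted probabilities)."""
    # 1. Partition the training data with hierarchical clustering.
    cluster_labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)

    # 2. Use KNN to assign the instance to one of the clusters.
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, cluster_labels)
    cluster_id = knn.predict(instance.reshape(1, -1))[0]

    # 3. The neighborhood is the instance's cluster -- no random sampling,
    #    so repeated calls produce identical explanations.
    neighborhood = X_train[cluster_labels == cluster_id]

    # 4. Fit a linear surrogate to the black box's predictions on it.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(neighborhood, black_box_predict(neighborhood))
    return surrogate.coef_  # feature weights serve as the explanation
```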
Questions Discussed
This is a Python code walkthrough. It will cover several interesting techniques, including generating Local Interpretable Model-Agnostic Explanations (LIME) and Deterministic LIME (DLIME) explanations, and measuring the stability of explanations using the Jaccard similarity coefficient (sketched after this section).

DLIME has been reported to produce more stable explanations than those generated by LIME.
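
To make "stability" concrete: run the explainer several times on the same instance, take the set of top features from each run, and compare the sets pairwise with the Jaccard similarity coefficient, |A ∩ B| / |A ∪ B|. A minimal sketch follows; the `explain_fn` argument is a hypothetical stand-in for either explainer, returning the top feature names for an instance.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity coefficient: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def explanation_stability(explain_fn, instance, n_runs=10):
    """Average pairwise Jaccard similarity of the feature sets returned
    by repeated explanation runs on the same instance."""
    feature_sets = [explain_fn(instance) for _ in range(n_runs)]
    pairs = list(combinations(feature_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

A fully deterministic explainer returns the same feature set on every run, so its score is exactly 1.0; LIME's random sampling generally yields something lower, which is the instability DLIME targets.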
Key Takeaways
Demonstration of the coding techniques and a run-through of the experiment.
Stream Categories:
 ML Interpretability