Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for increasing the interpretability and explainability of black-box Machine Learning (ML) models. DLIME (Deterministic LIME) is a novel method that improves on the stability of LIME. This will be a hands-on code demo that dives deep into the experiment and its results.
This is a Python code walkthrough. It covers several interesting techniques, including generating LIME and Deterministic LIME (DLIME) explanations and measuring the stability of explanations using the Jaccard similarity coefficient.
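To make the stability measurement concrete, here is a minimal sketch of the Jaccard similarity coefficient, assuming each explanation is reduced to the set of feature names it reports (the feature names and the `jaccard` helper below are illustrative, not from the DLIME codebase):

```python
def jaccard(a, b):
    """Jaccard similarity coefficient: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty explanations are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical feature sets from two runs of an explainer on the same instance:
run1 = {"age", "bmi", "glucose"}
run2 = {"age", "bmi", "insulin"}
print(jaccard(run1, run2))  # 2 shared features / 4 total features = 0.5
```

A perfectly stable explainer would return the same feature set on every run, giving a Jaccard score of 1.0; scores below 1.0 indicate that repeated runs disagree on which features matter.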
DLIME has been reported to produce more stable explanations than those generated by LIME. We will demonstrate the coding techniques and run through the experiment.