[DLIME] A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems

Time: Thursday 28-May-2020 16:00 (This is a past event.)

Discussion Facilitator:

Artifacts

Motivation / Abstract
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for increasing the interpretability and explainability of black-box Machine Learning (ML) algorithms. LIME typically explains a single prediction of an ML model by learning a simpler interpretable model (e.g., a linear classifier) around that prediction: it generates simulated data around the instance by random perturbation and obtains feature importances by applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation and feature selection methods cause "instability" in the generated explanations: for the same prediction, different explanations can be generated. This is a critical issue that can prevent deployment of LIME in a Computer-Aided Diagnosis (CAD) system, where stability is of utmost importance to earn the trust of medical professionals. In this paper, we propose a deterministic version of LIME. Instead of random perturbation, we utilize agglomerative Hierarchical Clustering (HC) to group the training data and K-Nearest Neighbour (KNN) to select the cluster relevant to the new instance being explained. After finding the relevant cluster, a linear model is trained over the selected cluster to generate the explanations. Experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability of DLIME compared to LIME using the Jaccard similarity among multiple generated explanations.
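The abstract's three-step procedure (hierarchical clustering, KNN cluster selection, local linear model) can be sketched roughly as below. This is a minimal illustration with scikit-learn, not the authors' implementation; the function name `dlime_explain`, the choice of `n_clusters=2`, and the use of `LinearRegression` as the interpretable model are assumptions for demonstration.

```python
# Hedged sketch of the DLIME procedure described in the abstract.
# Assumptions (not from the paper's code): n_clusters=2, 1-NN cluster
# selection, and a least-squares linear model as the surrogate.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

def dlime_explain(X_train, blackbox_predict, x, n_clusters=2):
    # 1) Group the training data with agglomerative hierarchical clustering.
    hc = AgglomerativeClustering(n_clusters=n_clusters)
    cluster_labels = hc.fit_predict(X_train)

    # 2) Use KNN to pick the cluster relevant to the instance x.
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, cluster_labels)
    cluster_id = knn.predict(x.reshape(1, -1))[0]
    X_cluster = X_train[cluster_labels == cluster_id]

    # 3) Fit an interpretable linear model on the selected cluster,
    #    targeting the black-box model's predictions.
    y_cluster = blackbox_predict(X_cluster)
    lin = LinearRegression().fit(X_cluster, y_cluster)
    # The coefficients serve as the (deterministic) explanation.
    return lin.coef_
```

Because every step is deterministic (no random perturbation), repeated calls on the same instance return identical coefficients, which is the stability property the paper targets.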
Questions Discussed
1) Does the paper make a claim on improving explanation fidelity?
2) Can the DLIME approach be applied to non-tabular datasets?
3) Can the DLIME approach be applied to non-medical datasets?
Key Takeaways
The author presented a better understanding of the vulnerability of LIME (its lack of stability), along with a clear definition and measure of explanation stability in terms of input features

DLIME is a novel method that improves on the stability of LIME for tabular data using hierarchical clustering
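The stability comparison mentioned in the abstract can be illustrated with a small helper: compute the Jaccard similarity between the sets of features selected across repeated explanation runs, and average over all pairs. This is an illustrative sketch of the metric, not the authors' evaluation code; the helper names are assumptions.

```python
# Hedged sketch: measuring explanation stability via the mean pairwise
# Jaccard similarity of the feature sets produced across repeated runs.
# A score of 1.0 means every run selected exactly the same features.
from itertools import combinations

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def mean_pairwise_jaccard(feature_sets):
    # Average Jaccard similarity over all pairs of explanations.
    pairs = list(combinations(feature_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

Under this metric, a deterministic explainer such as DLIME always scores 1.0, while LIME's random perturbation typically yields a lower score.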
Stream Categories:
Spotlight, Author Speaking, ML Interpretability