Covers: theory of Optimized LIME Explanations
Estimated time needed to finish: 60 minutes
Questions this item addresses:
  • Can we optimize the LIME explanations?
How to use this item?

Watching the complete video will be useful.

If the video fails to play, open the link directly: https://youtu.be/HCDkMM0vq0E
Author(s) / creator(s) / reference(s)
Giorgio Visani

Proper Machine Learning Explanations through LIME, using the OptiLIME framework

Contributors
Total time needed: ~3 hours
Objectives
This list covers approaches to improving the explanations generated with LIME.
Potential Use Cases
Improve LIME explanations
Who Is This For?
INTERMEDIATE: People with basic knowledge of interpretable machine learning
Click on each of the following annotated items to see details.
VIDEO 1. Proper Machine Learning Explanations through LIME, using the OptiLIME framework
  • Can we optimize the LIME explanations?
60 minutes
PAPER 2. Understanding how to explain the predictions of any machine learning model
  • Can we explain blackbox models?
  • Are the explanations useful for evaluating the model?
30 minutes
VIDEO 3. Understanding local interpretable model-agnostic explanations (LIME)
  • Can we explain blackbox models?
25 minutes
PAPER 4. How can we deal with the instability associated with LIME explanations?
  • Can explanations generated by a locally interpretable model provide consistent results for the same instance?
30 minutes
PAPER 5. Understanding how to generate robust and stable explanations
  • Can we generate explanations robust to data shifts?
  • Can we generate stable explanations?
30 minutes
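To make the idea behind these items concrete, here is a minimal from-scratch sketch of a LIME-style local surrogate: perturb the data around one instance, weight the perturbed samples by an exponential kernel, and fit a weighted linear model to the black-box outputs. This is an illustrative sketch, not the official lime library API; the function name, perturbation scheme, and kernel choice are assumptions for demonstration. The `kernel_width` parameter is the quantity that the OptiLIME framework discussed in the video tunes to trade off the stability and the local adherence of the explanation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

def lime_style_explanation(model, x, n_samples=1000, kernel_width=0.75, seed=0):
    """Hypothetical minimal LIME-style explainer (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Perturb around the instance of interest.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
    # Weight perturbed samples by their proximity to x.
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-dist**2 / kernel_width**2)
    # Query the black-box model and fit a weighted linear surrogate.
    y_blackbox = model.predict_proba(Z)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(Z, y_blackbox, sample_weight=weights)
    # The surrogate's coefficients serve as local feature attributions.
    return surrogate.coef_

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
coefs = lime_style_explanation(model, X[0])
print(coefs)  # one attribution per feature
```

Because the perturbations are random, repeated calls with different seeds can yield different coefficients for the same instance; this is exactly the instability that items 4 and 5 address.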

Concepts Covered
