Explaining by Removing: A Unified Framework for Model Explanation
Tuesday Dec 22 2020 17:00 GMT
Why This Is Interesting

This work highlights the common patterns across more than 20 ML explanation methods, including several of the most widely used approaches (SHAP, LIME, Meaningful Perturbations, permutation tests). If there is one paper you should read to better understand the choices and assumptions made by many current explanation methods, it is this one.

Discussion Points

What are the common patterns among removal-based explanation methods?

What choices and assumptions do removal-based explanation methods make?

Takeaways

Ian presented a new class of methods, removal-based explanations, based on the principle of simulating feature removal to quantify each feature's influence. These methods vary in several respects, so the authors develop a framework that characterizes each method along three dimensions: 1) how the method removes features, 2) what model behavior the method explains, and 3) how the method summarizes each feature's influence.
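To make the three dimensions concrete, here is a minimal sketch of one member of the removal-based family: classic permutation importance. In this instance, removal is done by shuffling a feature's column, the behavior explained is a performance metric, and the summary is the mean drop in that metric. The function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def permutation_importance(model_fn, X, y, metric_fn, seed=0):
    """Estimate each feature's influence by 'removing' it: shuffle one
    column at a time and measure the resulting drop in performance."""
    rng = np.random.default_rng(seed)
    baseline = metric_fn(y, model_fn(X))
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])  # break the feature's link to the target
        importances.append(baseline - metric_fn(y, model_fn(X_perm)))
    return np.array(importances)

# Toy example: a model in which only feature 0 matters.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0]
model_fn = lambda X: 2.0 * X[:, 0]
r2 = lambda y, p: 1.0 - np.mean((y - p) ** 2) / np.var(y)
scores = permutation_importance(model_fn, X, y, r2)
# Shuffling feature 0 destroys the model's accuracy, so scores[0] is large;
# shuffling features 1 and 2 leaves predictions unchanged, so their scores are ~0.
```

Other methods in the framework differ along the same three axes, e.g. SHAP replaces removed features by marginalizing over a background distribution and summarizes influence with Shapley values rather than a single score drop.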
