Owing to the complexity of machine learning models, researchers have proposed a number of techniques to explain model predictions, often with the goal of increasing trust in these models. But to what extent are explanation methods themselves vulnerable to manipulation? In this talk, we introduce an attack that fools two popular explainability methods, LIME and SHAP, by exploiting an assumption common to both techniques. This allows us to create models that receive arbitrary explanations from LIME and SHAP. We demonstrate the potential significance of our attack by building classifiers that rely solely on protected attributes (e.g., race or gender), yet for which these explanation methods fail to indicate that those features are important.
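The sketch below illustrates one plausible form of the scaffolding idea behind such an attack, under the assumption that the shared weakness is this: LIME and SHAP probe a model on synthetically perturbed inputs that tend to fall outside the real data distribution. A model that can detect these out-of-distribution queries can route them to an innocuous surrogate while behaving in a biased way on real inputs. All data, names, and models here are illustrative, not the talk's actual implementation.

```python
# Hypothetical sketch: a "two-faced" classifier that looks fair to
# perturbation-based explainers but is biased on real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: column 0 stands in for a protected attribute (binary),
# column 1 is an unrelated "innocuous" feature.
X_real = np.column_stack([
    rng.integers(0, 2, size=1000).astype(float),  # protected attribute
    rng.normal(0.0, 1.0, size=1000),              # innocuous feature
])

def biased_predict(X):
    # Decides purely from the protected attribute.
    return X[:, 0].round().astype(int)

def innocuous_predict(X):
    # Decides from the unrelated feature only.
    return (X[:, 1] > 0).astype(int)

# LIME-style perturbations: Gaussian noise around real instances.
# These break the discreteness of column 0, making them detectable.
X_perturbed = X_real + rng.normal(0.0, 1.0, size=X_real.shape)

# Out-of-distribution detector: real rows labeled 0, perturbed rows 1.
detector = RandomForestClassifier(random_state=0)
detector.fit(
    np.vstack([X_real, X_perturbed]),
    np.concatenate([np.zeros(len(X_real)), np.ones(len(X_perturbed))]),
)

def scaffold_predict(X):
    """Biased on in-distribution inputs, innocuous on explainer queries."""
    X = np.asarray(X, dtype=float)
    is_ood = detector.predict(X) == 1
    return np.where(is_ood, innocuous_predict(X), biased_predict(X))
```

When an explainer queries `scaffold_predict` on its perturbed samples, it observes only the innocuous behavior and attributes the predictions to the harmless feature, while real inputs are still classified by the protected attribute.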