ABSTRACT

Interpreting machine learning models is an emerging field that has become known as interpretable machine learning. Global interpretability is about understanding how a model makes predictions based on a holistic view of its features and how they influence the underlying model structure; it helps us understand the relationship between the response variable and the individual features. Arguably, comprehensive global model interpretability is very hard to achieve in practice, and although global methods capture the overall relationship between the inputs and the response, they can be misleading in some cases because aggregate summaries need not reflect how the model behaves for a particular observation. Local Interpretable Model-agnostic Explanations (LIME), introduced by M. T. Ribeiro et al., is an algorithm that explains individual predictions: it repeatedly generates samples similar to the individual record of interest by perturbing it, and fits an interpretable surrogate model to the underlying model's predictions on those samples, weighting them by their proximity to the record.
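
As a rough illustration of this local-surrogate idea, the sketch below implements a simplified LIME-style explanation for tabular data. It is not the reference implementation by Ribeiro et al.; the function name `lime_explain`, the Gaussian perturbation scheme, and the kernel width are illustrative assumptions.

```python
# A minimal LIME-style sketch (illustrative, not the reference implementation):
# perturb the record of interest, weight the perturbations by proximity, and
# fit a weighted linear surrogate to the black-box model's predictions.
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(predict_fn, x, X_train, n_samples=5000, kernel_width=0.75, seed=0):
    """Return local feature attributions for predict_fn around the record x.

    predict_fn is assumed to map an (n, d) array to a 1-D array of outputs,
    e.g. the predicted probability of the positive class.
    """
    rng = np.random.default_rng(seed)
    scale = X_train.std(axis=0) + 1e-12                            # per-feature spread from training data
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))   # samples similar to x
    dist = np.linalg.norm((Z - x) / scale, axis=1)                 # scaled distance to x
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))           # proximity kernel
    surrogate = Ridge(alpha=1.0)                                   # interpretable local surrogate
    surrogate.fit(Z, predict_fn(Z), sample_weight=weights)
    return surrogate.coef_                                         # one local weight per feature
```

In practice, the `lime` Python package's `LimeTabularExplainer` provides the full algorithm, including discretization of continuous features and selection of the top contributing features for the explanation.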