The Local Interpretable Model-agnostic Explanations (LIME) method was originally proposed by Ribeiro et al. The key idea behind it is to locally approximate a black-box model by a simpler glass-box model that is easier to interpret; this approach is described in this chapter. The most typical choices for the glass-box model are regularized linear models, such as LASSO regression, or decision trees; both lead to sparse models that are easier to understand. Importantly, the glass-box model is not fitted in the original feature space but in a low-dimensional, interpretable representation of the data. For image data, super-pixels obtained by image segmentation are a frequent choice for such a representation; for text data, groups of words are often used as interpretable variables. To develop the local-approximation glass-box model, new data points are needed in the low-dimensional interpretable data space around the instance of interest. To summarize, the most useful applications of LIME are limited to high-dimensional data for which one can define a low-dimensional interpretable data representation, as in image analysis, text analysis, or genomics.
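The procedure above can be sketched in a few lines of code. The example below is a minimal, illustrative implementation, not the reference LIME algorithm: the black-box model, the binary interpretable representation (each indicator keeps or zeroes out a feature), the Gaussian proximity kernel, and the penalty value are all assumptions chosen for the demonstration. A small coordinate-descent LASSO stands in for the regularized linear glass-box model.

```python
import numpy as np

def black_box(X):
    # Hypothetical black-box model standing in for any fitted predictor.
    return 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 - 0.2 * X[:, 2]

def soft_threshold(rho, lam):
    return np.sign(rho) * np.maximum(np.abs(rho) - lam, 0.0)

def weighted_lasso(X, y, w, lam=0.5, n_iter=200):
    # Coordinate descent for: min_b 0.5 * sum_i w_i (y_i - x_i b)^2 + lam * ||b||_1
    Xw = X * np.sqrt(w)[:, None]   # fold sample weights into the design matrix
    yw = y * np.sqrt(w)
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = yw - Xw @ b + Xw[:, j] * b[j]   # residual excluding feature j
            rho = Xw[:, j] @ r
            b[j] = soft_threshold(rho, lam) / (Xw[:, j] @ Xw[:, j])
    return b

def lime_sketch(x_star, n_samples=500, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    p = x_star.shape[0]
    # Binary interpretable space: z_j = 1 keeps feature j, z_j = 0 removes it.
    Z = rng.integers(0, 2, size=(n_samples, p)).astype(float)
    X_new = Z * x_star                     # map perturbations back to feature space
    y = black_box(X_new)                   # query the black box at the new points
    # Proximity weights: perturbations closer to the instance (all ones) count more.
    dist_sq = ((Z - 1.0) ** 2).sum(axis=1)
    w = np.exp(-dist_sq / kernel_width ** 2)
    return weighted_lasso(Z, y, w)

# Explain the prediction for one instance of interest.
coefs = lime_sketch(np.array([2.0, 1.0, 0.5]))
```

For this instance, the surrogate's dominant coefficient belongs to the first indicator, reflecting that switching the first feature off changes the black-box prediction the most; the LASSO penalty keeps the remaining coefficients small, which is exactly the sparsity property that makes the explanation readable.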