ABSTRACT

Detection techniques for multimedia forgeries such as deepfakes now rely heavily on deep learning. However, these techniques are built on neural networks, whose inherently black-box nature makes their results difficult to explain, so this aspect of deep learning-based detection deserves close attention. Any forensic procedure applied in real-world scenarios must offer a human-understandable explanation; deepfake detection approaches should therefore be explainable. There are several ways to incorporate explainability into deepfake detection, one of which is Local Interpretable Model-Agnostic Explanations (LIME). This chapter provides a comprehensive overview of explainability in deepfake detection. Several deep learning models are first trained on the Deepfake Detection Challenge (DFDC) dataset and evaluated using the accuracy metric. The LIME tool is then applied to justify the classifications made by the models.