ABSTRACT

In recent years, deep learning-based methods and models have made remarkable advances in various biomedical imaging tasks, including image segmentation, detection, and classification. However, the interpretability of these models remains a significant challenge, limiting their clinical usability and reliability. To address this issue, researchers have proposed explainable artificial intelligence (XAI) methods for deep learning models that provide clear and understandable explanations for their decisions. This chapter examines the current progress and open research challenges in applying XAI to deep learning in medical imaging. We first cover the fundamental concepts of XAI, then review recent advances in applying XAI to deep learning approaches across medical imaging tasks, encompassing image classification, segmentation, and synthesis. Finally, we highlight critical research challenges, such as balancing accuracy against interpretability, scaling XAI methods to large and complex datasets, and integrating XAI into clinical workflows and decision-making processes. We hope this chapter will inspire further research and development in this rapidly evolving field, ultimately leading to more transparent, trustworthy, and effective deep learning models for medical imaging applications.