ABSTRACT

The field of explainable artificial intelligence (XAI) has grown significantly in recent years. This growth is driven by the worldwide adoption of machine learning, particularly deep learning, which has produced highly accurate models that nevertheless lack interpretability and explainability. Many solutions have been proposed, developed, and evaluated to address this issue. AI tools now produce results whose accuracy rivals that of human specialists, and artificial neural networks (ANNs) are among the most popular AI techniques. XAI enables reliable decision-making for both patients and physicians. Today, many diagnostic and testing procedures are carried out by machine learning algorithms rather than by physicians alone. This chapter therefore emphasizes the need for XAI systems that can fully explain their conclusions to subject-matter experts. Deep learning methods and their applications in the biomedical domain are described, along with a brief discussion of XAI and the importance of explainability in AI systems. The chapter also reviews the state of the art in XAI and suggests directions for future research. Finally, a performance comparison of several machine learning (ML) techniques and the explainable Deep Neural Network (xDNN) on the Caltech-256 dataset shows that xDNN outperforms the other ML techniques, achieving an accuracy of 94.27% while offering the highest level of interpretability.