ABSTRACT

In recent years, artificial intelligence-based technologies have become an integral part of our lives and have shown great promise in medicine. Although artificial intelligence (AI) methods used in medical systems produce impressive results, they lack adequate interpretability and transparency. Therefore, there is an urgent need to develop explainable artificial intelligence (XAI), which operates more transparently and provides practitioners with reliable explanations of "why AI methods work" rather than merely "that they work". However, achieving XAI in the medical field poses significant challenges. This chapter aims to present the difficulties that may be encountered in applying XAI methods, especially in the medical field.