ABSTRACT

The automatic and accurate segmentation of brain tumors from medical images is vital for treatment planning, including surgery, chemotherapy, and radiotherapy. Deep neural networks (DNNs) are effective for brain tumor segmentation, localizing the tumorous regions of the brain, while machine learning (ML) models help predict a patient’s overall survival (OS). However, using DNN models for segmentation tasks has some limitations. First, their black-box nature lacks explainability, which in turn limits their adoption among medical practitioners, even for obtaining a preliminary opinion. Second, segmentation is a localization problem and therefore requires in-depth analysis of DNN model behavior. Explaining and interpreting such models involves analyzing model performance, assessing the importance of specific input images in the presence of variability, and identifying the importance of specific regions of the input image. The focus of this chapter is to explore the interpretability and explainability of DNN models used for brain tumor segmentation and of ML models used for OS prediction.