ABSTRACT

Deep learning models are often considered black boxes because the reasoning behind their decisions is not traceable. This reduces trust in the results and can hinder the adoption of such models. The black-box conundrum also plagues geospatial artificial intelligence (GeoAI) methods based on deep learning and has led to debates on the usefulness of AI compared with traditional methods for solving geospatial problems. To make model decisions transparent, research in explainability, including the development of explainable artificial intelligence (XAI) methods, is in high demand. This chapter gives an overview of established XAI methods and their basic principles. Moreover, it highlights the benefits of applying XAI methods to GeoAI applications through several use cases. Finally, we discuss specific challenges and opportunities for applying XAI methods in GeoAI.