ABSTRACT

Artificial Intelligence (AI) and Machine Learning (ML) are set to revolutionize all industries, and the Intelligent Transportation Systems (ITS) field is no exception. However, as transportation is a safety-critical domain, AI-based decisions and results require explanation and justification to ensure they are made fairly and without errors. Yet, due to the black-box nature of ML models, especially deep learning models, the outcomes these models provide are not amenable to human scrutiny. Explainable AI (XAI) methods have been proposed to tackle this issue by producing human-interpretable representations of ML models while maintaining performance. These methods hold the potential to increase public acceptance of and trust in AI-based ITS. This chapter investigates the use of XAI in smart transportation. It aims (i) to provide the necessary background on XAI, its main concepts, and its algorithms; (ii) to identify the need for explanations from intelligent systems in transportation; (iii) to examine efforts made to enable and/or improve ITS interpretability, by reviewing the literature on explanations for intelligent transportation problems; and finally, (iv) to explore potential challenges and possible research avenues in the emergent field of Explainable ITS.