ABSTRACT

In recent decades, Reinforcement Learning (RL) has begun entering everyday life owing to its super-human performance. Recently, RL has been applied to problems related to transportation systems, e.g., self-driving cars and traffic control, contributing to the emerging field of Intelligent Transportation Systems. In these settings, each action is of vital importance, and for this reason, understanding why the RL agent chooses a particular plan instead of another can help the human user reach the right decision. Research in this field has therefore recently centred on Explainable Reinforcement Learning (XRL), i.e., algorithms that are interpretable and understandable to users, and a large number of papers have been published as a result. In this chapter, we summarize the most important advances in the XRL area over the last six years, focusing on outlining the pros and cons of each approach, and we propose a new classification based on how the explanations are generated and presented. Finally, a discussion highlighting open questions and possible future work is presented.