ABSTRACT

This chapter examines previous research on human–automation interaction and the challenges caused by the introduction of automation in complex systems. One of the challenges identified is the increased effort required to interpret the plethora of additional information that automation introduces into the driving task. To reduce the effort of interpreting system states, this chapter draws on theories from human–human communication to highlight the need to make such systems transparent and to bridge the gap between determining system states and understanding how system behaviour corresponds to expectations.

To achieve this, the chapter posits a novel application of linguistics and, more specifically, Gricean theories of human–human communication to the domain of human–automation interaction, treating the automated driving system as an intelligent agent or co-driver whose task is to ensure that the driver does not hold an incorrect model of the vehicle state, is aware of changes to driving dynamics, and has the knowledge necessary to perform the driving task. This application of linguistic theory to human–automation interaction is then illustrated by applying the framework to two aviation incidents in which automation played a significant role in the events that unfolded.