ABSTRACT

Several decades have now passed since the introduction of advanced automation systems in aircraft cockpits and, despite the considerable effort devoted to dedicated research, human-computer interaction breakdowns remain a key safety concern (Billings, 1997). The literature has consistently shown that in aviation accidents the weak link in the chain is often the human component. Human error, as a contributory factor, accounts for 70-80% of incidents and accidents, and this figure has remained constant over the last decade, with no sign of decreasing (O’Hare & Chalmers, 1999). Tenney, Rogers and Pew (1998) suggested that automation failures or malfunctions could generate unexpected crew behaviour leading to adverse consequences. Commonly known as “automation surprises”, these human-machine breakdowns are defined as “situations where crews are surprised by actions taken (or not taken) by the auto-flight system” (Woods & Sarter, 2000). The term “surprise” is adopted because crews are not fully aware of the automation’s or the aircraft’s status until some cue or event eventually captures the operators’ attention and contradicts the crew’s current shared mental model. These situations may arise from undetected malfunctions, or in a fully operative system affected by faulty inputs or “autonomous” system operations.