ABSTRACT

The retrospective analysis of actions and inactions by operational personnel involved in accidents and incidents has been aviation's traditional method of assessing the impact of human performance on safety. The established safety paradigm, and prevailing beliefs about what constitutes safe and unsafe acts, guide this analysis in such a way that an event under consideration is traced back to the point at which investigators find a behaviour that did not produce the results intended. At that point, human error is concluded. This conclusion is generally reached with limited consideration of the processes that could have led to the 'bad' outcome. Furthermore, when reviewing events, investigators know that the behaviours displayed by operational personnel were 'bad' or 'inappropriate', because the negative outcomes are a matter of record. This is a benefit, however, that the operational personnel involved did not have when they selected what they thought at the time were good or appropriate behaviours, and which they expected would lead to a good outcome. In this sense, it is suggested that investigators examining human performance in safety occurrences enjoy the benefit of hindsight. Moreover, conventional safety wisdom holds that, in aviation, safety comes first. Consequently, human behaviours and decision-making in aviation operations are assumed to be one hundred percent safety-oriented. This is not true; a more realistic approach is to consider human behaviours and decision-making in operational contexts as a compromise between production-oriented behaviours and decisions and safety-oriented behaviours and decisions. The optimum behaviours for meeting the actual production demands of the operational task at hand may not always be fully compatible with the optimum behaviours for meeting the theoretical safety demands.
All production systems (and aviation is no exception) generate a migration of behaviours: under the imperatives of economics and efficiency, people are forced to operate at the edges of the system's safety space. Consequently, human decision-making in operational contexts lies at the intersection of production and safety, and is therefore a compromise. In fact, it might be argued that the trademark of experts is not years of experience and exposure to aviation operations, but rather how effectively they manage the compromise between production and safety. Operational errors do not reside in the person, as conventional safety knowledge would have the aviation industry believe. They reside primarily in latent form within the task and situational factors of the context, and emerge as consequences of mismanaging the compromises between safety and production goals, largely under the influence of the attitudes shared across individuals (i.e., culture). The balance between production and safety is complex and delicate, and humans are generally very effective at applying the right mechanisms to achieve it, hence the extraordinary safety record of aviation. Humans do occasionally mismanage task and/or situational factors and fail to balance the compromise, thus contributing to safety breakdowns. However, since successful compromises far outnumber failures, in order to understand human performance in context the industry needs to capture, through systematic analyses, the mechanisms underlying the compromises that succeed when operating at the edges of the system, rather than those that fail. It is suggested that the human contribution to successes and failures in aviation can be better understood by monitoring normal operations, rather than accidents and incidents. The Line Operations Safety Audit (LOSA), discussed in detail by Helmreich in his chapter, is the vehicle endorsed by the International Civil Aviation Organisation (ICAO) for this purpose.