ABSTRACT

Accidents often arise from an interaction between technical, systemic and human factors. What stands out, and is amplified by the media, is the human element, as there is a desire to ascribe blame (Woods et al., 2010). However, human operators in safety-critical systems do not intentionally set out to make mistakes. Aside from the extremely rare incidents of deliberate intent to cause damage and harm, the human contribution to accidents is far more complicated than it might initially appear. The contemporary perspective on human error does not blame individuals or use the term as a causal attribution. Instead, human error is treated as the starting point for an investigation; this perspective rejects the notion of faulty reasoning and seeks to explore why certain decisions were made over others. Dekker (2006, p. 68) summarised this perspective.

With that in mind, human error cannot begin to be understood without understanding the situation and precursors to erroneous actions, that is, the decision-making processes that underpinned them. Traditional decision-making research has focused on the outcome of a decision, whether it was good or bad, to establish the effectiveness of decision-making (Orasanu and Martin, 1998). However, understanding whether an effective decision-making process was employed, regardless of the observable manifestation of the decision, is arguably more important, so that potential training and mitigation strategies can be proposed. Furthermore, it is only with the benefit of hindsight that a label of ‘bad decision-making’ or ‘human error’ can be applied. What should be of interest to researchers and accident investigators is understanding why actions and assessments made sense to an operator at the time they were made (i.e. local rationality).