ABSTRACT

The questions are animated by a common concern: each organization or industry feels that its progress on safety depends on having a firm definition of human error. Attempts to write complete, exhaustive policies that apply to all cases either create or exacerbate double binds, or make it easy to attribute adverse events to 'human error' and stop there. "Human error" is an attribution. Given that human error is just an attribution, just one way of saying what the cause was, just one way of telling a story about a dreadful event, it is entirely justified to ask why telling the story that way makes sense to the people listening. A story of human error is therefore merely a starting point for getting at what went wrong, not a conclusion about what went wrong. A related implication is that new technology by itself reduces human error and minimizes the risk of system breakdown.