ABSTRACT

The idea that human performance is systematically connected to the features of people’s tools and tasks effectively constitutes the birth of human factors. However, accidents are often still seen as the result of “human error,” either at the sharp operational end or the blunt organizational end. This chapter aims to point out some practical and ethical implications of “human error” (and its subcategories) as an explanation for why sociotechnological systems sometimes fail. In briefly discussing some “costs” of relying on this reductionist approach to explaining and dealing with failure, we argue that as a human factors community we need to engage in (ethical) discussions about, and take responsibility for, the effects of the practices that we promote.