ABSTRACT

Designing human error out of systems was one of the earliest activities of human factors. From that perspective, it can seem obvious to describe behavior as a human error when, for example, a specific person confused two switches. But the attribution of "human error" is no longer adequate as an explanation for a poor outcome; the label "human error" is not an adequate stopping rule. There is great diversity in notions of what "human error" means. Defining human error as a form of process defect implies that there is some criterion or standard against which the performance has been measured and deemed inadequate. The perception that there is a "human error problem" is one force that leads to computerization and increased automation in operational systems. If incidents are the result of "human error," then it seems justified to respond by retreating further into the philosophy that "just a little more technology will be enough."