ABSTRACT

Autopilots in the cockpit and controlled emergency shutdowns in nuclear power plants represent some of the earliest forms of industrial automation, dating to the 1950s and 1960s. These automated systems, while capable of more reliable, timely, and precise control than humans, possessed few of the features commonly ascribed to human intelligence. As such, even though it is now commonplace to speak of human “interaction” with automation, human usage of most automated systems typically involved no more social dynamics or interpersonal exchange than would occur in using a tool such as a hammer or a laptop computer. Yet, when a blunder or fault occurs (hitting one’s thumb with the hammer, or the laptop freezing up at an inconvenient time), human users may blame themselves in the former case but the machine in the latter (Miller, 2004), as if the computer were a bumbling human subordinate. The distinction seems to be related to the human tendency to ascribe agent-like qualities (such as awareness, volition, and intent) to tools such as computers (but not hammers) that are capable of a degree of autonomous behavior.