ABSTRACT

Why do you keep your money in a bank? The answer is that you know from previous experience that the money will be safe, that the bank will honor your checks, and that it will not go bankrupt. These inferences are not deductively valid – as so often in life, you lack the information to reach such conclusions – but they are plausible inductions. Induction is, indeed, part of both everyday and scientific thinking. It enables us to understand the world and provides an ever-ready guide to people and their behavior, but induction is a risky business. One of the most momentous cognitive errors of the 20th century was an inductive inference that was wrong. The engineers in charge at the Chernobyl power plant inferred that the explosion had not destroyed the reactor (Medvedev, 1990). They knew from previous experience that such an event was highly unlikely, and, at first, they had no evidence to the contrary. Their inference was initially plausible. As more evidence became available, however, they should have abandoned it. Two probationary engineers whom they had sent to examine the reactor returned with a report that the reactor was destroyed. Their observations cost them their lives. Firemen reported that large amounts of graphite were lying around the reactor building, and its only source was the reactor. Yet, the engineers did not abandon their inductive conclusion in the face of these signs to the contrary. They clung stubbornly to their belief that the reactor was intact, and this psychological fixation was a major cause of the appalling delay in evacuating the inhabitants of the nearby town and countryside. If human beings are to perform more skillfully, and if machines are to be clever enough to guide them, then we need a better theory of both the strengths and weaknesses of human inductive competence.