ABSTRACT

Public discussions about technological risk tend to be polarized between experts and the public. Consider the controversy about the risks of genetically modified food in Europe. Many people support a ban on such food, claiming that it poses serious health and environmental risks, whereas many experts argue that such risks are small or at least manageable. Experts typically accuse the public of biases and ‘emotional’ responses, against which they set their own views based on ‘scientific evidence’. A similar polarization can be observed in worldwide discussions about the risks of nuclear energy, another highly contested technology. Opponents refer to disasters such as Chernobyl and Hiroshima; experts try to counter this public ‘imagination’ with results from ‘scientific research’ on the risks of radiation and nuclear waste disposal. Experts argue, therefore, that the public should be informed about and educated on the ‘real’ risks attached to these technologies. This expert position has received support from psychological and social research. For instance, Paul Slovic and others have studied the emotional factors that play a role in risk perception (e.g. Slovic, 2000; Slovic et al., 2004). Their intention is to defend the legitimacy of the public's position: emotions do and should count. Their work, however, appears to undermine this very claim. Lay people are said to perceive risks using their feelings and imagination, whereas experts analyse and assess them. In this way, opposition to genetically modified food or nuclear energy can be dismissed as distorted perception: as the outcome of an ‘affect heuristic’ (Slovic et al., 2004), as a typical instance of the social amplification of risk (e.g. Smith and McCloskey, 1998, p46) and of stigmatization (e.g. Flynn et al., 2001). Risk communication, then, is a one-way process based on a paternalistic attitude: experts claim to know what the public should think about the risks related to technology.