ABSTRACT

When faced with a risk that threatens something they value, people’s risk perceptions and assessments of the risk’s acceptability are typically influenced first by affective reactions (risk as feelings). These reactions may be followed by considered application of logic and reasoning to the situation (risk as analysis), but this subsequent consideration may follow a path biased in favour of confirming affectively formed hypotheses. Such reactions are important within the context of organizational decision making and risk management because they may cause managers and employees to act in ways inconsistent with organizational effectiveness. In this chapter, I discuss how such biased decision processes unfold, two categories of harm that can result, and, especially, how an organization’s definitions of risk influence the process. I examine, in particular, how organizations sometimes convey definitions of risk and values relevant to risk management through formal systems, e.g., performance management systems, often without intending to or even realizing it; and also how such definitions and messages may interact with employees’ and managers’ affective reactions to risk. In organizational settings, people may over-perceive or over-react to risk, causing misallocation of limited resources, or they may under-estimate or under-react to risk, which can lead to disaster. I argue that risks that are defined, valued, and communicated about, and thus socially amplified by formal systems, are more likely to be perceived and assessed as unacceptable and worthy of attention. Risks that are not defined or communicated about, and thus seemingly not valued and socially attenuated by formal systems, are more likely not to be perceived at all, or to be assessed as acceptable and ignored. Managers who take the time to consider these broad effects on risk management will be better equipped to help their organizations get risk management more right more often.

Individuals’ perceptions of risk, and also their assessments of its acceptability, affect how they behave in risk situations. This is, of course, entirely reasonable. But it applies as well to misperceptions and mistaken assessments. 1 When the individuals in question are managers and employees in an organization, inclinations toward misperception or mistaken assessment can be determinants of how well the organization functions as it pursues its objectives. In general, we can characterize misperceptions and mistaken assessments as happening in two directions, each presenting different perils to an organization’s ability to function.

First, people may over-perceive or over-react to risks, thereby causing an organization to misallocate limited resources away from real concerns or opportunities. For example, during a time of immense budget shortfalls following the 2008 financial crisis, the University of California at San Diego (UCSD) invested large sums of money in renovations to its Literature Building to address employee worries about health risks, even though studies found the worries to be unfounded. Despite strong evidence to the contrary, beliefs that the work environment contained health hazards persisted and forced the university to act (Austin, 2015a, 2015b; UCSD, 2009). 2

Situations in which risk is over-estimated, or in which people over-react to a possible risk, are hard to detect as such, either at the time they are perceived or after the fact. Those who want to take action, or do take action, believe they are mitigating or avoiding real problems. And unless risk mitigation efforts lead to other observable problems, the fact that feared outcomes never materialize is usually taken as evidence that mitigation actions were successful – whether or not those actions were actually ever needed at all (which often cannot be known). At a societal level, some authors (e.g., Gawande, 1999) have argued that the vast majority of concerns about some risks – cancer clusters, for example, such as those that drove the situation at UCSD – are misplaced and cause misallocation of societal resources on a grand scale.

An analogy to medicine helps highlight just how high the stakes might be in instances of over-estimation or over-reaction. In medicine, diagnosing and treating a problem that does not exist, or that would never progress enough to cause symptomatic problems, is referred to as over-diagnosis and over-treatment. These problems relate especially to early disease screening and preventive treatment (Austin, Reventlow, Sandøe, & Brodersen, 2013; Moynihan, Doust, & Henry, 2012). Sometimes early-stage disease that has only a very small probability of progressing enough to ever cause harm is, nevertheless, diagnosed and treated. When this happens, most people who are treated receive no benefit (because they would not have experienced harm from the disease without treatment), but may suffer diagnosis and treatment harms, such as discomfort, being labelled as ill and forgoing activities restricted to those who are well, reduced quality of life from unneeded surgery, risk from subsequent complications of treatment, and so forth. 3 These consequences of over-diagnosis and over-treatment are experienced by individuals, but there are also, of course, societal costs. Medical researchers look at long-term aggregate data, summarized across many individuals and over time, to determine the societal value of preventive practices in net benefit and harm terms (see e.g., Brodersen, Jørgensen, & Gøtzsche, 2010). Over-treatment of large numbers of people increases the possibility that at the aggregate level treatment harm outweighs benefit, meaning treatment funds have been allocated inefficiently (i.e., there were better uses for the money). Unfortunately, aggregate data is generally not available for organizational risk management efforts, but potential costs to organizations of over-diagnosis and over-treatment are no less real just because they are harder to detect.
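To make the aggregate arithmetic behind this point concrete, consider a minimal back-of-the-envelope sketch (the quantities below are hypothetical illustrations, not figures from the studies cited above). Suppose a preventive programme treats $N$ people, of whom only a small fraction $p$ would ever have been harmed by the disease; treatment averts a harm of magnitude $H$ for each of those people but imposes a smaller harm $h$ on everyone treated. The aggregate net benefit is then

$$\text{Net benefit} = N\,(pH - h),$$

which is negative whenever $p < h/H$. For instance, with $p = 0.01$, $H = 100$, and $h = 2$, the programme yields $N(0.01 \times 100 - 2) = -N$: a net harm at the population level, even though every individual whose disease would have progressed is genuinely helped.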

The second category of peril for managers, which is more widely recognized and addressed in organizational risk management research and practice, arises when risk is under-estimated, or judged ‘acceptable’ when it should be judged ‘not acceptable.’ In these cases, people take risks they otherwise, on the basis of better estimates or judgments, would not take. Or, they fail to engage in preventive or risk mitigation behaviours that they otherwise would. When people in organizations take risks and fail to take appropriate preventive or risk mitigation actions, losses may be incurred by individuals or the organization. When compounded across many individuals and over time, failure to adequately perceive, properly assess, and appropriately react to possible risk increases the likelihood that events will spiral into large-scale problems, even outright disaster. Returning to the medical analogy, problems that are not diagnosed and not treated are said to be under-diagnosed and under-treated, which represents a high-stakes failure to take beneficial action that was readily available.

When large-scale disasters happen in organizational settings, subsequent investigations often examine contextual and behavioural factors to understand the most fundamental causes of the problem. This is important so that the organization in question, and other similar organizations, can learn from the events. Some disasters are found to have been due to technical error, such as the explosion on Apollo 13 (Cortright et al., 1970). Some are attributed to errors by one or a few individuals (Shappell et al., 2007). Others are attributed to poor managerial decision making and risk management processes that lead to systemic under-diagnosis or under-treatment of risk in the organization. For example: “Better management by BP, Halliburton, and Transocean would almost certainly have prevented the [oil platform] blowout by improving the ability of individuals involved to identify the risks they faced, and to properly evaluate, communicate, and address them” (National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, 2011). Regarding the Fukushima Daiichi Nuclear Power Plant disaster: “The accident was clearly ‘manmade.’ We believe that the root causes were the organizational and regulatory systems that supported faulty rationales for decisions and actions, rather than issues relating to the competency of any specific individual” (Kurokawa et al., 2012, p. 16). And,

The [financial] crisis was the result of human action and inaction, not of Mother Nature or computer models gone haywire. The captains of finance and the public stewards of our financial system ignored warnings and failed to question, understand, and manage evolving risks within a system essential to the well-being of the American public.

(Financial Crisis Inquiry Commission, 2011)

Just as there are high-stakes trade-offs between under-diagnosis/under-treatment and over-diagnosis/over-treatment in medicine, there are similar trade-offs in organizational risk management. There is the potential in organizations to over-estimate risk (over-diagnose), or to over-react and introduce preventive activities that are not really needed (over-treat). This may result in diverting limited resources away from more pressing concerns or opportunities. If an organization suffers routinely from under-estimation of risk (under-diagnosis), too much acceptance of risk, or too little risk mitigation effort (under-treatment), the resulting behaviours can compound over time and across individuals, possibly leading to more spectacular and obvious problems than those associated with organizational over-treatment of risk. This greater potential for spectacular (negative) outcomes suggests to many that organizations need to be more concerned about systematic under-treatment of risk than about systematic over-treatment. Both directions, however, can be sources of significant problems, as an organization that too often misallocates its resources may eventually find its objectives threatened as well.

Thankfully, most failures to accurately perceive and assess risks in organizations, and to behave appropriately with respect to them, do not lead to disasters. However, systematic mismanagement of risk, in whichever direction, generally causes harm eventually. Small organizations have less ability to absorb the costs and harms of poor risk management than large organizations like BP, Tokyo Electric Power Company, or large US banks. Clearly, it is important for all organizations, large and small, to get risk perception, assessment, and management as ‘right’ as possible throughout the organization.

Given all this, here are three broad questions that we might take as a starting point for balancing trade-offs between being overly cautious and not cautious enough in managing risk in organizations:

What affects how people perceive risks around them in organizations?

What affects how people in organizations judge risk acceptability?

What can we do to improve people’s ability to appropriately perceive and assess risk, and therefore their risk taking and risk management behaviours in organizational contexts?

In this chapter, I discuss some of the common ways our intuitive, automatic thinking systematically affects how we perceive risk in organizations. My focus will be on ways people are inclined to think selectively about information, leading to biased risk perceptions or faulty conclusions, especially those tied to affective responses – perceptions of risk that manifest as feelings. I will discuss factors that affect how people judge the acceptability of a risk, including how organizational risk definitions, values, and pay-for-performance systems can shape risk perceptions and behaviours. Finally, I will discuss some things managers and leaders in organizations can do to try to improve the balance between over- and under-diagnosis and treatment of risk in organizations.