Safety Cultures and Safety Systems
High-risk industries assume both human fallibility and system fallibility, but until recently the health sector has been reluctant to acknowledge that professionals can make mistakes and that systems can fail. This is paradoxical, since the health professions value learning from failures as well as successes. This chapter examines regulatory efforts to inculcate a safety culture and to design safer systems. This is a challenge since the medical profession in particular, as shown in the previous chapter, has been reluctant to overcome its complacency, to put patients first, and to comply with interventions to improve safety and quality. Contemporary medical culture in this regard has been likened to the culture of mediaeval knights:
A Safety Culture
The safety management literature stresses the importance of promoting cultural change throughout industries and organizations (Helmreich and Merritt 1998). The term safety culture refers to ‘a constant commitment to safety as a top-level priority, which permeates the entire organization’ (Pizzi et al. 2001: 452). A patient safety culture proceeds from the premise that health care involves risk, that the potential for error cannot be totally eliminated since people and systems are fallible, and that we must therefore learn from the ‘things that go wrong’ in order to prevent future incidents. But efforts to learn from error will not flourish unless there is a culture of safety in which people can discuss adverse events without incurring recriminations, disciplinary proceedings and legal consequences (Wellington 2004). One of the lessons the UK Government drew from public inquiries into hospital scandals was that staff are afraid to report safety concerns in a blame culture. The National Patient Safety Agency therefore made ‘building a safety culture’ the first step in its seven-step guide to patient safety for NHS staff (National Patient Safety Agency 2004). A ‘no blame’ culture means that people are confident they will not be punished for reporting adverse events that involved them. Since public
inquiries into medical scandals look to fix blame upon individuals, front line staff suspect that in an adverse event inquiry the management hierarchy will look to blame them. Clinicians themselves are also inclined to attribute adverse incidents to personal failure:
But the public may be unhappy if no blame is attached to things that go wrong. The victims of corporate crime, for example, are not happy to learn that the institutional philosophy is that no one is to blame (Fisse and Braithwaite 1993). The NSW Minister for Health rejected the ‘no blame’ approach of the Health Care Complaints Commissioner in her inquiry into the Camden and Campbelltown hospitals (Thomas 2006). In dismissing the Commissioner, the Health Minister commented: ‘But for an investigation that took 13 months to complete, the HCCC doesn’t go far enough in terms of finding anyone responsible for these failures’ (Van Der Weyden 2004). A ‘no blame’ culture should not mean a ‘no accountability’ culture. Accountability by health authorities entails a commitment to investigating the systemic and human factors that contribute to adverse events, and to providing support to both staff and patients (Wellington 2004). Regulators should not react immediately by blaming and shaming, however, since this kills off a culture of active responsibility (Braithwaite 2002). Moving beyond a culture of blame means moving from passive responsibility, which holds someone responsible for something that happened in the past, to active responsibility, whereby people take responsibility for putting things right in the future.