Contestation and contingency in advisory governance

Claire A. Dunlop
Introduction

Randomised controlled trials (RCTs) are lodged atop evidence-ranking hierarchies (Petticrew and Roberts 2003), and frequently referred to as offering the ‘gold standard’ in policy-relevant knowledge (Cartwright 2007, 2009; Grossman and MacKenzie 2005). This status is increasingly reflected in public administrations as decision-makers strive to identify ‘what works’ in a bid to deliver efficient policy design and effective outputs. Indeed, in the US, the method is the preferred policy-evaluation technology (Greenberg and Shroder 2004; Jowell 2003: section 5); an enthusiasm that is beginning to be matched in the UK, where there are calls for RCTs to be institutionalised into advisory arrangements (Halpern 2015; Haynes et al. 2012; John 2014).

The influence of modernist social science is clear here. Experiments, it is argued, can reach the parts that other methods cannot. Through the random allocation of treatment and control to different groups, researchers are able to compare the results of taking policy action with the status quo, thereby demonstrating what works, where, when, and how. Against this backdrop, a growing literature on experiments in public policy, administration, and management offers practical ‘how to’ guides for budding experimenters (for example, Baekgaard et al. 2015), attempts to establish standards of best practice (McDermott 2013), and details empirical exemplars of policy-oriented RCTs (for a collection see Blom-Hansen et al. 2015).

Despite this high esteem, such functional analyses do sound a note of caution. Classic accounts in the knowledge utilisation literature have long since laid bare the challenging reality of getting evidence into policy (for example, Weiss 1983).
Accordingly, recent studies in the European literature, written by critics and experimenters alike, have gone beyond the gold-standard shorthand to explore the barriers that can limit the impact on policy of evidence from experiments (Jowell 2003; Stoker 2010). We are reminded that, like all policy advice, RCT evidence is set against a political backdrop, and how experimenters negotiate certain logistical challenges is critical to their success – notably, how and when findings are communicated (Weiss 1999); whether results find a timely ‘window of opportunity’ in the policy cycle (Dunlop 2010; Stoker 2010); the extent to which key stakeholders and officials are kept
engaged by the research team (Stoker 2010; Jowell 2003); and the sociopolitical acceptability of the experiments themselves.

How can such functional accounts help us understand the reality of advisory governance and the prospects for RCTs as sources of policy-relevant knowledge? Specifically, what can they offer to explain the contestation of evidence that is now a ubiquitous part of claims to policy relevance in policymaking on technical issues? Policy-oriented experiments are not simply left to designated scientific advisers working with civil servants to develop institutional responses to instrumental problems. Rather, RCTs’ design, execution, and conclusions – and experimenters’ qualifications, normative beliefs, and institutional affiliations – are questioned and debated by motivated policy actors. When it comes to illuminating and explaining this routine epistemic deconstruction, the analytical reach of functionalist explanations is heavily circumscribed. The accounts that dominate the literature understate conflict, preferring to treat advisory governance as a series of practical obstacles to be overcome by researchers aiming to promote their evidence as policy-relevant.

The alternative approach is to decentre: to interrogate epistemic practice not in terms of statistical regularities but by asking what actually happens when experts create and communicate policy advice. In bringing analysis down to the micro-level, we treat the relevance of methods and the knowledge they create as residing in people and their beliefs rather than in institutional procedures. Following Bevir (2013), this decentred account analyses advisory governance using interpretive tools that spotlight meaning-making activities around RCTs. Examining the contending narratives constructed by different policy actors illuminates the contingent meanings attached to RCTs by individual actors in individual cases.
In doing so, we zoom in on scientific advisers and their interactions with policymakers, other ‘rival’ scientists, and policy stakeholders. How do scientific advisers create meaning and claim policy relevance for the RCTs they conduct? How are these meanings interpreted and reshaped by other actors?

The chapter is structured as follows. First comes the chapter’s motivation and context. Specifically, we present the claims of special policy relevance that are increasingly attached to RCTs in the UK to make the case for a decentred analysis of advisory governance. The utility of this approach is demonstrated in an analysis of the advisory governance of bovine tuberculosis (BTB) in England. Next we introduce the case and method. The most recent epistemic centrepiece of policy advice around BTB was the largest and most expensive randomised controlled field experiment ever conducted in the UK, which ran from 1998 to 2007. Despite the unequivocal conclusions drawn by the scientists who conducted the RCT, government policy contradicts the experiment’s findings, which have become the focus of contestation in this policy issue. Evidence from elite interviews and select committee testimonies exposes the importance of individuals’ beliefs and interpretations in defining the policy relevance of RCTs. The next section contains the empirical analysis, specifically the contested narratives that emerged around two epistemic sites – the internal and external validity of the experiment. The chapter
concludes with the lessons that researchers and policymakers alike can usefully draw by shifting their understanding of how policy relevance is constructed from the meso- to the micro-level.