The Origin and Development of Stare Decisis at the U.S. Supreme Court: Timothy R. Johnson, James F. Spriggs, II, and Paul J. Wahlbeck
In the past few decades, there has been a wealth of scholarship aimed at understanding the origin and development of institutional rules as agents of political, economic, and social change. In the eyes of many scholars, the questions of where institutions originate and how they develop are two of the most important puzzles confronting social science. Indeed, social scientists have spent a great deal of time trying to understand why institutional rules emerge, when and where they emerge, and the effects of their emergence on society. Existing literature examines the development of such political institutions as constitutions (Riker 1988; Tsebelis 1990), legislative rules (Bach and Smith 1988; Binder 1997; Jenkins, Crespin, and Carson 2005; McCubbins, Noll, and Weingast 1987; Shepsle 1986; Shipan 1995, 1996, 1997), and voting rules (Duverger 1954). This scholarly interest in institutional rules stems from a general recognition that they can have pronounced effects on social outcomes; that is, they are not neutral but serve to allocate resources in society (Knight 1992; North 1990). Simply put, rules determine opportunities by defining choice sets and by giving strategic advantage to some actors over others. For instance, the rules of legislative debate in the U.S. House of Representatives often advantage one political party over the other and thus influence legislative outcomes (Binder 1996). More generally, rules provide the structure within which both governmental and nongovernmental actors make choices and, as a result, affect the distribution of political, social, and economic benefits. As North (1990, 30) argues, “Institutions are the rules of the game in society, or more formally, are the humanly devised constraints that shape human interaction.” Our interest lies with understanding the quintessential institutional rule in the American judiciary: stare decisis.
This informal norm directs judges to follow legal rulings from prior cases that are factually similar to the ones being decided.1 It is the defining feature of American courts, and lawyers, judges, and scholars recognize that it represents the most critical piece of American judicial infrastructure (Knight and Epstein 1996a; Powell 1990; Schauer 1987). Additionally, the transfer of the common law framework from England to the United States, and the role stare decisis plays within it, is the “central theme of early American legal history” (Flaherty 1969, 5). Indeed, this institutional structure, put into place in the mid-eighteenth and nineteenth centuries, represents a significant part of the American nation-building experience and serves as the most important transformational change in U.S. legal history (Friedman 1985; Hall 1989).

Despite the recognized centrality of stare decisis in the American judiciary, no social scientific study to date has endeavored to explain systematically why and when it developed. Instead, scholars generally discuss the purported advantages of stare decisis (e.g., stability, fairness, legitimacy, and efficiency) without reference to whether these factors were the motivating reasons for its adoption in the first place (Healy 2001; Knight and Epstein 1996a; Lee 1999; Schauer 1987). Additionally, while social scientists try to understand why judges follow precedent (e.g., Bueno de Mesquita and Stephenson 2002; Hansford and Spriggs 2006; Rasmusen 1994), they tend not to explain the origin of this rule (but see Heiner 1986; Shapiro 1972). Finally, most discussions of its origin and development come from legal historians (e.g., Allen 1964, 220-230; Friedman 1985, 124-126; Karsten 1997; Kempin 1959), who have not subjected their various conjectures to rigorous empirical tests.

Beyond the lack of a generalizable explanation for why stare decisis arose, scholars do not even agree on when the norm of respecting precedent became institutionalized in the United States. Legal historians generally agree that the idea of past cases being binding did not exist prior to the late eighteenth century, but there is no consensus concerning when this norm became a routine part of legal decision-making. Some suggest that it was “firmly established” by the time of the American Revolution (Anastasoff v. United States 2000; Holdsworth 1934; Jones 1975, 452; Lee 1999; Price 2000).
As Justice Story argued in his Commentaries on the Constitution of the United States, stare decisis was “in full view of the framers of the constitution” and “was required, and enforced in every state in the Union” (1833, § 378). Other legal historians, however, contend that the principle, at least as we know it today, did not develop until later in the nineteenth century. These scholars suggest that during the pre-revolutionary period “the whole theory and practice of precedent was in a highly fluctuating condition” (Allen 1964, 209), and that prior to somewhere between 1800 and 1850 American courts “had no firm doctrine of stare decisis” (Berman and Greiner 1966, 491-494; Caminker 1994; Healy 2001; Kempin 1959, 50).

This discussion leads to our central question: how, when, and why did the rule of treating prior cases as binding precedent emerge and develop in the United States? To answer this vitally important question, we argue that judges, desirous of increasing their policy-making authority, fostered stare decisis as a way to legitimize the judiciary and to insulate it from outside political attack. By doing so, and by promoting the idea that judging is driven by neutral, legal considerations rather than by politics, the judiciary gained a strengthened presence in the American political system. This argument is consistent with McGuire’s (2004) analysis, which indicates that as the Court
institutionalized itself within the system of federal policy-making, justices were better able to achieve their legal and policy objectives. It is also consistent with some historical work on the Marshall Court era, which contends that Chief Justice Marshall emphasized the rule of law as a way to bolster the Court’s authority (Knight and Epstein 1996b; Newmyer 1985).

To test this theoretical argument, we proceed as follows. In the next section we build the case that the U.S. Supreme Court began to base its decisions on its own precedents by the early 1800s and that such a norm was entrenched by 1815. We do so with two separate datasets. The first compares the Court’s use of English common law (that is, law developed through the decisions of England’s judges) with its citation of its own precedents and other American legal authorities (including lower court decisions and statutes) from 1791 to 1815. This initial analysis demonstrates the movement away from the controlling legal rules of English common law toward the new rules set by American courts. From there we analyze the way in which the Court cites and interprets its own precedents from 1791 to 2005. By focusing on how the Court utilizes its own case law, we can begin to pinpoint when the Court started clearly invoking its own precedents to justify its decisions.