ABSTRACT

We saw in the last chapter that events-based measures of human rights grew out of a larger tradition in the social sciences on political violence and political events, such as strikes, demonstrations and other political manifestations. These measures were then developed in particular ways that have become useful in monitoring, documenting and analysing large-scale human rights violations. Standards-based measures have a similar history in that they, too, have developed out of a larger tradition in the social sciences that sought comparability in measures and indicators for cross-national and time-series statistical analysis. This effort at providing comparable measures for cross-national research has included developing measures for democracy (e.g. the Polity measures), warfare (e.g. the Correlates of War), corruption (e.g. the International Country Risk Guide) and governance (e.g. the World Bank) (see Landman and Häusermann 2003). Moreover, since the emergence of the ‘good governance’ agenda, there has been demand from intergovernmental and governmental donor agencies and policy makers for such measures to inform the allocation of international aid and other decisions. Both the Millennium Challenge Account in the USA and the World Bank use standards-based measures to pass judgement on the performance of governments and allocate foreign aid accordingly. Standards-based measures of human rights have developed in parallel to these other efforts and in certain instances have been included as measures of these other concepts, or as components of measures that combine different kinds of indicators into country-level indices (e.g. the World Bank governance measures).

The label ‘standards-based’ comes from the fact that these measures code country-level information about human rights on a standardized scale that is typically both ordinal and limited in range. This means that the scale values denote ‘better’ and ‘worse’ protection of human rights, while the range of the values themselves is limited to a few values per scale. For example, as we shall see below, the Freedom House Civil and Political Liberties scales range from 1 (good protection) to 7 (bad protection); the Political Terror Scale ranges from 1 (good protection) to 5 (bad protection); and many of the rights scales in the Cingranelli and Richards human rights data project range from 0 (bad protection) to 2 (good protection). These standardized values make it possible to compare performance on certain sets of human rights across space and over time. In this way, there is a ‘universality’ assumption built into the scales, since the same set of coding criteria is applied to every country.
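Because these scales run in different directions and over different ranges, cross-scale comparison requires putting them on a common footing. The short Python sketch below is purely illustrative and is not drawn from any of the datasets discussed in this chapter: the scale definitions mirror the ranges quoted above, while the country scores themselves are hypothetical.

```python
# Illustrative sketch of how standards-based scores might be represented and
# harmonized for comparison. The scale ranges follow those quoted above; the
# example scores are hypothetical, not real country data.

# Each scale is defined by its raw range and whether higher values mean
# better or worse protection.
SCALES = {
    "freedom_house_cl": {"min": 1, "max": 7, "higher_is_better": False},
    "political_terror": {"min": 1, "max": 5, "higher_is_better": False},
    "ciri_right":       {"min": 0, "max": 2, "higher_is_better": True},
}

def rescale(score: float, scale: str) -> float:
    """Map a raw score onto a common 0-1 range where 1 = better protection."""
    s = SCALES[scale]
    normalized = (score - s["min"]) / (s["max"] - s["min"])
    return normalized if s["higher_is_better"] else 1.0 - normalized

# Hypothetical scores for a single country-year, illustrating how the same
# coding criteria yield comparable values across different scales.
print(rescale(2, "freedom_house_cl"))   # 0.83 -> fairly good protection
print(rescale(4, "political_terror"))   # 0.25 -> poor protection
print(rescale(2, "ciri_right"))         # 1.00 -> good protection
```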

This chapter provides an outline and assessment of these different standards-based measures of human rights. The first section examines the background to these measures, which in many ways began with political science efforts to code regime types using simple systems that typically divided the world into democracies and non-democracies, with various categories of regime in between (e.g. Lipset 1959; Fitzgibbon and Johnson 1967; Dahl 1971; Duff and McCamant 1976; Jaggers and Gurr 1995). The next section examines the most popular efforts at developing standards-based measures for human rights, starting with Raymond D. Gastil’s work on freedom (later taken over by Freedom House), the Political Terror Scale developed at Purdue University and now housed at the University of North Carolina Asheville, Oona Hathaway’s (2002) scale of torture, and Cingranelli and Richards’ ongoing project of developing measures for different sets of human rights. The section also considers attempts at providing measures for the de jure commitments states make by coding treaty ratifications (e.g. Keith 1999; Hafner-Burton 2005; Landman 2005) and reservations (Landman 2005a). The final section considers the main limitations of these kinds of measures, including the types of source materials used for coding, the limited sets of human rights that have been coded using these methods, the validity and reliability of the coding process, aggregation bias, and the problem of ‘variance truncation’.
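To give an intuition for what ‘variance truncation’ involves, the following simulation is a hedged illustration only, using simulated numbers rather than any of the datasets discussed in this chapter: it collapses a continuous latent level of rights protection into a five-point ordinal scale of the kind described above, so that countries with quite different underlying levels of protection receive identical scores.

```python
# Hedged illustration of 'variance truncation': collapsing a continuous latent
# level of protection into a few ordinal categories discards within-category
# variation. All numbers are simulated; none are real human rights scores.
import random

random.seed(42)

# Latent, continuous level of protection for 100 hypothetical countries.
latent = [random.gauss(0, 1) for _ in range(100)]

def truncate(value: float) -> int:
    """Map a latent score onto an illustrative 1-5 standards-based style scale."""
    cutpoints = [-1.5, -0.5, 0.5, 1.5]            # arbitrary cutpoints for illustration
    return 1 + sum(value > c for c in cutpoints)  # integer between 1 and 5

scaled = [truncate(v) for v in latent]

# The ordinal measure takes only five distinct values, so countries with quite
# different latent levels of protection end up with identical scores.
print("distinct latent values:", len(set(latent)))
print("distinct scale values: ", len(set(scaled)))
middle = [v for v, s in zip(latent, scaled) if s == 3]
print("latent range hidden inside scale value 3:",
      round(min(middle), 2), "to", round(max(middle), 2))
```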