ABSTRACT

As stressed throughout the book, the primary aim of PRA is to identify and assess the likelihood of sociopolitical developments that may harm a given business venture. These operations rest on forward-looking causal thinking, which implies the use or manipulation of existing data, but frequently also the production of new data. A traditional distinction, already recalled in Chapter 1, sets qualitative apart from quantitative approaches to PRA. The former are often described as ‘subjective’ techniques based on the analyst’s or manager’s ‘judgment’, while the latter may include statistical procedures using “external economic indicators, internal economic indicators, and political indicators” (Kim, 2011, p. 377). Pahud de Mortanges and Allers (1996), for instance, propose the following classification: (1) qualitative unstructured methods, such as the so-called ‘grand tours’ and ‘old hands’ described in Chapter 1; (2) qualitative structured methods, including the Delphi technique, standardized ‘checklists’ and scenarios; and (3) quantitative methods, which are supposed to “reduce the bias of the subjectivity of qualitative methods [. . .] through use of certain measurable factors that act as lead indicators” (p. 307). However, after sketching out this classification, the authors warn that, to serve the purpose, “reliable data have to be collected adequately, sophisticated computer programs are required, and experts are needed to carefully interpret the results” (p. 308, emphasis added). In sum, even when political risks are carefully operationalized,2 their usefulness in forecasting models “is highly dependent on the quality of the data, mainly on effective classification of the events” (Burnley et al., 2008, p. 3). As further highlighted in Chapter 5, despite claims about the higher ‘objectivity’ of quantitative methods, it is virtually impossible to exclude elements of human judgment from PR assessment.
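A toy example makes this point concrete. The sketch below computes a composite risk score from ‘measurable’ lead indicators of the kind discussed above; the indicator names, values, weights and normalization bounds are entirely hypothetical and do not reproduce any provider’s actual methodology. Even in this fully ‘quantitative’ procedure, the choice of indicators, weights and bounds is itself an exercise of human judgment.

```python
# Hypothetical composite political risk index: a minimal sketch.
# All indicators, weights, and bounds are illustrative assumptions only.

def normalize(value, lo, hi):
    """Scale a raw indicator onto [0, 1]; the bounds are an analyst's choice."""
    return (value - lo) / (hi - lo)

def composite_index(indicators, weights, bounds):
    """Weighted sum of normalized indicators, rescaled to a 0-100 score."""
    total_weight = sum(weights.values())
    score = sum(
        weights[name] * normalize(indicators[name], *bounds[name])
        for name in indicators
    )
    return 100 * score / total_weight

# Illustrative 'quantitative' indicators for a fictional country:
indicators = {"strikes_per_year": 12, "riots_per_year": 3, "coups_last_decade": 0}
# The weights and normalization bounds encode subjective judgment:
weights = {"strikes_per_year": 0.3, "riots_per_year": 0.5, "coups_last_decade": 0.2}
bounds = {"strikes_per_year": (0, 50), "riots_per_year": (0, 20), "coups_last_decade": (0, 3)}

print(round(composite_index(indicators, weights, bounds), 1))  # prints 14.7
```

Doubling the weight on riots, or narrowing the bounds on strikes, would change the country’s score without any change in the underlying data, which is precisely where the ‘subjective’ element re-enters the ostensibly objective calculation.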
To take a typical instance: surveying ‘quantitative’ country risk analysis methods and their political risk component, Nath (2008) lists, alongside some genuinely quantitative indicators such as the number of strikes, riots or coups, variables which would hardly qualify as ‘quantitative’, such as ‘high and low’ political violence or the ICRG political risk rating. This is not to deny that the use of purely quantitative indicators is possible, and sometimes even indispensable, in PRA, provided that the nature of the measured phenomena allows for the construction of such indicators. Rather, the point worth underscoring here is that at its very core, political risk assessment entails an element of human judgment, which may take very diverse forms. ‘Subjective’ judgment is key when it comes to choosing a given approach, selecting an indicator or even constructing an index. It should be stressed that this is a feature which PRA shares with the social sciences in general. On the one hand, opinions still diverge on whether or not quantitative and qualitative social research are fundamentally different in their logic of inference (Brady, 2004); on the other hand, there is widespread convergence on the idea that in both cases causal language should be used with caution in the social sciences and that the quantitative template leaves some important problems unsolved, for instance in the case of omitted variables and endogeneity (Collier et al., 2004). In any case, it can be said that ‘quantitative’ and ‘qualitative’ as methodological categories are far less discrete than might initially appear (Creswell, 2014, p. 87), especially as far as PRA is concerned. In fact, any exercise in political risk measurement is influenced by two sets of problems which are relevant not only to PRA but have larger implications for global economic governance as well as for public and private policy-making: the ‘power of numbers’ and the role of ‘expertise’.

With regard to the first issue, it should be stressed that today the production, publication and commercialization of indicators or composite indexes is crucial to generate website traffic and/or to boost the demand for the provider’s consultancy services (Davis et al., 2012, p. 14). Although quantitative indexes or rankings are admittedly no substitute for in-depth, tailor-made reports, there is certainly a demand for them from consumers of PR intelligence services, owing in part to their success as ‘marketing devices’ (Interview, 2015a; Interview, 2015f), epitomized by the fact that all major PR consultancies offer similar products, such as the PRS Group’s Global Risk Index, Aon’s Political Risk Map, Euromoney’s country risk score containing a political risk indicator, the Political Monitor’s Political Risk Index for Asia, or Verisk Maplecroft’s Political Risk Atlas mentioned in Chapter 2. Obviously, PR indexes are just one of many categories of tools for PRA, and certainly not the sharpest in terms of accuracy, as will be shown throughout the chapter. Yet, to the extent that they represent a response, however partial, to the long-standing quest for measurability in a world perceived as magmatic and unpredictable, they can be connected to the pervasive trend towards practices of ‘quantifying, classifying, and formalizing’ as defining aspects of modern life (Lampland & Star, 2008). Upon closer examination, the appeal of the idea of quantification applied to political risk can be explained by reference to at least four – however controversial – features of numerical indicators vis-à-vis qualitative assessments: their putative (1) objectivity; (2) persuasiveness; (3) brevity; and (4) comparability. As already highlighted in Chapter 2, objectivity is the ever-elusive chimera of positivists.
Approaching numbers as strategies of communication, Porter (1996) points out that the resonance of the concept of ‘objectivity’ is overwhelmingly positive, having to do with the “exclusion of judgment, the struggle against subjectivity . . . [which] . . . has long been taken to be one of the hallmarks of science” (p. ix). To translate a message into numbers is to “summarize complexity, not by accident but by design, and speak with a quantitative and apparently objective authority that commands respect” (Morse, 2004, p. xiv). The aura of apparent impartiality assumed by the ‘quantified’ entity leads to the second point mentioned above – that is, the persuasiveness of numbers: indexes in this sense can be thought of as numbers whose ‘vested authority’ grows as they circulate:

‘Raw’ information typically is collected and compiled by workers near the bottom of organizational hierarchies; but as it is manipulated, parsed and moved upward, it is transformed so as to make it accessible and amenable for those near the top, who make the big decisions. This “editing” removes assumptions, discretion and ambiguity, a process that results in “uncertainty absorption”: information appears more robust than it actually is.