
Introduction: a plethora of complexities

This chapter extends a discussion that has been going on for some time regarding the relationships between different views of economic complexity, or of complexity more generally, with Velupillai and some of his associates (2005) broadly on one side and Rosser (2009) on the other. The earlier round tended to focus on the relative usefulness of various versions of computational complexity as contrasted with what Rosser labelled dynamic complexity. Here we shall seek to understand more deeply the roots of each of these perspectives in logic and evolutionary theory, particularly the concept of emergence associated with certain versions of evolutionary theory. While this discussion will highlight more sharply the contrasts between these views, it will also lead to the position of their possible reconciliation in actually existing economic systems, perhaps fulfilling the desire stated by Professor Velupillai in the opening quotation provided above.

Following Day (1994), Rosser (1999, 2009) defined dynamic complexity as being determined by the deterministically endogenous nature of the dynamics of a system: it does not converge on a point or a limit cycle, nor does it simply expand or implode exponentially. This ‘broad tent’ definition includes the ‘four Cs’ of cybernetics, catastrophe theory, chaos theory and the ‘small tent’ complexity of heterogeneous agent models in which local interactions dominate the dynamics. Curiously, while Horgan (1997: 303) considers chaotic dynamics to be complex (and coined the term ‘chaoplexity’ to link the two concepts), the list of 45 definitions he provides, gathered by Seth Lloyd, does not clearly include anything that fits this view so widely used by economists.2 Indeed, many of
the definitions appear to be variations on algorithmic or computational complexity, many of them information measures of one sort or another. This last is not surprising since, as Velupillai (2000) has argued, Shannon’s information measure is the foundation for the Kolmogorov-Solomonoff-Chaitin-Rissanen varieties of algorithmic and computational complexity. Most of these measures involve some variation on the minimum length of a computer program that will solve some problem. While there are several such measures, a general argument in favour of all of them is that each is precisely defined and can provide a measure of a degree of complexity. This has been argued to be a favourable aspect of these definitions (Markose 2005; Israel 2005), even if they do not clearly allow one to distinguish between that which is complex and that which is not. At the same time they allow for qualitatively higher degrees of complexity. In particular, a program is truly computationally complex if it will not halt; it does not compute. Bringing in the halting problem is what brings in, through the back door, the question of logic, via the work of Church (1936) and Turing (1937) on recursive systems, which in turn depends on the work of Gödel.

In general, one advantage of the dynamic definition is that it provides a clearer distinction between systems that are complex and those that are not, although it too has some fuzzy zones. Thus, while the definition given above categorizes systems with deterministically endogenous limit cycles as not complex, some observers would say that any system with a periodic cycle is complex, since such cycles underlie endogenous business cycles in macroeconomics. Others argue that complexity only emerges with aperiodic cycles: the appearance of chaos, of discontinuities associated with bifurcations, multiple basins of attraction or catastrophic leaps, or of some form of non-chaotic aperiodicity.
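The flavour of the information-based measures just discussed can be conveyed by a small sketch, offered here for illustration and not drawn from the chapter itself. Shannon entropy measures the statistical disorder of a string, while the length of its compressed form serves as a crude, computable stand-in for the minimum description length that the (uncomputable) algorithmic measures formalize; the function names and example strings are the author’s own illustrative choices.

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy (bits per symbol) of the string's empirical symbol distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_length(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a computable proxy for
    the minimum-program-length idea behind the algorithmic measures."""
    return len(zlib.compress(s.encode()))

random.seed(0)
regular = "ab" * 500                                        # highly patterned
noisy = "".join(random.choice("ab") for _ in range(1000))   # statistically similar, but patternless

# Both strings have (nearly) the same Shannon entropy per symbol,
# yet the periodic one admits a far shorter description.
print(shannon_entropy(regular), shannon_entropy(noisy))
print(compressed_length(regular), compressed_length(noisy))
```

The point of the contrast is that the statistical measure cannot tell the two strings apart, whereas the description-length proxy can, which is precisely why the algorithmic varieties are regarded as measures of complexity rather than of mere disorder.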
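The gradation invoked by the dynamic definition, from convergence to a point, through a periodic limit cycle, to fully aperiodic dynamics, can be made concrete with the logistic map, a standard one-dimensional nonlinear example chosen here for illustration rather than taken from the chapter; the parameter values and function names are the author’s own.

```python
def logistic_orbit(r: float, x0: float = 0.2, transient: int = 500, n: int = 64):
    """Iterate the logistic map x -> r*x*(1-x), discard transients, return n points."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

def distinct_states(r: float) -> int:
    """Rough count of distinct long-run states: 1 means a fixed point,
    a small k means a k-cycle, many means aperiodic (chaotic) dynamics."""
    return len(set(logistic_orbit(r)))

# r = 2.8: converges to a point (not complex under the dynamic definition)
# r = 3.2: settles on a 2-cycle (the borderline case debated in the text)
# r = 3.9: aperiodic, chaotic orbit (dynamically complex)
print(distinct_states(2.8), distinct_states(3.2), distinct_states(3.9))
```

The same equation produces all three regimes as the single parameter r varies, which is why nonlinearity, rather than any particular functional form, is identified as the source of dynamic complexity.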
So, there is a gradation from very simple systems that merely converge to a point or a growth path, all the way to fully aperiodic or discontinuous ones. In almost all cases, some form of nonlinearity is present in dynamically complex systems and is the source of the complexity, whatever form it takes.3 This sort of hierarchical categorization shows up in the levels identified by Wolfram (1984; see also Lee 2005), which arguably combine computational with dynamic concepts.

The idea of either emergence or evolution, much less emergent evolution, is not a necessary part of the dynamic conceptualization of complexity, and certainly is not so for the various computational ones. However, it has been argued by many that evolution is inherently a dynamically complex system (Foster 1997; Potts 2000), possibly even the paradigmatic complex system of them all (Hodgson 2006a). Curiously, just as complex dynamics are not necessarily evolutionary, emergence is not always present in evolution, nor is it arguably its core (which is presumably natural selection with variation, mutation and inheritability). But the most dramatic events of evolution have been those involving emergence: the appearance of higher-order structures or beings, such as the appearance of multi-cellular organisms. While most evolutionary theorists reject teleological perspectives that see increasing complexity as the inevitable outcome of successive emergence events (Gould 2002), the association of emergence with
evolution goes back to the very invention of the concept of emergence by Lewes (1875), who was influenced by John Stuart Mill’s (1843) study of heteropathic laws, and who in turn influenced C. Lloyd Morgan (1923) and his development of the concept of emergent evolution. While many advocates of computational ideas find the idea of emergence essentially empty, we shall see that it reappears when one considers the economy as a computational system (Mirowski 2007a). Whatever its analytic content, emergence seems to happen in the increasingly computation-based real economies we observe, and it is a phenomenon of central importance. Thus, some effort at reconciling these ideas is desirable, although that effort will first take us on a journey through the deeper controversies within the logic of computability.