ABSTRACT

How does Schrödinger’s question touch on philosophical propositions, such as René Descartes’ famous Cogito ergo sum (‘I think, therefore I am’)? This article tries to cast thinking in terms of probabilistic beliefs and thereby link consciousness to probabilistic descriptions of self-organization – of the sort found in statistical physics and dynamical systems theory. This agenda may sound ambitious, but it may be easier than it appears at first glance. The trick is to associate philosophical notions – such as ‘I think’ and ‘I am’ – with formal constructs, such as probabilistic inference and the implications of existing over extended periods of time. In brief, we start off by considering what it means for a system, like a cell or a brain, to exist. Mathematically, this implies certain properties that constrain the way the states of such systems must change, such as ergodicity; that is, the time average of any measurement of such a system must converge (Birkhoff, 1931). Crucially, if we consider these fundamental properties in the context of a separation between internal and external states, one can show that the internal states minimize the same quantity that is minimized in Bayesian statistics. This means one can always interpret a system that exists as making probabilistic inferences about its external milieu. Formally, this turns Descartes’ proposition on its head, suggesting that ‘I am [ergodic], therefore I think’. However, if a system ‘thinks’ – in the sense of updating probabilistic beliefs – then what does it believe? We will consider the only self-consistent (prior) belief, which is ‘I think, therefore I am [ergodic]’. This prior belief is formally equivalent to minimizing uncertainty about the (external) causes of the sensory states that intervene between external and internal states. This is precisely the imperative that drives both scientific hypothesis testing and active perception (Gregory, 1980; Helmholtz, 1866/1962; O’Regan & Noë, 2001; Wurtz, McAlonan, Cavanaugh, & Berman, 2011). Furthermore, it provides a nice perspective on the active sampling of our sensorium, suggesting that perceptual processing can be associated with Bayesian belief updating (Dayan, Hinton, & Neal, 1995) – a dynamic that can therefore be derived from the basic principles of self-organization. To make these arguments more concrete, we will consider an example of the ensuing active inference, using simulations originally described in Friston, Adams, Perrinet, and Breakspear (2012).

This article comprises three sections. The first section draws on two recent developments in formal treatments of self-organization. The first is an application to Bayesian inference and embodied perception in the brain (Friston, Kilner, & Harrison, 2006). The second is an attempt to understand the nature of self-organization in random dynamical systems (Ashby, 1947; Haken, 1983; Maturana & Varela, 1980; Nicolis & Prigogine, 1977; Schrödinger, 1944). This material has been presented previously (Friston, 2013) and, although rather technical, has some relatively simple implications. Its premise is that biological self-organization is (almost) inevitable and manifests as a form of active Bayesian inference. We have previously suggested (Friston, 2013) that the events ‘within the spatial boundary of a living organism’ (Schrödinger, 1944) may arise from the very existence of a boundary or blanket – and that a (Markov) blanket may be inevitable under local coupling among dynamical systems. We will see that the very existence of a Markov blanket means we can interpret the self-organization of internal states in terms of Bayesian inference about external states.

The second section looks more closely at the nature of Bayesian inference, in terms of the prior beliefs that might be associated with a system that minimizes the dispersion (entropy) of its external states through action. We will see that these prior beliefs lead to a pre-emptive sampling of the sensorium that tries to minimize uncertainty about the hypotheses encoded by the internal states.

The final section illustrates these ideas using simulations of saccadic searches to unpack the nature of active inference. This section contains a brief description of an agent’s prior beliefs about the causal structure of its (visual) world and how that world should be sampled to minimize uncertainty. Finally, the results of simulated saccadic eye movements and the associated inference are described, with a special emphasis on the selection of the beliefs and hypotheses that are entertained by our enactive brain.
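The appeal to ergodicity above – that any time average of a measurement on an existing system must converge (Birkhoff, 1931) – can be illustrated with a toy example. The two-state Markov chain below, with arbitrarily chosen transition probabilities, is purely illustrative and not part of the article’s formal treatment: the fraction of time the chain spends in one state converges to that state’s probability under the stationary distribution.

```python
import random

random.seed(1)

# Illustrative ergodic system: a two-state Markov chain.
# Flip probabilities (chosen arbitrarily for this sketch):
#   from state 0, jump to 1 with probability 0.1
#   from state 1, jump to 0 with probability 0.3
# Stationary probability of state 1 is 0.1 / (0.1 + 0.3) = 0.25.
FLIP = {0: 0.1, 1: 0.3}

state, total, n = 0, 0, 200_000
for _ in range(n):
    if random.random() < FLIP[state]:
        state = 1 - state
    total += state  # measurement: indicator of occupying state 1

time_average = total / n
print(time_average)  # close to the stationary probability 0.25
```

The point of the sketch is Birkhoff’s theorem in miniature: for an ergodic process, time averages of a measurement converge to its expectation under the stationary (ensemble) distribution – the property the article uses to characterize what it means for a system to exist over extended periods of time.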
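The link between Bayesian belief updating and uncertainty-minimizing sampling described above can be sketched in miniature. The following toy is not the simulations of Friston, Adams, Perrinet, and Breakspear (2012); it is a hypothetical agent that entertains beliefs over two competing ‘scenes’, ‘saccades’ to whichever of three locations is expected to reduce posterior entropy the most, and then updates its beliefs with Bayes’ rule. All likelihood values are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Two hypothetical scenes; each assigns a probability of observing a
# feature at each of three locations (values chosen for illustration).
LIK = {
    "scene_A": [0.9, 0.5, 0.1],
    "scene_B": [0.1, 0.5, 0.9],
}
TRUE_SCENE = "scene_A"

def entropy(p):
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def bayes_update(prior, loc, obs):
    # Bayes' rule: p(h | o) is proportional to p(o | h, loc) * p(h)
    post = {h: prior[h] * (LIK[h][loc] if obs else 1 - LIK[h][loc])
            for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

def expected_entropy(prior, loc):
    # Posterior entropy averaged over the two possible outcomes at loc
    p_obs = sum(prior[h] * LIK[h][loc] for h in prior)
    return (p_obs * entropy(bayes_update(prior, loc, True))
            + (1 - p_obs) * entropy(bayes_update(prior, loc, False)))

beliefs = {"scene_A": 0.5, "scene_B": 0.5}
for _ in range(8):
    # 'Saccade' to the location that minimizes expected posterior entropy;
    # the uninformative middle location (likelihood 0.5 under both
    # hypotheses) is never chosen.
    loc = min(range(3), key=lambda l: expected_entropy(beliefs, l))
    obs = random.random() < LIK[TRUE_SCENE][loc]
    beliefs = bayes_update(beliefs, loc, obs)

print(beliefs["scene_A"])  # beliefs concentrate on the true scene
```

This is the abstract’s imperative in its simplest form: sampling where uncertainty about the hidden causes is expected to fall fastest – the same logic that the article attributes to scientific hypothesis testing and to saccadic searches of a visual scene.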