ABSTRACT

Connectionism has made an important contribution to the intriguing challenge of finding a physical basis for mind. To understand its contribution, we need to see it in the context of the surrounding views and knowledge at the time. Placing it in context inevitably leads to charting its rise in the mid-1980s, its period of ascendancy throughout the 1990s, and a plateau of interest in the 2000s. During its ascendancy, it was seen by many as providing a new paradigm for the study of mind. In what follows, we shall trace the beginnings and strengths of connectionism before turning to a consideration of some of the issues and problems that began to beset it. As will become apparent, some characteristics of connectionism, such as its relatively abstract modelling of brain functions, can be seen as an advantage, or a limitation, depending on the current perspective of the scientific community. Finally, we shall seek to evaluate and assess its lasting contributions and present state.

Connectionism is based both on the alleged operation of the nervous system and on distributed computation. Neuron-like units are connected by means of weighted links, in a manner that resembles the synaptic connections between neurons in the brain. These weighted links capture the knowledge of the system; they may be arrived at either analytically or by “training” the system with repeated presentations of input-output training examples. In the last two decades of the twentieth century, considerable effort was directed towards exploring the implications of the connectionist approach for our understanding and modelling of the mind. However, connectionism has a longer history, and its antecedents in fact predate classical artificial intelligence. As long ago as 1943, McCulloch and Pitts wrote a paper called “A Logical Calculus of the Ideas Immanent in Nervous Activity,” in which they provided an influential computational analysis of what they believed to be a reasonable abstraction of brain-like systems. To make the step from the complexity of the brain to binary computation required them to make a number of simplifications.

The ground for McCulloch and Pitts was prepared by earlier work. Until a hundred and twenty years ago the scientific community still believed that the nervous system was a continuous network, similar to the blood system, through which electricity flowed. Then a most important discovery was made in the nineteenth century by the Spanish scientist Ramón y Cajal. He found that there were tiny gaps, or synapses,

approximately 0.00002 (1/50,000) millimetres across, in what had been considered to be a continuous neural tube. This discovery paved the way for the notion of separable neurons communicating with one another and quickly gave rise to the doctrine of the neuron (Waldeyer 1891). Cajal was also responsible for the suggestion that learning involved adjustments of the connections between neurons. And it was not long before William James (James 1961 [1892]), the great philosopher and psychologist, speculated about how and when neural learning might occur. His idea was that when two processes in the brain are active at the same time, they tend to make permanent connections (e.g. the sight of an object and the sound of its name). But this idea was not to go much further for over fifty years.

By ignoring the physical and chemical complexity of the nervous system, McCulloch and Pitts (1943) were able to build their abstract model neurons into networks capable of computing logical functions. In particular, their paper showed how modifying the weight coefficients and thresholds in networks could result in different Boolean functions being computed. They proved that, by gluing together simple functions such as AND, OR, and NOT, all possible Boolean functions could be computed by their networks. Although they did not take up James’ challenging question of how and when synapses are modified by learning, McCulloch and Pitts’ seminal work showed the possible utility of abstract computational analysis for the study of the mind/brain relation. They believed that they had cracked the problem of linking brain activity to George Boole’s language of thought. This has not worked out as planned, but nonetheless their paper remains a cornerstone of modern connectionist research and computer science.

Their first simplification arose from the observation that neural communication is thresholded. That is, the spike action potential is all or none; the neuron either fires fully or it does not fire at all (the depolarization needed to fire a neuron is about 10 millivolts). Thus the neuron could be conceived of as a binary computing device, an idea said to have inspired von Neumann when designing the modern digital computer. The other important simplification was that the synapses were treated as numerical weightings between the binary computing elements. Computation proceeded by summing the weighted inputs to an element and using the binary threshold as an output function (Figure 12.1).

Later in the same decade the Canadian psychologist Donald Hebb made James’ learning proposal concrete. Although he cites neither James nor McCulloch and Pitts, Hebb (1949) took a step beyond them in attempting to causally relate memory and perception to the physical world. His idea was that the representations of objects may be considered to be states (or patterns) of neural activity in the brain. He proposed that, each time a neural pathway is used, there is a metabolic change in the synaptic connection between the neurons in the path that facilitates subsequent signal transmission. In this way, the more often two neurons are used together, the stronger their connection becomes and the greater the likelihood of one activating the other. The synaptic connections come to represent the statistical correlates of experience. Thus in learning to recognise objects, groups of neurons are linked together to form

assemblies (the neurons in any assembly may come from many areas of the brain, e.g. visual and motor areas). This notion of modifiable synapses, or synaptic plasticity, and its role in learning and memory still persists today. Although to some in the neuroscience community Hebb’s ideas are oversimplistic, it has to be remembered that little was known about these issues in his day, and he did not have the technology to carry out the physiological experiments. Indeed, it was not until 1973 that Bliss and Lomo first reported, in detail, that, following brief pulses of stimulation, there is a sustained increase in the amplitude of electrically evoked responses in specific neural pathways. This is the now well-known phenomenon of long-term potentiation. Subsequent research has shown that one of a variety of synaptic types is indeed a Hebbian synapse (e.g. Kelso et al. 1986; Alkon 1987).

Taken together, then, the approaches of Hebb and McCulloch-Pitts provided a new avenue to begin to study the physical basis of mind. On the one hand, the McCulloch-Pitts approach suggested a methodology for a computational analysis of the brain. On the other hand, Hebb’s approach gave us an idea of how a device like the nervous system could learn the statistical correlates of the world needed to support perception.
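Hebb’s proposal can be made concrete with a minimal sketch, given here in Python. It is our illustration rather than anything from Hebb (1949): the function name hebbian_update, the learning rate of 0.1, and the toy activity patterns are all assumptions chosen for clarity. The only point it demonstrates is the rule described above, that a connection between two units is strengthened whenever both are active together, so that the weights come to reflect the statistical correlates of experience.

# Illustrative Hebbian learning sketch (not from Hebb 1949): each weight
# grows whenever the two units it connects are active at the same time,
# so connection strengths come to reflect co-occurrence statistics.

def hebbian_update(weights, activities, rate=0.1):
    """Increase weights[i][j] by rate whenever units i and j are co-active."""
    n = len(activities)
    for i in range(n):
        for j in range(n):
            if i != j and activities[i] == 1 and activities[j] == 1:
                weights[i][j] += rate
    return weights

# Three units; units 0 and 1 are repeatedly active together (e.g. the sight
# of an object and the sound of its name), unit 2 only occasionally.
patterns = [[1, 1, 0], [1, 1, 0], [1, 1, 1], [0, 0, 1]]
n_units = 3
weights = [[0.0] * n_units for _ in range(n_units)]

for pattern in patterns:
    hebbian_update(weights, pattern)

# The connection between units 0 and 1 ends up strongest, mirroring James'
# and Hebb's intuition that co-activity builds associations.
for row in weights:
    print([round(w, 2) for w in row])

Running the sketch shows the 0-1 connection growing with each joint activation while the other connections lag behind; this is the sense in which, on Hebb’s account, synaptic weights track how often things occur together in experience.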

Figure 12.1 McCulloch and Pitts net for the Boolean function AND.
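The mechanism behind Figure 12.1 can also be sketched in a few lines of Python. The sketch below is ours, not McCulloch and Pitts’ own formalism: the function name mp_unit and the particular weights and thresholds chosen for AND, OR, NOT, and the composed XOR are illustrative assumptions. What it does reproduce is the mechanism described in the text: weighted binary inputs are summed and the unit fires (outputs 1) only if the sum reaches its threshold, and such units can be glued together to compute further Boolean functions.

# A minimal sketch of a McCulloch-Pitts style binary threshold unit.

def mp_unit(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of binary inputs reaches threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: both inputs must be active (weights 1 and 1, threshold 2), as in Figure 12.1.
AND = lambda x1, x2: mp_unit([x1, x2], [1, 1], threshold=2)
# OR: either input suffices (threshold 1).
OR = lambda x1, x2: mp_unit([x1, x2], [1, 1], threshold=1)
# NOT: a single inhibitory (negative) weight with threshold 0.
NOT = lambda x: mp_unit([x], [-1], threshold=0)

# Gluing such units together yields further Boolean functions, e.g. XOR:
XOR = lambda x1, x2: AND(OR(x1, x2), NOT(AND(x1, x2)))

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, AND(x1, x2), OR(x1, x2), XOR(x1, x2))

The choice of weights and thresholds is what fixes which Boolean function a unit computes, which is precisely the point McCulloch and Pitts exploited in showing that networks of such units can compute any Boolean function.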