ABSTRACT

Approach to modelling used in AI (artificial intelligence), based on the broadly associationist principle of linear circuitry. Rooted in classical learning theory, in which learning involved the elaboration of stimulus-response (S-R) connections, it provided AI modellers with a relatively simple starting point for designing or hypothesising circuitry which emulated this principle. In doing so, they broke away from the notion that the brain operated like a computer processing a symbolic language: by incorporating a differential weighting of the postulated synapses linking their virtual 'neurons', they were able to develop what are known as 'neural networks', capable of providing analogues of more complex psychological processes. The central advantage of this was that the system could 'learn' and neither required, nor was constrained by, preset programming, connections developing in the light of feedback (a minimal sketch of such feedback-driven weight adjustment is given below). Few connectionists, however, now believe the opposition between Connectionist and Computational approaches to be clear-cut, and some believe them to be complementary. Historically, Connectionism thus represents the 'analog' theoretical strand in AI.

Neural network theory provides a model of how representation (see representationalism) may be neurally implemented in a distributed fashion, such that my idea of a 'tree' does not have a single corresponding site in my brain (see the second sketch below). As with AI as a whole, a number of philosophical issues have surfaced over recent decades concerning the sufficiency of neural network models to account adequately for all psychological phenomena, with some exponents, such as Churchland (1986), Churchland (1988) and Stich (1983), adopting radically reductionist positions, especially in relation to Folk Psychology. These debates are too complex for summary here. Parallel distributed processing (PDP) represented a major advance in connectionist theorising.
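
The feedback-driven learning described above can be made concrete with a brief sketch in Python. This is an illustrative reconstruction only: it uses a standard error-corrective (delta/perceptron) update as one simple instance of the principle, not a rendering of any particular historical model, and the names (train_unit, lr, epochs) are invented for the example.

    # A minimal sketch of a single weighted 'virtual neuron' trained by
    # error feedback. The delta/perceptron update and all names here are
    # illustrative assumptions, not a specific historical model.
    import random

    def train_unit(examples, lr=0.1, epochs=50):
        """Learn connection weights from (inputs, target) pairs."""
        n_inputs = len(examples[0][0])
        weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in examples:
                # Weighted sum stands in for synaptic integration.
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                output = 1.0 if activation > 0 else 0.0
                error = target - output  # feedback signal
                # Connections strengthen or weaken in the light of feedback;
                # no preset program specifies the final weights.
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # The unit learns logical AND purely from examples.
    data = [([0, 0], 0.0), ([0, 1], 0.0), ([1, 0], 0.0), ([1, 1], 1.0)]
    weights, bias = train_unit(data)

Run on the four input-output pairs for logical AND, the weights settle on a solution without any connection having been programmed in advance, which is the sense in which such a system 'learns' rather than executes a preset program.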
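
The distributed character of representation mentioned above can be sketched in the same spirit. In the toy fragment below, every concept is a pattern of activation over one shared pool of eight units, so 'tree' corresponds to a pattern rather than a site; the activation values and the dot-product overlap measure are assumptions made purely for illustration.

    # A minimal sketch of distributed representation: each concept is a
    # pattern of activation spread across one shared pool of units, so no
    # single unit is 'the' tree site. Activation values are invented for
    # illustration, not empirical data.
    concepts = {
        "tree":   [0.9, 0.1, 0.8, 0.0, 0.7, 0.2, 0.0, 0.6],
        "bush":   [0.8, 0.2, 0.7, 0.1, 0.6, 0.3, 0.0, 0.5],
        "hammer": [0.0, 0.9, 0.1, 0.8, 0.0, 0.1, 0.9, 0.0],
    }

    def overlap(a, b):
        """Similarity as the dot product of two activation patterns."""
        return sum(x * y for x, y in zip(a, b))

    # Related concepts share much of their pattern; unrelated ones do not.
    print(overlap(concepts["tree"], concepts["bush"]))    # relatively high
    print(overlap(concepts["tree"], concepts["hammer"]))  # relatively low

Because related concepts share portions of a single pattern, graded similarity falls out of the representation itself, one feature that made distributed schemes attractive to PDP theorists.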