ABSTRACT

As this volume attests, in the last several years we have witnessed an explosion of interest in computational neural network models of learning and memory. In each of these models, information is stored in the “synaptic” coupling between vast arrays of converging inputs. Such distributed memories display many properties of human memory: recognition, association, generalization, and resistance to the partial destruction of elements within the network. An interesting feature of these models is that their performance is constrained by the patterns of connectivity within the network. This reinforces the view, long held by neurobiologists, that an understanding of neural circuitry holds a key to elucidating brain function. Hence, modern neural network models attempt to incorporate the salient architectural features of the brain regions of interest. However, another crucial aspect of network function concerns the way that the synaptic junctions are modified to change their strength of coupling. Most models have assumed a form of modification based on Hebb's (1949) proposal that synaptic coupling increases when the activity of converging elements is coincident. Variations on this venerable “learning rule” have been enormously successful in simulations of various forms of animal learning. However, this work has also shown that just as network behavior depends on connectivity, the capabilities of the network vary profoundly with different modification rules. What forms of synaptic modification are most appropriate? Again, we must look to the brain for the answer.
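To make the kind of “learning rule” discussed above concrete, the short Python sketch below implements a plain outer-product Hebbian update for a small set of converging inputs: each coupling grows in proportion to the coincident activity of the units it connects. The learning rate eta, the array sizes, and the binary activity patterns are illustrative assumptions, not details of any particular model described in this volume.

import numpy as np

# Minimal sketch of Hebb's (1949) rule: a "synaptic" coupling strengthens
# when presynaptic and postsynaptic activity are coincident.
# eta, the array sizes, and the random binary patterns are assumptions.

rng = np.random.default_rng(0)
n_inputs, n_outputs = 8, 4

W = np.zeros((n_outputs, n_inputs))   # distributed "synaptic" coupling matrix
eta = 0.1                             # learning rate (assumed)

def hebbian_step(W, pre, post, eta):
    # Increase each weight in proportion to the product of the activities
    # of the two units it connects (outer-product Hebb rule).
    return W + eta * np.outer(post, pre)

# Store a few activity patterns in the distributed weights.
for _ in range(5):
    pre = rng.integers(0, 2, n_inputs).astype(float)    # presynaptic pattern
    post = rng.integers(0, 2, n_outputs).astype(float)  # postsynaptic pattern
    W = hebbian_step(W, pre, post, eta)

# Recall: a stored (or partially degraded) input pattern drives the outputs
# through the learned couplings, which is the basis of the association and
# fault-tolerance properties mentioned above.
print(W @ pre)

Swapping in a different modification rule, for example one with a decay term or a postsynaptic threshold, changes only the body of hebbian_step, yet, as the abstract notes, such changes can profoundly alter what the network is able to learn.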