ABSTRACT

The computational capabilities of artificial neural networks emerge from at least three key factors: the properties of the individual adaptive elements, the patterns of connectivity within the networks, and the rules governing the interactions between the elements. Many artificial neural networks are composed of simple elements (often referred to as neurons). Each element sums its weighted inputs (often referred to as synaptic inputs), and if the sum of these synaptic inputs equals or exceeds a threshold value, the element is activated (the equivalent of a neuronal action potential). Generally, there is little structure within these networks, and the elements are either highly interconnected or arranged as two or three layers of elements that receive converging synaptic input from the preceding layer.

The strengths, or weights, of the synaptic connections between the elements change according to rules or algorithms that are referred to as learning rules. For example, with a Hebbian learning rule, synaptic efficacy changes as a function of simultaneous activity in the presynaptic and postsynaptic elements. As a result of training, the synaptic weights are altered via the learning rules, and the arbitrarily connected network develops a functional structure that is appropriate for solving a particular problem. Theoretical work illustrates that networks with such apparently simple characteristics are capable of quite complex collective computations (e.g., Anderson & Rosenfeld, 1988; Bear, Cooper, & Ebner, 1987; Dobbins, Zucker, & Cynader, 1987; Fukushima, Miyake, & Ito, 1983; Hopfield, 1982, 1984; Hopfield & Tank, 1985, 1986; Koch & Segev, 1989; Lehky & Sejnowski, 1988; Linsker, 1986; Lippmann, 1989; see also, McClelland, Rumelhart, and the PDP Research Group, 1986; Pearson, Finkel, & Edelman, 1987; Rumelhart, McClelland, and the PDP Research Group, 1986; Sejnowski, Koch, & Churchland, 1988; Sejnowski & Rosenberg, 1986; Zipser & Andersen, 1988). Other artificial neural networks (e.g., Grossberg, 1971; Grossberg & Levine, 1987; Schmajuk, this volume) include distinct substructures and interconnections that more closely approximate those in certain brain regions, but their units do not correspond to individual neurons. Therefore, an intriguing question, which we are pursuing, is what computational capabilities emerge as the properties of the adaptive elements, the patterns of connectivity, and the learning rules within simulated neural networks are made more reflective of the details of neuronal biochemistry and physiology.
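The threshold element and Hebbian learning rule described above can be sketched as follows. This is a minimal illustration of the generic scheme the abstract summarizes, not code from the chapter; the function names, learning rate, and example values are our own choices.

```python
def activate(inputs, weights, threshold):
    """Threshold element: fire (return 1) if the weighted sum of
    the 'synaptic' inputs equals or exceeds the threshold, else 0."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

def hebbian_update(weights, inputs, output, lr=0.1):
    """Hebbian rule: each weight grows in proportion to the product of
    presynaptic activity (input) and postsynaptic activity (output),
    so only simultaneously active connections are strengthened."""
    return [w + lr * output * i for w, i in zip(weights, inputs)]

# Example: two of three presynaptic elements are active.
inputs = [1.0, 0.0, 1.0]
weights = [0.4, 0.2, 0.3]
out = activate(inputs, weights, threshold=0.5)   # 0.4 + 0.3 = 0.7 >= 0.5, so out = 1
weights = hebbian_update(weights, inputs, out)   # active connections strengthened
```

Repeated over many training trials, updates of this kind reshape the initially arbitrary weights into the functional structure mentioned above; the weight on an inactive input (here the second) is left unchanged.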