ABSTRACT

Neural network categorization algorithms have been quite diverse, but many of them have some general points in common. All the networks include modifiable connections between one layer of nodes encoding features of the sensory environment and another layer of nodes encoding categories of sensory patterns composed of those features. Some neural network categorization models involve supervised learning; that is, certain output nodes are trained to respond to certain patterns, and the changes in connection weights due to learning cause those same nodes to respond to more general classes of patterns. In C. von der Malsburg's early self-organizing model, only the connections from retinal afferents to cortical nodes have modifiable weights. Synaptic conservation was imposed on those connections to prevent the unbounded growth of synaptic strengths that would otherwise result from associative learning. Stephen Grossberg developed a model that has many principles in common with von der Malsburg's but does not use a synaptic conservation law for learning.
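A minimal sketch of the kind of rule the abstract alludes to (the symbols, the learning rate \(\eta\), and the normalization constant \(W\) are illustrative assumptions, not the authors' exact equations): a Hebbian increment on the weight \(w_{ij}\) from feature node \(i\) to category node \(j\), followed by a conservation step that holds the total synaptic strength onto each category node fixed,
\[
\Delta w_{ij} = \eta\, x_i\, y_j,
\qquad
w_{ij} \leftarrow W \,\frac{w_{ij} + \Delta w_{ij}}{\sum_k \bigl(w_{kj} + \Delta w_{kj}\bigr)},
\]
so that \(\sum_i w_{ij} = W\) after every update. A model without a conservation law, such as Grossberg's, must bound weight growth by some other means within the learning equation itself.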