CONTENTS

11.1 Introduction
11.2 Basic Feature-Mapping Models
11.3 Kohonen’s Self-Organizing Map
11.4 Convergence Analysis of Self-Organizing Maps
11.5 Self-Organizing Map Properties
11.6 Variants of Self-Organizing Maps Based on Robust Statistics
11.7 A Class of Split-Merge Self-Organizing Maps
11.8 Conclusions
References

ABSTRACT

Neural networks (NNs) are able to learn from their environment so that their performance improves. In several NN categories, learning is driven by a desired input-output mapping that the NN approximates; this is called supervised learning. Typical NNs trained with supervised learning are the multilayer perceptron and radial-basis function networks. Another principle is unsupervised, or self-organized, learning, which aims at identifying the important features of the input data without a supervisor. Unsupervised learning algorithms are equipped with a set of rules that locally update the synaptic weights of the network. The topologies of NNs trained with unsupervised learning resemble neurobiological structures more closely than those of NNs trained with supervised learning. The basic topologies of self-organizing NNs are as follows:

1. NNs of a single layer, where the neurons of the input layer are connected to the neurons of the output layer with feedforward connections and the neurons of the output layer are connected with lateral connections (a sketch of this topology follows the list)

2. NNs of multiple layers in which the self-organization proceeds from one layer to another
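
To make the first topology concrete, the following Python sketch (illustrative code, not taken from the chapter) wires an input layer to an output layer with feedforward weights and adds lateral connections among the output neurons; the class name, the rectified-linear activation, and the random lateral weights are assumptions made purely for illustration.

```python
import numpy as np

class SingleLayerSOFM:
    """Sketch of topology 1: an input layer fully feedforward-connected to an
    output layer whose neurons are also linked by lateral connections."""

    def __init__(self, n_inputs, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        # Feedforward synaptic weights: input layer -> output layer.
        self.W = rng.normal(size=(n_outputs, n_inputs))
        # Lateral connections among output neurons (a Mexican-hat profile, for
        # example, would put excitation near the diagonal and inhibition farther away).
        self.L = rng.normal(size=(n_outputs, n_outputs))
        np.fill_diagonal(self.L, 0.0)

    def activate(self, x, steps=5):
        y = self.W @ x                                  # feedforward activation
        for _ in range(steps):                          # lateral interaction settles the layer
            y = np.maximum(0.0, self.W @ x + self.L @ y)
        return y
```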

There are two self-organized learning methods:

1. Hebbian learning, which yields NNs that extract the principal components1,2

2. Competitive learning, which yields K-means clustering3 (both rules are sketched below)
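
The two update rules can be sketched as follows. This is a minimal illustration, not the formulation developed later in the chapter, and the learning rates, epoch counts, and function names are assumed for the example: Oja's form of the Hebbian rule drives the weight vector toward the first principal component of zero-mean data, while the winner-take-all competitive rule behaves as an online K-means.

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_rule(X, eta=0.01, epochs=50):
    """Hebbian learning with Oja's normalization: the weight vector converges
    toward the first principal component of the zero-mean data X."""
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        for x in X:
            y = w @ x                      # neuron output
            w += eta * y * (x - y * w)     # Hebbian term with Oja's decay
    return w / np.linalg.norm(w)

def competitive_learning(X, k=3, eta=0.05, epochs=50):
    """Winner-take-all competitive learning: only the winning prototype is
    moved toward the current input (an online form of K-means)."""
    w = X[rng.choice(len(X), size=k, replace=False)].copy()   # prototype vectors
    for _ in range(epochs):
        for x in X:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))  # nearest prototype
            w[winner] += eta * (x - w[winner])                  # move winner toward x
    return w
```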

Self-organized learning is essentially a repetitive updating of NN synaptic weights in response to input patterns, according to a set of prescribed rules, until a final configuration is obtained.2 A number of observations have motivated the research toward self-organized learning. It is worth noting that as early as 1952 Turing stated that “global ordering can arise from local interactions,” and von der Malsburg observed that self-organization is achieved through self-amplification, competition, and cooperation of the synaptic weights of the NN (see Reference 2, and references therein). In this chapter, we focus on competitive learning and, in particular, on self-organizing maps (SOMs). The latter can be viewed as a computational procedure for finding a discrete approximation of principal curves.4 Principal curves can be conceived of as a nonlinear principal component analysis method.1
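
A minimal SOM training loop, sketched under assumptions made only for illustration (a two-dimensional grid, a Gaussian neighborhood, and exponentially decaying learning rate and radius), shows how competition for the best-matching unit and cooperation through the neighborhood function shape the map:

```python
import numpy as np

def train_som(X, grid=(10, 10), epochs=20, eta0=0.5, sigma0=3.0, seed=0):
    """Minimal 2-D Kohonen SOM trained on data X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.uniform(X.min(), X.max(), size=(rows, cols, X.shape[1]))
    # Grid coordinates of every neuron, used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(X)
    t = 0
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            # Exponentially decaying learning rate and neighborhood radius.
            eta = eta0 * np.exp(-t / n_steps)
            sigma = sigma0 * np.exp(-t / n_steps)
            # Competition: the best-matching unit is the neuron whose weight
            # vector is closest to the current input.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Cooperation: Gaussian neighborhood centered on the BMU,
            # measured by distance on the neuron grid.
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
            # Adaptation: every neuron moves toward x, weighted by h.
            weights += eta * h[..., None] * (x - weights)
            t += 1
    return weights
```

Mapping each input to the grid position of its best-matching unit then gives a discrete, topology-preserving representation of the data.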