ABSTRACT

This chapter considers a network with both inputs and outputs, but with no feedback from the environment to say what those outputs should be or whether they are correct. The network must therefore discover for itself patterns, features, regularities, correlations, or categories in the input data and code for them in the output; the units and connections must thus display some degree of self-organization. In most of the cases considered in this chapter, the architecture and learning rule arise from intuitively plausible suggestions. The architectures considered for principal component analysis have all been one-layer feed-forward networks; other networks, with more layers or with lateral connections, can also perform it and may have some advantages. The chapter also considers techniques based on connections that learn using a modified Hebb rule. The purpose here is not clustering or classification of patterns, but rather measuring familiarity or projecting onto the principal components of the input data.
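As a concrete illustration of the kind of unsupervised, Hebb-like learning the abstract refers to, the following is a minimal sketch (not taken from the chapter) using Oja's single-unit rule, one well-known modified Hebb rule whose weight vector converges toward the first principal component of zero-mean input data. The synthetic data, learning rate, and variable names are assumptions made for the example.

```python
# Sketch of a modified Hebb rule (Oja's rule) for a single linear output unit.
# Assumption: zero-mean inputs; the weight vector then tends toward the
# direction of largest variance (the first principal component).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data with one dominant direction (illustrative only).
n_samples, n_inputs = 5000, 4
data = rng.normal(size=(n_samples, n_inputs)) @ np.diag([3.0, 1.0, 0.5, 0.2])
data -= data.mean(axis=0)

w = rng.normal(scale=0.1, size=n_inputs)    # initial weights
eta = 0.01                                   # learning rate (assumed)

for x in data:
    y = w @ x                                # linear output of the unit
    w += eta * y * (x - y * w)               # Hebbian term y*x with a decay
                                             # term that keeps |w| bounded

# Compare the learned weights with the leading eigenvector of the covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(data, rowvar=False))
pc1 = eigvecs[:, -1]
print("learned w :", np.round(w, 3))
print("true PC1  :", np.round(pc1 * np.sign(pc1 @ w), 3))
```

Running the sketch should show the learned weight vector closely aligned (up to sign) with the covariance matrix's leading eigenvector, which is the sense in which such a unit "projects onto the principal components" rather than classifying patterns.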