ABSTRACT

This chapter analyzes the close relationship between the biologically motivated Hebbian self-organizing principle that governs neural assemblies and classical principal component analysis (PCA), the method used by statisticians for almost a century for multivariate data analysis and feature extraction. Classical PCA is based on the second-order statistics of the data and, in particular, on the eigenstructure of the data covariance matrix. Classical PCA neural models incorporate only cells with linear activation functions; the cell assembly self-organizes by modifying the synaptic weights of the network using only the local neural activations, without reference to any external teachers or target values. The chapter presents the basic PCA theorem along with examples illustrating the related concepts. Extensions of the classical PCA models have been proposed to cope with nonlinear data dependencies. One such extension, M. A. Kramer's nonlinear PCA neural network, is a multilayer perceptron with a special structure; the nonlinear features are given by the activation vector of the second layer of the network.
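
As a rough illustration of the relationship summarized above, the sketch below (Python with NumPy, not taken from the chapter) compares the leading eigenvector of the sample covariance matrix with the weight vector learned by a local Hebbian update. The abstract does not name a specific learning rule, so the Oja-style stabilized Hebbian rule, the synthetic data, and the learning-rate value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data with one dominant direction of variance (assumed example data).
n, d = 5000, 5
latent = rng.normal(size=(n, 1)) * 2.0
direction = rng.normal(size=(1, d))
direction /= np.linalg.norm(direction)
X = latent @ direction + rng.normal(scale=0.5, size=(n, d))
X -= X.mean(axis=0)

# Classical PCA: eigenstructure of the data covariance matrix.
cov = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, -1]  # eigenvector of the largest eigenvalue

# Hebbian self-organization with Oja's normalization (an assumed concrete rule):
# the weight update uses only the local linear activation y = w . x, no targets.
w = rng.normal(size=d)
w /= np.linalg.norm(w)
eta = 0.005
for epoch in range(20):
    for x in X:
        y = w @ x                   # local activation of the linear cell
        w += eta * y * (x - y * w)  # Hebbian term plus stabilizing decay

# The learned weight vector aligns (up to sign) with the first principal component.
print("|cos angle| between w and PC1:", abs(w @ pc1) / np.linalg.norm(w))
```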