ABSTRACT

This chapter reviews various easy-to-train architectures and shows that the ability to recognize patterns depends strongly on the architecture used. It discusses a modified feedforward version as described by Zurada. Feedforward neural networks allow only unidirectional signal flow, and most of them are organized in layers. Training multilayer neural networks is difficult; it is much easier to train a single neuron or a single layer of neurons. By introducing nonlinear terms computed with initially determined functions, the effective number of inputs supplied to a one-layer neural network is increased. Note that the functional link network can therefore be treated as a one-layer network in which additional input data are generated off-line using nonlinear transformations. The counterpropagation network is very easy to design: the number of neurons in the hidden layer should be equal to the number of patterns. The cascade correlation architecture was proposed by Fahlman and Lebiere.
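The functional link idea mentioned above can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not taken from the chapter: it uses the XOR problem, which a single neuron cannot solve on the raw inputs, and appends one nonlinear term (the product of the two inputs) off-line so that a single trainable neuron suffices. The logistic-style weight update and all parameter values are illustrative choices.

```python
import numpy as np

# Toy functional link network sketch (hypothetical example, not from the chapter).
# XOR patterns: not linearly separable on raw inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Off-line nonlinear transformation: augment each pattern with the
# initially determined nonlinear term x1*x2, increasing the number of
# inputs fed to the one-layer network from 2 to 3.
X_aug = np.hstack([X, (X[:, 0] * X[:, 1])[:, None]])

# Train a single sigmoid neuron on the augmented inputs
# (logistic-regression-style batch update; learning rate is illustrative).
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=X_aug.shape[1])
b = 0.0
lr = 0.5
for _ in range(2000):
    out = 1.0 / (1.0 + np.exp(-(X_aug @ w + b)))  # neuron output
    err = y - out                                  # output error
    w += lr * X_aug.T @ err                        # weight update
    b += lr * err.sum()                            # bias update

pred = (1.0 / (1.0 + np.exp(-(X_aug @ w + b))) > 0.5).astype(float)
print(pred)
```

In the augmented input space the XOR targets are linearly separable (for example, weights (1, 1, -2) with a suitable threshold separate them), which is why one trainable layer is enough once the nonlinear terms are generated off-line.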