ABSTRACT

Neural networks, because of their cognitive capacity, have been widely employed both as feature extractors and as pattern classifiers [81,82]. The main strength of back-propagation networks consists in their capability of supervised learning through training. The networks, as nonlinear pattern discriminators, map an n-dimensional input vector into an m-dimensional output vector by adjusting, during the learning phase, the weights of the network interconnection links. The weight adjustment is carried out by minimizing the error at the network output, defined as the difference between the desired output vector yd and the actual output vector yo. Minimization is usually performed with the gradient descent algorithm, with the mean-squared error as the performance index. The mapping itself is performed within the network in two steps: the input vector is first mapped onto the hidden layer of the network, which in turn is mapped onto the network output layer. The hidden layer is thus equivalent to a data concentrator that internally encodes the essential features of the input pattern in a compressed form [83].
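In the notation used above, the performance index and the gradient-descent weight adjustment can be written explicitly; this is the standard formulation, and the learning-rate symbol eta is an assumption, since the abstract does not name one:

\[
E = \frac{1}{2}\,\lVert \mathbf{y}_d - \mathbf{y}_o \rVert^2
  = \frac{1}{2}\sum_{k=1}^{m}\left(y_{d,k} - y_{o,k}\right)^2,
\qquad
w_{ij} \leftarrow w_{ij} - \eta\,\frac{\partial E}{\partial w_{ij}}
\]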
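To make the two-step mapping and the weight adjustment concrete, the following is a minimal sketch of such a network in Python/NumPy. It is not the implementation described in the cited sources; the class name BackpropNet, the sigmoid activation, the learning rate lr, and the XOR demonstration are illustrative assumptions.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


class BackpropNet:
    """Two-layer back-propagation network: n inputs -> hidden layer -> m outputs."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        # Step 1: map the n-dimensional input onto the hidden layer
        # (the compressed internal encoding of the input pattern).
        self.h = sigmoid(self.W1 @ x + self.b1)
        # Step 2: map the hidden representation onto the m-dimensional output.
        self.yo = sigmoid(self.W2 @ self.h + self.b2)
        return self.yo

    def train_step(self, x, yd):
        yo = self.forward(x)
        err = yd - yo  # output error: desired minus actual output
        # Back-propagate the error through the sigmoid nonlinearities.
        delta2 = err * yo * (1.0 - yo)
        delta1 = (self.W2.T @ delta2) * self.h * (1.0 - self.h)
        # Gradient-descent weight adjustment on the interconnection links.
        self.W2 += self.lr * np.outer(delta2, self.h)
        self.b2 += self.lr * delta2
        self.W1 += self.lr * np.outer(delta1, x)
        self.b1 += self.lr * delta1
        return 0.5 * np.sum(err ** 2)  # squared-error performance index for this sample


if __name__ == "__main__":
    # Toy usage: learn XOR (2-D input, 1-D output); convergence depends on the
    # random initialization, as is typical for plain gradient descent.
    net = BackpropNet(n_in=2, n_hidden=3, n_out=1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)
    for _ in range(10000):
        for x, yd in zip(X, Y):
            net.train_step(x, yd)
    for x in X:
        print(x, net.forward(x))
```

In this sketch the hidden activations self.h play the role of the data concentrator mentioned above: three hidden units re-encode the four input patterns in a form from which the output layer can recover the target mapping.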