ABSTRACT

This chapter presents a survey of fast training procedures for neural networks. It provides an overview of up-to-date techniques and presents the various algorithms in a unified form, with particular emphasis on their behavior, including the reduction in the number of iterations, their computational complexity, and their generalization capacity. Feedforward neural networks such as the multilayer perceptron are among the most widely used artificial neural network architectures. Finding good initial weights from which to start the learning phase can considerably improve the convergence speed. During learning, the training patterns can be presented at each iteration in an arbitrary or in a fixed order. The use of an adaptive slope of the activation function, or a global adaptation of the learning rate and/or the momentum rate, can also increase the convergence speed in some applications. Numerous heuristic optimization algorithms have been proposed to improve the convergence speed of the standard backpropagation algorithm.
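
As a minimal illustration of two of the acceleration techniques surveyed, the sketch below combines a momentum term with a global learning-rate adaptation (the "bold driver" heuristic: grow the rate after a successful step, shrink it after an unsuccessful one). The toy quadratic objective, the function names, and the adaptation constants are illustrative assumptions, not taken from the chapter.

```python
# Illustrative sketch (assumptions, not the chapter's algorithm):
# gradient descent with a momentum term and a global "bold driver"
# learning-rate adaptation, demonstrated on a toy quadratic loss.
import numpy as np

def loss(w):
    # Toy ill-conditioned quadratic with its minimum at w = (0, 0).
    return 0.5 * (10.0 * w[0] ** 2 + w[1] ** 2)

def grad(w):
    return np.array([10.0 * w[0], w[1]])

def train(w, lr=0.01, momentum=0.9, up=1.05, down=0.5, steps=200):
    velocity = np.zeros_like(w)
    prev_loss = loss(w)
    for _ in range(steps):
        # Momentum: accumulate a decaying average of past gradient steps.
        velocity = momentum * velocity - lr * grad(w)
        candidate = w + velocity
        new_loss = loss(candidate)
        if new_loss <= prev_loss:
            # Successful step: accept it and grow the learning rate.
            w, prev_loss = candidate, new_loss
            lr *= up
        else:
            # Unsuccessful step: reject it, shrink the rate, reset momentum.
            lr *= down
            velocity = np.zeros_like(w)
    return w, prev_loss

w_final, final_loss = train(np.array([1.0, 1.0]))
print(f"final weights: {w_final}, final loss: {final_loss:.6f}")
```

The accept/reject test keeps the adapted learning rate from destabilizing training: a rejected step costs one extra loss evaluation but never moves the weights uphill.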