ABSTRACT

The Multi-layer Perceptron (MLP) is one of the most commonly used neural networks. It is often treated as a ‘black box’, used without an understanding of how it works, which frequently leads to fairly poor results. The MLP algorithm specifies that the weights are initialised to small random numbers, both positive and negative. The MLP is designed as a batch algorithm: all of the training examples are presented to the neural network, the average sum-of-squares error is computed, and this is used to update the weights. The chapter looks at the design and implementation of the MLP network itself. There are two other considerations concerning the number of weights inherent in the calculation: the choice of the number of hidden nodes and the number of hidden layers. The chapter also discusses the back-propagation algorithm, which is important for understanding how and why the algorithm works.
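The following is a minimal sketch of such a batch-trained MLP in NumPy, assuming a sigmoid activation, a single hidden layer, and a plain gradient-descent update; the function names, learning rate, and hidden-layer size are illustrative assumptions, not the chapter's own code.

```python
# Minimal batch-trained MLP sketch (illustrative; sigmoid activation,
# learning rate, and layer sizes are assumed, not taken from the chapter).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(inputs, targets, n_hidden=4, eta=0.25, n_iterations=1000, seed=0):
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    n_out = targets.shape[1]

    # Initialise weights to small random values, both positive and negative.
    w_hidden = rng.uniform(-0.5, 0.5, (n_in + 1, n_hidden)) / np.sqrt(n_in)
    w_output = rng.uniform(-0.5, 0.5, (n_hidden + 1, n_out)) / np.sqrt(n_hidden)

    # Batch algorithm: the whole training set is presented at once.
    # A constant -1 bias input is appended to every example.
    x = np.concatenate((inputs, -np.ones((inputs.shape[0], 1))), axis=1)

    for _ in range(n_iterations):
        # Forward pass through hidden and output layers.
        hidden = sigmoid(x @ w_hidden)
        hidden_b = np.concatenate((hidden, -np.ones((hidden.shape[0], 1))), axis=1)
        outputs = sigmoid(hidden_b @ w_output)

        # Average sum-of-squares error over the batch.
        error = 0.5 * np.mean(np.sum((outputs - targets) ** 2, axis=1))

        # Back-propagation of the error to compute the weight updates.
        delta_o = (outputs - targets) * outputs * (1.0 - outputs)
        delta_h = hidden * (1.0 - hidden) * (delta_o @ w_output[:-1].T)

        w_output -= eta * (hidden_b.T @ delta_o) / x.shape[0]
        w_hidden -= eta * (x.T @ delta_h) / x.shape[0]

    return w_hidden, w_output, error

if __name__ == "__main__":
    # Small demonstration on the XOR problem.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    wh, wo, final_error = train_mlp(X, y, n_hidden=3, eta=0.5, n_iterations=5000)
    print("final average sum-of-squares error:", final_error)
```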