ABSTRACT

This chapter presents algorithms for the automatic selection of artificial neural network (ANN) structure. It discusses the problem of ANN architecture selection and some theoretical aspects of this problem. The fundamental learning algorithm for the multilayer perceptron (MLP) is the backpropagation (BP) algorithm, an iterative procedure that minimizes a sum-squared error by the gradient-descent optimization method. The chapter considers an MLP network with two hidden layers and units with a sigmoid activation function. A variety of architecture optimization algorithms have been proposed, most of them dedicated to specific types of neural networks. They can be divided into three classes: bottom-up approaches, top-down approaches, and discrete optimization methods. Unlike the tiling algorithm, the upstart method does not build a network layer by layer from input to output; instead, new units are introduced between the input and output layers, and their task is to correct the error of the output unit.