ABSTRACT

This chapter provides an algorithm that sizes the multilayer perceptron (MLP) by relating it to a piecewise linear network with the same pattern storage. It describes a method for obtaining Cramér–Rao maximum a posteriori lower bounds on the estimation error variance. The chapter also presents algorithms that help researchers apply the MLP to signal processing problems. MLP neural networks with sufficiently many nonlinear units in a single hidden layer have been established as universal function approximators. MLPs have several significant advantages over conventional approximations; in particular, the number of free parameters in the MLP can be increased unambiguously, in small increments, simply by increasing the number of hidden units. The chapter outlines how MLPs with a single hidden layer are trained with hidden weight optimization–output weight optimization. The pattern storage of a network is the number of randomly chosen input–output pairs that the network can be trained to memorize without error.
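
As a rough illustration of the free-parameter claim above (a minimal sketch with arbitrarily chosen dimensions, not the chapter's sizing algorithm): a single hidden layer of N_h nonlinear units mapping N inputs to M outputs carries N_h(N + 1) + M(N_h + 1) weights and thresholds, so each added hidden unit contributes exactly N + 1 + M new free parameters. The Python sketch below assumes sigmoid hidden units and linear output units.

    import numpy as np

    def mlp_param_count(n_in, n_hidden, n_out):
        # Hidden layer: n_hidden units, each with n_in weights plus a threshold;
        # output layer: n_out linear units, each with n_hidden weights plus a threshold.
        return n_hidden * (n_in + 1) + n_out * (n_hidden + 1)

    def mlp_forward(x, W_h, b_h, W_o, b_o):
        # Single hidden layer of sigmoidal units followed by linear output units.
        h = 1.0 / (1.0 + np.exp(-(W_h @ x + b_h)))
        return W_o @ h + b_o

    # Illustrative sizes (not taken from the chapter): 8 inputs, 20 hidden units, 2 outputs.
    n_in, n_hidden, n_out = 8, 20, 2
    rng = np.random.default_rng(0)
    W_h = rng.standard_normal((n_hidden, n_in))
    b_h = rng.standard_normal(n_hidden)
    W_o = rng.standard_normal((n_out, n_hidden))
    b_o = rng.standard_normal(n_out)

    y = mlp_forward(rng.standard_normal(n_in), W_h, b_h, W_o, b_o)
    print(mlp_param_count(n_in, n_hidden, n_out))      # 222 free parameters
    print(mlp_param_count(n_in, n_hidden + 1, n_out))  # 233: one more hidden unit adds n_in + 1 + n_out = 11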