ABSTRACT

This chapter describes the fundamentals of radial basis function (RBF) networks, their variations, learning strategies, the selection of centres and widths using the concept of generalized degrees of freedom as well as other methods, comparative features of RBF networks and multilayer perceptrons, and some application areas. The multilayer perceptron trained with back-propagation can be considered a type of stochastic approximation, whereas RBF neural networks treat learning as a curve-fitting problem in high-dimensional space. Selecting a subset of centres is commonly addressed with heuristics. One such heuristic is the forward selection method, which starts from an empty subset and adds basis functions one at a time, at each step choosing the candidate that yields the greatest reduction in the sum of squared errors. Regularized forward selection combines standard ridge regression with forward selection. The orthogonal least squares algorithm, which is superior to the ordinary least squares method, implements forward selection for subset selection.
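The forward selection procedure summarized above can be sketched as follows. This is a minimal illustration, not the chapter's own algorithm: it assumes Gaussian basis functions with a fixed width, candidate centres placed at the training points, and synthetic one-dimensional data, all of which are hypothetical choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data (hypothetical example).
X = np.linspace(-3, 3, 80)
y = np.sin(X) + 0.05 * rng.standard_normal(X.size)

width = 1.0  # fixed Gaussian width (an assumption; in practice it is tuned)

# Candidate basis functions: one Gaussian centred at each training point.
Phi = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * width ** 2))

def forward_select(Phi, y, n_centres):
    """Greedy forward selection: start from an empty subset and, at each
    step, add the candidate basis function that gives the greatest
    reduction in the sum of squared errors (SSE)."""
    selected = []
    remaining = list(range(Phi.shape[1]))
    for _ in range(n_centres):
        best_j, best_sse = None, np.inf
        for j in remaining:
            cols = Phi[:, selected + [j]]
            # Least-squares fit of the output weights for this subset.
            w, *_ = np.linalg.lstsq(cols, y, rcond=None)
            sse = np.sum((y - cols @ w) ** 2)
            if sse < best_sse:
                best_j, best_sse = j, sse
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

centres = forward_select(Phi, y, n_centres=6)
w, *_ = np.linalg.lstsq(Phi[:, centres], y, rcond=None)
sse = np.sum((y - Phi[:, centres] @ w) ** 2)
```

Refitting all weights for every candidate is expensive; the orthogonal least squares algorithm mentioned above avoids this by orthogonalizing the candidate columns so each one's error reduction can be scored independently.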