ABSTRACT

A new approach to the problem of n-dimensional function approximation using a two-layer neural network is presented. A generalized Nyquist theorem is introduced to solve for the optimum number of learning patterns in the n-dimensional input space. Choosing the smallest sufficient set of training vectors reduces both the number of hidden neurons and the learning time of the network. Analytical formulas and an algorithm for training-set size reduction are developed and illustrated with two-dimensional data examples.
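
For orientation, the underlying sampling idea can be sketched as follows; the notation here ($f_{\max,i}$ for the highest significant frequency along dimension $i$, $[a_i, b_i]$ for the input range, $N$ for the pattern count) is illustrative and not taken from the paper's own formulas. The classical Nyquist criterion requires a sample spacing of at most half the shortest signal period along each dimension, which yields a lower bound on the size of a sufficient rectangular training grid:

\[
\Delta x_i \le \frac{1}{2 f_{\max,i}}, \qquad
N \ge \prod_{i=1}^{n} \left\lceil 2 f_{\max,i}\,(b_i - a_i) \right\rceil .
\]

In other words, sampling each input dimension at no less than twice its highest significant frequency bounds the number of training patterns needed to capture the target function.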