ABSTRACT

This paper derives an error bound for the output of a three-layer feedforward neural network operating on test data outside the training set. Three factors influence the size of this bound: the accuracy of the neural network model on the training data, the distance of the test point from the training set, and the volatility of both the process being modeled and the network model. Volatility is defined as a bound on the derivatives of the process and of the network model; process derivatives can be estimated from the training data. An example using training and test data generated by the chaotic Hénon mapping shows that this error bound can follow actual errors quite closely. For applications such as time series prediction, such an error bound can lend advance credibility to the output of a neural network predictor.
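The abstract's bound combines three ingredients: training-set accuracy, distance to the nearest training point, and derivative (volatility) bounds. As a minimal illustrative sketch only, not the paper's actual derivation, the following Python code generates Hénon-map data and evaluates a Lipschitz-style bound of the form (training error) + (combined derivative bound) × (distance to training set); the constants and function names here are hypothetical placeholders.

```python
import numpy as np

def henon(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Iterate the Hénon map x' = 1 - a*x^2 + b*y, y' = x."""
    x, y = x0, y0
    pts = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + b * y, x
        pts.append((x, y))
    return np.array(pts)

def error_bound(x_test, X_train, train_err, deriv_bound):
    """Sketch of the bound: worst training error plus a derivative
    (volatility) bound times the distance to the nearest training point.
    `deriv_bound` stands in for the combined process/network derivative
    bound described in the paper."""
    dist = np.min(np.linalg.norm(X_train - x_test, axis=1))
    return train_err + deriv_bound * dist

# Example: bound at a held-out point of the Hénon attractor.
data = henon(300)
X_train, x_test = data[:250], data[260]
bound = error_bound(x_test, X_train, train_err=0.01, deriv_bound=2.0)
```

The bound degrades gracefully: it reduces to the training error when the test point lies in the training set, and grows linearly with the distance from it, matching the three factors listed in the abstract.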