ABSTRACT

The paper describes a novel means of performing calculus using networks of radial-Gaussian neurons. The network architecture and training algorithm used for this purpose are described briefly. Once trained, a network can be converted into a form that provides the differential or integral of the learned function by a simple substitution of the type of activation function used at the hidden neurons. Substitute activation functions for converting the network to first- and second-order partial differential and integral forms are derived. Following this, the technique is tested on a selection of calculus operations. The converted networks produced in these experiments provide accurate models of the actual differentials and integrals of the functions the original networks had been taught. A method of improving the accuracy of results by training the original network beyond the region in which the converted networks operate is described. The paper concludes by identifying some areas for further development of the technique.
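The core idea, conversion by activation substitution, can be sketched numerically. The following is an illustrative one-dimensional example, not the paper's exact architecture or training algorithm: a radial-Gaussian network f(x) = Σᵢ wᵢ exp(−(x − cᵢ)²/2s²) is differentiated term by term, which changes only the hidden activation (to −(x − cᵢ)/s² times the Gaussian) while reusing the trained weights wᵢ unchanged. The centres, widths, target function, and least-squares fitting below are all stand-in assumptions for demonstration.

```python
import numpy as np

c = np.linspace(-3.0, 3.0, 25)   # hidden-unit centres (span beyond the fit region,
s = 0.5                          # echoing the paper's extended-training suggestion)
X = np.linspace(-2.0, 2.0, 200)[:, None]

def design(x):
    # Gaussian hidden activations, one column per centre
    return np.exp(-(x - c) ** 2 / (2 * s ** 2))

# Fit the output weights to a target function by least squares
# (a stand-in for the paper's training algorithm)
target = np.sin(X).ravel()
w, *_ = np.linalg.lstsq(design(X), target, rcond=None)

def f(x):
    # Original network: learned model of sin(x)
    return design(x) @ w

def df(x):
    # "Converted" network: identical weights, substituted
    # derivative activation -(x - c)/s^2 * exp(...)
    return (design(x) * (-(x - c) / s ** 2)) @ w

x0 = np.array([[0.7]])
print(float(df(x0)), np.cos(0.7))   # converted network vs. true derivative
```

The converted network's output closely tracks cos(x), the analytic derivative of the learned function, without any retraining.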