ABSTRACT

The problem of generalisation of logical functions is addressed. It is argued that, instead of arbitrarily choosing a particular neural net to perform generalisation, the statistically most likely generalisation should be computed, and it appears that this computation can be performed by a class of feedforward ANNs using sinusoidal non-linearities. These ANNs, called the self-organising perceptron (SOP) and the Fourier multilayer perceptron (FMLP), are able to generalise correctly on data with which conventional ANNs fail. The justification for using the SOP and FMLP rests on an analysis of the amount of structure in a function and a reformulation of the Shannon-Hartley Law, which together show that these networks generalise in the statistically most likely way.