ABSTRACT

Neural networks are characterized by their massively parallel architecture and by the fact that they are trained through a learning paradigm rather than explicitly programmed. The radial basis function (RBF) represents one such processing method: the approximation of a function by a weighted sum of RBFs can be interpreted as a rather simple single-layer artificial neural network, with the RBFs taking on the role of the activation functions. The most well-known learning paradigm for feed-forward neural networks is backpropagation. Generally speaking, very few implementations of neural networks have been carried out outside the university world, where they have been quite popular. In very general terms, the approach is to map an N-dimensional space by prototypes. The RMS brightness jitter, as well as the position jitter in pixels, turned out to be as good as or better than that of most conventional star trackers of comparable complexity.
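The single-layer RBF interpretation mentioned above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes Gaussian basis functions with fixed, evenly spaced centers (the prototypes) and fits only the output-layer weights by linear least squares.

```python
import numpy as np

# Hypothetical sketch: approximate f(x) = sin(x) by a weighted sum of
# Gaussian RBFs phi_j(x) = exp(-(x - c_j)^2 / (2 s^2)), i.e. a simple
# single-layer RBF network whose centers c_j act as fixed prototypes.
def rbf_design_matrix(x, centers, width):
    # Column j holds phi_j evaluated at every sample point in x.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

x = np.linspace(0.0, 2.0 * np.pi, 200)        # training inputs
y = np.sin(x)                                 # target values
centers = np.linspace(0.0, 2.0 * np.pi, 12)   # RBF centers (prototypes)
Phi = rbf_design_matrix(x, centers, width=0.6)

# The "learning" step: solve for the output-layer weights by least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w

max_err = np.max(np.abs(y_hat - y))
print(max_err)  # small approximation error over the training interval
```

Because only the output weights are free parameters, the fit reduces to a linear problem, which is one reason RBF networks are attractive compared with networks trained by backpropagation.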