ABSTRACT

The set $P$ of all polynomials is dense in $C[a, b]$ (the Weierstrass approximation theorem). In other words, given an $f \in C[a, b]$ and $\varepsilon > 0$, there is a polynomial $p$ for which $|p(x) - f(x)| < \varepsilon$ for all $x \in [a, b]$.
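As a quick numerical illustration of this density result (our own sketch, not part of the original text), the following fits Chebyshev polynomials of growing degree to the continuous but non-smooth target $f(x) = |x|$ and estimates the sup-norm error on a dense grid; the choice of target, degrees, and grid size are illustrative assumptions.

# A minimal numerical sketch of the Weierstrass theorem: polynomials of
# growing degree drive the uniform error on [a, b] toward zero for a
# continuous target f. Target, degrees, and grid size are illustrative.
import numpy as np
from numpy.polynomial import Chebyshev

a, b = -1.0, 1.0
f = np.abs                       # continuous (but non-smooth) target on [a, b]
x = np.linspace(a, b, 2001)      # dense grid used to estimate the sup norm

for degree in (4, 16, 64):
    p = Chebyshev.fit(x, f(x), degree, domain=[a, b])
    sup_err = np.max(np.abs(p(x) - f(x)))   # proxy for max |p(x) - f(x)|
    print(f"degree {degree:3d}: sup-norm error ~ {sup_err:.4f}")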

Cybenko's Theorem

Let $\sigma$ be any sigmoidal function and $I_d$ the $d$-dimensional cube $[0, 1]^d$. Then finite sums of the form

$$F(\mathbf{x}) = \sum_{j=1}^{n} w_j \,\sigma\!\left(\mathbf{v}_j^{T}\mathbf{x} + b_j\right) \qquad (1)$$

are dense in $C[I_d]$. In other words, given an $f \in C[I_d]$ and $\varepsilon > 0$, there is a sum $F(\mathbf{x})$ of the above form for which $|F(\mathbf{x}) - f(\mathbf{x})| < \varepsilon$ for all $\mathbf{x} \in I_d$. (Here $w_j$, $\mathbf{v}_j$, and $b_j$ represent the output layer weights, the hidden layer weights, and the bias weights of the hidden layer, respectively.)
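To make the form of Eq. (1) concrete, here is a minimal sketch of the finite sum $F(\mathbf{x})$ evaluated as a one-hidden-layer network; the array shapes, the random parameter values, and the choice of the logistic function as the sigmoidal $\sigma$ are our own illustrative assumptions, not part of the theorem.

# A minimal sketch of the finite sum in Eq. (1):
#   F(x) = sum_j w_j * sigma(v_j . x + b_j)
# i.e., a one-hidden-layer perceptron evaluated at a point of I_d.
import numpy as np

def sigma(z):
    """Logistic sigmoid, one standard choice of sigmoidal activation."""
    return 1.0 / (1.0 + np.exp(-z))

def F(x, w, V, bias):
    """Finite sum of Eq. (1).
    x:    (d,)   input point in I_d = [0, 1]^d
    w:    (n,)   output layer weights w_j
    V:    (n, d) hidden layer weight vectors v_j (one per row)
    bias: (n,)   hidden layer biases b_j
    """
    return np.dot(w, sigma(V @ x + bias))

rng = np.random.default_rng(0)
d, n = 3, 5                        # input dimension, number of hidden units
x = rng.random(d)                  # a point in [0, 1]^3
w, V, bias = rng.normal(size=n), rng.normal(size=(n, d)), rng.normal(size=n)
print(F(x, w, V, bias))            # scalar value of the sum F(x)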

Theorem for the Density of Gaussian Functions

Let $G$ be a Gaussian function and $I_d$ the $d$-dimensional cube $[0, 1]^d$. Then finite sums of the form

$$F(\mathbf{x}) = \sum_{j=1}^{n} w_j \, G(\mathbf{x}, \mathbf{c}_j) \qquad (2)$$

are dense in $C[I_d]$. In other words, given an $f \in C[I_d]$ and $\varepsilon > 0$, there is a sum $F(\mathbf{x})$ of the above form for which $|F(\mathbf{x}) - f(\mathbf{x})| < \varepsilon$ for all $\mathbf{x} \in I_d$. (Here $w_j$ and $\mathbf{c}_j$ represent the output layer (OL) weights and the centers of the hidden layer (HL) multivariate Gaussian functions, respectively.)²
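Similarly, the sum of Eq. (2) can be sketched as a small radial basis function network; the Gaussian width parameter and the random weights and centers below are illustrative assumptions, since the theorem itself fixes neither.

# A minimal sketch of the finite sum in Eq. (2):
#   F(x) = sum_j w_j * G(x, c_j)
# with multivariate Gaussian hidden units centered at c_j, i.e. an RBF
# network. The width parameter is an illustrative assumption.
import numpy as np

def F(x, w, centers, width=0.5):
    """Finite sum of Eq. (2).
    x:       (d,)   input point in I_d = [0, 1]^d
    w:       (n,)   output layer (OL) weights w_j
    centers: (n, d) hidden layer (HL) Gaussian centers c_j (one per row)
    width:   illustrative shape parameter of the Gaussians
    """
    sq_dist = np.sum((centers - x) ** 2, axis=1)        # ||x - c_j||^2
    return np.dot(w, np.exp(-sq_dist / (2.0 * width ** 2)))

rng = np.random.default_rng(0)
d, n = 2, 4                        # input dimension, number of Gaussian units
x = rng.random(d)                  # a point in [0, 1]^2
w = rng.normal(size=n)             # OL weights w_j
centers = rng.random((n, d))       # HL centers c_j placed inside I_d
print(F(x, w, centers))            # scalar value of F(x)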

The same universal approximation results exist for fuzzy models, too. Results of this type can also be stated for many other different

¹ The reader less interested in these theoretical issues may, without loss of continuity, skip the first part of this Subsection and continue reading at the first paragraph after Eq. (4).

² In passing, let us note that Eqs. (1) and (2) represent the two most popular feedforward neural networks used today: the multilayer perceptron and the radial basis function (RBF) NN. Their graphical representation will be given in Section II.B. A multilayer perceptron is an NN with one or more hidden layers comprising neurons with sigmoidal activation functions; the standard representative of such functions is the hyperbolic tangent. The structure of RBF networks is the same, but the activation functions are radially symmetric.