The structure of both networks is the same in the sense that both have just one hidden layer, and that in both networks the connections between the input and the hidden layer are fixed and not subject to learning. The subjects of learning are the connections w or r, respectively, between the hidden layer and the output layer. It should be stressed that the seemingly second hidden layer in the Fuzzy (or Soft RBF) network is not an additional hidden layer, but the normalization part of the only hidden layer. Due to this normalization, the sum of the outputs from the hidden layer in the Soft RBF is equal to one, i.e., $\sum_i o_{iF} = 1$. This is not the case in the classical RBF. (The meaning of the words "soft" and "normalization" is explained below.)
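As a minimal sketch of this normalization step (the Gaussian basis, the centers, and the width below are illustrative assumptions, not values taken from the text), the soft RBF hidden layer divides each activation by the sum of all activations, so the normalized outputs always sum to one; the classical RBF would use the unnormalized activations directly:

```python
import numpy as np

def rbf_activations(x, centers, sigma):
    """Gaussian activations of the hidden layer for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * sigma ** 2))

def soft_rbf_activations(x, centers, sigma):
    """Normalized ('soft') hidden-layer outputs o_iF; they sum to one."""
    o = rbf_activations(x, centers, sigma)
    return o / o.sum()

centers = np.array([0.0, 1.0, 2.0])              # hypothetical fixed centers
o_soft = soft_rbf_activations(0.4, centers, sigma=0.5)
print(o_soft.sum())                              # 1.0, up to floating-point error
```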
The equality of these two approximation schemes is obvious if (28) and (34) are compared. The only difference is that in the so-called fuzzy approximation the output value y from the hidden layer (HL) is "normalized." The word normalized is in quotation marks because y is calculated using the normalized output signals $o_{iF}$ in Figure 33, from neurons whose outputs sum to 1. This is not the case with a standard RBF network. The fuzzy approximation, due to the effect of "normalization," performs a kind of soft approximation, with the approximating function always going through the middle point between two training data points. By analogy with the "softmax" function, introduced into the neural network community for the sigmoidal type of activation functions by John Bridle in [14], we name the fuzzy approximation a soft RBF approximation scheme.
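The midpoint property can be checked with a short sketch. Under illustrative assumptions (two centers placed at the training inputs, equal widths, and output weights simply set equal to the two training targets; in practice the weights are learned), the normalized scheme returns exactly the average of the two targets at the point halfway between the centers, because both normalized activations equal 0.5 there; the classical RBF does not:

```python
import numpy as np

def soft_rbf(x, centers, weights, sigma):
    """Soft (normalized) RBF output: a convex combination of the weights."""
    o = np.exp(-((x - centers) ** 2) / (2.0 * sigma ** 2))
    o_f = o / o.sum()            # normalized hidden-layer outputs, sum to 1
    return o_f @ weights

def classical_rbf(x, centers, weights, sigma):
    """Classical RBF output: unnormalized hidden-layer outputs."""
    o = np.exp(-((x - centers) ** 2) / (2.0 * sigma ** 2))
    return o @ weights

centers = np.array([0.0, 1.0])   # two training inputs (assumed)
targets = np.array([0.0, 2.0])   # corresponding training targets (assumed)

# At the midpoint x = 0.5 the two normalized activations are both 0.5,
# so the soft RBF returns exactly the average of the two targets:
print(soft_rbf(0.5, centers, targets, sigma=0.5))       # -> 1.0
print(classical_rbf(0.5, centers, targets, sigma=0.5))  # ~1.21, not the midpoint
```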