ABSTRACT

Both NNs were RBF networks with 100 hidden-layer neurons, each having a 2-dimensional Gaussian activation function. (Concerning the number of hidden-layer neurons, it should be mentioned that the ABC worked well with networks having fewer hidden-layer neurons after optimization by the orthogonal least squares method.) All Gaussians were placed symmetrically and had fixed centers and widths; in other words, the hidden-layer weights were not subject to learning.
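A minimal sketch of such a network, under assumed details not stated in the text: the 100 centers lie on a symmetric 10x10 grid over the unit square, the width is a fixed constant, and the output weights are fitted by ordinary least squares (the toy data and target function are purely illustrative).

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian activations for each input point and each fixed center."""
    # squared distances between inputs (n, 2) and centers (m, 2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# symmetric 10x10 grid of centers (assumed domain: the unit square)
grid = np.linspace(0.0, 1.0, 10)
centers = np.array([(cx, cy) for cx in grid for cy in grid])  # (100, 2)
width = 0.15  # fixed width; like the centers, it is not learned

# toy training data (target chosen only for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1])

# only the output weights are a subject of learning
Phi = rbf_features(X, centers, width)        # (200, 100) design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear least squares fit

pred = Phi @ w  # network output on the training inputs
```

Because the hidden layer is frozen, training reduces to a linear problem in the output weights, which is what makes pruning schemes such as the orthogonal least squares method applicable.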