ABSTRACT

We study the class of Higher-Order Neural Networks and, in particular, Pi-Sigma Networks. The performance of Pi-Sigma Networks is evaluated on several well-known neural network training benchmarks. In the experiments reported here, Distributed Evolutionary Algorithms for Pi-Sigma network training are presented; more specifically, a distributed version of the Differential Evolution algorithm is employed. In this scheme, each processor is assigned a subpopulation of potential solutions. The subpopulations evolve independently in parallel, and occasional migration allows cooperation among them. The proposed approach is applied to train Pi-Sigma networks using threshold activation functions. Moreover, the weights and biases are confined to a narrow band of integers, constrained to the range [-32,32], so that they can be represented by just 6 bits. Such networks are better suited for hardware implementation than those with real-valued weights. Preliminary results suggest that this training process is fast, stable, and reliable, and that the distributed trained Pi-Sigma network exhibits good generalization capabilities.
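The island-model scheme summarized above can be illustrated with a minimal sketch. The sketch simulates the processors sequentially rather than in parallel, and the particular DE variant (DE/rand/1/bin), the control parameters F and CR, the ring migration topology, and the round-to-integer step are illustrative assumptions; the abstract does not fix these details.

```python
import random

def de_step(pop, fitness, F=0.5, CR=0.9, lo=-32, hi=32):
    """One generation of DE/rand/1/bin on an integer-coded population
    (an assumed DE variant; parameters F and CR are illustrative)."""
    new_pop = []
    for i, target in enumerate(pop):
        # pick three distinct individuals other than the target
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(len(target))
        trial = []
        for j in range(len(target)):
            if random.random() < CR or j == j_rand:
                v = round(a[j] + F * (b[j] - c[j]))  # mutate, round to integer
            else:
                v = target[j]
            trial.append(max(lo, min(hi, v)))  # keep weights in [-32, 32]
        # greedy selection: keep the better of target and trial (minimization)
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop

def distributed_de(fitness, dim, n_islands=4, pop_size=10,
                   generations=50, migrate_every=10, seed=0):
    """Island-model DE: subpopulations evolve independently (simulated
    sequentially here) and occasionally exchange their best members
    along an assumed ring topology."""
    random.seed(seed)
    islands = [[[random.randint(-32, 32) for _ in range(dim)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(1, generations + 1):
        islands = [de_step(pop, fitness) for pop in islands]
        if g % migrate_every == 0:
            # migration: best of island k-1 replaces the worst of island k
            bests = [min(pop, key=fitness) for pop in islands]
            for k, pop in enumerate(islands):
                worst = max(range(len(pop)), key=lambda i: fitness(pop[i]))
                pop[worst] = bests[(k - 1) % n_islands]
    return min((ind for pop in islands for ind in pop), key=fitness)

# toy objective standing in for network training error: drive all
# integer weights to zero (sphere function)
best = distributed_de(lambda w: sum(x * x for x in w), dim=5)
```

In an actual deployment, each call to `de_step` would run on its own processor and only the migrants would be communicated, which keeps the inter-processor traffic low.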