ABSTRACT

This paper presents a high-performance technique for implementing artificial neural networks (ANNs) on hypercube-based general-purpose massively parallel machines. The paper synthesizes a tree-based parallel structure which is embedded into the hypercube topology. This structure is referred to as the mesh-of-appendixed-sheared-trees (MAST). Both the recall and the learning phases of the multilayer ANN model with backpropagation are mapped onto the MAST architecture. Unlike other techniques presented in the literature, which require O(N) time, where N is the size of the largest layer, our implementation requires only O(log N) time. Moreover, it allows pipelining of more than one input pattern, further improving performance.
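The O(log N) bound comes from evaluating each neuron's weighted sum with a binary-tree reduction rather than a sequential accumulation. The following sketch (hypothetical illustration, not the paper's MAST mapping itself) simulates this idea: all N products are formed in one parallel step, then summed pairwise in ceil(log2 N) parallel rounds.

```python
import math

def tree_reduce_sum(values):
    """Pairwise (binary-tree) reduction: sums N values in ceil(log2 N)
    parallel rounds, the pattern behind the O(log N) recall step.
    Returns (sum, number_of_rounds)."""
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        # One parallel round: disjoint pairs are combined simultaneously;
        # an odd element, if any, is carried over to the next round.
        paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:
            paired.append(vals[-1])
        vals = paired
        rounds += 1
    return vals[0], rounds

def neuron_output(weights, inputs):
    """Recall-phase step for one neuron: weighted sum via tree
    reduction, followed by a sigmoid activation."""
    # All N multiplications can happen in parallel: one O(1) step.
    products = [w * x for w, x in zip(weights, inputs)]
    s, rounds = tree_reduce_sum(products)
    return 1.0 / (1.0 + math.exp(-s)), rounds
```

For a layer of 8 inputs the reduction finishes in 3 rounds (log2 8), versus 7 sequential additions; the gap widens logarithmically as N grows.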