ABSTRACT

Training set parallelism and network-based parallelism are two popular paradigms for parallelising a feedforward (artificial) neural network. Training set parallelism is particularly suited to feedforward neural networks with backpropagation learning where the size of the training set is large relative to the size of the network. This study analyses training set parallelism for feedforward neural networks implemented on a transputer array configured in a pipelined ring topology. An analytical expression for the training time per epoch (iteration) is derived. Given a fixed neural network and a fixed number of training samples, this expression can be used to determine the optimal number of transputers that minimises the training time per epoch, without actually performing the simulations. An expression for the speed-up is also derived; it shows that the speed-up is a function of the number of patterns per processor, the communication overhead per epoch, and the total number of processors in the topology.