ABSTRACT

Efficient implementation of neural networks requires high-performance architectures and data representations, while practical VLSI realization must also incorporate fault-tolerance techniques. The literature has not yet fully addressed these problems jointly. This paper focuses on data representation to support high-performance neural computation and on error detection to provide the basic information needed by any fault-tolerance strategy. To achieve massively parallel performance and to guarantee early verification of computation correctness, we propose the use of a redundant binary representation with a three-rail logic implementation. Costs and performance are evaluated for different architectural solutions, with reference to multi-layered feed-forward networks.