ABSTRACT

In the last decade, renewed interest in developing computational models of the brain, coupled with advances in VLSI technology, has created many opportunities to realize such models in silicon. Along with these biologically motivated neural chips, artificial neural networks (ANNs) have also re-emerged as massively parallel alternatives to conventional adaptive learning algorithms. The advantage of ANNs over those adaptive algorithms lies in much faster computation and learning. However, to deliver that computational power, ANNs must be implemented in VLSI hardware. Putting an algorithm on silicon raises many design issues: the choice of circuit architecture, analog versus digital implementation, voltage mode versus current mode, general-purpose versus application-specific design, network size, learning algorithm, and weight storage. Several challenges are particularly important to realizing the computational power of neurocomputers, which in the broad sense of the word can mean any computer architecture based on neural network paradigms. Many neural network chips and boards, including digital ones, have been shown to perform extremely fast computations. However, input/output interfaces remain critical for moving data in and out of these chips and boards. Packaging is another issue: because of the massively parallel nature of neural network architectures, three-dimensional (3-D) packaging may be necessary to avoid serial data interfaces. Circuit and computational architecture is a further important factor in realizing high-performance neurocomputers. In this paper, the latest research efforts in neurocomputers, and the challenges facing this community, will be discussed.
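To make the parallelism claim concrete, the following is a minimal sketch (not drawn from the paper; the layer widths N_IN and N_OUT and the sigmoid activation are illustrative assumptions) of one fully connected ANN layer, y = f(Wx + b), in C. Each output neuron's multiply-accumulate chain depends only on the shared input vector, so all neurons could evaluate simultaneously in silicon; a sequential processor, or a serial data interface, forces the loop below to run one neuron at a time.

```c
#include <math.h>
#include <stdio.h>

#define N_IN  4   /* illustrative input width  (assumption) */
#define N_OUT 3   /* illustrative output width (assumption) */

/* Sigmoid activation, one common choice in ANN implementations. */
static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* Forward pass of one fully connected layer: y = f(Wx + b).
 * Each iteration of the outer loop (one output neuron) reads only the
 * shared input vector and its own weights, so all N_OUT neurons are
 * independent and could run concurrently in hardware; this software
 * loop serializes what silicon need not. */
static void layer_forward(const double w[N_OUT][N_IN], const double b[N_OUT],
                          const double x[N_IN], double y[N_OUT])
{
    for (int j = 0; j < N_OUT; j++) {        /* one independent neuron */
        double acc = b[j];
        for (int i = 0; i < N_IN; i++)
            acc += w[j][i] * x[i];           /* multiply-accumulate */
        y[j] = sigmoid(acc);
    }
}

int main(void)
{
    /* Illustrative weights, biases, and input (assumptions). */
    const double w[N_OUT][N_IN] = {
        { 0.5, -0.2, 0.1,  0.4},
        {-0.3,  0.8, 0.0, -0.1},
        { 0.2,  0.2, 0.6,  0.3},
    };
    const double b[N_OUT] = {0.1, -0.2, 0.0};
    const double x[N_IN]  = {1.0, 0.5, -1.0, 0.25};
    double y[N_OUT];

    layer_forward(w, b, x, y);
    for (int j = 0; j < N_OUT; j++)
        printf("y[%d] = %f\n", j, y[j]);
    return 0;
}
```

This independence is precisely what motivates the I/O and packaging concerns above: a chip evaluating N_OUT neurons concurrently is only as fast as the interface that feeds its inputs and drains its outputs.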