ABSTRACT

With the emergence of applications in artificial intelligence, computer vision, speech recognition, and machine learning, neural networks have become the most widely adopted solution. Because neural networks run inefficiently on general-purpose processors, a variety of application-specific heterogeneous neural network accelerators have been proposed. This chapter begins with an introduction to the principles of neural network algorithms and the background of their hardware acceleration. It then surveys several common neural network hardware accelerator architectures, summarizes the design and optimization methods of typical neural network accelerators, and finally reviews the progress of related work and draws conclusions. Building on this background, accelerator deployment on field-programmable gate array (FPGA) platforms will be used as an example to explain in detail how hardware accelerators are customized for neural network algorithms.