ABSTRACT

Deep belief networks (DBNs) are a fundamental type of deep neural network and a typical deep generative model, consisting of a stack of restricted Boltzmann machines (RBMs). DBNs are currently used in speech recognition, handwriting recognition, text classification, and other applications. Because DBNs are fully connected neural networks, the number of parameters and the computational load grow dramatically as the network size increases, so one of the main research directions for DBNs is processing massive data more quickly and effectively. This chapter focuses on the customization and optimization of hardware accelerators for DBNs: it first introduces the background of hardware acceleration for DBNs in Section 8.1, then presents the algorithmic principles of DBNs, and finally uses the example of a DBN deployed on a Field Programmable Gate Array (FPGA) platform to describe the details of hardware customization.