ABSTRACT

Geoffrey Hinton proposed an unsupervised network model called the deep belief network (DBN), together with a new method for training deep neural networks such as DBNs. The method employs greedy layer-wise pretraining to mitigate the local-optimum and other problems of artificial neural networks (ANNs). Deep learning builds computational models that simulate the mechanism of human neural networks to interpret text, images, speech, and other complex data. Neuron values of a lower layer serve as the input to the layer above. In deep neural networks (DNNs) in the narrow sense, neurons in adjacent layers are fully connected by individual synaptic weights, while neurons within the same layer or in non-adjacent layers are not connected. Owing to their high performance and energy efficiency, hardware accelerators for deep learning have proliferated in recent years. Hardware acceleration means mapping the whole algorithm, or its compute-intensive part, onto hardware devices and speeding up the computation by exploiting the inherent parallelism of those devices.
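The layered, fully connected structure described above can be sketched as follows. This is a minimal illustration, assuming NumPy and a tanh activation (the abstract does not specify an activation function); the layer sizes and the `forward` helper are hypothetical choices for the example.

```python
import numpy as np

def forward(x, weights, biases):
    # Each layer's neuron values become the input of the layer above it.
    a = x
    for W, b in zip(weights, biases):
        # Dense weight matrix W: every neuron of one layer connects to
        # every neuron of the adjacent layer by an individual weight.
        a = np.tanh(W @ a + b)
    return a

rng = np.random.default_rng(0)
# A 3 -> 4 -> 2 network: adjacent layers fully connected,
# no connections within a layer or between non-adjacent layers.
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [np.zeros(4), np.zeros(2)]
out = forward(np.ones(3), weights, biases)
print(out.shape)  # (2,)
```

The loop makes the data flow explicit: the activations of each layer are consumed only by the next layer, which is the regularity that hardware accelerators exploit for parallel execution.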