ABSTRACT

Machine learning algorithms have achieved unprecedented performance on many real-world detection and classification tasks, for example in image or speech recognition. Despite these advances, there are some deficits. First, machine learning algorithms require significant memory access, which rules out implementations on standard platforms for embedded applications. Second, most machine learning algorithms need to be trained on huge datasets. Resistive memories (RRAM) have proven to be a promising candidate for overcoming both of these constraints. RRAM arrays can act as dot-product accelerators, one of the main building blocks of neuromorphic computing systems. This approach could improve power consumption and speed with respect to networks running on graphics processing units or central processing units. This chapter presents a possible hardware implementation of a spike-based convolutional neural network for visual pattern recognition. Findings in the neuroscience community show that biological synapses exhibit different kinds of plasticity rules.
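The dot-product operation an RRAM crossbar accelerates can be sketched in a few lines. The idea, not specific to this chapter, is that input voltages applied to the rows and cell conductances G combine via Ohm's and Kirchhoff's laws, so each column current is the dot product of the voltage vector with that column of conductances. The conductance values below are illustrative assumptions.

```python
import numpy as np

# Sketch of the analog dot product performed by an RRAM crossbar:
# each cell draws current V[i] * G[i, j]; currents sum along columns,
# giving I = V @ G in a single read operation.
G = np.array([[1.0e-6, 2.0e-6],   # cell conductances in siemens (assumed)
              [3.0e-6, 4.0e-6]])
V = np.array([0.5, 1.0])          # input voltages in volts (assumed)

I = V @ G                         # column output currents in amperes
print(I)                          # each entry is one dot product
```

In a neuromorphic system the conductances play the role of synaptic weights, so a single crossbar read implements the weighted summation at the core of a neural-network layer.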