In this chapter, we present a simple yet powerful visual memory model, the Increment Pattern Association Memory Model (IPAMM), which achieves incremental learning and rapid retrieval of visual information. We extend the basic pattern association network by learning patterns within the Leabra framework, which combines error-driven learning and Hebbian learning. In the proposed model, image features act as the conditioned stimulus and class labels act as the unconditioned stimulus. Incremental learning is achieved by assigning individual synaptic weights to different categories. Meanwhile, a k-winners-take-all (kWTA) inhibitory competition function is employed to produce sparse distributed representations, and the normalized dot product is used to realize rapid recall of visual memory. To evaluate the proposed model, we conduct a series of experiments on three benchmark datasets using three feature-extraction methods. Experimental results demonstrate that the proposed model outperforms several prevalent neural networks, including SOINN, BPNN, Biased ART, and Self-supervised ART.
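The two mechanisms named above, kWTA sparsification and normalized-dot-product recall, can be illustrated with a minimal NumPy sketch. The function names `kwta` and `recall` and the array shapes are illustrative assumptions, not part of the model's specification:

```python
import numpy as np

def kwta(activations, k):
    """Keep only the k largest activations, zeroing the rest.
    This yields the sparse distributed representation used for memory."""
    out = np.zeros_like(activations)
    top = np.argsort(activations)[-k:]  # indices of the k winners
    out[top] = activations[top]
    return out

def recall(query, memory):
    """Score each stored pattern (rows of `memory`) against `query`
    with the normalized dot product (cosine similarity); the best
    match is simply the argmax of the scores."""
    q = query / np.linalg.norm(query)
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    return m @ q
```

Because recall reduces to one matrix-vector product, retrieval cost grows linearly with the number of stored patterns, which is what makes the rapid-recall claim plausible.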