One of the primary factors in the resurgence of connectionist modeling is these models’ ability to learn input-output mappings. Simply by being presented with example inputs and their corresponding outputs, a model can learn to reproduce those examples and to generalize in interesting ways. After the limitations of perceptron learning (Minsky & Papert, 1969; Rosenblatt, 1958) were overcome, most notably by the back-propagation algorithm (Rumelhart, Hinton, & Williams, 1986) but also by other ingenious learning methods (e.g., Ackley, Hinton, & Sejnowski, 1985; Hopfield, 1982), connectionist learning models exploded in popularity. Connectionist models provide a rich language in which to express theories of associative learning. Architectures and learning rules abound, all waiting to be explored and tested for their ability to account for learning by humans and other animals.
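To make the ideas above concrete, the following is a minimal sketch (not any specific model from the cited literature) of back-propagation learning an input-output mapping by example. It trains a small network with one hidden layer on the XOR mapping, the classic case that a single-layer perceptron cannot learn (Minsky & Papert, 1969); the network size, learning rate, and training length are illustrative assumptions, not values from any published model.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: the input-output mapping a single-layer perceptron cannot learn
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# One hidden layer; sizes and initialization range are illustrative choices
n_hidden = 3
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n_hidden)]  # 2 inputs + bias
w_out = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]                     # hidden units + bias

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(n_hidden)) + w_out[-1])
    return h, y

def train_epoch(lr=0.5):
    """One pass over the examples; returns total squared error."""
    total = 0.0
    for x, t in data:
        h, y = forward(x)
        total += (t - y) ** 2
        # Output-layer delta: error times the sigmoid derivative
        d_out = (t - y) * y * (1 - y)
        # Hidden-layer deltas: error signal back-propagated through the output weights
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
        # Gradient-descent weight updates
        for j in range(n_hidden):
            w_out[j] += lr * d_out * h[j]
        w_out[-1] += lr * d_out
        for j in range(n_hidden):
            w_hidden[j][0] += lr * d_hid[j] * x[0]
            w_hidden[j][1] += lr * d_hid[j] * x[1]
            w_hidden[j][2] += lr * d_hid[j]
    return total

loss_before = train_epoch()
for _ in range(5000):
    loss_after = train_epoch()
```

After training, the network both reproduces the four training examples and, because the learned weights define a smooth input-output function, responds systematically to novel intermediate inputs, which is the sense of generalization at issue here.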