ABSTRACT

There is a widespread conviction among cognitive psychologists that humans, and indeed all animals, are equipped with powerful learning mechanisms for extracting regularities from the environment. Many believe that regularities are learned as an inevitable consequence of encoding individual events in memory, through learning processes that can be broadly classed as “superpositional.” Events are represented as sets of features, and as the representations of successive events are “superimposed” on each other, common features and underlying generalizations are extracted. Exemplar-based memory models (Hintzman, 1986) and connectionist networks (McClelland & Rumelhart, 1985) are computational instantiations of this principle. Typically, such models take no account of conscious states because learning is assumed to operate unconsciously, and as an inevitable by-product of the way in which events are encoded in memory. Learning mechanisms of this type do not in any way depend on conscious states; they operate implicitly.
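To make the superpositional principle concrete, the following is a minimal illustrative sketch, not an implementation of Hintzman (1986) or McClelland and Rumelhart (1985). It assumes, purely for illustration, that each event is a vector of +1/-1 features, some of which are shared across events (the regularity) and some of which are idiosyncratic; all parameter values below are arbitrary. Summing successive event vectors onto a single composite trace shows how the shared features accumulate while idiosyncratic ones cancel, so the regularity is extracted as a by-product of encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 20   # each event is a vector of +1/-1 features
n_events = 200    # number of individual events encoded
n_common = 8      # features shared by every event (the "regularity")

# The regularity: a fixed pattern over the first n_common features.
prototype = rng.choice([-1, 1], size=n_common)

def make_event():
    """One event: the shared regularity plus idiosyncratic features."""
    idiosyncratic = rng.choice([-1, 1], size=n_features - n_common)
    return np.concatenate([prototype, idiosyncratic])

# Superpositional encoding: successive event representations are simply
# summed ("superimposed") onto a single composite memory trace.
composite_trace = np.zeros(n_features)
for _ in range(n_events):
    composite_trace += make_event()

# Shared features accumulate across events; idiosyncratic features largely
# cancel, so the sign of the composite trace recovers the regularity.
recovered = np.sign(composite_trace[:n_common]).astype(int)
print("prototype:", prototype)
print("recovered:", recovered)
print("match:", np.array_equal(recovered, prototype))
```

Nothing in this sketch consults or requires a conscious state: the regularity emerges solely from the arithmetic of storing events, which is the sense in which such learning mechanisms are said to operate implicitly.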