Computational Models of Implicit Learning
This chapter examines computational models of implicit learning. One important result from the literature on artificial grammar learning is that people can transfer their knowledge to a different letter set embodying the same grammar.

The intuition motivating the work of Servan-Schreiber and Anderson is that perception and memory are both more-or-less automatic processes of chunking. Their Competitive Chunking model was given the same data as human participants, under the assumption that it would recall a string only if Nchunks equalled one for that string, that is, only if the entire string could be covered by a single chunk.

Connectionism, by contrast, attempts to model human performance in terms of patterns of activation across a number of simple computational elements, or units, connected by weighted links. In such models, an autoassociator is trained on the grammatical strings presented in the learning phase; how well it can then reconstruct a test string serves as an index of that string's grammaticality.

In sum, a number of promising approaches exist for modelling the learning of finite-state grammars: classifier systems, Competitive Chunking, exemplar models, and connectionist models.
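The autoassociator idea can be illustrated with a minimal sketch. The grammar below is a hypothetical Reber-style finite-state grammar (its transitions are illustrative, not taken from any particular study), strings are one-hot encoded by position, and a simple linear autoassociator is trained with the delta rule to reproduce its inputs. After training, grammatical strings are reconstructed with lower error than random letter strings, which is the sense in which the network has implicitly learned the grammar.

```python
import numpy as np

# Hypothetical finite-state grammar: each state maps to a list of
# (letter, next_state) transitions; state 5 is the accepting state.
# The specific transitions are illustrative only.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
    5: [],
}
LETTERS = "TSXPV"
MAX_LEN = 12

def generate_string(rng):
    """Walk the grammar from the start state, emitting one letter per step."""
    state, out = 0, []
    while GRAMMAR[state] and len(out) < MAX_LEN:
        letter, state = GRAMMAR[state][rng.integers(len(GRAMMAR[state]))]
        out.append(letter)
    return "".join(out)

def encode(s):
    """One-hot encode a string, position by position, into a flat vector."""
    v = np.zeros(MAX_LEN * len(LETTERS))
    for i, ch in enumerate(s):
        v[i * len(LETTERS) + LETTERS.index(ch)] = 1.0
    return v

def train_autoassociator(vectors, epochs=50, lr=0.01):
    """Delta-rule training of a linear autoassociator: W learns to map
    each training vector back onto itself (W v ~ v)."""
    n = len(vectors[0])
    W = np.zeros((n, n))
    for _ in range(epochs):
        for v in vectors:
            W += lr * np.outer(v - W @ v, v)
    return W

def reconstruction_error(W, v):
    """Squared error between a vector and its reconstruction."""
    return float(np.sum((v - W @ v) ** 2))

rng = np.random.default_rng(0)
train_strings = [generate_string(rng) for _ in range(100)]
W = train_autoassociator([encode(s) for s in train_strings])

gram_err = np.mean([reconstruction_error(W, encode(s)) for s in train_strings])
rand_strings = ["".join(LETTERS[rng.integers(len(LETTERS))] for _ in range(8))
                for _ in range(50)]
rand_err = np.mean([reconstruction_error(W, encode(s)) for s in rand_strings])
print(f"grammatical error {gram_err:.3f} < random error {rand_err:.3f}")
```

The network stores no explicit rules; the grammar's regularities are absorbed into the weight matrix, so ungrammatical strings fall outside the subspace the network has learned to reconstruct and incur larger error. This is only a sketch of the general connectionist approach, not of any specific published simulation.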