ABSTRACT

We report a series of experiments on connectionist learning that addresses a particularly pressing set of objections to the plausibility of connectionist learning as a model of human learning. Connectionist models have typically suffered from rather severe problems of inadequate generalization (the number of novel items correctly generalized is significantly smaller than the number of training items) and of interference of newly learned items with previously learned items. Taking a cue from the domains in which human learning dramatically overcomes such problems, we show that connectionist learning can indeed escape these problems in combinatorially structured domains. In the simple combinatorial domain of letter sequences, we find that a basic connectionist learning model trained on 50 six-letter sequences can correctly generalize to about 10,000 novel sequences. We also discover that the model exhibits over 1,000,000 virtual memories: new items that, although not correctly generalized, can be learned in a few presentations while leaving performance on the previously learned items intact. We conclude that connectionist learning does not undermine the empiricist position as severely as previously reported experiments might suggest.
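
The sketch below illustrates the kind of experiment summarized above: a small auto-associative feedforward network trained by gradient descent on a handful of letter sequences drawn from a combinatorial domain, then probed for generalization to novel sequences and for virtual memories (a few presentations of a new item, followed by a check that the original items are still recalled). This is not the authors' code; the alphabet, one-hot encoding, hidden-layer size, learning rate, and training schedule are illustrative assumptions, and its numerical results will not reproduce the figures reported in the paper.

    # Illustrative sketch only: a minimal auto-associative network probed for
    # generalization and "virtual memories" on combinatorial letter sequences.
    # All parameters below are assumptions, not the paper's actual settings.
    import numpy as np

    rng = np.random.default_rng(0)

    ALPHABET = "abcdefghij"            # assumed 10-letter alphabet
    SEQ_LEN = 6                        # six-letter sequences, as in the abstract
    N_LETTERS = len(ALPHABET)
    IN_DIM = SEQ_LEN * N_LETTERS       # one one-hot block per position, concatenated
    HIDDEN = 30                        # assumed hidden-layer size

    def encode(seq):
        # Concatenated one-hot code: one block of N_LETTERS units per position.
        x = np.zeros(IN_DIM)
        for pos, ch in enumerate(seq):
            x[pos * N_LETTERS + ALPHABET.index(ch)] = 1.0
        return x

    def decode(y):
        # Winner-take-all within each position's block of output units.
        return "".join(ALPHABET[int(np.argmax(y[p * N_LETTERS:(p + 1) * N_LETTERS]))]
                       for p in range(SEQ_LEN))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class AutoAssociator:
        # Two-layer sigmoid network trained by plain online backpropagation on x -> x.
        def __init__(self):
            self.W1 = rng.normal(0.0, 0.1, (HIDDEN, IN_DIM))
            self.W2 = rng.normal(0.0, 0.1, (IN_DIM, HIDDEN))

        def forward(self, x):
            h = sigmoid(self.W1 @ x)
            return h, sigmoid(self.W2 @ h)

        def train(self, patterns, epochs=500, lr=0.5):
            for _ in range(epochs):
                for x in patterns:
                    h, y = self.forward(x)
                    dy = (y - x) * y * (1.0 - y)           # squared-error gradient at output
                    dh = (self.W2.T @ dy) * h * (1.0 - h)  # backpropagated to hidden layer
                    self.W2 -= lr * np.outer(dy, h)
                    self.W1 -= lr * np.outer(dh, x)

        def recalls(self, seq):
            _, y = self.forward(encode(seq))
            return decode(y) == seq

    def random_seq():
        # Combinatorial domain: each position's letter is chosen independently.
        return "".join(rng.choice(list(ALPHABET)) for _ in range(SEQ_LEN))

    train_seqs = [random_seq() for _ in range(50)]         # 50 training sequences
    net = AutoAssociator()
    net.train([encode(s) for s in train_seqs])

    # Generalization: how many novel sequences from the same domain are recalled correctly?
    novel = [random_seq() for _ in range(1000)]
    print("novel sequences recalled:", sum(net.recalls(s) for s in novel), "/ 1000")

    # Virtual memory: a few presentations of one new item, then check the old items survive.
    new_item = random_seq()
    net.train([encode(new_item)], epochs=5)
    print("new item recalled:", net.recalls(new_item))
    print("old items still recalled:", sum(net.recalls(s) for s in train_seqs), "/ 50")

The per-position winner-take-all decoding is one simple way to score whether a sequence is "correctly" reproduced; the scoring criterion and the structure of the combinatorial domain used in the paper may differ.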