ABSTRACT

We explore the ability of a static connectionist algorithm to model children's acquisition of velocity, time, and distance concepts under architectures of differing computational power. Diagnosis of the rules learned by the networks indicated that static networks were either too powerful or too weak to capture the developmental course of children's concepts: networks with too much power missed intermediate stages, while those with too little power failed to reach terminal stages. These results were robust across a variety of learning-parameter values. We argue that a generative connectionist algorithm, which gradually increases its representational power, provides a better model of the development of these concepts.