ABSTRACT

Recent work in developmental psycholinguistics suggests that children may bootstrap grammatical categories and basic syntactic structure by exploiting distributional, phonological, and prosodic cues. Previous connectionist work has indicated that multiple-cue integration is computationally feasible for small artificial languages. In this paper, we present a series of simulations exploring the integration of distributional and phonological cues in a connectionist model trained on a full-blown corpus of child-directed speech. In the first simulation, we demonstrate that the model performs very well when trained on purely distributional information represented in terms of lexical categories. In the second simulation, we demonstrate that networks trained on distributed vectors incorporating phonetic information about words also achieve a high level of performance. Finally, we employ discriminant analyses of hidden-unit activations to show that the networks are able to integrate phonological and distributional cues in the service of developing highly reliable internal representations of lexical categories.