ABSTRACT

Recent debate on the representation of linguistic rules has focused on the role of phonological regularities in governing the mapping between forms and meanings. The debate has also centered on domains where there is an explicit rule (versus exceptions) at a linguistically descriptive level. In this study, we present a problem where there is no explicit rule even on a descriptive level, and where the mapping is governed primarily by covert semantic structures or "cryptotypes" (see Whorf, 1956). We built a semantically grounded connectionist model to learn the reversive prefixes un- and dis- in English (Li, 1993; Li & MacWhinney, in press). Simulation results indicate, first, that our model captures Whorf's "cryptotypes" precisely, although these cryptotypes are traditionally described as "subtle" and "intangible" in symbolic accounts. Second, the model shows how distributed, structured representations of cryptotypes constrain the system's productivity in learning. The simulation results provide insights into the psycholinguistic mechanisms underlying existing empirical data from human children (Bowerman, 1983; Clark, Carpenter, & Deutsch, 1995). Finally, the model displays early plasticity and late rigidity in learning to recover from productive errors, which is consistent with current empirical and computational evidence (see Elman, 1993). Simulations that incorporate both semantic and phonological information show that the model cannot learn the correct mapping from phonological information alone, attesting to the importance of the semantic basis of the problem (see also Cottrell & Plunkett, 1991). However, the inclusion of phonological information helps the model recover from errors more effectively and completely.
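To make the mapping problem concrete, the sketch below is a deliberately simplified, invented illustration (not the authors' actual network architecture or training data): a single-layer perceptron that maps hand-coded binary semantic features of verbs, standing in for the covert cryptotype features, to a choice of reversive prefix. All feature names and verb examples are hypothetical.

```python
import random

random.seed(0)

# Invented toy data: (semantic feature vector, prefix class).
# Features: [covering, attachment, enclosing, separation]
# Classes:  0 = un-, 1 = dis-  (assignments are illustrative only)
DATA = [
    ([1, 0, 0, 0], 0),  # e.g. "cover"    -> uncover
    ([0, 1, 0, 0], 0),  # e.g. "tie"      -> untie
    ([0, 0, 1, 0], 0),  # e.g. "wrap"     -> unwrap
    ([0, 0, 0, 1], 1),  # e.g. "connect"  -> disconnect
    ([1, 0, 0, 1], 1),  # e.g. "assemble" -> disassemble
]

FEATURES = 4
w = [0.0] * FEATURES  # learned weights over semantic features
b = 0.0               # bias term


def predict(x):
    """Return 1 (dis-) if the weighted feature sum is positive, else 0 (un-)."""
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else 0


# Standard perceptron updates: nudge weights toward misclassified items.
for epoch in range(20):
    for x, y in DATA:
        err = y - predict(x)
        if err:
            for i in range(FEATURES):
                w[i] += err * x[i]
            b += err
```

On this separable toy set the perceptron converges within a few epochs, so that `predict` returns the correct prefix class for every training item. The real model in the paper is of course richer (distributed representations, gradient learning, phonological input in later simulations), but the sketch shows the basic idea of a form-class choice driven entirely by semantic features rather than an explicit rule.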