ABSTRACT

Every field has its sacred cows, and visual word recognition is no exception. Two such sacred cows are the assumptions that the mind contains several lexica of word forms and that a number of routines make use of these word forms in various ways to read aloud, make lexical decisions, and access meaning. These assumptions are common to a number of otherwise quite different word recognition models (e.g., Balota & Chumbley, 1984; Becker, 1976, 1979; Besner & Johnston, 1989; Besner & McCann, 1987; Forster, 1976; Meyer & Schvaneveldt, 1971; Morton, 1969, 1979; Norris, 1986; Paap, McDonald, Schvaneveldt, & Noel, 1987; Paap, Newsome, McDonald, & Schvaneveldt, 1982; Rubenstein, Lewis, & Rubenstein, 1971; Treisman, 1960). Moreover, these basic assumptions went unchallenged until recently. The parallel distributed processing (PDP) model developed by McClelland and his colleagues (e.g., Patterson, Seidenberg, & McClelland, in press; Seidenberg & McClelland, 1989) is unique in that it has no lexicon; reading aloud, accessing semantics, and making lexical decisions to different types of alphabetic letter strings are all accomplished without one.