ABSTRACT

Research in psycholinguistics traditionally has been guided by the assumption that the comprehension of language involves the construction of increasingly abstract levels of representation over time. For example, in understanding an utterance, a listener initially may construct fairly shallow descriptions of the utterance’s acoustic and phonetic form. These shallow representations, then, guide (or serve as input to) intermediate-level processors (e.g., lexical and syntactic processors) whose output, in turn, fuels higher-level processing systems (e.g., those that determine the meaning of individual sentences, and build representations of the discourse as a whole). The adoption of this levels-of-processing framework has influenced significantly the kinds of theoretical issues that psycholinguists have tended to view as central to the field. Thus, researchers have been preoccupied with addressing such issues as the “psychological” reality of particular linguistic levels of representation, the temporal relationship among levels (i.e., whether levels of processing operate in strict temporal succession, or whether they operate in cascade), and the ways that a given level responds to ambiguous input (e.g., whether it computes a single interpretation or instead constructs multiple representations). One issue in particular, however, that has motivated an impressive amount of research over the past decade concerns the modular versus interactive nature of the various processing systems. This research specifically has sought to determine whether there is only bottom-up communication among processing levels (i.e., where information flows only from lower to higher processors), or whether, in addition, higher-level processors sometimes can guide the operation of lower-level systems. Thus, it is the nature of the dependencies that exist among processing levels that is at the heart of the modularity debate.