Integration of Visuospatial and Linguistic Information: Language Comprehension in Real Time and Real Space
Many psycholinguistic theories postulate that, as a spoken linguistic message unfolds over time, it is initially processed by modules that are encapsulated from information provided by other perceptual and cognitive systems. We observed, however, immediate effects of relevant visual context on the rapid mental processes that accompany spoken language comprehension. Using a head-mounted eye-tracking system, we recorded eye movements while subjects followed instructions containing spatial prepositions to move real objects around on a table. Under conditions that approximate an ordinary language environment, the visual context influenced spoken word recognition and mediated the resolution of prepositional-phrase ambiguity, even during the earliest moments of language processing. These results suggest that approaches to mapping spatial language onto spatial vision may be most successful when provisional (or probabilistic) interpretations from both modalities are integrated early and simultaneously.