ABSTRACT

Although each of our sensory modalities uniquely specifies different kinds of sensory experiences, there are many dimensions of experience that correspond across the modalities. The phenomenon of intersensory correspondence and its significance for perception and cognition has been of interest since the time of the Greek philosophers (Marks, 1978). What possible advantage can we gain from having multimodal sources of information about objects and events? Some have considered this question from an evolutionary perspective and have suggested that the ability to use multimodal information allows greater plasticity in behavior and that this, in turn, leads to greater adaptability to one’s ecological niche (Maier & Schneirla, 1935). For example, a predator is far more accurate in localizing prey when both auditory and visual cues specify the prey than when only a unimodal cue specifies it (Stein & Meredith, 1993). Others have considered this question from a functional perspective and have suggested that the specification of an object or event in terms of several concurrent and corresponding attributes is advantageous because the resulting redundancy makes identification of the object or event more certain and the correspondences make perceptual integration possible (J. J. Gibson, 1966; Welch & Warren, 1986). For example, adult subjects are considerably more accurate in their identification of linguistic information when it is specified by both visible and audible information than when it is specified by audible information alone (Massaro & Cohen, 1990; Summerfield, 1979).