ABSTRACT

In recent years, cognitive neuroscience researchers have become increasingly interested in how information from the various sensory epithelia (including visual, tactile, and proprioceptive cues concerning limb position) is integrated in the brain to enable people to localize tactile stimuli, to give rise to the “felt” position of the limbs, and, ultimately, to support the multisensory representation of peripersonal space. Here, we highlight recent research on this topic that has used the crossmodal congruency task. In its basic form, this task requires participants to make speeded elevation discrimination responses to vibrotactile targets presented to the thumb or index finger, while simultaneously trying to ignore irrelevant visual distractors presented from either the same (i.e., congruent) or a different (i.e., incongruent) elevation. The largest crossmodal congruency effects (calculated as the difference in performance between incongruent and congruent trials) are seen when the visual and vibrotactile stimuli are presented from the same region of space, thus providing an index of the common localization of stimuli across sensory modalities. Crossmodal congruency effects have now been demonstrated across a range of target and distractor modalities, using both spatial and non-spatial versions of the congruency task. Cognitive neuroscientists are currently using the task to investigate a number of questions related to the multisensory representation of space in normal participants, and to assess putative disturbances to the multisensory representation of space in brain-damaged patients. In this review, we highlight the key findings to have emerged from research using the crossmodal congruency task over the past decade.
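
As a minimal formalization of the effect measure described above (a sketch, assuming, as in much of this literature, that performance on each trial type is indexed by mean response time, RT, with accuracy sometimes folded in as an inverse efficiency score):

\[
\mathrm{CCE} = \overline{\mathrm{RT}}_{\text{incongruent}} - \overline{\mathrm{RT}}_{\text{congruent}},
\qquad
\mathrm{IE} = \frac{\overline{\mathrm{RT}}}{\text{proportion of correct responses}}
\]

On this formulation, larger positive values of the CCE (computed on RTs, or on the difference between incongruent and congruent IE scores) indicate greater interference from the task-irrelevant visual distractors.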