This chapter reviews neuropsychological evidence for the multimodal dual coding structures and processes described in Chapter 3. Because objects and language both come in visual, auditory, and haptic (feelable) modalities, it follows that there must be corresponding modality-specific neural representations that are activated during perception, memory, thought, and communication. Figure 7.1 illustrates this multimodal dual coding model for the concept "telephone"—the word and the object telephone as seen, heard, and felt (including the feel of the object and of the movement pattern when one writes the word). The corresponding neural representations presumably are located in different areas of the brain. Also shown are pathways that connect the representations to the perceptual world and to response systems, so that words and telephones "out there" can be recognized and responded to in appropriate ways. There are also connecting pathways between the different modalities of verbal and nonverbal representations, so that telephones as seen, heard, or felt can be named and, conversely, their names can evoke images in any modality. A more comprehensive DCT model would also include associative connections among the three sensorimotor modalities of telephone words or objects, as well as connections to representations for other concepts, so that activity can spread associatively within the verbal or nonverbal systems.

Figure 7.1 Depiction of the multimodal dual coding model showing visual, auditory, and haptic logogens and imagens corresponding to the names and sensorimotor properties of the object "telephone."