Extracting meaningful semantic information from a series of transient acoustic variations is a highly complex and resource-demanding process. Well over a century of research has focused on the mechanisms of speech processing (Bagley, 1900-1901), with considerable progress in recent decades following the advent of brain-imaging techniques. Today, our understanding of this remarkable ability is informed by the integration of theories and empirical research from disciplines including linguistics, psycholinguistics, psychology, cognitive science, computational science, and neuroscience (C. Cherry, 1953a; Friederici, 1999; Norris, 1994; Norris et al., 1995; Plomp & Mimpen, 1979; Treisman, 1964b). Yet, after more than a century of research, many details of this process remain elusive. As Nygaard and Pisoni (1995) put it:

This chapter examines current views on the lexical interpretation of acoustic information, discussing how sensory, acoustic, and cognitive factors interact and how that interaction shapes the mental workload of speech processing.