ABSTRACT

Theories of face processing often consider the recovery of face identity and facial speech information as involving distinct operations (Bruce & Valentine, 1988; Liberman & Mattingly, 1985). This is analogous to traditional theories of auditory speech perception, which assume a dissociation between linguistic and voice recovery (Halle, 1985; Liberman & Mattingly, 1985). However, recent observations with auditory speech suggest that these two tasks might not be as separate as once assumed. For example, Remez and his colleagues showed that isolated linguistic (phonetic) information can be informative about speaker identity (Remez, Fellowes, & Rubin, in press). An experiment was conducted to determine whether isolated visual speech information can be salient for face recognition. A point-light technique was implemented to isolate visual speech information (Bassili, 1978; Rosenblum & Saldana, in press; Bruce & Valentine, 1988). Speakers were shown on videotape under both full-illumination and point-light conditions, articulating the sentence 'The football game is over'. The same stimuli were also shown under static conditions. A two-alternative forced-choice (2AFC) procedure was used to determine whether observers could match the correct articulating point-light face to the same articulating fully-illuminated face. Results revealed that dynamic point-light displays afforded face-matching accuracy that was significantly greater than chance and significantly greater than accuracy with static point-light displays.