Characterizing Perceptual Interactions in Face Identification Using Multidimensional Signal Detection Theory
A central problem for researchers of face perception is how the human face is perceptually processed (Bruce, 1988). The visual system is known to analyze the retinal image into basic attributes such as orientation, color, motion, and size, among others (De Valois & De Valois, 1988; Graham, 1989), which are then resynthesized, so it seems natural to suppose that a face, as a visual stimulus, is likewise analyzed into independent parts or features that are somehow reconstituted in the whole. However, a good deal of empirical evidence suggests that the face is more than the simple additive sum of its parts. Many studies employing a wide array of paradigms purport to demonstrate that faces are perceived as Gestalt wholes or, more generally, that parts of a face somehow interact during its perception. Progress on this question has been hampered by a lack of agreement on the basic definitions of part, interaction, and configural.