Viewpoint Generalization in Face Recognition: The Role of Category-Specific Processes
Computational theories of visual recognition that postulate image-based, view-dependent object representations have been gaining both computational and psychophysical support in recent years. These provide an alternative to theoretical perspectives on object representation that assume three-dimensional shape reconstruction (Edelman, 1997; Ullman, 1996). This theoretical development has created a certain tension within the psychological literature on object and face recognition.1 Specifically, although psychologists consistently find that the human visual system is capable of making sense of an image of an object even when the object is encountered under novel viewing conditions, it is not immediately clear how a view-based representation can support this ability.
We attempt to ease the tension between theory and experiment by showing that (a) given prior experience with objects of similar shape, multiple-view models can be made to exhibit a considerable degree of generalization, both to novel views and to novel objects; and (b) such models are relevant for understanding human generalization performance on novel stimuli, which, likewise, depends on prior exposure to objects that belong to the same shape class.
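To make claim (a) concrete, a multiple-view model can be sketched as a stored set of example views together with a graded (e.g., Gaussian) similarity measure, in the spirit of radial-basis-function view-interpolation accounts (Poggio & Edelman, 1990). Everything in the sketch below is an illustrative assumption — the toy 3-D stimuli, the orthographic projection, the similarity function, and all parameters — not the model actually tested in this article. It shows only the qualitative point: a novel, in-between view of a familiar object activates the stored views more strongly than a view of an unfamiliar object.

```python
# Minimal sketch of a multiple-view (view-interpolation) model.
# All stimuli, parameters, and the Gaussian similarity measure are
# illustrative assumptions, not the authors' actual model.
import numpy as np

rng = np.random.default_rng(0)

def project(points3d, angle):
    """Orthographic projection of 3-D points after rotation about the y-axis."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return (points3d @ rot.T)[:, :2].ravel()  # flattened 2-D "view" vector

def view_similarity(view, stored_views, sigma=1.0):
    """Activation of a multiple-view representation: summed Gaussian
    similarity of the input view to all stored example views."""
    d2 = np.sum((stored_views - view) ** 2, axis=1)
    return np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))

# One "object": a random 3-D point configuration; a distractor of the same kind.
obj = rng.normal(size=(6, 3))
distractor = rng.normal(size=(6, 3))

# Store a sparse set of training views of the object (every 30 degrees).
train_angles = np.deg2rad(np.arange(0, 91, 30))
stored = np.stack([project(obj, a) for a in train_angles])

# A novel, in-between view (45 degrees) of the familiar object should activate
# the stored representation more strongly than a view of the unfamiliar object.
novel_view = project(obj, np.deg2rad(45))
distractor_view = project(distractor, np.deg2rad(45))

assert view_similarity(novel_view, stored) > view_similarity(distractor_view, stored)
```

The generalization to novel objects in (a) would, on this kind of account, come from storing views of other members of the same shape class, so that a new class member falls within the span of the stored examples; the sketch above illustrates only the simpler case of a novel view of a familiar object.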