ABSTRACT

The model of perceptual processing developed thus far has focused on the principle of reification, or the perceptual generation of filled-in surfaces and objects in an internal representation of external reality. However, this is not to deny the significance of the inverse of reification, which is the abstractive function of perception, in which extended features in the sensory stimulus are reduced to some kind of symbolic code, as required for the storage of perceived objects and events in memory, or for their communication through language. Historically, abstraction has generally been considered the principal, if not the only, function of perception. This process is often described as occurring in stages from lower to higher levels of cortical representation, much like a sequence of image-processing steps in a machine vision algorithm. Typically, such algorithms begin with the detection of edges, then proceed to the detection of corners or vertices defined by the intersection of edges, then on to the identification of surfaces and volumes, as delimited by their bounding vertices, and so forth. The ultimate objective is to attach some kind of symbolic label to the different objects in the scene as a model of visual recognition, as described, for example, by Ballard and Brown (1982) for computer vision, and by Marr (1982) and Biederman (1987) for natural vision. However, this concept of visual processing ignores the reification function of perception, as identified by Gestalt theory and as elaborated in previous chapters. In fact, I propose that abstraction and reification are complementary functions in perception, for the abstract code defines the pattern or skeleton of the percept to be filled in by reification processes.
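As a minimal sketch of the first stage of such a pipeline (not drawn from the sources cited above), the following illustrates edge detection with a standard Sobel operator, the kind of low-level feature extraction that staged machine-vision algorithms typically begin with before proceeding to vertices and surfaces. The function name and the synthetic test image are illustrative assumptions, not part of any of the cited systems.

```python
import numpy as np

def sobel_edges(img):
    """Edge detection via Sobel gradient magnitude: the first stage
    of a classic machine-vision pipeline (edges -> vertices ->
    surfaces -> symbolic labels)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                  # vertical gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Slide the 3x3 kernels over every interior pixel.
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude per interior pixel

# A synthetic image: dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
# Strong responses cluster along the vertical boundary; flat regions
# yield zero response.
```

Later pipeline stages would then locate vertices where such edge responses intersect, a step this sketch deliberately omits.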