ABSTRACT

Measurement data obtained from images are often used for correlation with some other information about the structures and objects measured. It can be important to find relationships between image measurements and the history of the objects (for instance, due to the genetics of organisms or manufacturing variables for products), or between the image measurements and the objects’ performance, whether it involves consumer acceptance of appearance or a material’s resistance to fracture. In many fields of application, ranging from medical diagnosis to facial recognition, measurements can be important for object classification or identification.

Recognition of features in images covers an extremely wide range of applications. In some cases the targets are fully known, come from a limited number of possibilities, and can be completely represented by one or several images each, so that cross-correlation is an appropriate method for locating and matching them. In others, the goal is to have the computer “understand” natural three-dimensional scenes in which objects may appear in a wide variety of presentations. Applications such as automatic navigation or robotics require that the computer be able to extract surfaces and connect them hierarchically to construct three-dimensional objects, which are then recognized (see, for example, Roberts, 1982; Ballard and Brown, 1982; Ballard et al., 1984). The topics and goals discussed here are much more limited: to allow the image analysis system to recognize or classify discrete features in essentially two-dimensional scenes. If the objects are three-dimensional and can appear in different orientations, then different two-dimensional views may be considered as different target objects that carry the same label. Figure 12.1 shows an example.
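As a minimal sketch of the cross-correlation approach mentioned above for fully known targets, the following illustrates normalized cross-correlation template matching using only NumPy. The function name, the synthetic scene, and the cut-out target are illustrative assumptions, not material from the text; real applications would use stored reference images of the known targets.

```python
# Minimal sketch: locating a known target in a 2D scene by
# normalized cross-correlation (scores range from -1 to 1).
# The arrays and names here are hypothetical, for illustration only.
import numpy as np

def normalized_cross_correlation(image, template):
    """Return a score map for every position where the template
    fits entirely inside the image; higher scores mean better matches."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            window = image[y:y + th, x:x + tw]
            w = window - window.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            scores[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return scores

# Example: cut a small target out of a synthetic scene and find it again.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
target = scene[20:28, 30:38].copy()      # "known" target image
score_map = normalized_cross_correlation(scene, target)
best_y, best_x = np.unravel_index(np.argmax(score_map), score_map.shape)
print("best match at", (best_y, best_x))  # expected (20, 30)
```

This brute-force version is written for clarity rather than speed; practical systems typically compute the same correlation in the frequency domain or with optimized library routines when the scene or the number of candidate targets is large.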