ABSTRACT

Automated object recognition in synthetic aperture radar (SAR) imagery is a significant problem because recent developments in image-collection platforms will soon produce far more imagery (terabytes per day per aircraft) than the declining ranks of image analysts can handle. In this chapter, the problem scope is the recognition subsystem itself, starting with chips of military target vehicles from real SAR images at 1-ft resolution and ending with vehicle identification. The specific challenges for the recognition system are the need for automated recognition of vehicles that have articulated parts (like the turret of a tank), that have significant external configuration variants (such as fuel barrels or searchlights), or that can be partially hidden. Previous recognition methods involving detection theory [1,2], pattern recognition [3-5], and neural networks [6,7] are not useful in these cases because articulation or occlusion changes global features like the object outline and major axis [8]. To characterize the performance of the recognition subsystem, we approach the problem scientifically from fundamentals: we characterize SAR azimuth variance to determine the number of models required, we determine the invariant features of the targets, and, based on these invariants, we develop a SAR-specific recognition system. We characterize the performance of this system in terms of feature invariance, number of features, and amount of occlusion for the recognition of articulated, occluded, and configuration-variant objects. All of the experimental data are based on 1-ft resolution real SAR images of actual vehicles from the MSTAR (Public) targets dataset [9].