ABSTRACT

In this chapter, we present a statistical parts-based model (PBM) of image appearance designed to address the problem of modeling natural intersubject anatomical variability in medical imagery. In contrast to global image models, the PBM consists of a collection of localized image regions, called parts, whose appearance, geometry, and occurrence frequency are quantified statistically. The parts-based approach explicitly addresses the case in which one-to-one correspondence does not exist between all subjects in a population due to anatomical differences, as model parts are not required to appear in all subjects. The model is constructed by an automatic machine learning algorithm that, with minimal manual input, identifies image patterns appearing with statistical regularity in a large collection of subject images. Because model parts are represented by generic image features, the model can be applied in a wide variety of contexts, either to describe the anatomical appearance of a population or to determine intersubject registration. Experimentation based on 2-D magnetic resonance (MR) slices of the human brain shows that a PBM learned from a set of 102 subjects can be robustly fit to 50 new subjects with accuracy comparable to that of three human raters. Additionally, we show that, unlike global models, PBM fitting remains stable in the presence of unexpected local perturbations.