ABSTRACT

Abstract
3.1 Introduction
    3.1.1 Relationship to the prior works
    3.1.2 Our approach
    3.1.3 Basic notation
3.2 Shape prior to control a parametric model
3.3 MGRF image model as an appearance prior
3.4 LCDG probability model of a current appearance
3.5 Model evolution
3.6 Experimental results and conclusions
3.7 Conclusions and future work
References

Abstract

Objects of specific shapes in an image are typically segmented with a deformable model that starts from the zero level of a geometric level-set function specifying signed shortest distances from each pixel to the object boundary. The target shapes are approximated by a linear combination of 2D distance maps built from mutually aligned images of given training objects. Unfortunately, the approximated shapes may deviate from the training shapes because the space of distance maps is not closed under linear operations, so the zero level of a particular linear combination need not coincide with an actual shape. To avoid this drawback, we propose a parametric deformable model whose energy drives the evolving boundary toward the learned shapes. Instead of the level-set formalism, the target shapes are approximated directly with linear combinations of distance vectors describing positions of the mutually aligned training boundaries with respect to their common centroid. This vector space is closed under linear operations and has a much lower dimensionality than the 2D distance maps, so our shape model is easily simplified with principal component analysis (PCA), and the shape-dependent energy terms guiding the boundary evolution take a very simple analytic form. Prior knowledge of the visual appearance of the object is represented by Gibbs energies of the object's gray levels. To accurately separate the object from its background, each current empirical marginal probability distribution of gray values within the deformable boundary is also approximated with an adaptive linear combination of discrete Gaussians (LCDG). Both the shape and appearance priors and the current probabilistic appearance description control the boundary evolution, with the appearance-dependent energy terms also having simple forms due to analytical estimates of the Gibbs energies. Experiments with natural images confirm the robustness, accuracy, and high speed of the proposed approach (3.1).
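To make the shape prior concrete, the following minimal sketch (not the chapter's actual implementation; the function names and the assumption of equiangular boundary sampling are ours) represents each aligned training boundary by its vector of centroid-to-boundary distances and learns a low-dimensional PCA basis for that vector space, which is closed under linear operations as described above:

```python
import numpy as np

def shape_to_distance_vector(boundary_pts):
    """Distances from the shape's centroid to its boundary points.

    boundary_pts: (N, 2) array of mutually aligned boundary coordinates,
    assumed sampled at the same N angular positions for every training shape.
    """
    centroid = boundary_pts.mean(axis=0)
    return np.linalg.norm(boundary_pts - centroid, axis=1)

def learn_shape_prior(training_vectors, n_components=5):
    """PCA of the training distance vectors (one row per training shape)."""
    D = np.asarray(training_vectors, dtype=float)
    mean = D.mean(axis=0)
    # Principal directions via SVD of the centered data matrix.
    _, _, Vt = np.linalg.svd(D - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project_shape(d, mean, basis):
    """Closest vector in the learned linear subspace to a candidate shape."""
    coeffs = basis @ (d - mean)
    return mean + basis.T @ coeffs
```

A shape-dependent energy term can then penalize the distance between a candidate boundary's vector and its projection onto this subspace, which is the simple analytic form the abstract refers to.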
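The appearance side can be illustrated in a similar spirit. The sketch below fits a two-component 1D Gaussian mixture to pixel gray levels with plain EM; this is only a simplified stand-in for the LCDG, which in the actual model also includes sign-alternating subordinate components estimated with a modified EM algorithm:

```python
import numpy as np

def fit_two_gaussians(gray, n_iter=50):
    """EM for a two-component 1D Gaussian mixture over pixel gray levels.

    Simplification of the LCDG: only positive-weight dominant components,
    no sign-alternating subordinate terms.
    """
    g = np.asarray(gray, dtype=float).ravel()
    # Initialize the two modes around the lower and upper quartiles.
    mu = np.array([np.quantile(g, 0.25), np.quantile(g, 0.75)])
    sigma = np.array([g.std(), g.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel.
        resp = w * np.exp(-0.5 * ((g[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        w = nk / len(g)
        mu = (resp * g[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (g[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return w, mu, sigma
```

The fitted object/background components give the marginal gray-level distributions whose log-likelihood ratio can serve as an appearance-dependent term during boundary evolution.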