ABSTRACT

Let a grayscale image be called “multimodal” if each region of interest in it is associated with a single dominant mode of the empirical marginal probability distribution of gray levels. One common scenario of unsupervised segmentation of such images treats each pair of an image and its region map as a sample from a joint Markov–Gibbs random field (MGRF) of pixel/voxel-wise image signals that are conditionally independent given the region map, together with interdependent region labels. To recover the desired region map for a given image, the MGRF model is first identified by closely approximating the empirical marginal with a linear combination of discrete Gaussians (LCDG), as shown in Chapter 3, and separating it into conditional LCDG models, one per object. For the same number of components, the LCDG approximates the empirical data more closely than a conventional Gaussian mixture, which has only positive components. The obtained conditional LCDG models of the objects yield an initial segmentation that relates each pixel/voxel-wise image signal to its most probable mode. The initial region map is then iteratively refined using the MGRF of region labels with analytically estimated potentials. Comparative experiments show that this approach segments various complex multimodal medical images more accurately than several other known segmentation algorithms.
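As a rough illustration of this two-stage pipeline, and not the method identified in Chapter 3, the following Python sketch fits a conventional Gaussian mixture to the gray-level marginal in place of the LCDG, forms an initial region map by assigning each pixel to its most probable mode, and refines it with a simplified Potts-style ICM sweep standing in for the MGRF of region labels with analytically estimated potentials. All function names and parameters here are hypothetical.

```python
# Minimal, illustrative sketch of the pipeline summarized above, not the
# authors' exact method: a conventional Gaussian mixture stands in for the
# LCDG (which also admits sign-alternate components), and a synchronous ICM
# sweep with a Potts-style prior stands in for the MGRF refinement with
# analytically estimated potentials. Function names are hypothetical.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_modes(image, n_modes):
    """Approximate the empirical marginal of gray levels with n_modes Gaussians."""
    g = image.reshape(-1, 1).astype(float)
    return GaussianMixture(n_components=n_modes, random_state=0).fit(g)

def mode_log_probs(image, gmm):
    """Per-pixel joint log-probability log p(g, k) for every mixture mode k."""
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_.ravel())
    logp = [np.log(w) + norm.logpdf(image, m, s)
            for w, m, s in zip(gmm.weights_, means, stds)]
    return np.stack(logp, axis=-1)                      # shape (H, W, n_modes)

def initial_map(image, gmm):
    """Initial region map: relate each pixel to its most probable mode."""
    return np.argmax(mode_log_probs(image, gmm), axis=-1)

def icm_refine(image, labels, gmm, beta=1.0, n_sweeps=3):
    """Refine the map with a data term plus a Potts-style pairwise term that
    rewards agreement with the four nearest neighbours (synchronous ICM)."""
    logp = mode_log_probs(image, gmm)
    lab = labels.copy()
    for _ in range(n_sweeps):
        padded = np.pad(lab, 1, mode="edge")
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]], axis=-1)
        agree = np.stack([(neigh == k).sum(axis=-1)
                          for k in range(logp.shape[-1])], axis=-1)
        lab = np.argmax(logp + beta * agree, axis=-1)
    return lab

if __name__ == "__main__":
    # Synthetic bimodal test image: dark background, bright square, Gaussian noise.
    rng = np.random.default_rng(0)
    img = np.full((64, 64), 60.0)
    img[16:48, 16:48] = 170.0
    img += rng.normal(0.0, 20.0, img.shape)
    gmm = fit_modes(img, n_modes=2)
    init = initial_map(img, gmm)
    refined = icm_refine(img, init, gmm, beta=2.0)
    print("initial label counts:", np.bincount(init.ravel()))
    print("refined label counts:", np.bincount(refined.ravel()))
```

In this toy setting the data term alone already separates the two modes, and the pairwise term mainly removes isolated noisy labels; the strength of that smoothing is governed here by the assumed parameter beta, whereas the actual model estimates the corresponding Gibbs potentials analytically.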