ABSTRACT

This chapter devotes attention to maximum likelihood estimation for mixture models, which leads naturally to a clear description of the expectation-maximization (EM) algorithm. Indeed, mixture structures allow one to clearly highlight the rationale, the advantages, and the possible drawbacks of the EM algorithm. The EM algorithm enjoys attractive practical features, which explains its widespread use, and despite its possible drawbacks it generally does a good job of deriving the maximum likelihood estimator or the posterior mode of a mixture model. However, the mixture likelihood can be unbounded at degenerate parameter values (for instance, when a component variance collapses to zero), and the EM algorithm can fail because of these singularities, depending on the starting values, the model, and the number of components. Improved versions of EM may therefore have to be used in order to avoid the known pitfalls of the original algorithm. Starting from an initial value, the Stochastic EM algorithm replaces the missing labels by simulating them from their conditional distribution given the observed data and a current value of the mixture parameters.
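As an illustration of the ideas summarized above, the following is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture. The data, component count, initialization, tolerances, and the optional stochastic flag (which mimics the Stochastic EM idea of simulating the missing labels rather than averaging over them) are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=200, tol=1e-8, stochastic=False, seed=0):
    """Sketch of (Stochastic) EM for a 2-component 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    # Crude initial values; EM's behavior depends on this starting point.
    weights = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma2 = np.array([x.var(), x.var()], dtype=float)
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation,
        # i.e. the conditional distribution of the missing labels given
        # the data and the current parameter value.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / sigma2) \
               / np.sqrt(2 * np.pi * sigma2)
        joint = weights * dens
        resp = joint / joint.sum(axis=1, keepdims=True)
        if stochastic:
            # Stochastic EM: simulate hard labels from the responsibilities
            # instead of keeping the expected (soft) assignments.
            z = (rng.random(len(x)) > resp[:, 0]).astype(int)
            resp = np.eye(2)[z]
        # M-step: update the parameters from the (expected or simulated)
        # complete-data statistics.
        nk = resp.sum(axis=0)
        weights = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma2 = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        # Guard against the degenerate solutions mentioned in the text,
        # where a component variance collapses to zero.
        sigma2 = np.maximum(sigma2, 1e-6)
        ll = np.log(joint.sum(axis=1)).sum()
        # Plain EM never decreases the likelihood, so a small increase
        # signals convergence (this stopping rule is heuristic for SEM).
        if not stochastic and ll - ll_old < tol:
            break
        ll_old = ll
    return weights, mu, sigma2

# Simulated data from a known two-component mixture.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
w, mu, s2 = em_gaussian_mixture(x)
```

On data this well separated, plain EM recovers the component means and weights to within sampling error; Stochastic EM instead produces a parameter sequence that fluctuates around the same solution, which can help escape poor starting values.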