ABSTRACT

This chapter presents some aspects of Bayesian inference in the context of mixture models. It describes the asymptotic behaviour of the corresponding posterior distributions and explores proposals for constructing non-informative or vaguely informative priors in mixture models with a known number of components. The chapter analyses how mixture models can be used for density estimation, for classification or clustering, and for parameter estimation. It also describes the asymptotic behaviour of the posterior distribution in terms of the marginal density of the observations, which is the starting point of most asymptotic analyses. The chapter provides some general results on the concentration of the posterior distribution around the true marginal density of the observations. In that case, all the usual asymptotic results are valid: namely, the Bernstein–von Mises theorem on the parameters, the Laplace approximation of the marginal density of the observations, and 1/n convergence rates of Bayesian estimators such as posterior means.
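As a concrete illustration of the Bayesian use of mixtures sketched above, the following toy example (not taken from the chapter) runs a minimal Gibbs sampler for a two-component Gaussian mixture with a known number of components. All specifics here are assumptions for illustration: the variances are fixed and equal, the component means get independent N(0, 10²) priors, the weights a uniform Dirichlet prior, and the data are simulated.

```python
import math
import random

def gibbs_mixture(data, n_iter=500, sigma=1.0, prior_sd=10.0, seed=1):
    """Minimal Gibbs sampler for a 2-component Gaussian mixture.

    Assumptions (for illustration only): known common variance sigma**2,
    independent N(0, prior_sd**2) priors on the means, uniform Dirichlet
    prior on the weights.
    """
    rng = random.Random(seed)
    mu = [min(data), max(data)]          # crude initialisation of the means
    w = [0.5, 0.5]                       # mixture weights
    kept = []
    for it in range(n_iter):
        # 1. sample the allocations z_i given the current parameters
        counts, sums = [0, 0], [0.0, 0.0]
        for x in data:
            p = [w[k] * math.exp(-0.5 * ((x - mu[k]) / sigma) ** 2)
                 for k in (0, 1)]
            z = 0 if rng.random() < p[0] / (p[0] + p[1]) else 1
            counts[z] += 1
            sums[z] += x
        # 2. sample the means given the allocations (conjugate normal update)
        for k in (0, 1):
            prec = counts[k] / sigma**2 + 1.0 / prior_sd**2
            mu[k] = rng.gauss((sums[k] / sigma**2) / prec,
                              1.0 / math.sqrt(prec))
        # 3. sample the weights from Dirichlet(1 + n_1, 1 + n_2) via Gammas
        g = [rng.gammavariate(1 + counts[k], 1.0) for k in (0, 1)]
        w = [gk / (g[0] + g[1]) for gk in g]
        if it >= n_iter // 2:            # discard the first half as burn-in
            kept.append(sorted(mu))      # sort to sidestep label switching
    return kept

# simulated data: two well-separated clusters around -3 and 3
rng = random.Random(0)
data = ([rng.gauss(-3.0, 1.0) for _ in range(100)]
        + [rng.gauss(3.0, 1.0) for _ in range(100)])
draws = gibbs_mixture(data)
post_mean = [sum(m[k] for m in draws) / len(draws) for k in (0, 1)]
```

With well-separated clusters the posterior means of the two component means land close to the simulated values; in harder settings one would also need to address label switching and prior influence, issues the chapter discusses.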