ABSTRACT

This chapter presents some of the Bayesian solutions to the different interpretations of picking the “right” number of components in a mixture, before concluding on the ill-posed nature of the question. It reviews one-sweep Bayesian methods for cross-model inference on the number of components G, ranging from well-established methods such as reversible jump Markov chain Monte Carlo (MCMC) to more recent ideas involving sparse finite mixtures, which rely on overfitting combined with a prior on the weight distribution that forces sparsity. With respect to inference, the Bayesian framework offers a slight advantage, as MCMC methods can handle non-standard component densities more flexibly than the expectation-maximization algorithm. The intuition behind reversible jump MCMC is to construct bijections between pairs of parameter spaces by introducing auxiliary variables that equate the dimensions of the augmented spaces, and to use the same bijection for a move and its reverse.
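That dimension-matching device can be made concrete with a toy split/merge move. The sketch below is only an illustration, not the chapter's algorithm: it splits the weight and mean (w, mu) of a single univariate component into two, in the spirit of Richardson and Green's (1997) split/merge moves, with auxiliary variables u1 and u2 chosen here for simplicity. Two parameters plus two auxiliaries map to four parameters, so the augmented spaces have equal dimension and one bijection serves both the move and its reverse.

```python
# A minimal sketch of reversible jump dimension matching, assuming a toy
# split/merge move on one component's (weight, mean); NOT the full sampler.
import numpy as np

def split(w, mu, u1, u2):
    """Split one component into two using auxiliaries u1 in (0,1), u2 > 0.
    Inputs have dimension 2 + 2; outputs have dimension 4, so the map can
    be a bijection between the augmented spaces."""
    w1, w2 = u1 * w, (1.0 - u1) * w    # share the weight between the pair
    mu1, mu2 = mu - u2, mu + u2        # push the two means apart symmetrically
    return w1, mu1, w2, mu2

def merge(w1, mu1, w2, mu2):
    """Inverse map: recover the merged component and the auxiliaries."""
    w = w1 + w2
    mu = 0.5 * (mu1 + mu2)
    u1 = w1 / w                        # recovers the weight split proportion
    u2 = 0.5 * (mu2 - mu1)             # recovers the mean displacement
    return w, mu, u1, u2

# The same bijection serves the move and its reverse: a split followed by
# a merge returns the original parameters and auxiliary variables exactly.
rng = np.random.default_rng(0)
w, mu = 0.4, 1.5
u1, u2 = rng.uniform(), rng.uniform()
assert np.allclose(merge(*split(w, mu, u1, u2)), (w, mu, u1, u2))

# In an actual sampler, the Jacobian of the split map, here
# |d(w1, mu1, w2, mu2) / d(w, mu, u1, u2)| = 2 * w,
# would enter the Metropolis-Hastings acceptance probability.
```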