ABSTRACT

A large body of research has been devoted to variable selection in recent years. Bayesian methods have been successful in applications, particularly in settings where the number of measured variables can be much greater than the number of observations. This chapter reviews mixture priors that employ a point mass distribution at zero for variable selection in regression settings. The popular stochastic search MCMC algorithm with add-delete-swap moves is described, and posterior inference and prediction via Bayesian model averaging are briefly discussed. Regression models for non-Gaussian data, including binary, multinomial, survival, and compositional count data, are also addressed. Prior constructions that take into account specific structures in the covariates are described. These constructions have been particularly successful in applications, as they allow the integration of different sources of biological information into the analysis. A discussion of computational strategies, in particular variational algorithms for scalable inference, concludes the chapter. Throughout the chapter, some emphasis is given to the author's contributions.