ABSTRACT

Global-local shrinkage priors quickly found favour in the Bayesian community because of their excellent empirical behaviour and the promise of fast implementations. Theoretical guarantees soon followed. Such priors share information about the number of signals through a single global parameter, while local parameters allow for adjustments at the level of individual parameters. Many global-local shrinkage priors are now known to lead to optimal posterior concentration rates, which justifies the use of their posterior means as estimators. While results on uncertainty quantification are still scarce, a select few global-local shrinkage priors have even been shown to yield good coverage properties, as well as variable selection procedures with low false discovery rates. In this chapter, the theoretical results and the conditions under which they hold are reviewed, together with their implications for practice. The focus is on the normal means model and on the horseshoe prior, since the behaviour of the horseshoe is especially well understood, with some discussion of results for other global-local shrinkage priors.
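
To make the global-local structure described above concrete, a minimal sketch of the normal means model with the horseshoe prior follows; unit noise variance is assumed for simplicity, and the notation ($y_i$ for observations, $\theta_i$ for the means, $\lambda_i$ for the local scales, $\tau$ for the global scale) is illustrative rather than taken from the chapter itself.
\begin{align*}
  y_i &= \theta_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, 1), \qquad i = 1, \dots, n,\\
  \theta_i \mid \lambda_i, \tau &\sim \mathcal{N}\!\left(0, \lambda_i^2 \tau^2\right),\\
  \lambda_i &\sim \mathcal{C}^{+}(0, 1),
\end{align*}
where $\mathcal{C}^{+}(0,1)$ denotes the standard half-Cauchy distribution and the global parameter $\tau$ may be fixed, estimated, or itself given a prior such as a half-Cauchy. The single $\tau$ is shared across all coordinates and adapts to the overall sparsity level, while each heavy-tailed local parameter $\lambda_i$ allows an individual $\theta_i$ to escape the shrinkage imposed by a small $\tau$.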