ABSTRACT

Say that a scientific problem presents itself in the form of $n$ datapoints $d_n$ linked to a vector of unknown parameters $\theta$ according to a statistical model, with this model expressed as a conditional density function $\pi(d_n \mid \theta)$. Moreover, inferences will be relative to a proper prior distribution for the parameters, expressed as a marginal density function $\pi(\theta)$. In committing to a choice of prior distribution and a choice of statistical model, the analyst has defined a joint distribution of observables and unobservables, to wit $\pi(\theta, d_n) = \pi(\theta)\,\pi(d_n \mid \theta)$. Bayes' theorem, which simply corresponds to deducing the conditional distribution of the parameters given the data, is expressed as

$$\pi(\theta \mid d_n) = \frac{\pi(d_n \mid \theta)\,\pi(\theta)}{\int \pi(d_n \mid \tilde{\theta})\,\pi(\tilde{\theta})\,d\tilde{\theta}}. \tag{2.1}$$

Under the Bayesian paradigm, (2.1) characterizes knowledge about the parameter values having observed the data values. We assume the reader has some basic exposure to Bayesian analysis, so that (2.1) can be used without much further ado. If this is not the case, we recommend consulting one of the now numerous books that introduce the Bayesian approach (suggestions include Bolstad, 2007; Hoff, 2009; Carlin and Louis, 2011; Christensen et al., 2011; Lee, 2012; Gelman et al., 2013).
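To make (2.1) concrete, the following minimal sketch approximates the posterior on a grid, evaluating the numerator pointwise and dividing by a numerical approximation to the normalizing integral over $\tilde{\theta}$. The choice of model (i.i.d. normal data with unknown mean and known unit variance), prior (standard normal), and simulated data are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy import stats

# Illustrative assumptions (not from the text): d_n are n = 10 i.i.d.
# N(theta, 1) observations, and the proper prior pi(theta) is N(0, 1).
rng = np.random.default_rng(0)
d_n = rng.normal(loc=1.5, scale=1.0, size=10)   # simulated data

theta_grid = np.linspace(-4.0, 4.0, 2001)        # grid over the parameter
prior = stats.norm.pdf(theta_grid, loc=0.0, scale=1.0)

# Likelihood pi(d_n | theta) on the grid: product over datapoints,
# computed on the log scale and rescaled for numerical stability.
log_lik = stats.norm.logpdf(
    d_n[:, None], loc=theta_grid[None, :], scale=1.0
).sum(axis=0)
lik = np.exp(log_lik - log_lik.max())

# Numerator of (2.1), then divide by the trapezoidal approximation
# to the normalizing integral over theta-tilde.
unnormalized = lik * prior
posterior = unnormalized / np.trapz(unnormalized, theta_grid)

print("approximate posterior mean:",
      np.trapz(theta_grid * posterior, theta_grid))
```

Since this normal-normal setup is conjugate, the posterior is also available in closed form, which provides a check on the grid approximation; for non-conjugate models the same grid recipe applies whenever $\theta$ is low-dimensional, and otherwise one turns to the sampling methods discussed later.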