ABSTRACT

From a Bayesian viewpoint, the final outcome of any problem of inference is the posterior distribution of the vector of interest. Thus, given a probability model M_z = {p(z | ω), z ∈ Z, ω ∈ Ω} which is assumed to describe the mechanism that has generated the available data z, all that can be said about any function θ(ω) ∈ Θ of the parameter vector ω is contained in its posterior distribution p(θ | z). This is computed using standard probability theory techniques from the posterior distribution p(ω | z) ∝ p(z | ω) p(ω) obtained from the assumed prior p(ω). To facilitate the assimilation of the inferential content of p(θ | z), one often tries to summarize the information contained in this posterior by

1. providing θ values which, in the light of the data, are likely to be close to its true value (estimation), and

2. measuring the compatibility of the data with one or more possible values θ0 ∈ Θ of the vector of interest which might have been suggested by the research context (hypothesis testing).
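In particular, when the parameter vector may be partitioned as ω = (θ, λ), with λ a nuisance parameter (an assumption made here only for illustration), the "standard probability theory techniques" mentioned above reduce to Bayes' theorem followed by marginalization:

    p(\omega \mid z) = \frac{p(z \mid \omega)\, p(\omega)}{\int_{\Omega} p(z \mid \omega)\, p(\omega)\, \mathrm{d}\omega},
    \qquad
    p(\theta \mid z) = \int_{\Lambda} p(\theta, \lambda \mid z)\, \mathrm{d}\lambda.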
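As a concrete illustration of these two summaries, the following is a minimal sketch assuming a conjugate Beta-Binomial model, with z successes in n Bernoulli trials, unknown success probability θ, and a Beta(a, b) prior in the role of p(ω). The model, prior parameters, and data values are illustrative choices, not taken from the paper, and the credible-interval check is only one simple way to gauge the compatibility of the data with a suggested value θ0.

    # A minimal sketch, assuming a conjugate Beta-Binomial model; all
    # numerical values below are illustrative, not from the paper.
    from scipy import stats

    def posterior(z, n, a=0.5, b=0.5):
        """Posterior p(theta | z) for a Binomial(n, theta) likelihood
        and a Beta(a, b) prior: again a Beta distribution, by conjugacy."""
        return stats.beta(a + z, b + n - z)

    # Observed data: 7 successes in 20 trials (illustrative numbers).
    post = posterior(z=7, n=20)

    # 1. Estimation: report theta values likely, in the light of the
    #    data, to be close to the true value, e.g. the posterior median
    #    together with a 95% credible interval.
    print("posterior median:", post.median())
    print("95% credible interval:", post.interval(0.95))

    # 2. Hypothesis testing: measure the compatibility of the data with
    #    a suggested value theta0, here crudely, by checking whether
    #    theta0 lies inside the 95% credible interval.
    theta0 = 0.5
    lo, hi = post.interval(0.95)
    print("theta0 compatible with the data:", lo <= theta0 <= hi)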