ABSTRACT

The full information gained from a measurement of some physical quantity x is not limited to just a single value. Rather, a measurement yields information about a set of discrete probabilities P_j when the possible values of x form a discrete set. Similarly, when the possible values of x form a continuum with probability density function p(x), a measurement yields information about a set of infinitesimal probabilities p(x) dx of the true value lying between x and x + dx.
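In symbols, the two cases obey the usual normalization conditions (standard facts, stated here for reference; they are implicit rather than explicit in the text above):

\[
\sum_j P_j = 1 \qquad \text{(discrete set)}, \qquad \int p(x)\,\mathrm{d}x = 1 \qquad \text{(continuum)} .
\]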

Note that x need not be a bona fide random quantity for the probability distribution to exist, but could be (and in science and technology it usually is) an imperfectly known constant. In practice, users of measured data seldom require knowledge of the complete posterior distribution, but usually request
a “recommended value” for the respective quantity, accompanied by “error bars” or some suitably equivalent summary of the posterior distribution. Decision theory can provide such a summary, since it describes the penalty for bad estimates by a loss function. Since the true value is never known in practice, a loss cannot be avoided entirely, but the expected loss can be minimized, and this is what an optimal estimate must accomplish. As will be shown in this Chapter, in the practically most important case of “quadratic loss” with a multivariate posterior distribution, the “recommended value” turns out to be the vector of mean values, while the “error bars” are provided by the corresponding covariance matrix.
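To see why the mean is optimal, consider the scalar case (a minimal sketch of the standard decomposition, supplied here for orientation; the multivariate case is analogous). For any candidate estimate $\hat{x}$, the expected quadratic loss under the posterior distribution decomposes as

\[
\left\langle (\hat{x} - x)^2 \right\rangle
= \left( \hat{x} - \langle x \rangle \right)^2
+ \left\langle \left( x - \langle x \rangle \right)^2 \right\rangle ,
\]

which is minimized by the choice $\hat{x} = \langle x \rangle$, the posterior mean, with minimal expected loss equal to the variance (∆x)².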

Conversely, an experimental result reported in the form 〈x〉 ± ∆x, where ∆x represents the standard deviation (i.e., the root-mean-square error), is customarily interpreted as shorthand notation for a distribution of possible values x that cannot be recovered in detailed form, but is characterized by the mean 〈x〉 and the standard deviation ∆x.
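As a purely illustrative numerical sketch (not part of the original text; the posterior samples and helper function below are hypothetical), the following Python fragment condenses a set of posterior samples into the shorthand 〈x〉 ± ∆x and checks, by Monte Carlo, that the sample mean minimizes the expected quadratic loss among a few candidate estimates:

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior samples for an imperfectly known constant x
# (a skewed distribution, to show the result is not Gaussian-specific).
samples = rng.gamma(shape=4.0, scale=0.5, size=100_000)

# "Recommended value" and "error bar" under quadratic loss:
x_hat = samples.mean()  # posterior mean <x>
dx = samples.std()      # standard deviation (root-mean-square error)

def expected_quadratic_loss(estimate, samples):
    """Monte Carlo estimate of <(estimate - x)^2> over the posterior."""
    return np.mean((estimate - samples) ** 2)

# The posterior mean should beat the other candidates:
for candidate in (x_hat, np.median(samples), x_hat + 0.1):
    print(f"estimate {candidate:8.4f} -> expected loss "
          f"{expected_quadratic_loss(candidate, samples):.6f}")

# Report in the customary shorthand form
print(f"x = {x_hat:.4f} +/- {dx:.4f}")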