ABSTRACT

Bayes’ theorem (Bayes [1764]) is a mathematical statement, grounded in the subjectivist notion of belief governed by probability, that is used to calculate conditional probabilities. In the simplest terms, a consequence of the theorem is that a hypothesis is confirmed by any body of data that its truth renders probable. Formally, the probability of a hypothesis h conditional on some given data d is defined as the ratio of the unconditional probability of the conjunction of the hypothesis with the data to the unconditional probability of the data alone. In other words,

\[
p_d(h) = \frac{p(h \wedge d)}{p(d)}, \tag{14.1.1}
\]

provided that both terms of this ratio are defined and p(d) > 0, where p_d is a probability function (Birnbaum [1962]) and ∧ denotes logical conjunction. A consequence of (14.1.1) is that if d entails h, then p_d(h) = 1; likewise, if p(h) = 1, then p_d(h) = 1. Moreover, since h is logically equivalent to (h ∧ d) ∨ (h ∧ ¬d), applying (14.1.1) to each disjunct yields the law of total probability, p(h) = p(d)p_d(h) + p(¬d)p_{¬d}(h), where ¬ denotes negation (Carnap [1962]). These identities admit a direct numerical check, sketched below.
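As a concreteness check, the following sketch verifies definition (14.1.1) and the total-probability identity on a small finite joint distribution. The four probabilities assigned to the truth-value combinations of h and d are hypothetical, chosen only for illustration.

```python
# Minimal numerical check of (14.1.1) and the law of total probability.
# The joint distribution below is a hypothetical illustration, not taken
# from the text: p[(h, d)] is the unconditional probability of each
# conjunction of truth values of h and d.
p = {
    (True, True): 0.30,    # p(h ∧ d)
    (True, False): 0.20,   # p(h ∧ ¬d)
    (False, True): 0.10,   # p(¬h ∧ d)
    (False, False): 0.40,  # p(¬h ∧ ¬d)
}

p_h = p[(True, True)] + p[(True, False)]  # p(h)
p_d = p[(True, True)] + p[(False, True)]  # p(d)

# (14.1.1): p_d(h) = p(h ∧ d) / p(d), defined only when p(d) > 0
p_d_h = p[(True, True)] / p_d
# The same definition conditional on ¬d: p_{¬d}(h) = p(h ∧ ¬d) / p(¬d)
p_notd_h = p[(True, False)] / (1.0 - p_d)

# Law of total probability: p(h) = p(d) p_d(h) + p(¬d) p_{¬d}(h)
total = p_d * p_d_h + (1.0 - p_d) * p_notd_h
assert abs(p_h - total) < 1e-12
print(f"p_d(h) = {p_d_h:.4f}; p(h) = {p_h:.2f} recovered as {total:.2f}")
```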

Hence, since p(h ∧ d) can equally be written as p(h)p_h(d), by the same definition applied with the roles of h and d exchanged, substituting into (14.1.1) yields:

Theorem 14.1.1 (Bayes’ Theorem):
\[
p_d(h) = \frac{p(h)}{p(d)}\, p_h(d), \tag{14.1.2}
\]

where p_h(d) is known as the ‘prediction term’. Note that in a slightly different statistical terminology, this inverse probability p_h(d) is called the likelihood of h on d. A worked numerical instance of (14.1.2) follows.
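To make the roles of the terms in (14.1.2) concrete, here is a minimal sketch using the stock diagnostic-test setting; the prior p(h), the prediction term p_h(d), and the false-positive rate p_{¬h}(d) are hypothetical values chosen purely for illustration.

```python
# Minimal sketch of Theorem 14.1.1 (Bayes' Theorem); all numbers are
# hypothetical illustrations, not values from the text.
prior_h = 0.01           # p(h): prior probability of the hypothesis
likelihood_h = 0.95      # p_h(d): prediction term, the likelihood of h on d
likelihood_not_h = 0.05  # p_{¬h}(d): probability of the data if h is false

# p(d) by total probability over h and ¬h
p_d = prior_h * likelihood_h + (1.0 - prior_h) * likelihood_not_h

# (14.1.2): p_d(h) = p(h) / p(d) * p_h(d)
posterior_h = (prior_h / p_d) * likelihood_h
print(f"p(d) = {p_d:.4f}; posterior p_d(h) = {posterior_h:.4f}")  # ≈ 0.1610
```

Read with d held fixed and the hypothesis varying, the pair likelihood_h and likelihood_not_h illustrates the likelihood terminology above: p_h(d) is a function of the hypothesis for the given data, not a probability distribution over hypotheses.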