ABSTRACT

A probability model is a set of rules describing the probabilities of all possible outcomes in the sample space. The classical interpretation of probability is frequency:

Pr(A) = n/N    (F.1)

in which Pr(A) denotes the probability of event A (0 ≤ Pr(A) ≤ 1) and n the number of times that A occurs in N experiments. The following rules or laws apply to probabilities:

1. Pr(Ā) = 1 − Pr(A), in which Pr(Ā) denotes the probability of the nonoccurrence of event A.

2. Pr(AB) = Pr(A) × Pr(B) if A and B are two independent events; AB denotes the occurrence of both events. This is called a joint probability.

3. Pr(A + B) = Pr(A) + Pr(B) − Pr(AB); this is the probability that either A or B or both (A + B) occur. If Pr(AB) = 0, the two events are mutually exclusive.

4. Pr(AB) = Pr(A) × Pr(B|A) = Pr(B) × Pr(A|B); this is the joint probability that both A and B happen when A and B are not independent events. Pr(B|A) is the conditional probability that B will occur given that A has occurred (and vice versa for Pr(A|B)). (If A and B are independent events, Pr(B|A) = Pr(B) and we have rule 2 again.) This rule relates to Bayes' theorem: Pr(A) is the prior probability and Pr(A|B) is the posterior probability. Conditional probability considers the probability of a second event in the light of a first event that has already occurred. Bayes' theorem considers the problem in reverse: if the second event is known to have occurred, what then is the probability that the first event occurred? Using this theorem one can recalculate (i.e., update) the probability that the original event occurred each time a new sample is taken and its outcome is known.
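The frequency interpretation of equation (F.1) and rules 1–3 can be checked with a short Monte Carlo sketch. The die-roll events A and B below are illustrative choices, not taken from the text:

```python
import random

def estimate_pr(event, n_trials=200_000, seed=42):
    """Estimate Pr(A) as n/N (eq. F.1): the fraction of N simulated
    die rolls in which event A occurs."""
    rng = random.Random(seed)
    n = sum(1 for _ in range(n_trials) if event(rng.randint(1, 6)))
    return n / n_trials

# Two events on a fair die that happen to be independent:
A = lambda x: x % 2 == 0   # "roll is even",   Pr(A) = 1/2
B = lambda x: x <= 2       # "roll is 1 or 2", Pr(B) = 1/3

pr_a      = estimate_pr(A)
pr_b      = estimate_pr(B)
pr_not_a  = estimate_pr(lambda x: not A(x))        # rule 1: ≈ 1 - Pr(A)
pr_ab     = estimate_pr(lambda x: A(x) and B(x))   # rule 2: ≈ Pr(A) × Pr(B) = 1/6
pr_a_or_b = estimate_pr(lambda x: A(x) or B(x))    # rule 3: ≈ Pr(A) + Pr(B) - Pr(AB)
```

Because every call reuses the same seed, the same roll sequence underlies each estimate, so rules 1 and 3 hold exactly on the sample, while rule 2 holds only approximately: independence is a property of the true probabilities, not of any finite sample.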
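The sequential updating described in rule 4 can be sketched as follows. The screening-test scenario and its numbers are hypothetical, chosen only to show how a prior probability is updated after each new observed outcome:

```python
def bayes_update(prior_a, pr_b_given_a, pr_b_given_not_a, b_occurred=True):
    # Bayes' theorem: Pr(A|B) = Pr(B|A)·Pr(A) / Pr(B),
    # with Pr(B) expanded over the two cases A and not-A.
    if b_occurred:
        num = pr_b_given_a * prior_a
        den = num + pr_b_given_not_a * (1 - prior_a)
    else:  # B did not occur: use the complement probabilities (rule 1)
        num = (1 - pr_b_given_a) * prior_a
        den = num + (1 - pr_b_given_not_a) * (1 - prior_a)
    return num / den  # posterior Pr(A|B)

# Hypothetical screening test: Pr(positive | contaminated) = 0.95,
# Pr(positive | clean) = 0.10, prior Pr(contaminated) = 0.20.
p = 0.20
for outcome in [True, True, False]:   # three successive test results
    p = bayes_update(p, 0.95, 0.10, outcome)
    print(round(p, 3))                # → 0.704, then 0.958, then 0.556
```

Each posterior becomes the prior for the next sample, which is exactly the updating procedure the text describes; note how a single negative result pulls the probability back down without discarding the earlier evidence.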