ABSTRACT

In this chapter, the authors focus on learning the probability tables of a Bayesian network (BN) whose structure is predefined. They begin with the simplest case: discrete BNs learned entirely from data sets that are "complete" in the sense that they contain no missing values. In this setting, the standard learning algorithms not only produce identical results but are also easy to understand and compute. For such discrete BNs and complete data sets, the authors show how the data can be supplemented with prior expert judgment in a simple but mathematically rigorous way; indeed, this approach is exactly the same as that used to incorporate expert judgment in the more complex cases that follow. They then consider the important and common situation in which the data set is incomplete, and cover one of the most popular algorithms for learning node probability tables from incomplete data.
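The complete-data case the abstract describes reduces to counting: each entry of a node probability table is estimated from how often a node value co-occurs with each parent configuration, and expert judgment enters as Dirichlet pseudo-counts added before the data counts. The sketch below illustrates this standard counting scheme; the function name `learn_cpt`, the state labels, and the single shared `prior_count` are illustrative assumptions, not the chapter's notation.

```python
def learn_cpt(data, parent_states, node_states, prior_count=1.0):
    """Estimate P(node | parent) from complete data.

    data: list of (parent_value, node_value) observations.
    prior_count: Dirichlet pseudo-count encoding prior expert judgment;
    setting it to 0 recovers the plain maximum-likelihood (frequency)
    estimate, while larger values pull the table toward uniform.
    (Illustrative sketch, not the chapter's own code.)
    """
    # Start every table cell at the expert pseudo-count.
    counts = {p: {s: prior_count for s in node_states} for p in parent_states}
    # Add one to a cell for each matching observation.
    for parent_value, node_value in data:
        counts[parent_value][node_value] += 1
    # Normalise each row (one row per parent configuration) to sum to 1.
    cpt = {}
    for p, row in counts.items():
        total = sum(row.values())
        cpt[p] = {s: c / total for s, c in row.items()}
    return cpt

# Example: 4 complete observations of a parent with states yes/no
# and a child with states T/F, with a uniform pseudo-count of 1.
observations = [("yes", "T"), ("yes", "T"), ("yes", "F"), ("no", "F")]
cpt = learn_cpt(observations, ["yes", "no"], ["T", "F"], prior_count=1.0)
print(cpt["yes"]["T"])  # (2 + 1) / (3 + 2) = 0.6
```

A positive `prior_count` also keeps every row well defined even when a parent configuration never appears in the data, which is one practical benefit of blending expert judgment with counts.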