ABSTRACT

We will first examine how to learn the probability parameters that make up the conditional probability tables of discrete networks when we have complete, noise-free samples of all the variables: every variable is measured, and measured accurately. We will then consider how to handle sample data in which some variables have not been observed, that is, when some of the data are incomplete or missing. Finally, we will look at a few methods for speeding up parameter learning when the conditional probabilities to be learned depend not only on the parent variables' values but also on each other, via local structure learning.
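With complete, noise-free data, the conditional probability tables can be estimated by simple counting: each CPT entry is the fraction of samples matching a child value within each parent configuration. The following is a minimal sketch of that idea on a hypothetical toy network (the variable names and data are illustrative assumptions, not taken from the text):

```python
from collections import Counter

# Illustrative complete data over (Rain, Sprinkler, GrassWet);
# every sample records all three variables accurately.
samples = [
    ("rain", "off", "wet"),
    ("rain", "off", "wet"),
    ("no_rain", "on", "wet"),
    ("no_rain", "off", "dry"),
    ("no_rain", "on", "wet"),
    ("rain", "off", "dry"),
]

# Maximum-likelihood estimate of P(GrassWet | Rain, Sprinkler):
# count each full configuration, then normalize within each
# configuration of the parent variables.
joint = Counter((r, s, g) for r, s, g in samples)
parent_counts = Counter((r, s) for r, s, _ in samples)

cpt = {(r, s, g): n / parent_counts[(r, s)] for (r, s, g), n in joint.items()}

# Two of the three samples with Rain=rain, Sprinkler=off have GrassWet=wet.
print(cpt[("rain", "off", "wet")])
```

Handling missing observations and exploiting local structure, discussed later, both build on this basic counting scheme.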