ABSTRACT

Artificial Intelligence (AI) has struggled to use probabilistic reasoning effectively when solving problems where knowledge is incomplete, error-prone, or approximate. It has invented logics to deal with the problem symbolically. It has invented concepts to skirt the issues of conditional independence and prior probabilities, and the difficulties of conditional probabilities and causal inference. The development of these ideas could be summarized as, “We would use Bayesian models if only we could satisfy all the assumptions and were omniscient.” We will focus on the dominant themes that have occupied most of the literature on uncertainty and expert systems: the Bayesian approach, the certainty factor approach, the Dempster-Shafer approach, and the more advanced Bayesian belief network approach. Fuzzy reasoning will not be discussed because it addresses vagueness rather than uncertainty. As Russell and Norvig (1995) point out, it is not a method for uncertain reasoning and is problematic in that it is inconsistent with first-order predicate calculus.