ABSTRACT

Whether expert systems encode domain knowledge in rules or in conditional probability relations, that domain knowledge must come from somewhere. Until recently, the main plausible source has been human domain experts. When building knowledge representations from human experts, AI practitioners (called knowledge engineers in this role) must elicit the knowledge from those experts, interviewing or testing them so as to discover compact representations of their understanding. Elicitation encounters a number of difficulties, which collectively make up the “knowledge bottleneck” (cf. Feigenbaum, 1977). For example, in many cases there simply are no human experts to interview: many tasks to which we would like to put robots and computers concern domains in which humans have had no opportunity to develop expertise, most obviously exploration tasks such as exploring volcanoes or sea bottoms. In other cases the humans’ expertise is difficult to articulate; for example, every serious computer chess project has had human advisors, but human chess expertise is notoriously inarticulable, so no good chess program relies upon rules derived from human experts as its primary means of play; they all rely upon brute-force search instead. In all substantial applications, the elicitation of knowledge from human experts is time-consuming, error-prone, and expensive. It may nevertheless be an important means of developing some applications, but if it is the only means, then the spread of the underlying technology through any large range of serious applications is bound to be slow and arduous.