ABSTRACT

Over the past decade, a number of Artificial Intelligence programs have been constructed for solving problems in science, mathematics, and medicine. These programs, termed "Expert Systems" (Duda & Shortliffe, 1983; Feigenbaum, 1977), are designed to capture what specialists know: the kind of non-numeric, qualitative reasoning that is often passed on through apprenticeship rather than written down in books. However, these programs are not generally intended to be models of expert problem solving, in either their organization of knowledge or their reasoning process. Consequently, difficulties have been encountered in attempting to use the knowledge formulated in these programs outside of a consultation setting, where getting the right answer is what matters most. Their application to explanation and teaching, in particular (Brown, 1977a; Clancey, 1983a; Swartout, 1981), has necessitated closer adherence to human problem-solving methods and more explicit representation of knowledge. That is, building expert systems whose problem solving must be comprehensible to people requires a close study of the nature of expertise in people.