ABSTRACT

In this paper we report preliminary results on how people revise or update a previously held set of beliefs. When intelligent agents learn new things that conflict with their current belief set, they must revise that belief set; when the new information does not conflict, they need only update it. Various AI theories have been proposed to model these processes. These theories differ along two general dimensions: whether they are syntax-based or model-based, and what they count as a minimal change of beliefs. This study investigates how people update and revise semantically equivalent but syntactically distinct belief sets, both in symbolic-logic problems and in quasi-real-world problems. Results indicate that syntactic form affects belief revision choices. In addition, for the symbolic problems, subjects update and revise semantically equivalent belief sets identically, whereas for the quasi-real-world problems they both update and revise them differently. Further, contrary to earlier studies, subjects are sometimes reluctant to accept that a sentence changes from false to true, yet are willing to accept that it changes from true to false.