ABSTRACT

Models of rationality typically rely on underlying logics that allow simulated agents to entertain beliefs about one another to any depth of nesting. We argue that representations of individual deeply nested beliefs are in principle unnecessary for any cooperative dialogue. We describe a simulation of such dialogues in a simple domain, and attempt to generalize the principles of this simulation, first to explain features of human dialogue in this domain, then those of cooperative dialogues in general.

We propose that for the purposes of cooperative interaction, the status of all deeply nested beliefs about each concept can be conjoined into a single represented value, which is updated by reasoning that would otherwise be expected to yield conclusions about individual deeply nested beliefs. We concede that people are capable of using individual deeply nested beliefs to some degree, but such beliefs need only be handled explicitly in dialogues involving secrecy or deception.