ABSTRACT

Reasoning about agents observed in the world must integrate two disparate levels. Our observations are often limited to an agent's external behavior, which can frequently be summarized numerically as a trajectory in space-time (perhaps punctuated by actions drawn from a fairly limited vocabulary). However, this behavior is driven by the agent's internal state, which (in the case of a human) can involve high-level psychological and cognitive concepts such as intentions and emotions. A central challenge in many application domains is reasoning from external observations of an agent's behavior to an estimate of its internal state. Such reasoning is motivated chiefly by the desire to predict the agent's future behavior.