ABSTRACT

The downside to the representational agnosticism of artificial intelligence planning systems is the gulf it creates between these technologies and the real humans who would like to use them for everyday problem solving and planning. As continuing work on mixed-initiative planning highlights (e.g., Ferguson, 1995), human-computer cooperation will require the development of systems with representational commitments that are more in accordance with people’s commonsense understanding of goal-directed behavior. The difficulty in achieving this accord stems from the apparent breadth of our commonsense understanding of intentional action, which includes slippery concepts of commitment, preferences, opportunities, threats, ability, prevention, enablement, failures, constraints, decisions, postponement, interference, and uncertainty, among many others.