ABSTRACT

We are not always trying to predict what people will do, or to coordinate our actions with theirs. Often when we think about people we are simply trying to understand them, and we express our attempts at understanding as explanations of their actions. The motive can be future prediction or cooperation, or just intrinsic human curiosity. The purpose of this chapter is to argue that we get better explanations if we focus on ends rather than motives. Or, more carefully put, that explanations invoking certain factors which induce mutual intelligibility between agents tend to be causally deeper than explanations invoking individual beliefs and desires. Many of the examples of such factors that will appear are individual and shared goods. This chapter thus makes a link between Chapter 2’s emphasis on the fragility and incompleteness of belief-desire explanation and a very general idea, one that surfaces again at the very end of this book (the last section of Exploration IV) but for which the book does not claim to give a complete argument: that our deeper understandings of ourselves will generally yield the potential for moral insight.