ABSTRACT

Statistical inference is the process of converting experience, in the form of observed data, into knowledge about the underlying population in question, and is an essential part of the scientific method and of human discovery. The conclusion of Chapter 2 is that existing approaches provide only approximations to the “best possible inference.” The goal of this chapter, and of the book more generally, is to describe our view of this elusive target. In particular, we provide two vague but hardly disagreeable principles that we believe this “best possible inference” should satisfy. The validity principle states that probabilistic inference should be based on predicting an unobservable but predictable quantity, and that this prediction should be valid in the sense described in Chapter 1. That is, the inferential output ought to be calibrated to a meaningful scale for interpretation by intelligent minds. Since there are many ways to carry out a valid prediction of a predictable quantity, additional considerations are needed. The second principle, called the efficiency principle, says that the prediction should be made as efficient as possible, where efficiency is measured in terms of long-run frequencies. We explain why our calibration based on long-run frequencies differs from that of the more familiar frequentist procedures. These principles will be used to motivate the IM approach in Chapter 4.