ABSTRACT

In this chapter we consider computing observed confidence levels in the special case of a single parameter. This case differs from the multiparameter case in several important respects, and therefore warrants a separate development. Motivated by basic results from measure theory, the study of observed confidence levels in the single parameter case focuses on regions that are interval subsets of the real line. This restriction simplifies the theoretical framework used to study the confidence levels and allows a wide variety of methods for computing the levels to be developed. With many candidate methods available, techniques for comparing them become necessary. In particular, the concept of asymptotic accuracy will be the central tool used to compare competing methods for computing observed confidence levels. The theory of asymptotic expansions, particularly Edgeworth and Cornish-Fisher expansions, will play a central role in computing this measure of accuracy. Not surprisingly, it often turns out that more accurate confidence intervals yield more accurate observed confidence levels.
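As a concrete illustration of the single-parameter setting, the following is a minimal sketch of one simple approach: assigning to an interval region the probability that a standard normal approximation to the distribution of the estimator places on it. This is only one of the many candidate methods alluded to above; the function names, the point estimate, and the standard error used in the example are illustrative assumptions, not values from the chapter.

```python
import math

def std_normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def observed_confidence_level(est: float, se: float,
                              lower: float, upper: float) -> float:
    """Observed confidence level of the interval region (lower, upper)
    for a single parameter, under a normal approximation with point
    estimate `est` and standard error `se` (both assumed given)."""
    return (std_normal_cdf((upper - est) / se)
            - std_normal_cdf((lower - est) / se))

# Hypothetical example: point estimate 1.2 with standard error 0.5.
# Confidence assigned to the region theta > 0, i.e. the interval (0, inf):
level = observed_confidence_level(1.2, 0.5, 0.0, math.inf)
```

In this sketch the level for the region theta > 0 works out to 1 - Phi(-2.4), roughly 0.99, so nearly all of the confidence is assigned to that region. More refined methods, such as those built on Edgeworth-corrected intervals, replace the normal approximation with more accurate ones.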