ABSTRACT

An evaluation's design is its unique research structure. That structure consists of evaluation questions and hypotheses; anticipated evidence of program effectiveness, quality, and value; criteria for study eligibility; rules for assigning study participants to programs; and rules for the timing and frequency of measurement. This chapter discusses each of these design elements in detail. It also describes the advantages and limitations of experimental and observational evaluation designs and explains how blocking, stratification, and blinding can minimize evaluation design bias. A focus of the chapter is comparative effectiveness research (CER), which is characterized by evaluating programs in their natural rather than experimental settings. Finally, in the real world, evaluators must contend with practical and methodological challenges that prevent them from conducting the perfect study. Because these challenges “threaten,” or bias, the internal and external validity of an evaluation's findings, the chapter explains how to recognize and deal with them.
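
Although the chapter presents these concepts narratively, a minimal sketch may help make stratified (blocked) random assignment concrete. The Python example below, using hypothetical participant IDs and an assumed age-group stratifying variable, randomly assigns participants to a program or comparison group within each stratum so that the two groups stay balanced on that variable. It illustrates the general technique only; it is not a procedure taken from the chapter.

```python
import random
from collections import defaultdict

def stratified_assignment(participants, stratum_of,
                          groups=("program", "comparison"), seed=0):
    """Randomly assign participants to groups within each stratum,
    keeping group sizes balanced on the stratifying variable."""
    rng = random.Random(seed)

    # Group participants into strata (blocks).
    strata = defaultdict(list)
    for p in participants:
        strata[stratum_of(p)].append(p)

    # Shuffle within each stratum, then alternate through the groups
    # so each stratum contributes (nearly) equally to every group.
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, p in enumerate(members):
            assignment[p] = groups[i % len(groups)]
    return assignment

# Hypothetical participants, stratified by an assumed age-group variable.
participants = {
    "P01": "under_40", "P02": "under_40", "P03": "under_40", "P04": "under_40",
    "P05": "40_plus",  "P06": "40_plus",  "P07": "40_plus",  "P08": "40_plus",
}
result = stratified_assignment(participants, stratum_of=participants.get)
print(result)
```

Because assignment alternates within each shuffled stratum, every age group is split evenly between the program and comparison groups, which is the balancing property that blocking and stratification are meant to achieve.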