ABSTRACT

This chapter describes the historical process whereby senior Head Start staff at first sought hard-headed social science evidence about their program's effectiveness but then retreated from this commitment, burned by biased evaluation results that were used against them in the political process. But political calls to learn about Head Start's effectiveness could not be silenced. Initially, Head Start officials answered these calls by sponsoring small, local studies of limited technical quality that could provide neither definitive nor general causal answers. Such studies were later meta-analyzed, but the technical quality of the individual studies still precluded clear answers about the program's causal impact. The officials also redirected research efforts toward developing and monitoring program compliance in order to reduce local variation and to create a more professional-appearing program while avoiding the large-scale summative evaluations whose results might threaten the program's budget and political support. These strategies postponed Congressional pressure to obtain broad-based causal results about program impact, but they did not still it.