ABSTRACT

Four rule-breaking activities are described and rationalized: breaking the rule of objectivity, the rule of measurable outcomes, the rule of avoiding reactivity, and finally, the rule of the final report. The researcher's legitimate need to think in terms of measurement and to operationalize concepts can create serious problems when imposed upon program people and applied to program evaluation. Evaluation based on goals and methods immutably fixed at the program's origin ignores this fact and, in doing so, invites subterfuge, evasion, and mistrust between program people and evaluators. Two opposing forces need to be taken into account in interpreting this chapter; on the one hand, there is a common disciplinary bond between program members and evaluators. Program improvement is, from this perspective, itself a form of contamination created by the evaluators: experimental design and the logic of experimentation are sabotaged under such conditions. Some professional evaluators argue that policymakers respect hard data analyzed with precision.