ABSTRACT

Interest in establishing the causal effects of social and educational interventions has drawn attention to the application of the randomized controlled trial (RCT) in field-based settings (Murnane & Willett, 2011). As in the laboratory, use of a probabilistic assignment mechanism has the advantage of equating study groups in expectation, and thereby offers a straightforward means of controlling for the universe of potential confounding variables. The ease and strength of the inferential logic on which the RCT rests (Heckman, 2005; Holland, 1986; Rubin, 1974), together with the historic dearth of RCTs in school-based settings (Cook, 2002), have led to a recent change in the emphasis that federal funding agencies place on applicants seeking extramural support for educational research (IES, 2014; NSF, 2013). By encouraging applicants to propose and implement stronger inferential designs, these agencies have spurred an increase in the number of small- and large-scale RCTs (see Spybrook, 2013; Spybrook et al., 2013). The results of these investigations have begun to provide sorely needed empirical evidence of “what works” in a variety of academic, behavioral, and social domains. However, many questions remain regarding the strength and nature of the inferences that can be drawn when a field-based RCT is affected by treatment non-compliance and failures in program implementation (Raudenbush et al., 2012; Sagarin et al., 2014; Weiss et al., 2013).