ABSTRACT

This book has set out a practical approach to designing and implementing field experiments, one which acknowledges the problems that researchers and policy-makers often face when carrying them out. This approach is needed because of the "field" aspect of experiments: the field offers realism, but it operates in hard-to-predict ways, creating challenges for researchers and policy-makers as well as offering opportunities. These troublesome conditions for implementing trials necessitate robust designs that anticipate the pitfalls that can occur. The aim of this book has been to surface the design choices that have to be made, often over gritty details, when planning and carrying out a field experiment, long before any statistical analysis and reporting is done. By being faithful to the assumptions of the trial through careful design and responsive implementation, researchers can be satisfied that the statistical analysis performed is valid. The reader of the paper or report can rely on a set of results that do not have implementation failures behind them.

The discussion in this book has sought to acknowledge that, in practice, experimenters spend most of their time planning and managing a trial, reacting to the problems of real-world interventions where researchers are not in full control. When reading the elegant papers produced from these experiments, the implementation choices tend to get compressed into a concise experimental design section, which does not give a fair impression of the time spent and the ingenuity that has gone into solving the practical issues that have arisen. In part, the time commitment might look like a sunk cost that has to be borne before getting to the exciting and interesting tasks of analysis and write-up. Rather than drudgery, however, these practical choices are intriguing in their own right, and the challenge is to address them so as to complete an effective research project. A researcher can even feel like the escapologist Houdini when facing such constraints, yet still get the project done on budget with answers to the research questions posed.

Moreover, if these tasks were just practical in nature, it would be easy to subcontract the delivery of a field experiment to another organization, or to a large research team, with the investigator waiting patiently for the delivery of the dataset. However, as most researchers know, not a day passes when some difficult question does not need to be answered. These questions are not limited to administrative matters such as budget planning; rather, they are choices that affect the integrity of the experiments themselves and their ability to answer the questions they began with. Experiments delegated to third parties, as in some policy experiments, can be fatally compromised, especially when choices are made on the researchers' behalf: how a treatment is delivered, what happens to the control group, how randomization occurs, or how data are coded. Poor implementation can compromise experiments unless researchers retain a high degree of control over the data-generating process. With that control, researchers can not only protect projects from poor decisions but also improve their delivery, so that what is being tested is clearer and the external validity of the claims is stronger. Opportunities sometimes arise during implementation to improve a project or to test new claims, and these can be seized only when the investigator is in greater control. The task of the researcher is to think of ways in which designers and implementers can increase external validity through sample selection and reduce study effects through careful attention to the implementation of the treatment and control conditions. As with any research project, the researcher can increase leverage on the research questions by making careful design choices.

The book offered ten steps intended to serve as a practical guide and to show the subtlety of these design choices and how many of them interact with each other. Sometimes these interactions involve obvious trade-offs, such as between the number of treatment groups and the available sample size; at other times they are less obvious, such as decisions about the recruitment of partners and the treatments available. Choices that trade off against each other are a natural part of the design process, and researchers need to balance them and ensure that one choice does not rule out another. The other subtlety is the stress on the temporal aspects of planning. The task for the researcher is to write down and commit to as much as possible at each step, while leaving enough flexibility to deal with contingencies down the line. In particular, researchers need to anticipate implementation difficulties in their standard operating procedures and prepare for the likely threats to an experiment, especially the loss of subjects and of statistical power. Overall, the claim is that by thinking through the ten steps a better and more realistic design can emerge.

The book has been clear that minor failures of implementation are common, even normal, features of trials. Researchers starting out should be reassured that there is no such thing as a perfect experiment and that the external world intrudes into their research projects in various ways. The skill of researchers is to make the best choices in a given situation, avoid some obvious traps, and think of ways in which the limitations of an experiment can be overcome in the design as well as in the analysis. The book highlighted nine common threats to trials that should be anticipated in the design and then watched out for during implementation. If any one of these threats is extreme, the whole trial can be blown off course. More often, as the many examples from studies have shown, they are minor limitations that need to be reported or dealt with in the analysis phase.