ABSTRACT

This chapter discusses how to implement an experiment once planning stops, examining different aspects of the process and the threats and challenges that emerge. It is, of course, difficult to know when planning actually ceases and the exact moment when the experiment starts. The start might be when the last planning document has been agreed between the partners, but the moment is probably when the sample is drawn and the first measurements are taken. Even at that point, however, it is possible to add more people or places to the sample, particularly if the starting measures consist of official data. If, on the other hand, the baseline measurement comes from survey data, then it really has to happen at the same time for each individual. The point of no return is probably randomization: once this has been performed it is hard to start the experiment again, although even then it is sometimes possible to add new participants by randomly allocating them to the control and treatment groups at a later stage. Great care has to be taken with units enrolled into the experiment in this way: they have to be handled differently, including having different baselines. Perhaps the point when the trial really has to start is when the treatments begin to be administered.
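How late joiners might be handled can be made concrete with a small sketch. The snippet below is illustrative only, with invented sample sizes and field names rather than anything from a particular study: it randomizes the initial sample, then randomizes a later batch as its own wave, so that late-enrolled units carry a wave marker and can be given their own baseline and analysed accordingly.

```python
import random

def randomize_wave(unit_ids, wave, seed):
    """Randomly allocate one batch of units to treatment or control.

    Late enrollers are randomized as their own wave so they can be
    identified, given their own baseline, and analysed within-wave.
    """
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    ids = list(unit_ids)
    rng.shuffle(ids)
    treated = set(ids[: len(ids) // 2])
    return [
        {"id": uid, "wave": wave, "arm": "treatment" if uid in treated else "control"}
        for uid in ids
    ]

# Initial sample, randomized when the experiment starts.
assignments = randomize_wave(range(1, 101), wave=1, seed=2024)

# Hypothetical late joiners, randomized separately with their own wave marker.
assignments += randomize_wave(range(101, 121), wave=2, seed=2025)

for arm in ("treatment", "control"):
    print(arm, sum(a["arm"] == arm for a in assignments))
```

Keeping the wave identifier alongside the allocation makes it straightforward later on to use wave-specific baselines or to estimate effects within each wave.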

Whatever the exact point in time, once the experiment has begun, the researcher’s job does not stop. Problems arise that may or may not be communicated to the researcher but that the researcher should keep tabs on. The researcher should expect a barrage of emails and telephone calls that reveal these problems and require decisions to be made as the experiment unfolds. A good research plan would have anticipated the worst things that could happen and built in contingency measures, what Lin and Green (2016) call standard operating procedures. As discussed in Chapter 2, an experienced researcher introduces cunning aspects to the design that allow for setbacks; but, in fact, nothing quite prepares the researcher for the scale of problems that emerge during the research process. Even when a given problem has been planned for, the exact way it unfolds usually comes as a surprise, and almost every day of a trial brings up some kind of snag. The researcher may also be in a different time zone from the research project, so that when the email inbox opens she will see a trail of alarm and confusion (and decisions being made by partners on the fly that might invalidate the experiment).

In fact, it is important to be aware of what is happening in the trial rather than retreat into an academic fastness and await the delivery of the dataset. If a trail of emails and calls does not appear, it may be that the people or organizations implementing the trial or collecting the data are not contacting the researcher but are solving the problems themselves, which may leave the trial in an even worse state. Worse still, the researcher might get back a dataset that has been cleaned up but that reflects poor project management and low-quality data collection. It is better to see the warts and glitches as the project proceeds, since messy data show a real experiment in progress and allow the problems to be solved; a dataset containing a few gaps and odd values can be reassuring in a funny sort of way.

The frequency of implementation challenges is why it is so important to build effective monitoring into research planning: checking, for example, that randomization has occurred, that the treatment or treatments have been delivered, that the control areas have not been interfered with, and that the data are not messed up. Early monitoring means that steps can be taken to get the project back on track. One good practice is a manipulation check that verifies that people actually received the treatment. If there is a survey at the end of the trial, it might be possible to ask both control and treatment participants about what happened; this procedure can also be a way to check for cross-contamination.
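What such monitoring can look like in practice is sketched below; the records, field names, and tolerance are illustrative assumptions only. The checks mirror the list above: a crude balance check on a baseline covariate as a sign that randomization went as intended, a check that treated units actually received the treatment, and a flag for control units that appear to have been treated.

```python
from statistics import mean

# Illustrative records as they might arrive from the field during a trial;
# the field names (arm, baseline_score, treatment_delivered) are assumptions.
records = [
    {"id": 1, "arm": "treatment", "baseline_score": 52, "treatment_delivered": True},
    {"id": 2, "arm": "treatment", "baseline_score": 47, "treatment_delivered": False},
    {"id": 3, "arm": "control",   "baseline_score": 50, "treatment_delivered": False},
    {"id": 4, "arm": "control",   "baseline_score": 49, "treatment_delivered": True},
]

def monitor(records, balance_tolerance=5.0):
    treat = [r for r in records if r["arm"] == "treatment"]
    control = [r for r in records if r["arm"] == "control"]

    # Crude balance check: baseline means should be similar if randomization worked.
    gap = abs(mean(r["baseline_score"] for r in treat)
              - mean(r["baseline_score"] for r in control))
    if gap > balance_tolerance:
        print(f"WARNING: baseline gap of {gap:.1f} between arms - check the randomization")

    # Delivery check: treated units that never received the treatment.
    undelivered = [r["id"] for r in treat if not r["treatment_delivered"]]
    if undelivered:
        print("Treatment not delivered to:", undelivered)

    # Contamination check: control units recorded as having received the treatment.
    contaminated = [r["id"] for r in control if r["treatment_delivered"]]
    if contaminated:
        print("Possible contamination in control units:", contaminated)

monitor(records)
```

Run early and often, a check of this kind turns a vague sense that something has gone wrong into specific unit identifiers that the delivery partner can follow up.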

The key to successful project management is to ensure at all times that the original objective of the trial is respected, including fidelity to the research questions, and that the integrity and internal validity of the trial are protected. It takes a cool head, and it is easy to make mistakes that have profound and unexpected consequences. The essential question to ask oneself when the call comes from the partner is how the change or new event affects the ability to produce an unbiased estimate. An example is an email indicating that members of the control group want to access the treatment and that the partners do not want to deny them. Granting this would be a threat to the experimental design, so the response should be to prevent it, although once contamination has already occurred it might actually be easier to let some members of the control group take the treatment and allow for this violation of the experimental design when estimating treatment effects at the analysis stage. A message about poor implementation in the field might be the prompt to collect more data, such as on how many people actually received the treatment. The researcher should always have a statistics book or Gerber and Green (2012) on hand, with a finger on the index page, to craft a solution for these issues in the analysis rather than panicking and thinking that all has failed. In any case, some glitches are to be expected, and they will not affect the experiment as a whole.
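One common way to allow for such a violation at the analysis stage, in the spirit of Gerber and Green (2012), is to keep everyone in their assigned group, report the intention-to-treat effect, and, where data on treatment receipt were collected, scale it by the difference in uptake between the arms (a simple Wald-style estimate of the complier average causal effect). The sketch below uses invented numbers purely to show the arithmetic, not results from any real trial.

```python
def itt_and_cace(treat_outcomes, control_outcomes, treat_took, control_took):
    """Intention-to-treat effect and a simple Wald-style complier estimate.

    treat_took and control_took are the shares in each assigned arm that
    actually received the treatment, so the denominator reflects both
    non-compliance and contamination of the control group.
    """
    itt = (sum(treat_outcomes) / len(treat_outcomes)
           - sum(control_outcomes) / len(control_outcomes))
    uptake_gap = treat_took - control_took  # difference in treatment receipt
    cace = itt / uptake_gap if uptake_gap else float("nan")
    return itt, cace

# Invented outcomes: assignment is respected in the analysis even though some
# control units took the treatment and some treated units did not.
treat_outcomes = [6, 7, 5, 8, 6, 7]
control_outcomes = [5, 5, 6, 4, 5, 5]

itt, cace = itt_and_cace(treat_outcomes, control_outcomes,
                         treat_took=0.85, control_took=0.10)
print(f"ITT estimate:  {itt:.2f}")   # 6.50 - 5.00 = 1.50
print(f"CACE estimate: {cace:.2f}")  # 1.50 / 0.75 = 2.00
```

The scaled estimate rests on the usual assumptions (no defiers, and no effect of assignment except through receipt of the treatment), so it is best reported alongside, rather than instead of, the intention-to-treat estimate.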

If there are only a few cases of contamination, they might safely be ignored in the analysis. In fact, working through lost cases and discussing implementation issues should be an important part of any write-up, and can be placed in a technical appendix. Most readers know such glitches will be there, as they reflect real-world conditions in the field.