ABSTRACT

There are many courses and books that one can consult to learn how to design and analyse a study correctly. It is equally important to know the 'how not to'. A simple analogy is the way we follow the care instructions on our shirt labels before washing them (e.g. do not machine wash, do not tumble dry) if we want shirts that look nice and last longer! Grant review panels, for instance, often look for a single statistical or methodological flaw in a proposal, and that one flaw is enough to sink it. Sub-optimal design and analysis is prevalent across the various types of health care output, including health care reports, conferences and even peer-reviewed medical journals! Within each domain, the problems are also diverse. For instance, a survey of editors and statistical reviewers at 54 high-impact psychiatry journals (Harris et al., 2009), conducted to determine the statistical or design problems they encountered most often in submitted manuscripts, identified the following areas: failure to map statistical models onto research questions, improper handling of missing data, not controlling for multiple comparisons, not understanding the difference between equivalence and difference trials, and poor controls in quasi-experimental designs. Most mistakes in the design, analysis and reporting of studies are perpetuated from previously published studies that used the same bad methods. However, as one eminent statistician put it, 'precedence is a justification for lawyers not scientists, and it is logic not precedence that has to determine the way we measure' (Senn & Julious, 2009), or, in our case, the way we design and analyse a study.