ABSTRACT

The fundamental problem of statistical inference is the same as that facing a jury in a court of law: How should the evidence be weighed? In science, just as in the law courts, cases where the truth is clear and obvious are rather rare. Usually many discrepancies of unknown importance obscure the facts. In this book we shall explore one of the approaches developed to uncover truths that are not obvious from experimental data. Scientists working with animals and plants, and even more so with humans, have to accept that however hard they try to compare like with like, there are differences among individuals and among occasions of observation that may affect the measurements. Only in the 20th century did statisticians begin to investigate systematically how to assess the evidence provided by data containing such variation. The familiar methods of hypothesis testing are the most widely used results of this research. But another approach, tried out early on but shelved for lack of computing power, led to the randomization tests that are the subject of this book. Now that fast computing is available to everyone, the uses of these tests can be explored.
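To give a flavour of the idea named here, the sketch below shows a simple two-sample randomization test: the two groups are pooled, repeatedly relabelled at random, and the observed difference in means is compared with the differences produced by chance relabelling. The data, the function name, and the choice of test statistic are illustrative assumptions, not taken from the book.

```python
import random

def randomization_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate a two-sided p-value by repeatedly shuffling the
    pooled observations and re-splitting them into two groups."""
    rng = random.Random(seed)
    # Observed absolute difference in group means.
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:n_a], pooled[n_a:]
        stat = abs(sum(perm_a) / n_a - sum(perm_b) / len(perm_b))
        if stat >= observed:
            count += 1
    # Proportion of random relabellings at least as extreme as the data.
    return count / n_permutations

# Hypothetical example data: two small samples with somewhat different means.
treated = [5.2, 6.1, 5.8, 6.4, 5.9]
control = [4.8, 5.0, 5.3, 4.9, 5.1]
print(randomization_test(treated, control))
```

A small p-value from such a test says that a difference as large as the one observed would rarely arise if the group labels were irrelevant; it is this brute-force counting over relabellings, impractical before fast computing, that the book develops.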