ABSTRACT

In many situations the research question admits one of two possible answers, “yes” or “no”, and a statistician is required to choose the “correct” one based on the data. This problem is known as statistical hypothesis testing. The chapter first considers simple hypotheses: it defines the basic concepts (Type I and Type II errors, power, the p-value) and derives the most powerful likelihood ratio test via the Neyman-Pearson lemma. It then extends these notions to composite hypotheses and introduces uniformly most powerful and generalized likelihood ratio tests. In particular, it shows that a variety of well-known statistical tests are examples of generalized likelihood ratio tests. It also establishes the duality between hypothesis testing and confidence intervals (regions). A separate section covers sequential testing and the sequential Wald test. The last section discusses the multiple testing problem.