ABSTRACT

Multivariate multiple tests utilize the joint distribution of test statistics, or approximations thereof, in the definition of their decision rule. Hence, it is possible to incorporate knowledge about the dependency structure among test statistics into the statistical methodology, with the goal of optimizing power. First, we present methods based on multivariate normal distributions. In particular, multiple contrast tests for the parameters of regular parametric statistical models can often (asymptotically) be calibrated by means of quantiles of multivariate normal or Student's t-distributions. Classical examples are Tukey's test and Dunnett's test under analysis of variance models. Second, we present methods based on probability bounds, for instance, by exploiting higher-order Bonferroni inequalities. This leads to the notion of the effective number of tests. This number quantifies how much the multiplicity correction can be relaxed, in comparison with the case of jointly independent test statistics, by exploiting the dependency structure. Finally, we present copula-based methods for calibrating multiple tests. Copula functions provide the most general way of expressing dependencies among test statistics. We formalize multivariate single-step multiple tests by means of copula quantiles, and we briefly explain how to pre-estimate an unknown copula, leading to so-called empirically calibrated multiple tests.
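The calibration idea mentioned above can be illustrated with a small sketch: for a Dunnett-type comparison of k treatments against a common control (with equal sample sizes, the test statistics are jointly normal with pairwise correlation 0.5), the equicoordinate quantile of the joint distribution yields a smaller critical value than the Bonferroni correction. The specific numbers (k = 4, correlation 0.5, Monte Carlo calibration) are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, k = 0.05, 4  # illustrative: 4 comparisons against a control

# Dunnett-type correlation matrix: rho = 0.5 under equal sample sizes
Sigma = np.full((k, k), 0.5)
np.fill_diagonal(Sigma, 1.0)

# Monte Carlo approximation of the two-sided equicoordinate quantile,
# i.e. the c with P(max_j |Z_j| <= c) = 1 - alpha under the joint law
Z = rng.multivariate_normal(np.zeros(k), Sigma, size=200_000)
c_joint = np.quantile(np.abs(Z).max(axis=1), 1 - alpha)

# Bonferroni critical value ignores the positive dependence
c_bonf = stats.norm.ppf(1 - alpha / (2 * k))

print(f"joint critical value:      {c_joint:.3f}")
print(f"Bonferroni critical value: {c_bonf:.3f}")
```

The gap between the two critical values is exactly the "relaxation of the multiplicity correction" quantified by the effective number of tests: using the joint distribution, fewer true effects are missed at the same family-wise error level.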