ABSTRACT

In many clinical trials, a binary response is measured in several groups, sometimes including a control group. An important question concerns estimating the significance and size of potential group differences, measured by some suitable effect measure, when the groups are compared to each other. In this chapter, we focus on constructing multiplicity-adjusted P-values and, more importantly, simultaneous confidence intervals for pairwise comparisons between the groups (such as all pairwise comparisons or all comparisons to control), using the difference of proportions as the effect measure. Simultaneous here refers to the fact that the set (or family) of confidence intervals controls the familywise error rate (FWER), that is, the probability that at least one of the confidence intervals fails to cover the true parameter. This is in contrast to ignoring the multiplicities, which results in error rates that are largely unknown (a conservative upper bound on the FWER can always be provided through Bonferroni's inequality) and that can be quite large. It is therefore better to control the FWER at some known level α so that precise (asymptotic) error statements can be given. The goal of this chapter is to introduce, develop, and demonstrate, through various simulations and real examples, the statistical methods to achieve this. Throughout, we will assume that we have independent binomial observations in K groups.
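
To make the Bonferroni bound mentioned above concrete, the following is a minimal sketch (not the chapter's refined methods) of Bonferroni-adjusted Wald confidence intervals for all pairwise differences of proportions across K independent binomial groups; the function name and the example counts are illustrative only.

```python
import math
from itertools import combinations
from scipy.stats import norm

def bonferroni_pairwise_diff_ci(successes, trials, alpha=0.05):
    """Bonferroni-adjusted Wald CIs for all pairwise differences of
    proportions in K independent binomial groups (conservative FWER control)."""
    K = len(successes)
    pairs = list(combinations(range(K), 2))
    m = len(pairs)                       # number of comparisons in the family
    z = norm.ppf(1 - alpha / (2 * m))    # Bonferroni-adjusted critical value
    p = [x / n for x, n in zip(successes, trials)]
    cis = {}
    for i, j in pairs:
        diff = p[i] - p[j]
        se = math.sqrt(p[i] * (1 - p[i]) / trials[i]
                       + p[j] * (1 - p[j]) / trials[j])
        cis[(i, j)] = (diff - z * se, diff + z * se)
    return cis

# Example: three groups of 50 subjects each, the first serving as control
print(bonferroni_pairwise_diff_ci([12, 20, 25], [50, 50, 50]))
```

By Bonferroni's inequality, the family of intervals covers all true differences with probability at least 1 − α asymptotically, typically at the cost of wider intervals than the simultaneous methods developed in this chapter.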