# Equivalence tests for selected one-parameter problems


## ABSTRACT

In this section it is assumed throughout that the hypotheses (1.1.a) and (1.1.b) [→ p. 11] refer to the expected value of a single Gaussian distribution whose variance has some fixed known value. In other words, we suppose that the data under analysis can be described by a vector (X1, . . . , Xn) of mutually independent random variables, each having distribution N(θ, σ◦²), where σ◦² > 0 denotes a fixed positive constant. It is easy to verify that it entails no loss of generality if we set σ◦² = 1, θ◦ − ε1 = −ε and θ◦ + ε2 = ε with arbitrarily fixed ε > 0. In fact, if the variance of the primary observations differs from unity and/or the equivalence interval specified by the alternative hypothesis (1.1.b) fails to exhibit symmetry about zero, both of these properties can be ensured by applying the following simple transformation to the observed variables: X′i = (Xi − θ◦ − (ε2 − ε1)/2)/σ◦. In view of the relations

Eθ◦−ε1(X′i) = (θ◦ − ε1 − θ◦ − (ε2 − ε1)/2)/σ◦ = −(ε1 + ε2)/2σ◦ ,
Eθ◦+ε2(X′i) = (ε1 + ε2)/2σ◦ ,

it makes no difference whether we use the original sample to test θ ≤ θ◦ − ε1 or θ ≥ θ◦ + ε2 versus θ◦ − ε1 < θ < θ◦ + ε2, or base a test of the null hypothesis θ′ ≤ −ε or θ′ ≥ ε versus the alternative −ε < θ′ < ε on the transformed sample (X′1, . . . , X′n), provided we define θ′ = E(X′i) and ε = (ε1 + ε2)/2σ◦. To simplify notation, we drop the distinction between primed and unprimed symbols and assume that we have

Xi ∼ N(θ, 1), i = 1, . . . , n (4.1a)

and that the equivalence problem referring to these observations reads

H : θ ≤ −ε or θ ≥ ε versus K : −ε < θ < ε . (4.1b)
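The standardizing transformation above is easy to check numerically. The sketch below uses arbitrarily chosen illustrative values for θ◦, ε1, ε2 and σ◦ (none of which come from the text) and verifies that the boundary expectations of the transformed variables are exactly ∓ε with ε = (ε1 + ε2)/2σ◦:

```python
import numpy as np

# Illustrative values (assumptions, not from the text): an asymmetric
# equivalence interval around theta0, with non-unit variance.
theta0, eps1, eps2, sigma0 = 5.0, 0.4, 0.8, 2.0

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(loc=theta0, scale=sigma0, size=n)   # X_i ~ N(theta, sigma0^2)

# Transformation from the text: X'_i = (X_i - theta0 - (eps2 - eps1)/2) / sigma0
x_prime = (x - theta0 - (eps2 - eps1) / 2) / sigma0

# Half-width of the symmetrized equivalence interval: eps = (eps1 + eps2)/(2*sigma0)
eps = (eps1 + eps2) / (2 * sigma0)

# Boundary expectations of X'_i under theta = theta0 - eps1 and theta = theta0 + eps2
lower = (theta0 - eps1 - theta0 - (eps2 - eps1) / 2) / sigma0
upper = (theta0 + eps2 - theta0 - (eps2 - eps1) / 2) / sigma0
print(lower, upper, eps)   # lower ≈ -eps, upper ≈ +eps (here ±0.3)
```

With these numbers the transformed problem is the symmetric one of (4.1b): test θ′ ≤ −0.3 or θ′ ≥ 0.3 against −0.3 < θ′ < 0.3.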

Since (4.1a) implies that the sample mean X̄ = (1/n)∑ᵢ₌₁ⁿ Xi, and hence also the statistic √n X̄, is sufficient for the family of joint distributions of (X1, . . . , Xn), we may argue as follows (using a well-known result to be found, e.g., in Witting, 1985, Theorem 3.30): the distribution of √n X̄ differs from that of an individual Xi only by its expected value, which is of course given by θ̃ = √n θ. In terms of θ̃, the testing problem put forward above reads

H̃ : θ̃ ≤ −ε̃ or θ̃ ≥ ε̃ versus K̃ : −ε̃ < θ̃ < ε̃ , (4.2)

on the understanding that we choose ε̃ = √n ε. Now, if there is a UMP test, say ψ̃, at level α for (4.2) based on a single random variable Z with

Z ∼ N(θ̃, 1) , (4.3)

then a UMP level-α test for the original problem (4.1) is given by φ(x1, . . . , xn) = ψ̃((1/√n)∑ᵢ₌₁ⁿ xi), (x1, . . . , xn) ∈ ℝⁿ.
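The reduction just described can be sketched in executable form. The explicit shape of ψ̃ is not derived in this passage, so the sketch below assumes the standard form of the UMP test for this Gaussian equivalence problem: reject H̃ when |Z| < c, where the critical constant c gives size α at the boundary θ̃ = ε̃, i.e. solves Φ(c − ε̃) − Φ(−c − ε̃) = α. Function names and parameter values are illustrative:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def equivalence_test(x, eps, alpha=0.05):
    """Sketch of the UMP equivalence test of H: |theta| >= eps vs
    K: |theta| < eps for X_i ~ N(theta, 1), under the assumed
    rejection form |Z| < c (Z = sqrt(n) * Xbar).
    Returns True when H is rejected, i.e. equivalence is declared."""
    n = len(x)
    eps_tilde = np.sqrt(n) * eps
    # c solves Phi(c - eps_tilde) - Phi(-c - eps_tilde) = alpha,
    # the rejection probability at the boundary theta_tilde = eps_tilde.
    c = brentq(
        lambda c: norm.cdf(c - eps_tilde) - norm.cdf(-c - eps_tilde) - alpha,
        0.0, eps_tilde + 10.0,
    )
    z = np.sqrt(n) * np.mean(x)   # the sufficient statistic sqrt(n) * Xbar
    return abs(z) < c

# Example: theta = 0 lies well inside K, so with n = 100 and eps = 0.5
# the test will typically declare equivalence.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=100)
print(equivalence_test(x, eps=0.5))
```

Note that φ operates on the original sample only through (1/√n)∑ xi — equivalently √n X̄ — exactly as the sufficiency argument above prescribes; the single-observation test ψ̃ is simply evaluated at that statistic.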