Chapter 10

The Noncentral t Distribution

You probably recall that, if the null hypothesis (H0: μ = μ0) is true, then the statistic calculated using Equation (10.1) has a t distribution with df = (N – 1). The t distribution, as we saw back in Chapter 3, generally resembles the normal distribution but has fatter tails. We can think of t as measuring how far M is from μ0 [that’s the numerator in Equation (10.1)], in units of s/√N (that’s the denominator). Take many samples and we get a pile of t values in the shape of the t distribution. For any sample, we can use tables of the t distribution (or the Normal z t page of ESCI chapters 1–4) to calculate the p value and carry out NHST.
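For a concrete illustration, here is a minimal sketch in Python (not part of ESCI; the population values and N are invented). It draws one simulated sample, applies Equation (10.1), and finds the two-tailed p value from the central t distribution, assuming NumPy and SciPy are available.

```python
# Minimal sketch: t and its two-tailed p value under H0, for one simulated sample.
# The population (mu = 53, sigma = 10) and N = 25 are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)                    # reproducible illustration
mu_0 = 50                                         # null-hypothesis value
sample = rng.normal(loc=53, scale=10, size=25)    # one sample from an assumed population

N = sample.size
M = sample.mean()                                 # sample mean
s = sample.std(ddof=1)                            # sample SD
t = (M - mu_0) / (s / np.sqrt(N))                 # Equation (10.1)
p = 2 * stats.t.sf(abs(t), df=N - 1)              # two-tailed p from central t, df = N - 1

print(f"M = {M:.2f}, t({N - 1}) = {t:.2f}, p = {p:.3f}")
```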

However, what happens if the null hypothesis is not true and some alternative point hypothesis is true? By “point hypothesis” I mean one that states an exact value for μ, for example, H1: μ = μ1. Samples from such a population, with mean μ1, have M values likely to be close to μ1 rather than μ0. As usual we calculate t as the distance from μ0, as in Equation (10.1). It turns out that the t values for such samples have a noncentral t distribution. Noncentral t is asymmetric and requires not only df but an additional parameter, the noncentrality parameter Δ (Greek uppercase delta), which depends on the difference between μ1 and μ0. The larger that difference, the more the noncentral t distribution is skewed. In contrast, the t distribution that applies when H0 is true is the distribution we’re familiar with: the central t distribution, which is symmetric, has Δ = 0, and therefore depends on only the single parameter df.
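To see the contrast, here is a minimal sketch in Python (again with invented values, assuming SciPy’s nct distribution is available). It uses the usual single-sample definition of the noncentrality parameter, Δ = (μ1 – μ0)/(σ/√N), and prints the central and noncentral t densities side by side so the shift and skew are visible.

```python
# Minimal sketch: central t (H0 true) versus noncentral t (H1 true).
# For a single sample, Delta = (mu_1 - mu_0) / (sigma / sqrt(N)); all values invented.
import numpy as np
from scipy import stats

N = 25
df = N - 1
mu_0, mu_1, sigma = 50, 53, 10
Delta = (mu_1 - mu_0) / (sigma / np.sqrt(N))      # noncentrality parameter

x = np.linspace(-4, 8, 7)
central = stats.t.pdf(x, df)                      # symmetric, Delta = 0
noncentral = stats.nct.pdf(x, df, Delta)          # shifted toward Delta and skewed

for xi, c, n in zip(x, central, noncentral):
    print(f"t = {xi:5.1f}   central: {c:.4f}   noncentral: {n:.4f}")
```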

That’s the brief story of noncentral t, which arises when an alternative point hypothesis is true. It’s important because, in the world, there often is a real effect and so alternative hypotheses often are true. In addition, statistical power is a probability calculated assuming that an alternative point hypothesis is true, so we generally need noncentral t to calculate power. Also, noncentral t is the sampling distribution of Cohen’s d and so it’s needed to calculate CIs for d, although that’s a story for Chapter 11.
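As one example of the power calculation just mentioned, here is a minimal sketch in Python (not the book’s procedure; the values of μ0, μ1, σ, and N are invented). Power for a two-tailed, one-sample t test is the probability that t falls beyond the critical values when t follows the noncentral t distribution with parameter Δ.

```python
# Minimal sketch: power of a two-tailed, one-sample t test via noncentral t.
import numpy as np
from scipy import stats

alpha = .05
N = 25
df = N - 1
mu_0, mu_1, sigma = 50, 53, 10                    # invented values
Delta = (mu_1 - mu_0) / (sigma / np.sqrt(N))      # noncentrality parameter

t_crit = stats.t.ppf(1 - alpha / 2, df)           # critical value from central t
# Power = P(t > t_crit) + P(t < -t_crit), with t distributed as noncentral t
power = stats.nct.sf(t_crit, df, Delta) + stats.nct.cdf(-t_crit, df, Delta)

print(f"Delta = {Delta:.2f}, critical t = {t_crit:.2f}, power = {power:.3f}")
```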

If you’d like to skip now to the next chapter, that’s fine, but you may care to have a peek at Figure 10.8, which shows the shapes of noncentral t for various combinations of df and Δ. You may also be interested to see the s pile, which first appears in Figure 10.3: Just as the mean heap is the sampling distribution of the sample mean, the s pile is the sampling distribution of the sample SD. Finally, you may care to read the section, “A Fairy Tale From Significance Land.” If you wish, you could return to this noncentral t chapter after seeing Cohen’s d and power in the following two chapters.
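If you’d like a quick preview of the s pile before reaching Figure 10.3, here is a minimal sketch in Python (not ESCI’s simulation; the population values are invented). It draws many samples and collects the sample SDs, giving an empirical sampling distribution of s.

```python
# Minimal sketch: build an s pile by collecting the sample SD from many samples.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, N = 50, 10, 25                         # invented population and sample size

s_pile = np.array([rng.normal(mu, sigma, N).std(ddof=1) for _ in range(10_000)])

print(f"mean of the s pile = {s_pile.mean():.2f}  (a little below sigma = {sigma})")
print(f"SD of the s pile   = {s_pile.std(ddof=1):.2f}")
```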