ABSTRACT

The signal-detection-theory (SDT) statistics d′ and c are approximately normally distributed, provided that N is reasonably large and ceiling and floor effects on the hit and false-alarm rates are avoided. In such cases, the normal distribution can be used to construct approximate confidence intervals and perform simple hypothesis tests. However, exact computational methods avoid the normal approximation and account for any corrections applied to observed proportions of 0 or 1. These methods provide standard errors for SDT statistics and quantify statistical bias; together, the standard error and bias yield the mean-square error of the statistic, which indicates the usefulness of the measurement. Pooling data across stimuli, sessions, or observers may be necessary to avoid observed frequencies of zero. Estimates of sensitivity obtained in this way are biased, but the amount of bias is small unless estimates of very different bias or sensitivity are combined. Generalized linear models (GLMs) permit statistical evaluation of hypotheses about sensitivity and bias parameters derived from detection theory or Choice Theory. They are particularly valuable for testing hypotheses about the many main and interaction effects involving these parameters that arise in factorial experiments.
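The normal-approximation approach summarized above can be sketched in a few lines. The following is an illustrative sketch, not code from the paper: the function name, the 1/(2N) correction for proportions of 0 or 1, and the delta-method variance formula for d′ (Gourevitch & Galanter, 1967) are standard SDT conventions assumed here for concreteness.

```python
# Sketch: d', c, and a normal-approximation CI for d'.
# Illustrative only; names and the 1/(2N) correction are assumptions.
from statistics import NormalDist

def dprime_ci(hits, n_signal, fas, n_noise, level=0.95):
    """Estimate d', criterion c, and an approximate CI for d'."""
    nd = NormalDist()
    # Observed rates, with a 1/(2N) correction so z() stays finite
    # when the observed proportion is 0 or 1.
    h = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
    f = min(max(fas / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    zh, zf = nd.inv_cdf(h), nd.inv_cdf(f)
    d = zh - zf                 # sensitivity
    c = -(zh + zf) / 2          # response criterion
    # Delta-method variance of d' (Gourevitch & Galanter, 1967).
    var = (h * (1 - h) / (n_signal * nd.pdf(zh) ** 2)
           + f * (1 - f) / (n_noise * nd.pdf(zf) ** 2))
    z_crit = nd.inv_cdf(0.5 + level / 2)
    se = var ** 0.5
    return d, c, (d - z_crit * se, d + z_crit * se)
```

With large N and rates away from 0 and 1, the interval is adequate; the exact methods discussed in the paper are needed precisely when these conditions fail.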