ABSTRACT

The results of a yes-no or single-interval discrimination experiment can be described by a hit and a false-alarm rate, which in turn can be reduced to a single measure of sensitivity. Good measures can be written as the difference between the hit and false-alarm rates when both are appropriately transformed. The sensitivity measure proposed by detection theory, d′, uses the normal-distribution z-transformation. The primary rationale for d′ as a measure of accuracy is that it is roughly invariant when response bias is manipulated; simpler measures such as proportion correct do not have this property. The use of d′ implies a model in which the two possible stimulus classes lead to normal distributions differing in mean, and the observer decides which class occurred by comparing an observation with an adjustable criterion.
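The transformation described above amounts to the standard detection-theory definition d′ = z(H) − z(F), where z is the inverse of the standard normal cumulative distribution function. The following is a minimal sketch of that computation, assuming SciPy is available; the function name and the example rates are illustrative, not taken from the paper.

```python
from scipy.stats import norm


def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity as the difference of z-transformed hit and false-alarm rates."""
    # norm.ppf is the inverse of the standard normal CDF (the z-transformation).
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)


if __name__ == "__main__":
    # Illustrative values: H = 0.84 and F = 0.16 give d' of about 2.0,
    # since z(0.84) is roughly +1.0 and z(0.16) roughly -1.0.
    print(d_prime(0.84, 0.16))
```

Because both rates are passed through the same monotone transformation before being differenced, shifting the observer's criterion moves z(H) and z(F) together along the decision axis, which is why d′ remains roughly constant under changes in response bias while raw proportion correct does not.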