ABSTRACT

In research on language and speech behaviour, participants are often asked to rate or judge characteristics of speech and language samples. Judgements of many kinds, covering a wide range of aspects, can be elicited, and they can be expressed in various ways. The literature on the concept of reliability and its assumptions often focuses on one specific situation: a test consisting of ‘items’, which may vary in content. Reliability analysis can be approached along two lines. One may define reliability as the ratio of the ‘true variance’ of the objects to the sum of that variance and a number of other variance components, often called the ‘error variance’. More precisely, reliability analysis is concerned with the relative size of the variance of the objects’ true scores. Although Cronbach’s alpha is a widely used index of reliability, it is probably not the best or most robust one, at least in the context of questionnaires and test design.
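
The abstract states these definitions only in words; as a minimal sketch, the standard classical-test-theory formulas assumed here (the notation is not from the original: sigma^2_T is the true-score variance, sigma^2_E the error variance, k the number of items, sigma^2_i the variance of item i, and sigma^2_X the variance of the total score) are

    \rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E},
    \qquad
    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_X}\right).

Under the usual textbook result, alpha equals the reliability coefficient only when the items are (essentially) tau-equivalent; otherwise it is a lower bound, which is one reason it need not be the best or most robust index.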