ABSTRACT

One of the critical standards in quantitative scientific research is the reliability of measures. At its most basic, reliability is the extent to which measurement error is absent from the data (Nunnally, 1978). A widely accepted definition is that of Carmines and Zeller (1979): the extent to which a measurement procedure yields the same results on repeated trials. This chapter considers a type of reliability known as inter-coder reliability, a central concern in most content analysis research that utilizes human coders. Inter-coder reliability assesses the consistency among the human raters involved in a content analysis of messages. For such human coding, reliability is paramount (Neuendorf, 2002): if a content-analytic measure depends on the skills of a particular individual, the investigation has not met the standards of scientific inquiry.
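To make the notion of consistency among raters concrete, the sketch below computes two common inter-coder agreement indices for a pair of coders: simple percent agreement and Cohen's kappa, which corrects agreement for chance. The coder names, category labels, and data are purely illustrative, not drawn from the chapter.

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of units on which the two coders assigned the same category."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa for two coders: (observed - expected) / (1 - expected),
    where expected agreement is derived from each coder's marginal category rates."""
    n = len(a)
    po = percent_agreement(a, b)                      # observed agreement
    ca, cb = Counter(a), Counter(b)                   # marginal category counts
    pe = sum((ca[c] / n) * (cb[c] / n)                # chance agreement
             for c in set(a) | set(b))
    return (po - pe) / (1 - pe)

# Hypothetical codings of six messages by two coders
coder1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
coder2 = ["pos", "neg", "neg", "neg", "pos", "neg"]
print(round(percent_agreement(coder1, coder2), 3))  # 0.833
print(round(cohens_kappa(coder1, coder2), 3))       # 0.667
```

The gap between the two numbers illustrates why chance-corrected coefficients such as kappa (or Krippendorff's alpha) are generally preferred over raw percent agreement when reporting inter-coder reliability.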