ABSTRACT

This chapter introduces the simplest of all α-agreement measures applicable to the categorization of given units of analysis. There are three motivations for starting simple. The first is to counter the claim that Krippendorff’s alpha is too complicated to calculate by hand. The second is to provide total novices, undergraduates and researchers who have never dealt with reliability issues, with a basic understanding from which to go on to the other chapters, which introduce complications that not everyone needs to handle. The third is to provide what it takes to understand the following two chapters.

It begins by defining what is common to almost all other chapters: the canonical or basic form of reliability data, here exemplified by two independent observers replicating the categorization of a set of phenomena. The observed disagreement, which sums all mismatching pairs of categories assigned to units, is conveniently tabulated in a matrix of observed coincidences. The marginal sums of this matrix give rise to an estimate of the expected coincidences, those that would be observed if the phenomena of interest were unrelated to the data in hand. From these two disagreements the agreement coefficient is defined as α = 1 − (observed disagreement / expected disagreement).
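For readers who want to check the arithmetic programmatically, the following is a minimal Python sketch of this computation for nominal categories without missing values. The function name, variable names, and example data are illustrative only and are not part of the chapter; the sketch assumes every unit is judged by at least two observers.

    from collections import Counter
    from itertools import permutations

    def alpha_nominal(units):
        # units: one list per unit of analysis, holding the category
        # each observer assigned to that unit (no missing values).
        coincidences = Counter()
        for values in units:
            m = len(values)  # number of observers judging this unit
            for c, k in permutations(values, 2):  # all ordered pairs within a unit
                coincidences[(c, k)] += 1.0 / (m - 1)

        n = sum(coincidences.values())  # total number of pairable values
        marginals = Counter()           # marginal sums of the coincidence matrix
        for (c, _), count in coincidences.items():
            marginals[c] += count

        # Observed disagreement: proportion of mismatching coincidences
        d_o = sum(cnt for (c, k), cnt in coincidences.items() if c != k) / n
        # Expected disagreement: mismatches expected by chance, from the marginals
        d_e = sum(marginals[c] * marginals[k]
                  for c in marginals for k in marginals if c != k) / (n * (n - 1))
        return 1.0 - d_o / d_e

    # Two observers categorizing ten units (made-up data, one mismatch)
    data = [["a", "a"], ["a", "a"], ["b", "b"], ["b", "a"], ["b", "b"],
            ["c", "c"], ["c", "c"], ["a", "a"], ["b", "b"], ["c", "c"]]
    print(alpha_nominal(data))  # ≈ 0.857 for these data

Because the coincidence matrix is built from all ordered pairs of values within each unit, the same function also covers the case of more than two observers, as long as no values are missing.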

All steps are demonstrated by easily traceable numerical examples. The chapter reduces the initial example to the simplest possible α for binary judgments and extends α to more than two observers.