ABSTRACT

This chapter shows how use of the term 'reliability' to describe coding schemes in safety management has led to confusion, owing to conflicting definitions of the term. It discusses statistical measures of categorical agreement and recommends an approach that avoids unwarranted assumptions about 'chance' agreement. The chapter also outlines the steps necessary to validate coding taxonomies. The data are of general interest because they negate the assumption that replicable patterns are evidence of consensus in code or category assignment. Within this paradigm, the reliability of a taxonomy may be defined as the extent to which the codes generated 'agree' with the standard. A stated purpose of the system is to codify events so that trends and patterns can be identified within accumulated event data. However, high correlations between overall code frequencies do not mean that people will agree when asked to assign codes during event analysis.