ABSTRACT

Within the last decade, the forensic sciences have undergone a groundswell of scrutiny and criticism of their methods, practices, and standards. Among those leading the charge were Saks and Koehler (2005), who insisted that “sound scientific foundations and justifiable protocols” replace “untested assumptions and semi-informed guesswork.” Four years later, the long-anticipated, congressionally mandated report by the U.S. National Research Council (NRC 2009) concurred. The report found that forensic disciplines that had come from the biological or chemical sciences (e.g., nuclear DNA analysis, toxicology) had conducted more experimentation and validation of their methods than those derived from law enforcement (e.g., fingerprints, ballistics, toolmark analysis). In these forensic identification sciences, evidence is often presented to support conclusions of a match (i.e., to a particular person, firearm, etc.). Except for DNA analysis, however, these disciplines have rarely investigated the limits and uncertainties (i.e., error rates) of their methods or verified the assumptions that undergird their conclusions. Among the 13 recommendations in the NRC report was the need to “develop tools for advancing measurement, validation, reliability, information sharing, and proficiency testing in forensic science and to establish protocols for forensic examinations, methods, and practices” (NRC 2009, recommendation no. 6, p. 214).