ABSTRACT

Validation has been practiced within the global healthcare industry since the early 1970s. It was introduced in response to the detection of microbial contamination in terminally sterilized drug products that had satisfactorily passed compendial sterility testing.* An underlying premise of the initial regulatory imposition of validation for sterilization processes was that “quality cannot be tested into a product.”† Quite properly, the risk of microbial contamination was judged too great to be assured by the statistically limited USP <71> Sterility Test.1 To mitigate the risk of contamination passing undetected, validation of sterilization procedures became a regulatory expectation. Thus, validation has served from its inception as a risk mitigation tool, employed as such well in advance of any formal consideration of risk management. In hindsight, validation and risk were inextricably linked all along; that connection, however, has only recently been made explicit.