ABSTRACT

Over the past 30 years, data collection, recording, analysis, reporting, and regulatory submissions have become heavily dependent on electronic computerized systems. Regulatory agencies worldwide have begun accepting submissions electronically, even allowing these applications to be signed off electronically. This departure from the traditional paper trail requires significant changes to data handling and places greater emphasis on validating regulatory submissions. In laboratories conducting bioequivalence studies, validation of computer systems is required in the following instances:

1. Record-keeping systems, including patient databases
2. Software controlling the operation of analytic equipment
3. Software used to evaluate data statistically and to store data

The systems currently proposed for validating the use of computers and software have an interesting historical background that is worth reviewing. In the 1970s, an error in matrix conversion was reported that was caused by numeric overflow; in the 1980s, the erroneous use of "n" instead of "n – 1" for the degrees of freedom invalidated automated analyses; in the 1990s, the credibility of a chip maker was questioned when its processor was shown to return inaccurate results for certain divisions. All of these software bugs have prompted greater emphasis on commercial off-the-shelf (COTS) products, which are fully validated. There is also greater emphasis today on collaborative research, resulting in projects such as the Human Genome Project; the cancer Biomedical Informatics Grid (caBIG™), an open source, open access, voluntary information network; and the Gates Foundation's requirement of data sharing for its $287 million funding of AIDS research. The conduct of these projects requires robust hardware and software systems across many platforms. Back in the 1960s, sponsors submitted FORTRAN code, and Food and Drug Administration (FDA) reviewers pored over each line of code; this became so onerous that the U.S. government funded the development of the Statistical Analysis System (SAS) software at North Carolina State University. The regulatory requirement for validation and verification has a bias toward COTS products, yet the recent CDRH draft guidance on Bayesian statistics mentions WinBUGS, and CDRH operates a Linux cluster. The Bayesian inference Using Gibbs Sampling (BUGS) project is concerned with flexible software for the Bayesian analysis of complex statistical models using Markov chain Monte Carlo (MCMC) methods. The project began in 1989 in the MRC Biostatistics Unit and led initially to the "classic" BUGS program, and then on to the WinBUGS software developed jointly with the Imperial College School of Medicine at St Mary's, London.
Development now also includes the OpenBUGS project at the University of Helsinki, Finland. There are now a number of versions of BUGS, which can be confusing. WinBUGS 1.4.1 features a graphical user interface with online monitoring and convergence diagnostics. The OpenBUGS project, based at the University of Helsinki, is an open source version of the core BUGS code with a variety of interfaces; under Linux it runs as LinBUGS. OpenBUGS is the main development platform and is currently experimental, but it will eventually become the standard version. Just Another Gibbs Sampler (JAGS), by Martyn Plummer, is open source software but not really a version of BUGS: JAGS uses essentially the same model description language but has been completely rewritten. Use of all of this software requires a good understanding of Bayesian statistical principles.

The available software can be classified into three categories. Open source software comprises programs distributed freely with source code; anyone can modify and redistribute them without any licensing restrictions. These programs are generally technology neutral and include such examples as OpenBUGS and its libraries; there are no regulations prohibiting the use of open source software. General Public License software executables include noncommercial "freeware" or "shareware"; examples include WinBUGS. Finally, there are custom-code, proprietary products such as SAS. 21 CFR Part 11, Section 11.10, Controls for closed systems, has the following requirements:

a. Validation to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records

b. Accurate and complete copies of records in both human readable and electronic form suitable for inspection, review, and copying by the agency

c. Protection of records throughout the record retention period

d. Limiting system access to authorized individuals

e. Use of secure, computer-generated, time-stamped audit trails

f. Use of operational system checks, authority checks, and device checks

g. Education, training, and experience of operators, and holding individuals accountable

h. Systems documentation
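Requirement (e), a secure, computer-generated, time-stamped audit trail, can be illustrated with a minimal sketch. The following Python class is a hypothetical, simplified illustration (not a Part 11-compliant implementation): each entry carries a machine-generated UTC timestamp and is chained to the previous entry by a SHA-256 hash, so that any later alteration of a stored record is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Minimal append-only, hash-chained, time-stamped audit trail (illustrative only)."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # placeholder hash before the first entry

    def record(self, user, action, detail):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),  # computer-generated
            "user": user,
            "action": action,
            "detail": detail,
            "prev_hash": self._last_hash,  # chains this entry to the previous one
        }
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the hash chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            serialized = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(serialized).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("analyst1", "modify_record", "corrected subject sample time")
assert trail.verify()
trail.entries[0]["detail"] = "tampered"  # any alteration breaks the chain
assert not trail.verify()
```

A production audit trail would also need secure storage, access controls, and retention handling; the hash chain shown here addresses only the tamper-evidence aspect of the requirement.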

Software validation principles include the following:

a. Good software engineering to support the final conclusion that the software is validated.

b. An approach based on the intended use and the safety risk associated with the software.

c. Software validation and verification conducted throughout the entire software life cycle.

d. The party with regulatory responsibility needs to establish that the software is validated for the intended use.

e. Software validation is a matter of developing a level of confidence.
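Principle (c), verification throughout the life cycle, is well illustrated by the 1980s degrees-of-freedom bug described earlier: using "n" instead of "n – 1" biases the sample variance downward, and a simple automated check against an independent reference catches it. The Python sketch below is illustrative only; the function and test names are hypothetical.

```python
import statistics

def sample_variance(data):
    """Sample variance with the n - 1 (Bessel) correction.

    Dividing by n instead of n - 1 is exactly the 1980s bug
    described above: it biases the variance estimate downward.
    """
    n = len(data)
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / (n - 1)

def test_sample_variance_uses_n_minus_1():
    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    # Cross-check against an independently implemented reference.
    assert abs(sample_variance(data) - statistics.variance(data)) < 1e-12
    # Mean is 5, sum of squared deviations is 32: the correct answer
    # is 32/7; a buggy n-denominator version would return 32/8 = 4.0.
    assert abs(sample_variance(data) - 32.0 / 7.0) < 1e-12

test_sample_variance_uses_n_minus_1()
```

Running such checks at every stage of the life cycle, rather than once at release, is what builds the "level of confidence" that principle (e) describes.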