Chapter 3

Recent Advances in Cancer Risk Estimation

A variety of possible approaches to the analysis of bioassay data for risk estimation were considered in the early stages of development of this methodology. However, its widespread use is usually considered to have started with the development of the linearized multistage model, and of software to fit it to cancer incidence data, by Crump et al. [1, 2] in the late 1970s and 1980s. These methodological developments were supported by recommendations for their use, initially by Anderson and the Carcinogen Assessment Group of the United States Environmental Protection Agency (USEPA) [3]. This was followed by formal USEPA risk assessment guidelines [4]. The state of California also developed risk assessment guidelines at this time [5]. The basic methodology was also extended to include fitting of time-to-tumor data to a time-dependent version of the underlying multistage cancer model, which can be valuable in the analysis of datasets with substantial intercurrent mortality or variable dosing schedules [6]. The methods thus established continued in use well into the twenty-first century, although more user-friendly versions of the software were developed in parallel with the exponential increase in the power of personal computers [7]. An alternative approach to cancer dose-response analysis was proposed by Moolgavkar and Knudson [8]. Their model was designed to include a quantitative accounting for cell division, resulting in expansion of a clone of mutated cells, and for cell death or terminal differentiation, which removes cells from the pool of those capable of further proliferation. The model in principle allows for several successive stages of mutation on the way to the final appearance of a fully malignant clone of tumor cells, as has been observed in actual human tumors [9]. However, the mathematical complexity of such cell proliferation models has generally limited their implementation to no more than two stages of successive mutation.
This type of model has stimulated a lot of research and discussion of possible mechanisms but has not in practice been widely used in risk assessments for regulatory purposes, because of the large number of parameters required to be determined for the model and the necessity of using independent measures or estimates for some of these, especially cell proliferation rates. Extensive analyses using this type of methodology were developed, for example, for formaldehyde [10], but this approach has not, at least to date, appeared in a final USEPA toxicological review for that compound.

In parallel with the development of these methods and tools, which mainly modeled the underlying toxicodynamic features of cancer dose response, there has been extensive development of toxicokinetic modeling, especially the more or less realistic physiologically based pharmacokinetic (PBPK) models. It is often observed that the uptake, metabolism, and elimination of the carcinogenic substance (and/or a procarcinogen and metabolites) is nonlinear, especially at the higher doses employed in experimental animal studies [11, 12]. This nonlinearity, often appearing as a leveling off or “saturation” of the dose response at higher applied doses, presents difficulties in fitting the data with the typical multistage model. Starting with initial studies of a number of volatile toxicants such as styrene [13], methylene chloride [14], and perchloroethylene [15], PBPK models were used to determine internal dose metrics at the target site(s) for tumorigenesis. This often resulted in a better fit to the multistage model than could be obtained with an applied dose metric. PBPK modeling was also used to inform the extrapolation from animal test species to humans, although this sometimes involved large uncertainties because of the scarcity of reliable toxicokinetic data to parameterize the human models. However, extensive use of these techniques subsequent to these early examples has improved the methods and increased confidence in them, to the point where it is now more or less standard practice to at least evaluate whether use of PBPK modeling and appropriate internal dose metrics is informative when deriving a cancer potency estimate. PBPK modeling has also been used in many risk analyses to address the question of interindividual variability in the target species (humans) as well as in the test species [16].
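As a minimal sketch of why an internal dose metric can rescue the fit, consider a saturable Michaelis-Menten metabolic step feeding a simple two-term multistage response. All rate constants and coefficients below are invented for illustration; they are not taken from any published PBPK model.

```python
import math

def internal_dose(applied_dose, vmax=10.0, km=25.0):
    """Michaelis-Menten rate of metabolite formation: roughly linear
    (slope vmax/km) at low doses, saturating toward vmax at high doses.
    vmax and km are illustrative values only."""
    return vmax * applied_dose / (km + applied_dose)

def multistage_response(d, q1=0.002, q2=0.0001):
    """Two-term multistage dose response: P(d) = 1 - exp(-(q1*d + q2*d^2)).
    Coefficients are illustrative, not fitted to real data."""
    return 1.0 - math.exp(-(q1 * d + q2 * d * d))

# At high applied doses the internal metric flattens, so a tumor response
# driven by the internal dose also plateaus -- the "saturation" shape that
# a multistage model fitted to applied dose struggles to capture.
for applied in (10.0, 50.0, 100.0, 500.0):
    internal = internal_dose(applied)
    print(applied, round(internal, 2), round(multistage_response(internal), 4))
```

Substituting the internal metric for applied dose in this way is what allows the ordinary multistage form to fit data that level off at high administered doses.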
In addition to these developments in the quantitative methodology for risk assessment, it is important to recognize the continuing expansion of the database of studies that provide the input data for these calculations. The development of the National Toxicology Program’s (NTP’s) cancer bioassays (now with 611 printed long-term study reports according to the current Management Status Report) has provided a key resource for cancer incidence data on compounds of interest, for identification of potentially carcinogenic chemicals, for characterization of both cancer and noncancer pathology associated with exposure to these chemicals, and for the quantitative data necessary to calculate potency values. This program [17] has also developed an important quality standard for the design, implementation, and reporting of long-term bioassays. It is also important to recognize the contribution of the International Agency for Research on Cancer (IARC). Although the monograph series only addresses hazard identification rather than quantitative risk assessment, this obviously is an essential first step in identifying substances for evaluation from the dose-response perspective. The IARC has also, via its successively updated preambles to the monograph series [18], made important contributions to the debate on study evaluation criteria and on the inclusion of supporting data such as genetic toxicity, studies of mechanism, and chemical structure-activity comparisons.

This evolutionary approach, and the relatively established position of the linearized multistage method, have been considerably revised in the last 10 years. The immediate stimulus for many of these changes was the publication of the USEPA’s revised guidelines for carcinogen risk assessment in 2005 [19]. This document was the final product of a lengthy effort to update the original 1986 guidelines [4], which had previously resulted in “proposed” [20] and “interim final” [21] draft guidelines. Several of the changes in carcinogen risk assessment methodology that have been introduced recently were prefigured in those earlier draft guideline proposals and have become more or less standard practice since the availability of the final guidelines. Another component of the discussion on methodology was the risk assessment guidelines published by the state of California’s Air Toxics Hot Spots program [22].
Also, various inputs from the National Science Foundation, while not necessarily endorsing specific methodologies, encouraged the updating of guidelines for carcinogen risk assessment methodology [23] and provided comment on specific risk assessments. This affected the form of hazard assessment documents by encouraging the provision of greater detail on systematic literature review [24] and analysis of methodological data. Among the various recent changes and emerging concepts, several are presented here as being of particular interest:

• Replacement of the longstanding linearized multistage model for cancer with the benchmark dose (BMD) method as the standard tool for dose-response analysis of both cancer and noncancer toxicity data
• Interest in allowing for greater sensitivity to early-in-life exposures to carcinogens
• Development and increasing acceptance of methods for generating an overall potency estimate for cancer incidence after exposure to multisite carcinogens
• Incorporation of mechanistic data into risk assessments
• Potential for use of data from high-throughput screening methods and other novel experimental methods in risk assessment

3.2 Benchmark Dose Method

Dissatisfaction with the statistical inadequacies of the traditional lowest-observed-adverse-effect level (LOAEL)/no-observed-adverse-effect level (NOAEL) method of analyzing noncancer health effect data led to the proposal of an alternative approach, described by Crump [25]. This method, referred to as BMD analysis, used mathematical models to fit the response data across all dose levels examined in the study and, by means of this mathematical fit, identified a BMD (and, specifically, the 95% lower confidence limit on this estimate, referred to as the BMDL) corresponding to a standardized response rate, usually 5% or 10% for dichotomous data. Health-protective levels were then selected by applying uncertainty factors to this BMDL, much as they are applied to LOAELs and NOAELs. This approach was widely tested for a range of noncancer data types, especially in the early stages with developmental toxicity data, which present particular statistical problems that are hard to accommodate in the LOAEL/NOAEL methodology. Eventually guidelines [26] for the use of this methodology were developed and applied generally for noncancer risk assessment. Concurrently with this development of BMD methodology for noncancer effects, consideration was given to its use for cancer risk assessment [17].
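To make the fitting-and-inversion procedure concrete, here is a minimal sketch using invented bioassay counts and the simplest one-parameter ("one-hit") member of the multistage family. Real BMD software fits richer models and profiles the likelihood to obtain the BMDL; this sketch stops at the central BMD estimate.

```python
import math

# Invented dichotomous bioassay data for illustration only:
# (dose, animals on test, animals with tumors)
DATA = [(0.0, 50, 0), (10.0, 50, 5), (30.0, 50, 14), (100.0, 50, 35)]

def neg_log_likelihood(q):
    """Binomial negative log-likelihood of the one-hit model
    P(d) = 1 - exp(-q*d), summed over all dose groups."""
    nll = 0.0
    for dose, n, tumors in DATA:
        p = 1.0 - math.exp(-q * dose)
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
        nll -= tumors * math.log(p) + (n - tumors) * math.log(1.0 - p)
    return nll

def fit_q(lo=1e-6, hi=1.0, iters=200):
    """Ternary search for the maximum-likelihood q; the one-hit
    log-likelihood is concave in q, so this converges."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if neg_log_likelihood(m1) < neg_log_likelihood(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

q_hat = fit_q()
# For this model the extra risk at dose d is 1 - exp(-q*d), so the dose
# giving 10% extra risk (the BMD10) inverts in closed form:
bmd10 = -math.log(1.0 - 0.10) / q_hat
```

In practice the BMDL, not the BMD, is carried forward (for example, by finding the smallest dose consistent with the data at the 95% confidence level via a likelihood-ratio profile), and uncertainty factors or low-dose extrapolation are applied to that lower bound.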
Use of the BMD approach for cancer risk assessment was partly prompted by an interest in reconciling the previously very different dose-response analysis methods for cancer and noncancer effects. There was also a concern that although the multistage model as originally proposed by Armitage and Doll [27] had been fairly successful in describing cancer dose-response curves quantitatively, it was increasingly clear that its assumed correspondence with actual biological mechanisms [9] was very limited. Even somewhat more realistic models in reality fell some way short of fully describing the true biological mechanism of action, and these had not been much used for risk assessment because of their mathematical complexity and uncertainties in the values of the many key parameters. The BMD approach was therefore attractive since it is applicable, with appropriate extrapolation strategies, to both cancer and noncancer incidence data, and the justification of the model used to fit the data is based purely on the quality of fit to those data rather than any a priori assumption that the model corresponds to actual chemical or biological events. The adoption of this methodology as the default approach for quantitative cancer risk assessment has resulted from the publication of final guidelines [16, 19] recommending its use, and also from the development by the USEPA of software (BMDS) and supporting documentation [28] to implement the method, starting development in 1995 with release of an initial version in 1999 and with many revisions and extensions since then. Reflecting the initial areas of application of this methodology, the first versions of this software were primarily designed around the needs of noncancer data analysis, although a dichotomous multistage model was included from the start, and in fact Crump had pointed out in his original publication [25] that the linear, quadratic, and polynomial models that he evaluated were similar to those used for cancer analysis, although with fewer constraints on possible parameter values.
In 2007, a version of BMDS was released with a multistage cancer model that incorporates the constraints (in particular, extra risk calculation and non-negative values for β coefficients) detailed by the USEPA 2005 guidelines and provides a unit risk calculation. This reflects a recent consensus that the multistage polynomial model is in general the best mathematical fit for cancer incidence data. Departures from this pattern can most commonly be accommodated by using toxicokinetic models, as noted previously, and/or by mortality corrections such as the poly-3 correction favored in recent NTP bioassay reports. It is perhaps worth noting that the model referred to here is specifically the “multistage” model, in contrast to the “linearized multistage” model previously used in cancer risk assessment.
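The two calculations named here, extra risk and unit risk, can be sketched for a hypothetical set of fitted multistage coefficients. The β values below are invented; a real analysis would take them, and a BMDL rather than the central BMD used here, from the BMDS fit.

```python
import math

# Hypothetical multistage coefficients (beta_i >= 0, per the 2005
# guidelines' constraint); invented for illustration, not fitted values.
BETAS = [0.02, 0.005, 0.0002]   # beta_0 (background), beta_1, beta_2

def prob(d):
    """Multistage model: P(d) = 1 - exp(-(b0 + b1*d + b2*d^2 + ...))."""
    return 1.0 - math.exp(-sum(b * d**i for i, b in enumerate(BETAS)))

def extra_risk(d):
    """Extra risk (P(d) - P(0)) / (1 - P(0)); for the multistage model
    the background term cancels, so only b1, b2, ... matter."""
    return (prob(d) - prob(0.0)) / (1.0 - prob(0.0))

def bmd(bmr=0.10, hi=1e6):
    """Dose at which extra risk equals the benchmark response (BMR),
    found by bisection; extra_risk is monotone increasing in dose."""
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if extra_risk(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

bmd10 = bmd(0.10)
# Unit risk: linear slope from the point of departure down to zero dose.
# BMDS uses the BMDL (95% lower bound on the BMD) here; the central BMD
# is substituted purely for illustration.
unit_risk = 0.10 / bmd10
```

Because the background β₀ cancels out of the extra risk, constraining the remaining coefficients to be non-negative guarantees a monotone dose-response above background, which is what makes the bisection for the BMD well defined.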