Use of cell proliferation data in cancer risk assessment: FDA view.

The possible uses of cell proliferation data in cancer risk assessment can be divided into three categories: direct use of mathematical models that incorporate rates of cell proliferation, use of experimental data on secondary mechanisms produced by cell proliferation, and use of studies of cellular growth rates to extend the dose range of bioassay data. These three approaches are briefly discussed and some indication of their potential application to cancer risk assessment is outlined.


Introduction
Our job at the Food and Drug Administration (FDA) is to evaluate the safety of foods, drugs, cosmetics, devices, and biologics. With respect to the potential carcinogenicity of any of these regulated products, our biggest problem, by common consent of at least three Centers (Foods, Drugs, and Veterinary Medicine), is how to interpret the conventional high-dose rodent bioassay.
In the foods area, our cancer issues relate to a) substances that have been on the market for some time that turn out to be animal carcinogens on the basis of newer bioassay data, for example, cyclamate, saccharin, Red No. 3, d-limonene, sugar alcohols, butylated hydroxyanisole (BHA), and butylated hydroxytoluene (BHT); b) carcinogenic contaminants that appear in foods at low doses, such as polynuclear aromatics (PNAs), dioxins, furans, polychlorinated biphenyls, nitrosamines, and aflatoxins; and c) natural carcinogens in foods, which we currently accept but which may in the future turn out to be a far greater issue. The first category is probably self-limiting, because we will gradually deal with these substances, and our statutes on food additives will not permit approving new substances with carcinogenic activity. The second and third categories will probably increase as analytical chemists find more and more contaminants at lower levels in food and regulated food products.
Office of Toxicological Sciences, Center for Food Safety and Applied Nutrition, Food and Drug Administration, Washington, DC 20204.

This paper was presented at the Symposium on Cell Proliferation and Chemical Carcinogenesis that was held January 14-16, 1992, in Research Triangle Park, NC.
Our job is to evaluate the potential human risks in a credible manner, so they can be rationally controlled. This means using the relevant scientific information in a way that will enlist the support of the majority of expert opinion. We lose credibility in the scientific community when we fail to consider clear signals or relevant biological activity like mutation and cell division in the interpretation of carcinogenicity data.
Current risk assessment procedures were developed in the days before data on mutation rates and cell proliferation could be obtained. Although in the 1950s and 1960s there was a general appreciation of the role and importance of mutation and clonal expansion, all one had in the way of data were tumor incidences at high doses in rodents; that was a consequence of study design. In 1979, FDA gathered an illustrious panel of cancer experts to advise the agency on dealing with the burgeoning problem of new pesticides and novel food additives, which seemed to some to be new and strange chemicals that might be potentially carcinogenic. The panel set the early carcinogen testing and evaluation policy for the agencies established later, such as the Environmental Protection Agency and the Occupational Safety and Health Administration. I believe we have never attempted to model cancer risk quantitatively. What we did was attempt to place an upper limit on the cancer risk, using an approach and a series of assumptions that collectively assured that we were being conservative. It was primarily a regulatory exercise, not a scientific one. The fact that specific features, like the statistical modeling of the incidence rates, could be treated with great sophistication, while virtually all of the biological weaknesses (including the impact of high doses on metabolism, toxicity, and pharmacokinetics) were ignored, did not make it more scientific. This leads to the current subject and the first category of my arbitrary division of how cell proliferation data might be used in risk assessment.
I divide these possibilities of using cell proliferation data in carcinogen risk assessment into three categories: a) direct use of mathematical models that incorporate cell proliferation, b) use of experimental data on secondary mechanisms produced by cell proliferation, and c) use of studies of cellular growth rates to extend the dose range of bioassay data.

Models
Models of the cancer process have been developed that incorporate cellular proliferation rates as well as rates of genomic transition. Examples are the MVK model (1), the Neyman-Scott model (2), and the more empirical model of Cohen and Ellwein (3). In somewhat different ways, these models allow chemically induced stimulation of cell proliferation to be included in a biologically explicit manner. For example, in an early version of the MVK model (1), an approximation to the cancer incidence at age t is given by

I(t) ≈ μ0 μ1 ∫0^t C0(u) exp[(B1 − D1)(t − u)] du

where C0(u) is the growth curve of the normal susceptible tissue, μ0 and μ1 are the two transition rates, and B1 and D1 are the birth and death rates of initiated cells. There are three important parameters in the model: the product μ0μ1, the difference in cell birth and death rates, B1 − D1, and the scale parameter of the normal tissue growth curve, C0. The transition rates, μ0 and μ1, are multiplicative factors that affect the overall incidence of the cancer in question, but they do not influence the shape of the incidence curve. The shape of the incidence curve is strongly influenced by the growth curve of normal tissue at the site of the potential tumor [C0(u)] and by the cellular kinetics of the initially transformed cells. Cellular proliferation rates enter the model structure in different ways, depending on what is assumed about the mechanism of action. The explicit dependence on the rate of clonal expansion of initiated cells occurs through the exponential (B1 − D1) term. If it is also assumed that the chemical causes toxic damage and induces regenerative hyperplasia to replace lost cells, then there will be an increase in the proliferation of normal stem cells, i.e., an increase in the number of cells susceptible to transformation. This mechanism can be factored into the model by allowing C0(u) to be some function of the birth and death rates of stem cells, which are themselves taken to be functions of dose.
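To make the structure of the MVK approximation I(t) ≈ μ0μ1 ∫0^t C0(u) exp[(B1 − D1)(t − u)] du concrete, the sketch below evaluates it numerically. The logistic form chosen for C0(u) and every parameter value are hypothetical illustrations, not estimates from any data set.

```python
# Numerical evaluation of the approximate MVK incidence,
#   I(t) ~ mu0 * mu1 * integral_0^t C0(u) * exp((B1 - D1)(t - u)) du.
# All parameter values below are hypothetical, chosen only for illustration.
import math

def C0(u, K=1e7, r=0.5):
    # Hypothetical logistic growth curve for the normal stem-cell
    # population: starts at one cell, saturates at K cells.
    return K / (1.0 + (K - 1.0) * math.exp(-r * u))

def incidence(t, mu0=1e-7, mu1=1e-7, B1=0.11, D1=0.10, steps=1000):
    # Trapezoidal integration of the convolution integral.
    du = t / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * du
        w = 0.5 if i in (0, steps) else 1.0
        total += w * C0(u) * math.exp((B1 - D1) * (t - u))
    return mu0 * mu1 * total * du

# Incidence rises with age as initiated clones expand.
print(incidence(50.0) < incidence(70.0))  # True
```

Because the clonal-expansion term enters through an exponential, modest changes in B1 − D1 dominate the predicted incidence at later ages, which is the point the text develops next.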
The model has implications for the extrapolation of risks to low-dose exposures. For example, if an agent affects only the first transition rate, μ0, which is assumed to be a linear function of dose, then the model predicts that cancer incidence at a given age is likewise a linear function of dose. However, if clonal expansion through the (B1 − D1) term is assumed to be a linear function of dose, then the incidence at a fixed age is far from a linear function of dose.
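The contrast between the two mechanisms can be shown with a toy calculation. Here a constant normal-cell population is a simplifying assumption, and all numerical values are hypothetical illustrations.

```python
# Toy comparison of two dose mechanisms in the MVK-style approximation,
# at a fixed age. All parameter values are hypothetical illustrations.
import math

def incidence(mu0, net_growth, t=70.0, mu1=1e-7, C0=1e7, steps=1000):
    # Constant normal-cell population C0 is a simplifying assumption.
    du = t / steps
    total = sum(C0 * math.exp(net_growth * (t - i * du)) * du
                for i in range(steps))
    return mu0 * mu1 * total

# Mechanism (a): dose acts linearly on the first transition rate mu0.
ratio_a = incidence(mu0=2e-7, net_growth=0.1) / incidence(mu0=1e-7, net_growth=0.1)

# Mechanism (b): dose acts linearly on the net clonal growth rate B1 - D1.
ratio_b = incidence(mu0=1e-7, net_growth=0.2) / incidence(mu0=1e-7, net_growth=0.1)

print(round(ratio_a, 6))  # doubling mu0 exactly doubles the incidence
print(ratio_b > 2.0)      # doubling B1 - D1 raises it far more than twofold
```

Doubling the dose through μ0 doubles the incidence, while doubling it through the exponential clonal-expansion term multiplies the incidence many times over, which is why low-dose extrapolation depends so strongly on the assumed mechanism.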
It is worth repeating what Moolgavkar and Knudson stated in 1981 (1): "Sensible extrapolations can be made only when the mode of action of the...agent is known." We can expect that the incidence rates predicted from these models will eventually become more credible than estimates from the present default models because of the inclusion of more biology. We also expect that the predicted low-dose risks have the potential to be lower than the predictions of the default models because of the explicit incorporation of the potentially dramatic effect of dose on rates of proliferation of both transformed and normal cells. Another and more immediate benefit is, and will continue to be, the ability to formulate and test various mechanistic hypotheses about the cancer process.
However, the actual application of these richer biological models to the direct assessment of human risk requires a more detailed understanding of the biological mechanisms than we now have. It also requires the detection and measurement of initiated cells and their size distribution over time. We also need to know more precisely how the dose of the chemical affects the birth and death rates of cells.
Given the demands for detailed mechanistic information and accurate data measurement of the parameters that these models make, it will be some time before they can be used as substitutes for the current default risk assessment procedures. These newer models are better than the default models, but in their demand for real data on specific mechanistic events, they make it clear how little we really know about cancer risk extrapolation. I think we will be using smaller pieces of the full mechanistic picture as adjuncts to conventional default models well before we rely totally on the newer models.

Secondary Mechanism
Cellular proliferation is often the underlying mechanism producing detectable and measurable toxicological changes in tissue that may be necessary prerequisites to the development of tumors. For example, in the case of d-limonene, the binding of a metabolite of the agent to α2u-globulin results in blockage of the kidney tubule, necrosis, cell death, and compensating cellular proliferation (5). Ultimately, tumors arise at these sites in the male rat kidney, and they are believed to be a consequence of the preceding events (5). A similar scenario probably takes place with saccharin: the formation of coarse-surfaced bladder stones as a consequence of feeding very large doses of sodium saccharin leads to chronic irritation of the bladder epithelium, hyperplasia, and eventually tumors (3).
In both cases, there are compelling reasons to believe that the lesions observed before the onset of the tumors are both necessary and sufficient to produce them. At the cellular level, the crucial event appears to be the onset of rapid cell proliferation, but in these two cases, there are also prior, associated lesions that can be observed independently. These are toxic lesions, resulting from noncarcinogenic processes and, as such, may exhibit thresholds. We can establish a safe level for the substance by the conventional means of determining the threshold and applying a suitable safety factor.
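The threshold-plus-safety-factor calculation referred to here is the conventional one; the sketch below uses a hypothetical NOAEL and the common default 100-fold factor purely for illustration.

```python
# Sketch of the conventional threshold-plus-safety-factor calculation.
# The NOAEL and safety-factor values are hypothetical illustrations.
def acceptable_daily_intake(noael_mg_per_kg, safety_factor=100.0):
    # A common default combines a 10x factor for interspecies differences
    # with a 10x factor for intraspecies variability, giving 100-fold.
    return noael_mg_per_kg / safety_factor

# e.g., a hypothetical NOAEL of 50 mg/kg/day for the precursor lesion:
print(acceptable_daily_intake(50.0))  # 0.5 mg/kg/day
```

The point is that once the crucial event is a toxic, thresholded lesion rather than the tumor itself, the familiar NOAEL-and-safety-factor machinery applies.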
Of course, there are also some caveats to consider. We need evidence that the proposed lesions actually occur, i.e., the demonstration of preneoplastic kidney necrosis and the demonstration of bladder stones at the appropriate tissue sites before the formation of the tumors. We need evidence that the proposed mechanism is biologically plausible. We need evidence that the tumors are not also being produced by some independent mechanism. As a specific example of the latter, we would want evidence that the substance or its metabolites are not mutagenic.
This seems to be a promising approach for the near term because it, in essence, relies on our ability to identify crucial toxicological events prerequisite to certain cancers without having to explain them at the molecular level. For these cases, we reduce the cancer risk assessment problem to one we already know how to solve, that of organ-specific toxicity.
In some instances, we may be unable to find a lesion associated with abnormal proliferation. For example, high levels of dietary sodium chloride can increase susceptibility to chemically induced gastric cancer, presumably through the regenerative hyperplasia induced in an effort to replace the cells killed by the salt. The only visible lesion is the hyperplasia itself. Similarly, BHA produces tumors only in the forestomach of rodents, and the only prior lesion is the hyperplasia itself. The rapid onset of the hyperplasia, its dependence on high dose, its reversibility on removing the agent, and the plausibility of the mechanism on theoretical grounds all support the idea that the induced abnormal cellular growth is the cause of the cancer. But the case is more difficult to prove because regenerative hyperplasia alone does not always lead to cancer.

Direct Use of Cell Proliferation Rates
When we cannot identify a secondary mechanism, there still may be the possibility of augmenting the statistical power of the bioassay by using cell proliferation rates directly. If we believe that cell proliferation rates increase during the formation of some tumors, then the dose response of the tumors and that of the accompanying cell proliferation should be relatable. If we measure the rate at which cell proliferation increases with dose, it should reflect and perhaps predict tumor growth. But the particular cells to be measured must be reliably identified, as there is still controversy over the capability of current enzyme markers to detect "initiated cells" (5).
We may be able to more reliably measure cell proliferation than tumor formation for several reasons. First, the number of cells available for observation and measurement will surely exceed the number of animals typical of the carcinogen bioassay and will give the observations more statistical power. Second, we may be able to apply more technical sophistication to the measurement of cell growth and identify the process earlier and at lower doses. Such techniques may allow us to extend the dose-response curve downward an order of magnitude or so, and from that point on we could apply our linear default with less error.
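A minimal sketch of this idea, using entirely hypothetical labeling-index data: find the lowest dose showing a measurable proliferative increase over background, then apply the linear default from that anchor point downward.

```python
# Sketch of using a cell-proliferation dose-response to anchor a linear
# default extrapolation below the bioassay range. All data are hypothetical.

# Hypothetical labeling-index measurements (dose in mg/kg/day, response as
# the fraction of cells in S phase), extending ~10x below bioassay doses.
doses = [1.0, 3.0, 10.0, 30.0, 100.0]
labeling_index = [0.011, 0.013, 0.021, 0.045, 0.130]

background = 0.010  # assumed spontaneous labeling index

def linear_default_slope(d, r, bg):
    # Slope through the lowest dose showing a measurable increase,
    # forced through background (the linear-at-low-dose default).
    for dose, resp in zip(d, r):
        if resp > bg:
            return (resp - bg) / dose
    return 0.0

slope = linear_default_slope(doses, labeling_index, background)
# Extrapolated excess response at a low environmental dose of 0.01 mg/kg/day:
print(slope * 0.01)
```

Because each dose group contributes thousands of scorable cells rather than a handful of tumor-bearing animals, the anchor point for the linear default can sit an order of magnitude or more below the bioassay range, as the text suggests.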