Response: Science, Public Health, and Objectivity: Research into the Accident at Three Mile Island


small tracts. We then compared cancer incidence and mortality rates in the tracts most heavily exposed with those less exposed, having taken account of both background radiation and routine plant emissions.
What Wing et al. (2) themselves did about relative dose is not clear to us. In their paper, no description was apparent, nor did we recognize any consideration of background radiation or routine emissions, both strong features of our overall analysis. We assume that they made use of our estimates of radiation distribution from the accident.
In our analysis, we judged observations after the accident to be the critical test in making adjustments for baseline values. We were cautious in adjusting for demographic and other such variables from the situation existing before the accident because of uncertainties in these data. No information was to be had about subsequent migration, and the target population could only be that exposed to the accident and remaining in the district thereafter.
In any case, in the matter of cancers as an outcome, our study sought effects of the accident strictly in one direction. On this ground, there would seem to be reason to adjust for the baseline, but only after a positive effect was observed, and this we did. An apparent effect could always be a consequence merely of the previous distributions of cancer existing in the affected areas. Nonetheless, the data were in the main presented stratified by area for postulated exposure level and by time period. (We see no point in the fuss Wing makes about cancer incidence data from 1975, the first of 5 preaccident years, that we concluded were undercounted. In the absence of detectable geographic bias, our decision to include them, and Wing et al.'s decision to exclude them and adjust their results, are equally justifiable.) There is neither mystery nor obfuscation in our presentation of the data. We are not sure we can say the same for Wing et al.
They charge that we were constrained in our analyses in respect of emissions estimated by the judge's antecedent order. Certainly, we had no direct access to the records of the TMI Utility, but as far as we know, what was available was published. Of course, in using our models Wing et al. (2) operate under exactly the same constraint. We do not see that they find anything of note not reported by us and, indeed, they report rather fewer results than we do and in a less accessible manner.
Contrary to yet another allegation, our recommendation was firm [to the TMI Public Health Fund and also in print (4,8)] that a follow-up was needed, both to allow larger numbers of cases to accumulate in the aftermath of the accident and to collect individual level data on possible exposure and confounding.
In sum, then, Wing et al. (2) make assertions about what they take to be proven effects while we are cautious in accepting them as proven. It is a stretch to rate this difference, which your journal has given such prominence, as a controversy. Can it be said, in truth, that by going into contention Wing et al. have advanced the cause of the community or the environment? As we see it, they have done no more than muddy the waters.
Response: Science, Public Health, and Objectivity: Research into the Accident at Three Mile Island

Although controversies over scientific findings are common, the topic of health effects of ionizing radiation has generated an exceptional amount of heat. Despite a century of research since Roentgen's discovery of X rays, fundamental disagreements exist over biophysical mechanisms, dose-response assumptions, analytical strategies, interspecies extrapolations, and the representativeness of studies of select human populations (1-7). In the United States, the last decade has seen revelations about human radiation experimentation (8) and a shift in responsibility for radiation health effects research from the Department of Energy to the Department of Health and Human Services, stimulated by concerns over secrecy and conflict of interest (9,10). These disagreements have been amplified by public and scientific debates over military, energy, and medical applications of nuclear technology (11).
As one of the best known technological failures of the nuclear era, the 1979 accident at the Three Mile Island nuclear power plant has generated its share of controversy, most recently in the pages of Environmental Health Perspectives (12-16). In his letter, Susser raises a number of important issues related to the context and logic of research on health effects from the 1979 nuclear accident at Three Mile Island (TMI) (17). We would like to follow his lead by giving some background regarding our involvement in the study of cancer incidence in the 10-mile area around TMI and also respond to some of his specific points regarding the logic and methods of the original study and our reanalysis.
Susser notes that he and his colleagues did not seek the opportunity to study cancer incidence around TMI, but were asked to investigate the accident "on behalf of the TMI Public Health Fund" (17). The Fund, financed by the nuclear industry as a result of a legal settlement, was governed by the U.S. District Court for the Middle District of Pennsylvania, which imposed requirements regarding the conduct of research and the review and approval of reports by attorneys for the industry (18). We do not suggest that this led Susser and colleagues to alter findings or purposefully construct research to support the industry. However, to the extent that all research is influenced by assumptions and beliefs from the framing of questions to the interpretation of evidence, the context of negotiation with industry representatives is important to understanding the research product.
Like Susser, we did not seek out funding for our reanalysis and, like the original research, our work was conducted in a context that is important to understanding the product. We were asked to review Susser and Hatch's data on cancer incidence by attorneys for approximately 2,000 plaintiffs in a class action suit that was before the same court that administered the TMI Public Health Fund. Civil suits may be a poor way to address public health problems; however, in our society, civil action has played an important role by bringing health issues (including asbestos, tobacco, air and water pollution) to public attention, and has provided some recourse to members of the public seeking protection from powerful industries.
We took a number of measures to reduce the potential for apparent or real conflict of interest in working on research that was related to a lawsuit. Rather than accepting funds directly, we made arrangements for the attorneys to support our reanalysis through a grant from the nonprofit John Snow Institute. The grant was received by our University in the same manner as other grants and covered only the usual salary, computer, communications, and other costs associated with research. We were not paid as consultants, we accepted no conditions about the conduct of our research, and we were free to publish whatever we found to be noteworthy.
Emphasizing his commitment to objectivity and rigor in science, Susser states his concern that our paper is not about controversy (12), but is "a situation manufactured from misconceptions, misinterpretations, mistaken logic and simple error" (17). Here we differ. First, accurate research depends on accurate counting of the data. One error in Hatch et al.'s published research resulted from miscounts of cancer cases, which contributed to their underestimate of the radiation-cancer dose-response association for the postaccident period (15).
Second, it is important to critically examine data for sources of bias. While Susser saw "no point in the fuss Wing makes about cancer incidence data from 1975" (17), we were concerned about the undercount of cancer cases in 1975, one of the years that Susser and Hatch used to establish baseline (preaccident) cancer rates. Two hundred seventy-one incident cases were recorded in the 10-mile area in 1975 versus approximately 500 cases recorded annually in subsequent years. The ratio of incident cases to cancer deaths was 0.97 in 1975 versus approximately 1.6 in subsequent years. Susser and Hatch assumed that undercounted cases were "randomly distributed throughout the study area" (17). Given the available data, we quantitatively assessed the effect of the 1975 data on dose-response estimates. If the undercounted cases were indeed randomly distributed, dose-response associations would be the same regardless of whether 1975 was included. We showed that this was not the case and that, in particular, there was no association between radiation dose estimates and preaccident lung cancer incidence when 1975 data were excluded (15). Failure to exclude the undercounted 1975 data led Hatch et al.
(19) to conclude that, "it is apparent from the preaccident gradient that one or more lung cancer risk factors are operating to produce an exposure pattern very similar to the pathway for the radioactive plume." Third, the choice of disease outcomes is critical. Hatch et al. (19) felt there was sufficient prior evidence to limit their primary hypotheses to leukemias excluding chronic lymphocytic leukemia, lymphoma, and childhood cancer. As we noted in our paper, one consequence of the focus on childhood and specific hematopoietic cancers (rather than all such cancers considered as a larger group) was to reduce the sample size used to evaluate the accident effect. Additionally, their analyses of childhood cancer included children who were not yet conceived at the time of the accident among the exposed. Their choice of specific radiosensitive cancers was based on studies of qualitatively different radiological exposures, medical treatments, and A-bomb radiation, for which inhalation of radioactive gases was not an issue. Given the potential for inhalation of radioactive gases as an exposure route, the large magnitude of some release estimates (20), and consideration of the importance of adequate sample size, we chose to focus on the outcomes of all cancers, lung cancer, and leukemia (15).
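The undercount screen behind the 1975 discussion above can be illustrated with a small calculation. The case counts below are the approximate figures quoted in the text; the death counts are hypothetical values chosen only to reproduce the reported incidence-to-mortality ratios (0.97 in 1975 versus roughly 1.6 in later years), and the flagging threshold is our own illustrative choice, not a published criterion.

```python
# Screen registry years for possible undercounting by comparing each
# year's incidence-to-mortality (I/M) ratio with that of later years.
# Case counts are approximate figures from the text; death counts are
# hypothetical values chosen to reproduce the reported I/M ratios.
cases = {1975: 271, 1976: 500, 1977: 500}
deaths = {1975: 279, 1976: 312, 1977: 312}

def im_ratio(year):
    """Incident cases per cancer death recorded in a given year."""
    return cases[year] / deaths[year]

reference = im_ratio(1976)  # ~1.6 in apparently complete years
for year in sorted(cases):
    ratio = im_ratio(year)
    flagged = ratio < 0.75 * reference  # crude, illustrative threshold
    status = "undercount suspected" if flagged else "ok"
    print(f"{year}: I/M = {ratio:.2f} ({status})")
```

A stable I/M ratio across years is expected when case ascertainment is complete; 1975's sharply lower ratio is what motivated excluding that year from the baseline.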
Fourth, confounding can bias dose-response estimates in either direction. Susser states that their approach was to "adjust for the baseline [cancer rates], but only after a positive effect was observed" (17). The approach of conducting analyses in which confounding is evaluated for positive associations, but not given equal attention as a potential source of bias of negative findings, will obscure positive results. In fact, Hatch et al. (19) reported deficits before the accident in childhood cancer (odds ratio = 0.67) and adult leukemia (odds ratio = 0.59) in areas that were to receive the highest doses from the accident; these deficits could have been examined as a source of potential bias towards the null. This is why we adjusted all analyses for baseline (preaccident) cancer rates (15). With this approach, uncontrolled confounding would imply a distribution of cancer risk factors that appeared only after the accident and that was correlated with the geography of plume travel in the 10-mile area.
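The logic of adjusting every analysis for preaccident rates, rather than only after a positive effect appears, can be sketched as a ratio-of-ratios correction. All area names and counts below are invented for illustration, and the calculation is a simple indirect-standardization-style sketch, not the exact procedure used in either study.

```python
# Sketch of baseline (preaccident) adjustment of observed/expected (O/E)
# cancer ratios per exposure area. All numbers are invented: the "high"
# area is given a preaccident deficit, echoing the deficits (odds ratios
# 0.59-0.67) reported for the highest-dose areas.
areas = {
    #        preaccident      postaccident
    #        (obs, exp)       (obs, exp)
    "low":  ((40, 50),        (55, 50)),
    "high": ((30, 50),        (60, 50)),
}

for name, ((pre_obs, pre_exp), (post_obs, post_exp)) in areas.items():
    pre_oe = pre_obs / pre_exp      # area's own baseline O/E
    post_oe = post_obs / post_exp   # crude postaccident O/E
    adjusted = post_oe / pre_oe     # postaccident ratio relative to baseline
    print(f"{name}: crude O/E = {post_oe:.2f}, baseline-adjusted = {adjusted:.2f}")
```

In the invented "high" area, a preaccident deficit doubles the adjusted postaccident ratio relative to the crude one, illustrating how ignoring the baseline can bias an accident effect toward the null.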
Susser (17), as well as academic critics in the popular press (21), has not discussed the evidence upon which our conclusions are based. These results are, we believe, striking for an environmental epidemiology study. Taking lung cancer as an example, consider the ratios of observed to expected incident cases during 1981-1985 in areas ranging from the most upwind to the most downwind: 0.43, 0.68, 1.05, 1.07, 1.22, 1.26, 1.66, 1.69, and 2.34 [our Table 3; 1.0 represents the average for the 10-mile area (15)]. The goodness of fit statistic for this trend (0.082% increase in lung cancer incidence rates per dose unit), interpretable as a chi-squared statistic with one degree of freedom, was 6.58 [our Table 2 (15)]. Readers not familiar with goodness of fit statistics may be interested to learn that this result is associated with a two-tailed p<0.02. The dose-response gradient was stronger in both magnitude (0.103% per dose unit) and goodness of fit (χ2 = 8.51; p<0.005) when socioeconomic variables were considered (15). It should be noted that the hypothesis being evaluated is that the accident led to increases in cancer, a one-sided hypothesis, for which p-values should be divided by two. After a hypothesis is stated and a strong design has been chosen that reduces the potential for confounders to explain the phenomenon (in this case, adjustment for preaccident disease rates), such evidence of a dose-response association generally would be considered as support for the hypothesis.
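The p-values quoted above can be checked directly: for a chi-squared statistic with one degree of freedom, the upper-tail probability reduces to erfc(sqrt(x/2)), which is available in the Python standard library. This sketch merely verifies the quoted thresholds; the statistics themselves are the ones reported in the text.

```python
import math

def chi2_sf_1df(x):
    """Survival function (upper-tail p) of a chi-squared variate with
    1 degree of freedom: P(X >= x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2.0))

for stat, label in [(6.58, "unadjusted trend"),
                    (8.51, "adjusted for socioeconomic variables")]:
    p = chi2_sf_1df(stat)
    # As in the text, this tail probability is the two-tailed p; the
    # one-sided p for an increase-only hypothesis is half of it.
    print(f"{label}: chi2 = {stat}, two-tailed p = {p:.4f}, one-sided p = {p / 2:.4f}")
```

Running this confirms that 6.58 corresponds to a two-tailed p of roughly 0.010 (below the quoted 0.02) and 8.51 to roughly 0.0035 (below the quoted 0.005).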
We argued that the previous investigators did not interpret the evidence as supporting the hypothesis because of errors in the analysis (discussed above) and circular reasoning (13,15,16). Susser (17) notes that their "analyses and results ... derived precisely from the use of relative dose"; our concern, however, has been that their interpretation of the findings was largely in terms of absolute dose. Numerous statements in the paper by Hatch et al. (19) indicate that they assumed the absolute doses were too low to produce the effect being investigated. Their prior expectation, "that no excess cancer would be found," was "based on estimated releases and conventional radiobiology" (19). Doses calculated from their assumptions about releases were described as "a fraction of the average U.S. exposure of 0.8-1 mSv from natural background radiation in the course of a year" and "very low, an average of approximately 0.1 mSv, with 1 mSv the projected maximal dose" (19). Their conclusion states, "the possibility that emissions from the Three Mile Island nuclear power plant could have contributed to the observed trends, in lung cancer in particular, must be weighed against . . . the low estimates of radiation exposure" (19). Susser (17) states that to test an a priori hypothesis "precludes circularity" of logic. The problem of circularity, however, arises when a researcher does not accept evidence, collected in the course of research designed to test an alternative hypothesis, as a reason to reject the null hypothesis. It is not "religion" to begin a study with the prior belief that the exposure under study might be a cause of the effect under study; rather it is a necessary part of science. The null hypothesis (that no association exists) must be able to be rejected (that is, one must be able to accept that the exposure could possibly cause the effect), or a study shouldn't be done.
Volume 105, Number 6, June 1997 * Environmental Health Perspectives

At our introduction to this study, we felt that if the estimated magnitude of the reported doses was correct, any association between radiation and cancer would be too small to observe with the available data. However, plaintiffs in the civil suit, as well as others, questioned whether the doses received by some people may have been much higher than the reported maximal dose of 1 mSv (20,22). This scenario is elaborated in the letter by Berg (23). Their concerns were supported by scientific evidence collected from atmospheric monitoring and plant and animal studies. Furthermore, dozens of local citizens issued sworn affidavits that described erythema, hair loss, nausea, and vomiting (22,24), all symptoms that can occur following acute exposure to high doses of radiation. Aware that such symptoms may arise from other situations and that this event was highly stressful, we reviewed the medical literature on mass hysteria to evaluate whether the reported symptoms corresponded to those typically found in outbreaks of unexplained disease that have been ascribed to psychogenic origins (25-30). Our review of the case reports at TMI suggested that they did not correspond to the classical mass hysteria scenario, which typically involves people in close physical proximity, predominantly female, and does not include erythema or hair loss. Since the publication of Susser and Hatch's work, some of those who complained of symptoms have had tests for chromosomal aberrations, which supported their contention that they experienced acute radiation effects (31,32).
We believe that Susser and his colleagues acted in good faith. However, all scientific research takes place within an institutional context that affects the framing of scientific questions and interpretation of evidence. This institutional context, which includes prevailing professional opinion and judgment as well as views about the utility of evidence generated outside professional channels, is critical to the issue of objectivity versus advocacy raised by Susser (17). Over the past few decades, philosophers of science have described the joint influences of the context and internal logic of scientific inquiry (33-35). Recognizing the inevitable connection between knowledge and the context in which it is acquired challenges the conventional view that objectivity requires the removal of all extraneous influences from the scientific product. This more complex view requires us to seek objectivity through explication of our assumptions and values as well as through avoidance of bias in study design, data collection, and analysis (36).
Although our reevaluation of cancer incidence around TMI was based entirely on data collected by Susser and his colleagues, our internal logic, including operational hypotheses and analytical strategy, was influenced by our interpretation of what Susser refers to as "public duty." We considered the possibility that the authorities who controlled the TMI Public Health Fund and government and industry officials who investigated radiation releases and population doses were wrong. We gave attention to residents' reports of acute symptoms, acknowledged the history of secrecy and incomplete disclosure of radiation releases in the nuclear industry (8), and considered other supporting evidence of high level radiation assembled by plaintiffs in the civil suit. Much of this information was not available to Susser and his colleagues.
We do not believe that the cancer incidence study, by itself, constitutes proof of the presence or absence of high radiation doses from TMI. However, we do believe that the study designed by Susser and his colleagues has yielded results that demand serious attention, and that the differences in logic, analysis, and conclusions between the original articles and our reevaluation constitute more than "brouhaha" (17). It is unfortunate, if not tragic, that so many questions remain 18 years after the accident. At the very least, we hope that the follow-up of cancer incidence beyond 1985, to which Susser refers, will be available soon.

Bisphenol A in Food Cans: An Update
The can manufacturing industry and its suppliers have closely followed current research on can coatings and have conducted our own research on potential exposure to bisphenol A from can coatings. We would like to present new research findings that amend several conclusions drawn by Nagel et al. (1) in Environmental Health Perspectives.
The paper states that the active level of bisphenol A in rodents was measured at 2 and 20 micrograms per kilogram body weight per day (µg/kg/day) and is "near or within the reported ranges of human exposure." This conclusion appears to be based on human exposure data derived from a paper by Brotons et al. (2) in Environmental Health Perspectives in 1995. New, updated data based on much more definitive analytical methodology supersede this finding. In late 1996, our industry's Epoxy Can Coating Work Group of the Interindustry Group on Bisphenol A and Alkylphenols completed a second study on potential human exposure to bisphenol A from epoxy lacquer-coated food cans. The first study from this work group (3), completed in 1995, was referenced by Nagel et al. (1). The second study was undertaken using improved analytical methodology that minimizes the interferences that were observed in the first study and likely occurred in the study of Brotons et al. (2).
The findings of the 1996 report, "Potential Exposure to Bisphenol A from Epoxy Can Coatings" (4), provide new improved exposure data. This 1996 study with more accurate data was not referenced by Nagel et al. (1). These new data, which have now been provided to the U.S. Food and Drug Administration and the National