Environmental Health Perspectives

Editorial Nov-Dec 2013 | Volume 121 | Issue 11-12

Environ Health Perspect; DOI:10.1289/ehp.1307676

Reporting of Results from Animal Studies

Hugh A. Tilson1 and Jane C. Schroeder2

1Editor-in-Chief, Environmental Health Perspectives; 2Science Editor, Environmental Health Perspectives


Citation: Tilson HA, Schroeder JC. 2013. Reporting of results from animal studies [Editorial]. Environ Health Perspect 121:A320–A321; doi:10.1289/ehp.1307676.

The authors declare they have no actual or potential competing financial interests.

Published: 1 December 2013


One of the key elements of the scientific process is reproducibility: Experimental results generated in one laboratory must be reproducible by others working independently. Therefore, it is of concern to us as editors of a major scientific journal to learn that only 6 of 53 “landmark” preclinical cancer studies could be reproduced (Begley and Ellis 2012). In addition, Prinz et al. (2011) reported that almost two-thirds of 67 in-house preclinical cancer projects did not replicate data previously published by others. These observations are based on preclinical cancer studies, and it is not known how pervasive this problem may be in other disciplines, including environmental health research.

Vasilevsky et al. (2013) pointed out that the reproducibility of scientific research depends in large measure on whether the “materials and methods” of a paper are described such that other investigators are able to independently replicate effects observed in the original study. Unfortunately, many published papers do not provide sufficient methodological detail. For example, Kilkenny et al. (2009) published the results of a systematic review of original in vivo research involving laboratory animals. Of the 271 papers they evaluated, only 60% provided information about the number of animals used or described in detail the species, strain, sex, age, or weight of the animals. The authors also noted that about 30% of the papers lacked sufficient details concerning the statistical analyses. Findings such as these indicate that the lack of methodological detail could be a major barrier to subsequent attempts to replicate the work of others.

According to Ransohoff and Gourlay (2010), “any study’s reliability is determined by [the] investigators’ choices about critical details of research design and conduct.” Some choices could lead to bias or the “systematic erroneous association of some characteristic with a group in a way that distorts a comparison with another group” (Ransohoff 2005). Key approaches to reduce bias include the randomization of groups and blinding (Krauth et al. 2013). However, randomization and blinding are not used universally in preclinical research. Van der Worp et al. (2005) reported that fewer than half of the 45 preclinical studies they reviewed included randomization of treatment, blinded administration of treatment, and blinded outcome evaluation. In addition, Kilkenny et al. (2009) found that most papers they surveyed did not use randomization or blinding.

Addressing concerns about the reproducibility of research findings will require cooperation and collaboration from major segments of the scientific community. From our perspective, it seems clear that many authors are not sufficiently trained in experimental design or the importance of controlling for sources of bias in their studies. We believe that courses on the principles of experimental design should be included at the graduate level. Many academic departments and governmental institutions now require students and employees to take courses on ethics. We believe that it is equally important to train students and young investigators about experimental design and how to be transparent in reporting their results in the peer-reviewed literature. In addition, we believe that it is important for study sections and funding agencies to critically evaluate experimental designs described in grant proposals under consideration. Finally, journals can be more proactive by insisting that critical methodological details be included in papers submitted for possible publication. Associate editors and reviewers need to be instructed to consider experimental design as part of the peer-review process.

Kilkenny et al. (2010) suggested that the scientific community would benefit from guidance about the information needed in a research article. As an example, they describe the CONSORT Statement for randomized clinical trials (Moher et al. 2001). Many journals have endorsed the CONSORT guidelines, and there is some evidence that use of these guidelines has improved the quality of papers on clinical trials (Kane et al. 2007).

In June 2009, an expert working group consisting of researchers, statisticians, and journal editors met to develop a checklist that could be used to improve the reporting of research using animals (Kilkenny et al. 2010). The product of this effort is the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. These guidelines consist of 20 items identifying the minimum information that should be included in publications reporting results from animal studies. According to Kilkenny et al. (2010), the guidelines were developed “to maximise the output from research using animals by optimising the information that is provided in publications on the design, conduct, and analysis of the experiments.”

With this editorial, EHP joins the growing list of journals that endorse the ARRIVE guidelines for animal research. We encourage authors to review these guidelines when designing their studies and to use them in writing papers for submission to EHP. We encourage our Associate Editors and peer reviewers to keep in mind the principles articulated in the ARRIVE guidelines when evaluating papers involving animal research. We believe that by adhering to the guidelines, the quality of the papers will improve and the potential for reproducibility of findings will increase.

The ARRIVE guidelines focus on in vivo animal research, but the need for clear and complete descriptions of methods is not limited to that setting: Research of all types, across all disciplines, would benefit from comparable reporting standards.


Begley CG, Ellis LM. 2012. Drug development: raise standards for preclinical cancer research. Nature 483:531–533.

Kane RL, Wang J, Garrard J. 2007. Reporting in randomized clinical trials improved after adoption of the CONSORT statement. J Clin Epidemiol 60(3):241–249.

Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. 2010. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol 8:e1000412; doi:10.1371/journal.pbio.1000412.

Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, Fry D, et al. 2009. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS ONE 4:e7824; doi:10.1371/journal.pone.0007824.

Krauth D, Woodruff TJ, Bero L. 2013. Instruments for assessing risk of bias and other methodological criteria of published animal studies: a systematic review. Environ Health Perspect 121:985–992; doi:10.1289/ehp.1206389.

Moher D, Schulz KF, Altman DG, for the CONSORT group. 2001. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 357:1191–1194.

Prinz F, Schlange T, Asadullah K. 2011. Believe it or not: how much can we rely on published data on potential drug targets? [Letter]. Nat Rev Drug Discov 10:712; doi:10.1038/nrd3439-c1.

Ransohoff DF. 2005. Bias as a threat to the validity of cancer molecular-marker research. Nat Rev Cancer 5:142–149.

Ransohoff DF, Gourlay ML. 2010. Sources of bias in specimens for research about molecular markers for cancer. J Clin Oncol 28:698–704.

van der Worp HB, de Haan P, Morrema E, Kalkman CJ. 2005. Methodological quality of animal studies on neuroprotection in focal cerebral ischaemia. J Neurol 252:1108–1114.

Vasilevsky NA, Brush MH, Paddock H, Ponting L, Tripathy SJ, LaRocca GM, Haendel MA. 2013. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ 1:e148; doi:10.7717/peerj.148.
