Invited Comments on Cancer

Reproducibility of Preclinical Academic Studies

January 21, 2014

This op-ed piece by Dr. Bing-Cheng Wang, Co-Leader of the GU Malignancies Program of the Case Comprehensive Cancer Center and Professor of Medicine at CWRU and MetroHealth Medical Center, comments on a global issue in scientific publishing: concern over the reproducibility of laboratory findings and the complexity of validation studies. This has implications for how we conduct research.

Recently, the biomedical research community was shaken by the widely publicized finding that an alarming proportion of peer-reviewed preclinical studies are not reproducible. The problem surfaced after the publication of two commentaries detailing the inability of scientists at pharmaceutical companies to reproduce results from the published literature in 67% to 90% of cases (Nat. Rev. Drug Discov. 10:712, 2011; Nature 483:531-533, 2012). Many of the studies in question were in the field of oncology. Surprisingly, the reproducibility of published data did not significantly correlate with journal impact factors, the number of publications on the respective target, or the number of independent groups that authored the publications.

The reaction was strong and swift. Within months, the National Institute of Neurological Disorders and Stroke (NINDS) issued a call for transparent reporting on preclinical animal studies (Nature 490:187, 2012). The guidelines recommend that, at a minimum, studies report sample-size estimation, whether and how animals were randomized, whether investigators were blinded to the treatment, and how data were handled. NIH, which funds most of the studies in question, is mulling new rules for validating key results. There are suggestions that key findings in preclinical studies should be validated before a grant can be funded, perhaps for applications that are likely to lead to clinical trials. Companies such as Science Exchange are already starting to perform validation studies at an estimated cost of $25,000 for each major paper (Nature 500:14, 2013). In an editorial last week, Science announced that the journal will now adopt the NINDS recommendations in reviewing future manuscripts (Science 343:229, 2014), following earlier adoption by Science Translational Medicine. More journals are likely to follow suit.
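To make the first of these reporting requirements concrete, the sketch below shows one common way a prospective sample-size estimate for a two-group animal study might be carried out, using a standard power calculation. It is only an illustrative example, not part of the NINDS guidelines; the effect size, significance level, and power targets are assumed values.

```python
# Minimal sketch of prospective sample-size estimation for a two-group
# preclinical study; all parameter values here are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

effect_size = 1.0  # assumed standardized difference between groups (Cohen's d)
alpha = 0.05       # two-sided significance level
power = 0.80       # desired probability of detecting the assumed effect

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)
print(f"Animals needed per group: {n_per_group:.1f}")  # roughly 17 per group
```

Reporting such a calculation, together with its assumptions, lets reviewers judge whether a study was adequately powered to detect the effect it claims.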

A larger question concerns the underlying causes of the irreproducibility problem. Although an occasional fraudulent study is possible, the sheer number of irreproducible papers suggests other factors at work. Flaws in experimental design and data handling, as pointed out by NINDS, are likely contributing causes. Another factor can be the verification process itself: cutting-edge research can be very hard to reproduce in run-of-the-mill labs. Less obvious and less tractable factors include heterogeneous experimental conditions. This latter possibility is partially borne out in a new study (Nature 504:389, 2013) specifically designed to investigate inconsistencies between two high-profile papers that examined 500 different cancer cell lines for genomic signatures and their association with sensitivities to cancer drugs (Nature 483:570, 2012; Nature 483:603, 2012). While the genomic data are consistent between the two papers, significant discrepancies are seen in drug sensitivities and in their association with genomic features, even when similar assays are performed on the same cell lines, indicating that methodological variables may contribute to the discordant results. We are therefore only in the early phase of the evolving controversies surrounding irreproducible results. Finding the causes, and the resources to remedy them, without restricting the creative process of scientific research will remain a challenge for the foreseeable future.