Bayer halts nearly two-thirds of its target-validation projects because in-house experimental findings fail to match published literature claims, finds a first-of-its-kind analysis of data irreproducibility.
An unspoken industry rule alleges that at least 50% of published studies
from academic laboratories cannot be repeated in an industrial setting,
wrote venture capitalist Bruce Booth in a recent blog post. ...
For the non-peer-reviewed analysis, Khusru Asadullah, Head of Target
Discovery at Bayer, and his colleagues looked back at 67
target-validation projects, covering the majority of Bayer's work in
oncology, women's health and cardiovascular medicine over the past 4
years. Of these, results from internal experiments matched up with the
published findings in only 14 projects, but were highly inconsistent in
43 (in the remaining 10 projects, claims were rated as mostly reproducible, partially reproducible or not applicable). “We came up with some shocking examples of discrepancies between published data and our own data,” says Asadullah.
Irreproducibility was high both when Bayer scientists applied the same
experimental procedures as the original researchers and when they
adapted their approaches to internal needs (for example, by using
different cell lines). High-impact journals did not seem to publish more
robust claims, and, surprisingly, the confirmation of any given finding
by another academic group did not improve data reliability.
--Brian Owens, Nature newsblog, on publication bias coming up against the real-world test. HT: Marginal Revolution