The Positive Result Bias
This is a pretty well-known non-secret among just about anyone who does academic research, but Arnold Kling provides some confirmation that there is a tremendous bias towards positive results. In short, most of these published positive results can't be replicated.
A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.
Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.
"It was shocking," said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers we became convinced you can't take anything at face value."...
Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."
This is not really wildly surprising. Consider 20 causal relationships that don't exist, and an experiment to test each one. On average, 1 of the 20 experiments will show a false positive at the 95% confidence level -- that's what 95% confidence means: a 5% chance of finding an effect where none exists. That 1-in-20 false positive gets published, and the other studies get forgotten.
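To make the arithmetic concrete, here is a minimal simulation sketch in Python. The sample sizes, test choice, and seed are my own illustrative assumptions, not anything from the post: it tests 20 true-null effects at p < 0.05 and counts how many come out "significant" anyway.

```python
# Illustrative sketch: 20 effects that don't exist, one experiment each.
# All parameters below are hypothetical choices for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_effects = 20       # causal relationships that don't actually exist
n_per_group = 30     # sample size per arm in each experiment
alpha = 0.05         # conventional significance threshold

false_positives = 0
for _ in range(n_effects):
    # Both groups are drawn from the same distribution: the null is true.
    control = rng.normal(0, 1, n_per_group)
    treated = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(control, treated)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_effects} null effects came out 'significant'")
# On average about 1 of 20 does. Publication bias means that one is the
# result people see, while the ~19 null findings go in the file drawer.
```

Run it with different seeds and the count hovers around one, occasionally zero or two. Those stray hits are exactly the results that, in Begley's anecdote, make "the best story."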
To some extent, this should be fixable now that we are not tied to page-limited journals. Simply requiring, as a grant condition, that all findings be published online, positive or negative, would be a good start.