Posts tagged ‘Glenn Begley’

We Need A Radical Restructuring of Scientific Publishing

I have written a number of times before that having only a few page-limited scientific journals creates a bias towards positive results that can't be replicated:

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

I observed:

This is not really wildly surprising.  Consider 20 causal relationships that don't exist, and 20 experiments, one testing each.  Likely 1 in 20 will show a false positive at the 95% certainty level; that's what 95% certainty means.  Those 1-in-20 false positives get published, and the other studies get forgotten.

Actually, XKCD did a better job of making this point.  It's a big image, so I won't embed it here, but check it out.

Also, Kevin Drum links to a related finding that journal retractions are on the rise (presumably from false positives that could not be replicated or were the result of bad process).

In 1890, there were technological and cost reasons why only a select few studies could be culled into page-limited journals.  But that is not the case today.  Why do we still tie science to this outdated publication mechanism?  Online publication would allow both positive and negative results to see the light of day.  It would also allow mechanisms for attaching critiques, defenses, and replication results to the original study.  Sure, this partially breaks the academic pay and incentive system, but I think most folks are ready to admit that it needs to be broken.

The Positive Result Bias

This is a pretty well-known non-secret among just about anyone who does academic research, but Arnold Kling provides some confirmation that there seems to be a tremendous bias towards positive results.  In short, most of these published positive results can't be replicated.

A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

"It was shocking," said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers we became convinced you can't take anything at face value."...

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

This is not really wildly surprising.  Consider 20 causal relationships that don't exist, and 20 experiments, one testing each.  Likely 1 in 20 will show a false positive at the 95% certainty level; that's what 95% certainty means.  Those 1-in-20 false positives get published, and the other studies get forgotten.
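To make that arithmetic concrete, here is a minimal Python sketch of the thought experiment (my own illustration, not anything from Begley's work): simulate 20 experiments, each testing an effect that genuinely does not exist, judged at the 5% significance level, and repeat the batch many times.

```python
import random

random.seed(42)

NUM_TRIALS = 100_000   # repetitions of the thought experiment
NUM_STUDIES = 20       # 20 experiments, each testing a nonexistent effect
ALPHA = 0.05           # the 95% certainty level (p < 0.05)

total_false_positives = 0
trials_with_at_least_one = 0

for _ in range(NUM_TRIALS):
    # Under the null hypothesis, p-values are uniform on [0, 1], so each
    # study "succeeds" (a false positive) with probability ALPHA.
    false_positives = sum(random.random() < ALPHA for _ in range(NUM_STUDIES))
    total_false_positives += false_positives
    trials_with_at_least_one += false_positives > 0

print(f"Average false positives per batch of 20 studies: "
      f"{total_false_positives / NUM_TRIALS:.2f}")        # ~1.00
print(f"Chance at least one 'discovery' emerges per batch: "
      f"{trials_with_at_least_one / NUM_TRIALS:.0%}")     # ~64%
```

On average you get about one false positive per batch of 20 null studies, and roughly a 64% chance (1 - 0.95^20) that at least one "significant" result emerges to be written up, while the 19 negative results go in the drawer.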

To some extent, this should be fixable now that we are not tied to page-limited journals.  Simply requiring, as a grant condition, that all findings be published online, positive or negative, would be a good start.