Saturday, October 11, 2008

SCIENCE: Publish and be wrong & most studies wrong

Publish and be wrong

Adrian Johnson

One group of researchers thinks headline-grabbing scientific reports are the most likely to turn out to be wrong

IN ECONOMIC theory the winner’s curse refers to the idea that someone who places the winning bid in an auction may have paid too much. Consider, for example, bids to develop an oil field. Most of the offers are likely to cluster around the true value of the resource, so the highest bidder probably paid too much.
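
A minimal simulation makes the point concrete (the figures used here, such as the field's true worth, the spread of the estimates and the number of bidders, are illustrative assumptions rather than anything from the article): even when every bidder's estimate is unbiased, the highest estimate, which wins the auction, systematically overshoots the true value.

```python
# Illustrative-only simulation of the winner's curse. Each bidder's estimate of an
# oil field's worth is unbiased but noisy; the auction goes to the highest estimate,
# which on average exceeds the true value.
import random

random.seed(0)
TRUE_VALUE = 100.0    # assumed true worth of the resource
NOISE_SD = 20.0       # assumed spread of individual bidders' estimates
N_BIDDERS = 10
N_AUCTIONS = 10_000

overpayments = []
for _ in range(N_AUCTIONS):
    bids = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_BIDDERS)]
    overpayments.append(max(bids) - TRUE_VALUE)

print(f"Average winning bid minus true value: {sum(overpayments) / len(overpayments):.1f}")
# Prints a clearly positive number (roughly 30 with these settings): the average
# bid is fair, but the winning bid is not.
```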

The same thing may be happening in scientific publishing, according to a new analysis. With so many scientific papers chasing so few pages in the most prestigious journals, the winners could be the ones most likely to oversell themselves—to trumpet dramatic or important results that later turn out to be false. This would produce a distorted picture of scientific knowledge, with less dramatic (but more accurate) results either relegated to obscure journals or left unpublished.

In Public Library of Science (PLoS) Medicine, an online journal, John Ioannidis, an epidemiologist at Ioannina School of Medicine, Greece, and his colleagues suggest that a variety of economic conditions, such as oligopolies, artificial scarcities and the winner’s curse, may have analogies in scientific publishing.

Dr Ioannidis made a splash three years ago by arguing, quite convincingly, that most published scientific research is wrong. Now, along with Neal Young of the National Institutes of Health in Maryland and Omar Al-Ubaydli, an economist at George Mason University in Fairfax, Virginia, he suggests why.

It starts with the nuts and bolts of scientific publishing. Hundreds of thousands of scientific researchers are hired, promoted and funded according not only to how much work they produce, but also to where it gets published. For many, the ultimate accolade is to appear in a journal like Nature or Science. Such publications boast that they are very selective, turning down the vast majority of papers that are submitted to them.



Picking winners
The assumption is that, as a result, such journals publish only the best scientific work. But Dr Ioannidis and his colleagues argue that the reputations of the journals are pumped up by an artificial scarcity of the kind that keeps diamonds expensive. And such a scarcity, they suggest, can make it more likely that the leading journals will publish dramatic research that may ultimately turn out to be incorrect.

Dr Ioannidis based his earlier argument about incorrect research partly on a study of 49 papers in leading journals that had been cited by more than 1,000 other scientists. They were, in other words, well-regarded research. But he found that, within only a few years, almost a third of the papers had been refuted by other studies. For the idea of the winner’s curse to hold, papers published in less-well-known journals should be more reliable; but that has not yet been established.

The group’s more general argument is that scientific research is so difficult—the sample sizes must be big and the analysis rigorous—that most research may end up being wrong. And the “hotter” the field, the greater the competition is and the more likely it is that published research in top journals could be wrong.

There also seems to be a bias towards publishing positive results. For instance, a study earlier this year found that among the studies submitted to America’s Food and Drug Administration about the effectiveness of antidepressants, almost all of those with positive results were published, whereas very few of those with negative results were. But negative results are potentially just as informative as positive results, if not as exciting.
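
A toy simulation shows why that selectivity matters (every number here, the drug's true effect, the trial size and the significance rule, is a made-up assumption, not a figure from the antidepressant study): if only "significant" trials see print, the published literature ends up overstating the drug's effect.

```python
# Illustrative-only simulation of publication bias. Many small trials of a drug with
# a modest true benefit are run, but only those crossing the significance threshold
# are "published", so the published estimates exaggerate the true effect.
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.2     # assumed true benefit, in standard-deviation units
N_PER_TRIAL = 30
N_TRIALS = 2_000

all_estimates, published = [], []
for _ in range(N_TRIALS):
    outcomes = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_TRIAL)]
    estimate = statistics.mean(outcomes)
    std_error = statistics.stdev(outcomes) / N_PER_TRIAL ** 0.5
    all_estimates.append(estimate)
    if estimate / std_error > 1.96:          # crude "statistically significant" filter
        published.append(estimate)

print(f"True effect:                         {TRUE_EFFECT:.2f}")
print(f"Mean estimate across all trials:     {statistics.mean(all_estimates):.2f}")
print(f"Mean estimate in 'published' trials: {statistics.mean(published):.2f}")
```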

The researchers are not suggesting fraud, just that the way scientific publishing works makes it more likely that incorrect findings end up in print. They suggest that, as the marginal cost of publishing a lot more material is minimal on the internet, all research that meets a certain quality threshold should be published online. Preference might even be given to studies that show negative results or those with the highest quality of study methods and interpretation, regardless of the results.

It seems likely that the danger of a winner’s curse does exist in scientific publishing. Yet it may also be that editors and referees are aware of this risk, and succeed in counteracting it. Even if they do not, with a world awash in new science the prestigious journals provide an informed filter. The question for Dr Ioannidis is this: now that his latest work has been accepted by a journal, is that a reason to doubt it?


http://www.economist.com/science/PrinterFriendly.cfm?story_id=12376658

And a companion piece:


Most scientific papers are probably wrong
NewScientist.com news service
Kurt Kleiner

Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.

John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, selective reporting and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.
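
The "less than 50%" figure comes from conditional-probability arithmetic rather than from auditing individual papers: if only a modest fraction of the hypotheses being tested are actually true, false positives can outnumber true ones among the "significant" results. A rough sketch, using prior-plausibility and power figures that are illustrative assumptions rather than numbers taken from the paper:

```python
# Back-of-the-envelope sketch of the argument (all parameter values are illustrative
# assumptions, not figures from Ioannidis's paper).

def ppv(prior, power, alpha):
    """Probability that a statistically significant finding is true (simple Bayes)."""
    true_positives = prior * power           # true hypotheses that reach significance
    false_positives = (1 - prior) * alpha    # false hypotheses that reach significance anyway
    return true_positives / (true_positives + false_positives)

# Well-powered study in a field where 1 in 10 tested hypotheses is true:
print(f"power 0.8, prior 0.10: {ppv(0.10, 0.8, 0.05):.2f}")   # about 0.64

# Small, underpowered study in an exploratory field (1 in 50 hypotheses true):
print(f"power 0.2, prior 0.02: {ppv(0.02, 0.2, 0.05):.2f}")   # about 0.08
```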

"We should accept that most research findings will be refuted. Some will be replicated and validated. The replication process is more important than the first discovery," Ioannidis says.
In the paper, Ioannidis does not show that any particular findings are false. Instead, he shows statistically how the many obstacles to getting research findings right combine to make most published research wrong.

Massaged conclusions

Traditionally a study is said to be "statistically significant" if the odds are only 1 in 20 that the result could be pure chance. But in a complicated field where there are many potential hypotheses to sift through - such as whether a particular gene influences a particular disease - it is easy to reach false conclusions using this standard. If you test 20 false hypotheses, one of them is likely to show up as true, on average.
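
The arithmetic behind that claim is short (assuming the tests are independent and each uses the usual 5% threshold):

```python
# With 20 independent tests of false hypotheses at the conventional 5% level,
# roughly one false positive is expected, and the chance of at least one is high.
alpha = 0.05
n_tests = 20

expected_false_positives = n_tests * alpha         # 1.0 on average
prob_at_least_one = 1 - (1 - alpha) ** n_tests     # about 0.64

print(f"Expected false positives in {n_tests} tests: {expected_false_positives:.1f}")
print(f"Chance of at least one spurious 'significant' result: {prob_at_least_one:.2f}")
```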

Odds get even worse for studies that are too small, studies that find small effects (for example, a drug that works for only 10% of patients), or studies where the protocol and endpoints are poorly defined, allowing researchers to massage their conclusions after the fact.
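
The same back-of-the-envelope arithmetic shows how these problems compound: small samples mean low statistical power, which shrinks the pool of genuine positives, while loosely defined endpoints effectively raise the false-positive rate above the nominal 5% (the specific figures below are, again, illustrative assumptions).

```python
# Repeating the simple Bayes calculation with assumed values for an underpowered
# study and for one whose flexible endpoints inflate the effective false-positive rate.

def ppv(prior, power, alpha):
    return prior * power / (prior * power + (1 - prior) * alpha)

print(f"adequate power (0.8), alpha 0.05:     {ppv(0.1, 0.8, 0.05):.2f}")   # ~0.64
print(f"small study (power 0.2), alpha 0.05:  {ppv(0.1, 0.2, 0.05):.2f}")   # ~0.31
print(f"small study, effective alpha 0.20:    {ppv(0.1, 0.2, 0.20):.2f}")   # ~0.10
```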

Surprisingly, Ioannidis says another predictor of false findings is if a field is "hot", with many teams feeling pressure to beat the others to statistically significant findings.

But Solomon Snyder, senior editor at the Proceedings of the National Academy of Sciences, and a neuroscientist at Johns Hopkins Medical School in Baltimore, US, says most working scientists understand the limitations of published research.

"When I read the literature, I'm not reading it to find proof like a textbook. I'm reading to get ideas. So even if something is wrong with the paper, if they have the kernel of a novel idea, that's something to think about," he says.

Journal reference: Public Library of Science Medicine (DOI: 10.1371/journal.pmed.0020124)
