Science Is Full of Errors. Bounty Hunters Are Here to Find Them

In 2010, two famous economists, Carmen Reinhart and Kenneth Rogoff, released a paper that appeared to confirm what many fiscally conservative politicians had long suspected: that a country’s economic growth tanks if public debt rises above a certain percentage of GDP. The paper fell on the receptive ears of the UK’s soon-to-be chancellor, George Osborne, who cited it multiple times in a speech setting out what would become the political playbook of the austerity era: slash public services in order to pay down the national debt.

There was just one problem with Reinhart and Rogoff’s paper. They’d inadvertently left five countries out of their analysis, running the numbers on just 15 countries instead of the 20 they thought they’d selected in their spreadsheet. When some lesser-known economists adjusted for this error, and a few other irregularities, the most attention-grabbing part of the results disappeared. The relationship between debt and GDP was still there, but the effects of high debt were more subtle than the drastic cliff-edge alluded to in Osborne’s speech.

Scientists—like the rest of us—are not immune to errors. “It’s clear that errors are everywhere, and a small portion of these errors will change the conclusions of papers,” says Malte Elson, a professor at the University of Bern in Switzerland who studies, among other things, research methods. The issue is that few people are actually looking for these errors. Reinhart and Rogoff’s mistakes were discovered only in 2013, by an economics student whose professors had asked his class to try to replicate the findings of prominent economics papers.

With his fellow meta-science researchers Ruben Arslan and Ian Hussey, Elson has set up a way to systematically find errors in scientific research. The project—called ERROR—is modeled on bug bounties in the software industry, where hackers are rewarded for finding flaws in code. In Elson’s project, researchers are paid to trawl papers for possible errors and awarded bonuses for every verified mistake they discover.

The idea came from a discussion between Elson and Arslan, who encourages scientists to find errors in his own work by offering to buy them a beer if they identify a typo (capped at three per paper) and €400 ($430) for an error that changes the paper’s main conclusion. “We were both aware of papers in our respective fields that were totally flawed because of provable errors, but it was extremely difficult to correct the record,” says Elson. All these published errors could pose a big problem, Elson reasoned. If a PhD researcher spent her degree pursuing a result that turned out to be an error, that could amount to tens of thousands of wasted dollars.

Error-checking isn’t a standard part of publishing scientific papers, says Hussey, a meta-science researcher at Elson’s lab in Bern. When a paper is submitted to a scientific journal, such as Nature or Science, it is sent to a few experts in the field who offer their opinions on whether the paper is high-quality, logically sound, and a valuable contribution to the field. These peer reviewers, however, typically don’t check for errors, and in most cases won’t have access to the raw data or code that they’d need to root out mistakes.
