
A Better Process for Better Evidence: My Review of Open Review

January 17, 2019

Peer review is an important part of establishing the credibility and quality of research, yet it has been controversial since its inception. Since at least the 19th century, scientific societies and journals (and now websites) have grappled with the tradeoffs of refereeing publications: delays, openness to new ideas, the chance of publishing erroneous results, and even the difficulty of finding reviewers. My recent experience with a relatively new approach, open peer review, in which reviewers' comments are published along with their names, led me to reflect on the not-so-subtle ways it altered how I refereed a paper. I was originally lukewarm to the idea. But after trying it out, thinking it over, and reading a little further, I've decided I'm all for it.

Open review in action

My opportunity came from the Gates Open Research platform, which "provides all Gates-funded researchers with a place to rapidly publish any results they think are worth sharing. All articles benefit from immediate publication, transparent refereeing and the inclusion of all source data." The review process short-circuits the long delays common at most journals: an article that might otherwise be relegated to the limbo of "revise and resubmit" sees the light of day right away. Speedier publication comes with one important proviso: the article is posted online alongside its reviews.

Gates Open Research has modeled its approach on earlier initiatives. It seems that the Medical Journal of Australia may have been the first to give it a try in the late 1990s. Richard Smith forcefully argues in favor of open review by looking at how it has performed for the journal Atmospheric Chemistry and Physics (quite well) and the reasons that encouraged other platforms to adopt it (including the Economics e-journal, Winnower, and Wellcome Open Research).

Among development agencies, the Millennium Challenge Corporation (MCC) may be the only organization to have tried this approach, when it published some of its impact evaluations in 2012, a move that I supported vigorously and that Markus Goldstein described as "exciting." As far as I can tell, it has since backtracked. (As a side note, I hope that after the government reopens, MCC will find a way to make its impact evaluations easier to find and will resume this way of promoting open commentary on its studies.)

[Image: The monument to an anonymous reviewer in Moscow. Source: Moscow's Higher School of Economics]

Open review opens minds

The fact that my comments on the submission would be public affected me in several ways as I set about writing the review (and rewriting and rewriting and editing before hitting "send"). In particular, I realized:

  • the decision wasn't about whether the paper was "right" (i.e., perfect enough for publication), but rather whether it was clear and reasonable (i.e., good enough for publication);

  • my comments had to be important and rigorous enough to persuade not just an editor (who may not be an expert in the specific topic) but also the broader community of researchers and readers (which certainly includes knowledgeable experts); and furthermore,

  • my review had to be written clearly and politely enough to meet public standards of probity (which I hope still retain some measure of value even in this age of internet trolls).

I like the idea of speedier publication, but I particularly like the way it creates an opportunity for direct, informed discussion among researchers. If it works, readers get an outside assessment of the quality of the research along with a lesson in how someone else read and interpreted the piece.

The discipline on reviewers is also welcome. In my career, I've received many useful reviews, but I've also received some that were unintelligible or poorly justified. The least helpful comments were those in which a referee was so committed to their own view on a particular topic that they could not concede that other reasonable interpretations were possible. The public nature of this new review process could attenuate some of these problems.

Will reviewers participate?

But there are many other possible dynamics unleashed by this review process. The one that concerns me most is whether the greater time commitment and lack of anonymity will dissuade reviewers. It certainly took me longer to write my review, which means I'll be more cautious about accepting the next request. On the other hand, I feel better about this review. The extra time made it better and more precise. At one point, rather than taking the shortcut of simply raising a potential problem, I thought about it sufficiently to decide whether it was indeed a problem and why. This required some further reading about a method I hadn't looked at since graduate school—which was a bit of a bother but also provided an opportunity to test my intuition and learn something new.

The time required to do a good review strikes me as one of the thorniest obstacles to improving (and maintaining the quality of) social science research. Indeed, a randomized study in the British Medical Journal found that open review may dissuade potential reviewers and increase the time commitment, but concluded that the ethical arguments for it outweighed these disadvantages.

Finding people with the appropriate expertise (and time) to review research becomes more difficult as the number of publication outlets grows and as research becomes more specialized and sophisticated. This is probably most apparent across disciplines. For example, journals that specialize primarily in health have been criticized for publishing studies flawed in ways that would have been detected by people with expertise in econometrics. The problem works in both directions: economists are open to the criticism that they have failed to learn from the medical field's experience with rigorous experimental studies or from anthropology's understanding of historical data and genetics. Open review will only succeed to the extent that it can draw in referees with the appropriate expertise, commitment, and openness to participate.

Along with other improvements like publishing data and encouraging replication, open review could help make research platforms speedier while promoting more informative debate and less biased screening. I would be really excited to see development agencies adopt the approach (or re-adopt it, as in MCC's case). If these hopes can be realized, we'll have found a better way to move this grand collective project of social science forward.

Thanks to David Roodman, Sarah Rose, and Justin Sandefur for comments on an earlier draft. Special thanks to David Roodman for sharing a wealth of references, many of which are linked in this post. Sarah Allen is a particularly good editor.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.
