
Working Papers ARE Working—and Will Work Better if They Show Their Work

August 09, 2011

It is common in economics, and I guess in the social sciences generally, for researchers to post their work in "working papers" before it has undergone the rigors of peer review at journals. Here at CGD, we researchers see working papers as the heart of what we do. The very first CGD publication was a working paper, by Bill Easterly, on inequality. We've put out 260 more since.

A month ago, Berk Özler, a senior economist at the World Bank, blogged that Working Papers Are NOT Working. I like the post: it pairs a provocative title with a thoughtful text. His main concern seems to be that if one puts out a working paper, then revises it enough to change its conclusions, people may continue to cite the superseded version. Özler writes from personal experience. In March 2010, he and his coauthors circulated a working paper that found that, basically, paying school-age girls in Malawi to go to school did not raise school attendance one year on. [More precisely, the finding was about whether conditioning the cash grants on school attendance made a difference; see Özler's correction below.] At the same time, the authors submitted the paper to a journal. One reviewer suggested that they look at impacts two years out instead of one and change how they assessed which girls were in school. In December 2010, Özler and colleagues revised the public working paper. This time, they found, girls paid to go to school do go to school more. But not everyone got the memo:

Earlier this year, I had a magazine writer contact me to ask whether there was a new version of the paper because her editor uncovered the updated findings while she was fact-checking the story before clearing it for publication. As recently as yesterday, comments on Duncan Green’s blog suggested that his readers, relying on his earlier blogs and other blogs, are not aware of the more recent findings. Even my research director was misinformed about our findings until he had to cite them in one of his papers and popped into my office.

I take Özler's point that there is a downside to working in public: sometimes first impressions matter and can be hard to correct. But I disagree with the contention that "working papers are NOT working" and that we researchers therefore ought to shield our work more from the public. I think the research process should be more open, not more closed: more like Linux, Android, and Wikipedia, and less like Microsoft, Apple, and Britannica. The Linux operating system has been built by an amorphous network of thousands of programmers around the globe, through a process predicated on the assumptions that all programs have bugs and that the best way to expunge them is openness...openness "to the point of promiscuity," in the words of Eric Raymond's great essay, The Cathedral and the Bazaar. Despite Linux's somewhat anarchic origins, it proved more reliable (less likely to crash) than other operating systems, and it claimed big chunks of market share from proprietary competitors such as Microsoft. Exposing weaknesses proved a strength.

Likewise, I believe social science findings will be more reliable if they are arrived at through more open processes. Abandoning the working paper and keeping research under wraps for the typical one to two extra years while journals interrogate it therefore seems to run counter to the lessons of our age. I think it should become routine for researchers to share the data they analyze and the computer programs they employ to do the analysis. They should post "working data" and "working code" along with working papers. (A minimal sketch of what posting code could look like appears after the examples below.) Such openness will put the "science" back in "social science" by allowing researchers to replicate each other's work. Leading economics journals, including the Journal of Political Economy (JPE) and the American Economic Review, have adopted policies in this spirit, which apply to the finalized articles they publish. Last week, the Center for Global Development did too; our policy applies to working papers.

Özler's argument for deferring public exposure is based on one data point from his own experience, which is fitting for a blog post rather than a working paper. And that experience is instructive. But the handful of data points from my own experience contradict his:

  • On July 30, the New York Times home page featured this factoid, as part of the blurb for a powerful piece about mothers in Uganda dying in childbirth: "Uganda put 57 cents less of its own money toward health for each foreign aid dollar it collected..." This is the idea of aid fungibility: that aid recipients reallocate their own spending so that money donors give to combat disease or care for mothers in childbirth in effect goes for tanks and planes. Entirely plausible...but the specific number comes from a study put out last year, which I quickly and strongly questioned. The correlations might be real (countries getting more health-related aid may not raise their own health spending one-for-one), but for technical reasons the causal interpretation, that higher health aid causes governments such as Uganda's to spend less of their own money on health, is not nearly as credible. (A stylized version of the kind of regression behind such numbers appears after this list.) The authors of that study never replied to my requests for data and programming code. They have, in other words, kept their methods secret. More to the point, the study never appeared as a working paper, so it was only subject to scrutiny from an unsolicited peer reviewer like me after it had been etched in stone by the Lancet. Almost entirely shielded from scrutiny, the analysis has nevertheless exercised influence. For me, this example does not instill faith in the rigor of journal peer review.
  • I have done other work questioning the foundations of papers that have survived journal peer review. (Example.)
  • An anonymous commenter on Özler's blog brought up my 2009 paper with Jonathan Morduch, which attempts to replicate and scrutinize three studies of the impact of microcredit in Bangladesh. One dramatic mismatch was that our version of the influential Pitt and Khandker JPE study put negative rather than positive signs on the impact of microcredit for women. Seemingly, microcredit increased poverty. (But we go on to conclude that here too causality is not convincingly shown. Maybe better-off families just borrowed less, making it seem as if microcredit went hand in hand with poverty.) Earlier this year, Mark Pitt fired back, pointing out two important discrepancies in our "replication." These do not remove our doubts about causality, but they have been very helpful, and they occurred outside the traditional journal review process. In fact, Mark Pitt had earlier reviewed our paper as a submission to the JPE and missed the significant issues he found in 2011, as did other reviewers, both at the JPE and at another journal. Moreover, Pitt was able to find the discrepancies because we put out a working paper and posted all of our data and code. Had Pitt and Khandker done the same in 1998 (the JPE's disclosure policy was not in effect then), we almost certainly would not have been as confused about what they did. The anonymous commenter fingered our paper as one that should have undergone more review before going public. But the full story actually argues for more public sharing, not less.
  • Slightly off-topic, but illuminating: I wrote my microfinance book in public, interacting with my audience as I went. Without doubt, sharing the blog posts and "working chapters" made the book better.
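
An aside on the arithmetic in the first example: figures like the 57 cents typically come from panel regressions of roughly the following form. This is a stylized sketch of the general approach, my own illustration rather than the Lancet study's actual specification:

    \[
    H_{it} = \alpha_i + \beta A_{it} + \gamma' X_{it} + \varepsilon_{it},
    \qquad \hat{\beta} \approx -0.57,
    \]

where H is government health spending from domestic resources in country i and year t, A is health aid received, and X is a vector of controls. A negative estimate of beta says only that aid and domestic health spending move in opposite directions; by itself it cannot distinguish "aid crowds out domestic spending" from "donors send more aid where governments spend less," which is why the causal reading is the weak link.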
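
And to make "working code" concrete: it need not be elaborate. Here is a minimal sketch, in Python, of the sort of self-contained script a researcher could post alongside a working paper so that others can rerun the headline regression. The file name, variable names, and specification below are hypothetical placeholders, not code from any study discussed above:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Load the posted "working data" (hypothetical file and variable names).
    df = pd.read_csv("working_data.csv")

    # The paper's headline specification: outcome on treatment plus controls,
    # with district fixed effects. Posting this lets readers see exactly which
    # observations, variables, and standard errors produce the reported result.
    model = smf.ols("outcome ~ treatment + baseline_score + C(district)", data=df)
    result = model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

    print(result.summary())

A replicator who gets a different sign, as Morduch and I did, can then trace the mismatch to a specific line rather than guess at undocumented choices.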

Of course, I'm not suggesting that researchers blog every regression they run. All of us must strike a balance between the risk of putting findings out too soon (generating bad memes and wasting people's time) and the risk of doing so too late (foreclosing opportunities for others to scrutinize and improve our work). And all of us will sometimes make bad calls. But I don't see the inevitable messiness of interacting with our audiences as a strong argument for keeping work under wraps for another one to two years, nor for hiding it from public scrutiny until it is frozen.
