“The 3ie replication process differs in important ways from the standard research community-led peer-review process in academic journals. We have been explicitly instructed by 3ie staff not to discuss our experiences with the replication process at any length in this note, including our views on the weaknesses of their current system and the review standards they employ. We do have a number of observations based on our experience, as well as suggestions for how the process could be improved, and we look forward to sharing these insights with 3ie staff and with the broader research community in the future.”
– Original author response to 3ie Replication Paper 3, Part 2, pp. 3-4, written by Hicks, Kremer, and Miguel.
In the last few months, 3ie has launched the 3ie Replication Paper Series—a working paper series for replication studies of international development impact evaluations—and posted the first few completed studies from our Replication Programme. Not surprisingly, these studies have sparked debate, and some of that debate has focused on the 3ie Replication Programme itself. One big question is how replication studies should be peer reviewed. We want to take this opportunity to provide more information about 3ie’s current peer review process and about when and how it might be changed. At the end of this blog, we also clarify our instructions to Hicks, Kremer, and Miguel.
Here is how the process works for replication studies funded by 3ie. The process begins with the applications, which include proposed replication plans. These applications are scored by at least two internal reviewers and at least one external reviewer (typically two). Those studies receiving funding are peer-reviewed at multiple stages by an internal reviewer (a 3ie staff member) and an external project advisor from the academic community. Replication researchers revise their replication plans based on comments from both peer reviewers, as well as on comments from all the application scorers. We post the finalized plans on the 3ie website.
Replication researchers then begin with the pure replication component of their study. Once that component is complete, that is, once they have completed the process of reproducing (or attempting to reproduce) the results from the original study as published, we require them to send the write-up and tables to the original authors, and we encourage them to carefully consider any feedback they receive from the original authors. After the replication researchers complete their draft final report, which includes both the pure replication and the measurement and estimation analysis and/or theory of change analysis, we send the report to be reviewed by the external project advisor and a single-blind referee from the academic community (double blinding is not possible because the replication plans are public). We also send it to all 3ie technical staff, with typically three or more providing comments.
After the replication researchers have revised their final report according to the comments of all those peer reviewers, we send it to the original authors. We give them the option of submitting a response in time to be published online simultaneously with the paper. If we do not receive the response in time, we publish the response once we have it. All of this information about our processes is, and has been, available elsewhere on our website.
Hicks and company suggest that this process is not the standard process for academic journals. We do not see that there is an applicable standard here. First, our series is a working paper series, and the peer review processes for working paper series vary widely. Nevertheless, our intent in having both the external project advisor and a single-blind referee for the draft final report is to provide peer review similar to that of a journal. In the case of the replication study responded to by Hicks et al. above, a prominent economist served as the external project advisor and a prominent epidemiologist served as the referee. Our referees follow the current standard in academic peer reviewing in that they do not audit the programming code for the studies they referee. Whether that is the right standard is a much bigger question, and not one for us alone to answer.
Second, we do not see a standard approach for publishing replication research. Brian Nosek, Miguel’s fellow leader of the Berkeley Initiative for Transparency in the Social Sciences, piloted an approach very different from ours in a special replication issue of the journal Social Psychology. But the jury certainly seems to be out on whether that approach should be the standard. See a detailed discussion here. For this special issue, replication plans were externally peer-reviewed, including by the original authors, and replication researchers were then required to register their studies. After this planning stage, however, the original authors could not review the replication results until they were published in the journal, and thus were not given an opportunity to reply to the findings in the same issue (see Nosek and Lakens for more information).
Certainly, our policy of giving original authors the opportunity to publish a simultaneous response is not standard. However, we believe it is important. Hicks, Kremer, and Miguel have taken that opportunity, as have all the other original authors to date.
Another unusual element of our peer-review process is the review of the pure replication results by the original authors. Certainly, academic journals do not send out half-finished articles for peer review to make sure there are no big mistakes so far. We feel, however, that original author review at this stage is very important. In theory, the pure replication is where any “errors” in the original calculations are uncovered. That makes it the most sensitive part of the study for the original authors. If the replication researchers themselves are making mistakes at this stage, it is bad for everyone. It is encouraging that the replication researchers we have funded have all been eager to share their work with the original authors at this stage of their studies.
Our policies and processes are not written in stone. We plan to review them and make any adjustments once we have experience from the first several replication studies, including all those from replication window 1 and the current in-house studies. We hope to host a consultation event, including both replication researchers and original authors, where replication methods and processes can be explored. We want to make any changes based on several observations, not just the first few.
Finally, Hicks et al. note that we requested they limit their discussion in the original author response to comments about the replication study. It seems natural that a response to a study would focus on the research at hand. But more to the point, we had already agreed to fund the original authors to write an experience-sharing paper, in which they can discuss more generally their experiences as original authors in our programme. That paper will be posted on our website, subject to the review process for 3ie’s regular working paper series. We felt it appropriate for the response to focus on the replication study, and for the paper about the replication programme, which we are funding them to write, to focus on the programme.