Monthly Archives: January 2015

What’s wrong with evidence-informed development? Part 1

On my reading list as an undergraduate in development studies was Peter Laslett’s The World We Have Lost. This is a social history that challenges the view that pre-industrial England was a stagnant society; rather, it had many of the features of industrial or even modern Britain. Another influential work was Anthony Brewer’s Marxist Theories of Imperialism, which was important partly because it brought the mainly French debate on the articulation of modes of production to the attention of English-speaking scholars. For example, the way in which capitalism works in Asia differs from how it works in Europe, as it is embedded in (articulates with) the pre-existing mode of production. What these two apparently disparate works have in common is the theme that continuity is the norm, change is incremental, and big bangs (major change events) are rare.

The role of continuity and incremental change was central to my own work on structural adjustment in the 1990s, for example in Programme Aid and Development, which I co-authored with Geske Dijkstra. Whilst policy change can be affected by external factors, it is largely endogenous, driven by domestic political agendas. The argument that the World Bank and International Monetary Fund successfully imposed policies on unwilling governments was always overstated. Either governments were led by believers in the need for reform, such as President Rawlings in Ghana, or governments agreed to policies they then didn’t implement, as Kenya did for many years.

How does all this matter for evidence-informed development? It means that ideas that come from outside a country are unlikely to see wide-scale adoption. Since these ideas haven’t grown out of domestic policy debates, they lack ownership by local stakeholders and so have no local champions. Aid donors are much less important than they like to think they are (including 3ie). Many of the clever ideas being dreamed up as nudges will fall into this category. They may change incentives or behaviour, and the evidence may show they work. But none of that matters, as they won’t be taken up by governments. Of course there are exceptions, but these are rarer than the more usual cases of no take-up or failed take-up.

Impact evaluations of existing interventions, especially those that governments are already undertaking, are far more likely to be policy relevant and get policy traction.  Hence the importance of 3ie’s Policy Window which evaluates programmes selected by the implementing agency, and the World Bank’s new i2i initiative which is following a similar approach.

This isn’t to say that there can’t be any policy innovation. Innovation can and does happen within a government. Innovation is even more likely with NGOs, which are more flexible. (Though many ideas are not new ideas at all, which was a theme in my recent lecture in London.) But for these innovations to be adopted by governments, researchers need to work on building relationships with policymakers. They usually need to get an insider position with policymakers. This is a process requiring more than a few weeks or even months. And then there is the challenge of taking policies to scale, but that is for my next blog.

How to peer review replication research

“The 3ie replication process differs in important ways from the standard research community-led peer-review process in academic journals. We have been explicitly instructed by 3ie staff not to discuss our experiences with the replication process at any length in this note, including our views on the weaknesses of their current system and the review standards they employ. We do have a number of observations based on our experience, as well as suggestions for how the process could be improved, and we look forward to sharing these insights with 3ie staff and with the broader research community in the future.”

- Original author response to 3ie Replication Paper 3, Part 2, pp. 3-4, written by Hicks, Kremer, and Miguel.

In the last few months, 3ie has launched the 3ie Replication Paper Series—a working paper series for replication studies of international development impact evaluations—and posted the first few completed studies from our Replication Programme. Not surprisingly, these studies have sparked debate, and some of that debate has focused on the 3ie Replication Programme itself. One big question is how replication studies should be peer reviewed. We want to take this opportunity to provide more information about 3ie’s current peer review process and about when and how it might be changed. At the end of our blog, we also clarify our instructions to Hicks, Kremer, and Miguel.

Here is how the process works for replication studies funded by 3ie. The process begins with the applications, which include proposed replication plans. These applications are scored by at least two internal reviewers and at least one (typically two) external reviewers. Those studies receiving funding are peer-reviewed at multiple stages by an internal reviewer (a 3ie staff member) and an external project advisor from the academic community. Replication researchers revise their replication plans based on comments from both peer reviewers, as well as on comments from all the application scorers. We post the finalized plans on the 3ie website.

Replication researchers then begin with the pure replication component of their study. Once that component is complete, that is, once they have completed the process of reproducing (or attempting to reproduce) the results from the original study as published, we require them to send the write-up and tables to the original authors and encourage them to carefully consider any feedback they receive from the original authors. After the replication researchers complete their draft final report, which includes both the pure replication and the measurement and estimation analysis and/or theory of change analysis, we send the report to be reviewed by the external project advisor and a single-blind referee from the academic community (double blinding is not possible as the replication plans are public). We also send it to all 3ie technical staff, with typically three or more providing comments.

After the replication researchers have revised their final report according to the comments of all those peer reviewers, we send it to the original authors.  We give them the option of submitting a response in time to be published online simultaneously with the paper.  If we don’t get the response in time, we publish it once we have it.  All of this information about our processes is, and has been, available elsewhere on our website.

Hicks and company suggest that this process is not the standard process for academic journals. We do not see that there is an applicable standard here. First, our series is a working paper series, and the peer review processes for those vary widely. Nevertheless, our intent in having both the external project advisor and a single-blind referee for the draft final report is to have peer review similar to a journal. In the case of the replication study responded to by Hicks et al. above, a prominent economist served as the external project advisor and a prominent epidemiologist served as the referee. Our referees follow the current standard in academic peer reviewing in that they do not audit the programming code for the studies they referee. Whether that is the right standard is a much bigger question, and not one for us alone.

Second, we do not see a standard approach for publishing replication research. Brian Nosek, Miguel’s fellow leader of the Berkeley Initiative for Transparency in the Social Sciences, piloted an approach very different to ours in a special replication issue of the journal Social Psychology. But the jury certainly seems to be out on whether that approach should be the standard. See a detailed discussion here. For this special issue, replication plans were externally peer-reviewed, including by the original authors, and then replication researchers were required to register their studies. After this planning stage, however, original authors were not able to review the replication results until they were published in the journal and thus were not given an opportunity to reply to the findings in the same issue of the journal (see Nosek and Lakens for more information).

Certainly, our policy of giving original authors the opportunity to simultaneously publish their response is not standard. However, we believe it is important. Hicks, Kremer, and Miguel have taken that opportunity, as have all the other original authors to date.

Another unusual element of our peer-review process is the review of the pure replication results by the original authors. Certainly academic journals do not send out half-finished articles for peer review to make sure there are no big mistakes so far. We feel, however, that original author review at this stage is very important. In theory, the pure replication is where any “errors” in the original calculations are uncovered. That makes it the most sensitive part of the study for the original authors. If the replication researchers themselves are making mistakes at this stage, it is bad for everyone. It is encouraging that the replication researchers we have funded have all been eager to share their work with the original authors at this stage of their studies.

Our policies and processes are not written in stone. We plan to review and make any adjustments once we have the experiences from the first several replication studies, including all those from replication window 1 and the current in-house studies. We hope to host a consultation event, including both replication researchers and original authors, where replication methods and processes can be explored. We want to make those changes based on several observations, not just the first few.

Finally, Hicks et al. note that we requested they limit their discussion in the original author response to comments about the replication study. It seems natural that a response to a study would focus on the research at hand. But more to the point, we had already agreed to fund the original authors to write an experience-sharing paper, where they can discuss more generally their experiences as original authors in our programme. This paper will be posted on our website, subject to the review process for 3ie’s regular working paper series. We felt it appropriate to have the response focus on the replication study, and the experience-sharing paper we are funding them to write focus on the replication programme.