
What’s wrong with evidence-informed development? Part 2

3ie's recent systematic review of farmer field schools (FFS) found that these programmes worked as pilots and small-scale programmes. But the few impact evaluations of national-level programmes found no impact. The evidence suggested that problems in recruiting and training appropriate facilitators impeded the scale-up of the experiential learning model of farmer field schools. It is hard to find people who can facilitate learning rather than just lecture in a top-down style.

Evidence-informed development is in trouble, then, if evidence that programmes work as pilots cannot be relied upon as a basis for taking those programmes to scale. How large a problem might this be?

Scalability can be threatened if an impact evaluation has weak external validity. This means that the impact shown with a particular intervention design for a particular group in a specific context may not be achieved at another time, in another place, with a different group, or with changes in how the programme is designed or implemented. Weak external validity may manifest itself in at least four ways.

1. Weaker implementation at scale

Great care is often taken with the implementation of small-scale pilots. The researchers themselves, or their graduate students, directly oversee implementation or even do it themselves. That won't be the case once the programme is taken to scale by local staff. Scaling up training has its own threats, as we saw with the FFS model. Local staff often need more training and mentoring, but training efforts do not reflect these needs. They may well be implementing the programme without access to required project inputs. And perhaps the project is yet another job for them on top of existing responsibilities, with weak incentives to pay attention to project implementation.

2. Weak fidelity to programme design

The programme that is taken to scale may not be the same as the pilot. Crucial programme components may be dropped for budgetary reasons, or there may simply be a lack of attention to detail.

3. Pilot in a specific context

The pilot may have taken place in a very specific context. For example, it may have taken place in a region growing a particular crop that is not grown elsewhere. The impact may be high, but it cannot be replicated elsewhere. Project success may also require access to water, electricity, markets or any number of other things, and the project will not work where these are unavailable.

4. The programme has already reached all those who can benefit

If the programme being evaluated has already been rolled out and participation in the first phase was through self-selection, then attempts to expand coverage in the next phase will include groups for whom the intervention is less attractive. Those who join in the next phase are likely to be people who expect to benefit less from the programme.

What is to be done? One clear implication is the need to evaluate programmes at scale. Small-scale pilots can be considered efficacy trials: does the intervention work under ideal conditions? Evaluations at scale are effectiveness studies: does it work under actual field conditions? Research teams have to pay more attention to possible threats to the external validity of their impact evaluations, and be honest about them in their policy recommendations. Finally, policymakers have to read the fine print about the programme design and implementation of pilots going to scale.

Evidence-informed development can lead to better lives. But we need to get it right.

Evidence gap maps: an innovative tool for seeing what we know and don’t know

Whether you are a research funder, decision maker or researcher, keeping up with the ever-expanding evidence base is not easy. Over 2,600 impact evaluations and 300 systematic reviews assessing the effects of international development interventions have been completed or are ongoing, helping us understand what works, how, why and at what cost. Despite this increase in quality evidence, more is still needed, which is why funders and researchers continue to fund and produce new research.

What reliable tools do we have to help us know what we know and don't know? How can decision makers get a quick overview of the existing research evidence when it is scattered across different databases, journals, websites and the grey literature? And without such an overview, how can commissioners of research ensure that limited resources are spent efficiently, prioritising the production of research that addresses important evidence gaps?

The International Initiative for Impact Evaluation (3ie) is addressing these key issues by introducing today its new, innovative and interactive web-based platform for 3ie evidence gap maps (EGMs). Our EGMs consolidate what we know about what works in particular development sectors or thematic areas. They provide a graphical display of the evidence from systematic reviews and impact evaluations in that sector or area.

In an easy-to-use way, the maps highlight areas with strong, weak or non-existent evidence on the effects of the development programmes covered by a given EGM. Users can quickly explore an EGM and follow links to easy-to-read summaries of all included studies. Through this interactive platform, we present research in a format that is useful and accessible for a range of audiences: policymakers, practitioners and researchers.
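Conceptually, an EGM is a matrix of interventions against outcomes, where each cell collects the studies assessing that combination and empty cells mark the evidence gaps. The sketch below is purely illustrative: the study records, interventions and outcomes are hypothetical stand-ins, not data from an actual 3ie map, and the real platform codes studies far more richly.

```python
from collections import defaultdict

# Hypothetical study records: (title, intervention, outcome, evidence type).
# A real EGM is built from systematically searched and coded studies.
studies = [
    ("Study A", "handwashing promotion", "diarrhoea", "systematic review"),
    ("Study B", "handwashing promotion", "time use", "impact evaluation"),
    ("Study C", "latrine construction", "diarrhoea", "impact evaluation"),
]

interventions = ["handwashing promotion", "latrine construction"]
outcomes = ["diarrhoea", "time use", "economic outcomes"]

# Build the grid: each cell holds the studies for one
# intervention-outcome combination.
grid = defaultdict(list)
for title, intervention, outcome, ev_type in studies:
    grid[(intervention, outcome)].append((title, ev_type))

# Print the map; cells with no studies are the evidence gaps.
for intervention in interventions:
    for outcome in outcomes:
        cell = grid[(intervention, outcome)]
        label = ", ".join(title for title, _ in cell) if cell else "GAP"
        print(f"{intervention:22} x {outcome:17}: {label}")
```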

Funding research that matters

Why do we need EGMs?

EGMs can inform strategic research agendas and ensure that money is spent on studies that really matter. EGMs highlight gaps in the existing evidence base on the effects of development programmes. Identifying these gaps is of particular relevance for funders of impact evaluations who want to target their funding towards important areas where there is no research evidence. By also highlighting areas where we have a lot of evidence, EGMs can help reduce duplication of efforts.

For instance, our EGM on water, sanitation and hygiene interventions finds that few systematic reviews assess effects on outcomes other than diarrhoea, such as time use, safety (particularly for women and girls) and economic outcomes. The map also clearly highlights that there is limited prospective impact evaluation evidence from Sub-Saharan Africa, in particular studies that assess sanitation and hygiene programmes at scale.

The World Bank’s Independent Evaluation Group and 3ie are now building on the work done on the EGM by producing briefs based on the identified evidence gaps. These briefs will inform the work of the World Bank’s newly formed Water Global Practice in making decisions on where evidence is most needed for improving lives. The EGM on water, sanitation and hygiene interventions is also being used to inform 3ie’s Sanitation and Hygiene grant programme.

In areas such as climate change adaptation or humanitarian interventions, intervention typologies are not well defined and questions are broad. In such cases, EGMs are a useful first step for taking stock and building the evidence base.

Identifying trends and holes in research

EGMs quickly show you the broad trends in a particular area of research and throw up shortcomings. The 3ie EGM on productive safety nets maps the evidence on the effects of these interventions on poverty. The map shows that these interventions focus on poverty reduction as an outcome, yet few existing studies actually measure poverty, and those that do often fail to define it appropriately. By highlighting such trends in evidence production, EGMs identify areas where future studies can add most value.

Primary studies are often conducted without sufficient attention to existing research, which means their results cannot be generalised beyond the study context.

This inattention presents a barrier to evidence synthesis and our ability to reach generalisable conclusions. Because EGMs include overviews of included interventions and outcomes, they can make it easier for researchers to review existing research when designing their studies and selecting outcomes. More robust studies with standardised outcome measures will improve the potential for future evidence synthesis.

Supporting evidence-informed policymaking

Policy processes often move quickly and decision makers do not always have time to wait for new impact evaluations and systematic reviews to be completed. EGMs, by contrast, can be produced relatively quickly and can therefore ensure that the best existing evidence is available when policymakers need it.

EGMs can reduce wasted opportunities to improve outcomes by making existing evidence available in an accessible and unbiased way. Since EGMs also highlight the quality of evidence, policymakers can make an informed judgement about what is credible and rigorous research.

An example of this is 3ie’s EGM of systematic reviews of education programmes. It covers a huge landscape of interventions in the primary and secondary education sector and provides users with easy access to a collection of 21 systematic reviews that assess the effects of different education interventions, including school feeding, cash transfers, teacher incentives and deworming.

A new brick in the evidence architecture

Are EGMs an alternative to new primary studies or systematic reviews? The answer is no. If decision makers want to know the effects of a relatively well-defined intervention or a limited set of interventions, a formal evidence synthesis based on a systematic review is the most appropriate tool.

But if there are no impact evaluations in a development area, there can be no policy findings from systematic reviews. And if there are no impact evaluations or systematic reviews, then the EGM will also be empty.

Our vision is to develop collections of evidence based on the EGM approach across a broad range of sectors and sub-sectors of relevance to low- and middle-income countries. 3ie is currently working on a range of EGMs, including on climate change adaptation, peace and state building and immunisation.

We hope you will join us in developing the evidence architecture by engaging with us on EGMs.