Yearly Archives: 2012

Using impact evaluation to improve policies and programmes


Conditional cash transfers increase school enrolments and use of health facilities. Community-level water supply does not have health benefits. There is emerging evidence that community-driven development programmes do not increase social cohesion.

These statements can be made with confidence because of the considerable body of evidence from impact evaluations undertaken to answer the question of what works in development. And 3ie is adding to this body of evidence as more completed studies become available.

Knowing which interventions don’t work can save scarce development resources. And knowing which work best, and most cost effectively, can make sure these resources are better spent. But impact evaluations can do a lot more. They can help inform programme design, in two ways: (i) through multiple treatment arm designs, and (ii) by adopting a theory-based evaluation design. (I will discuss theory-based designs in a subsequent blog.)

Studies with multiple treatment arms examine the impact of different programme designs. One treatment arm gets one design, say supplementary feeding to tackle child malnutrition, while a second arm gets another, say nutritional counselling. Having two treatment arms lets us compare which of the two treatments is more effective. If we also have a no-treatment control arm, we can measure the absolute impact and cost effectiveness of each treatment.
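
To make the mechanics concrete, here is a minimal sketch of how such a three-arm comparison might be analysed. The outcomes, effect sizes and sample size are all invented for illustration; this is not an analysis of any actual trial.

```python
# Hypothetical analysis of a three-arm trial: supplementary feeding,
# nutritional counselling, and a no-treatment control group.
# All numbers below are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # children per arm

# Simulated change in weight-for-age z-score over the study period
control = rng.normal(0.00, 1.0, n)
feeding = rng.normal(0.25, 1.0, n)      # assumed true effect of feeding
counselling = rng.normal(0.15, 1.0, n)  # assumed true effect of counselling

# Absolute impact of each treatment relative to the control arm
for name, arm in [("feeding", feeding), ("counselling", counselling)]:
    diff = arm.mean() - control.mean()
    t, p = stats.ttest_ind(arm, control)
    print(f"{name}: impact = {diff:.2f} z-scores (p = {p:.3f})")

# Relative comparison: which of the two treatments is more effective?
t, p = stats.ttest_ind(feeding, counselling)
print(f"feeding vs counselling: difference = "
      f"{feeding.mean() - counselling.mean():.2f} (p = {p:.3f})")
```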

Multiple treatment arms address intervention design questions of interest to policymakers. Do conditions make a difference for conditional cash transfers, and for what? (some evidence related to education says yes). Does it matter when and how often the transfer is paid? (there is evidence that a large transfer just before school fees are due has a larger impact on enrolments). Does it matter who receives the transfer? (there is substantial evidence that women are more likely to use income for their children’s welfare than are men). What sort of administrative arrangements work best? (bureaucratic procedures, including ‘entering offices’, can deter ordinary people). Should payment be in cash or kind?

Take the example of computer assisted learning, which has been shown to have a substantial, cost-effective impact on learning outcomes, notably at the basic (primary or elementary) level. But which design is most cost effective? How many computers are needed for a class of 30 children? It is plausible that the learning effects are greater with two children per computer than with one. The learning effects may be lower with three, but the cost effectiveness still higher.

Multiple treatment arm studies can test the cost effectiveness of different student-to-computer ratios. And what sort of technical support is needed for teachers? Is it sufficient that they know the basics of how to operate the computer, if that, or do they need intensive training to understand the learning objectives of the software?
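
As a purely illustrative back-of-the-envelope calculation, the comparison across ratios might look like the sketch below. The per-computer cost and learning gains are invented, not drawn from any study; only the arithmetic is the point.

```python
# Back-of-the-envelope cost effectiveness of different student-to-computer
# ratios. All costs and effect sizes are invented purely for illustration.
computer_cost = 300.0  # hypothetical cost per computer (USD)
class_size = 30

# Hypothetical learning gains (in standard deviations) per student
scenarios = {
    1: 0.20,  # one student per computer
    2: 0.22,  # two students per computer: assumed peer-learning boost
    3: 0.18,  # three students per computer: assumed lower gain
}

for ratio, effect in scenarios.items():
    n_computers = -(-class_size // ratio)  # ceiling division
    cost_per_student = computer_cost * n_computers / class_size
    print(f"{ratio} per computer: effect {effect:.2f} SD, "
          f"cost ${cost_per_student:.0f} per student, "
          f"{effect / cost_per_student:.4f} SD per dollar")
```

Under these made-up numbers the three-to-one ratio delivers the smallest learning gain but the largest gain per dollar, which is exactly the trade-off a multiple treatment arm study can test with real data.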

Well-designed impact evaluations won’t just tell us whether computer assisted learning programmes work (we know they do, provided they come with appropriate software and the school infrastructure is sufficient to support them), but also how to make them work better.

It is often argued that there are complementarities between development interventions; for example, extension services are only effective when combined with input subsidies. A special case of multiple treatment arms, the factorial design, is well suited to testing such complementarities. It is a powerful design which is particularly under-used.

Factorial designs explore the impact of interventions A, B and C=A+B, preferably with a ‘no treatment’ comparison group though there may be practical, political or ethical objections to that. For example, A could be improved water supply, B hygiene education and C is the two together. Or A is microcredit, B business support services and C the two combined.

Factorial designs can test such complementarities, but only if the study has sufficient statistical power. Because of the large number of possible combinations, statistical power often becomes a constraint: each combination requires a group of participants large enough to detect a statistically significant response. So researchers need to build the analysis of complementarities into the design from the start.
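
A rough, simulation-based way to see why power bites is sketched below. The effect sizes, the size of the interaction and the sample sizes are all assumptions chosen purely for illustration.

```python
# Simulation-based power check for a 2x2 factorial design
# (e.g. A = water supply, B = hygiene education, A+B = both together).
# All effect sizes and sample sizes are assumptions, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_for_interaction(n_per_cell, effect_a=0.2, effect_b=0.2,
                          interaction=0.15, sims=2000, alpha=0.05):
    """Share of simulated trials in which the A*B interaction is detected."""
    hits = 0
    for _ in range(sims):
        # Outcomes in the four cells: control, A only, B only, A and B
        y00 = rng.normal(0, 1, n_per_cell)
        y10 = rng.normal(effect_a, 1, n_per_cell)
        y01 = rng.normal(effect_b, 1, n_per_cell)
        y11 = rng.normal(effect_a + effect_b + interaction, 1, n_per_cell)
        # Interaction estimate (difference in differences) and its standard error
        est = (y11.mean() - y01.mean()) - (y10.mean() - y00.mean())
        se = np.sqrt(sum(y.var(ddof=1) / n_per_cell for y in (y00, y10, y01, y11)))
        p_value = 2 * (1 - stats.norm.cdf(abs(est / se)))
        hits += p_value < alpha
    return hits / sims

for n in (250, 500, 1000):
    print(f"n per cell = {n}: power for the interaction ≈ {power_for_interaction(n):.2f}")
```

With these assumed numbers, even 1,000 participants per cell gives well under 80 per cent power to detect the interaction, which is why complementarities have to be planned for, and budgeted for, from the start.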

Multiple treatment arm studies apply experimental or quasi-experimental designs to conduct a counterfactual assessment of the (cost) effectiveness of variations in intervention design to inform better designs. The design variations being tested should be the ones of interest to policymakers, ones they will implement if they are proven to be effective.

Further Reading

On the effectiveness of CCTs: Conditional cash transfers and health: unpacking the causal chain, Marie M. Gaarder, Amanda Glassman and Jessica E. Todd, Journal of Development Effectiveness, vol 2, issue 1, 2010

On the ineffectiveness of community-level water supply: Water, sanitation and hygiene interventions to combat childhood diarrhoea in developing countries, Hugh Waddington, Birte Snilstveit, Howard White and Lorna Fewtrell

On the ineffectiveness of community-driven development:
Interventions to promote social cohesion in Sub-Saharan Africa, Elisabeth King, Cyrus Samii and Birte Snilstveit

The GoBifo Project Evaluation Report: Assessing the Impacts of Community Driven Development in Sierra Leone, Katherine Casey, Rachel Glennerster, Edward Miguel

Effects of a Community Driven Reconstruction Program in Eastern Democratic Republic of Congo, Macartan Humphreys, Raul Sanchez de la Sierra, Peter van der Windt

For examples of research on computer assisted learning:

Remedying Education: Evidence from Two Randomized Experiments in India, Abhijit Banerjee, Esther Duflo, Shawn Cole, Leigh Linden

On impact evaluation design:

Theory-based impact evaluation: principles and practice, Howard White, 3ie Working Paper 3

An introduction to the use of randomized control trials to evaluate development interventions, Howard White, 3ie Working Paper 9

Achieving high-quality impact evaluation design through mixed methods: the case of infrastructure, Howard White, Journal of Development Effectiveness, vol 3, issue 1

How useful are systematic reviews in international development?


This thought-provoking question was the highlight of the opening plenary of the Dhaka Colloquium of Systematic Reviews in International Development.

Systematic reviews summarise all the evidence on a particular intervention or programme and were first developed in the health sector. Health reviews have a specific audience of doctors, nurses and other health practitioners, and that audience can easily find the reviews.

But there seems to be a big difference in the accessibility of evidence between the health and development sectors. Systematic reviews in international development are targeted at policymakers as well as at other researchers. However, policymakers are a diverse group and do not routinely look for evidence when making decisions. And even if policymakers attempted to read systematic reviews, they might well see them as technical documents that do not apply to their particular context.

In highlighting some of the challenges in using systematic reviews in international development, the two plenary speakers, David Myers (President and CEO, American Institutes for Research) and Julia Littell (Bryn Mawr College), offered some useful tips for researchers.

An area where we can learn from the health sector is how to develop a well-defined systematic review question. Currently our reviews are too broad, our scope is too ambitious, and often we do not really address the concerns of policymakers and practitioners. If we only ask whether an intervention works, we will inevitably conclude that everything works sometimes and not at other times. We therefore need to look beyond average impact and ask for whom the intervention works, when and in what context.

We need not only to ask the right question but also to answer it in a way that makes sense to those who need to know. “Policymakers do not care about effect sizes,” David Myers said. They want to know, for instance, whether the education intervention they implemented is keeping girls in school and how many more girls it can keep from dropping out.
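
As a purely illustrative piece of arithmetic (the enrolment figure, baseline dropout rate and effect estimate are all invented), translating an abstract effect estimate into that kind of answer might look like this:

```python
# Translating a relative effect estimate into a concrete number of girls.
# The enrolment figure, dropout rate and risk ratio are invented for illustration.
girls_enrolled = 10_000
baseline_dropout_rate = 0.12  # assumed share dropping out without the programme
risk_ratio = 0.75             # assumed effect: dropout reduced by a quarter

dropouts_without = girls_enrolled * baseline_dropout_rate
dropouts_with = dropouts_without * risk_ratio
print(f"Girls kept in school: {dropouts_without - dropouts_with:.0f}")  # 300
```

The point is not the numbers, which are made up, but the translation from a relative effect into an absolute figure that a programme manager can act on.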

We need to put our efforts into translating research findings into plain language and to ensure our messages are short and clear, but also accurate. At the same time, we as researchers need to manage expectations and educate our audience. Findings of large-scale studies are rarely definitive. To reach our audience, we need to help them become comfortable with less-than-full answers that make incremental progress towards alleviating problems. Are we asking for too much? Julia Littell believes we are better served by knowing a lot about a little than a little about a lot. We need to be realistic: there are high expectations and many issues that need to be explored and addressed, but limited resources with which to do so.

Making research relevant to the end user is difficult. We grapple with issues of the generalisability and applicability of research to different contexts, made all the more complicated by the fact that systematic reviews pool evidence from a range of different settings and contexts. The jury is still out on how we can overcome this challenge. The panel made the useful suggestion of building capacity not just for conducting systematic reviews but also for using them. Systematic reviews and their findings need to be interpreted by those who really understand the context on the ground.

Finally, the accessibility of evidence remains a significant challenge in international development. Hundreds of NGOs are conducting some kind of evaluation, but we do not know where this evidence is stored or how it can be accessed. 3ie has been working to overcome this challenge by creating a database of impact evaluations, which currently has around 750 records. We will also soon launch a registry of impact evaluations in international development where researchers can register their ongoing evaluations. We now need to go a step further and make sure that researchers and institutions in low- and middle-income countries have access to these evidence libraries.

Special feature for World AIDS Day 2012

There has been only a small decline in the prevalence of HIV in the last decade, dropping from 5.9 percent to 5 percent between 2001 and 2009 for those aged 15-49 (UNAIDS, 2010). This decrease, whilst important, does not seem impressive when set against the more than US$5 billion spent fighting AIDS in low- and middle-income countries each year (the latest available figure is US$5.1 billion in 2008).

There is a wide variety of HIV prevention interventions including behaviour change communication, biomedical interventions like male circumcision and treatment of sexually transmitted infections, expansion of access to antiretroviral therapy and enhanced prevention of mother-to-child transmission services. Do we know if these interventions work or not?

Behaviour change communication in particular has been considered extremely important given AIDS is a disease that is characterised by ignorance and stigma. Communication therefore seems to have a crucial role in informing, equipping and motivating people to make informed choices about prevention and care.

But the evidence on the effectiveness of behaviour change communication is not good. Systematic reviews, which summarise the available evidence from rigorous impact evaluations, show that only a minority of programmes have worked. In one review, only two of nine studies on behavioural interventions showed significant protective effects on HIV incidence among women (McCoy et al., 2009).

What can explain the lack of effectiveness of behaviour change communication? And what are the implications of this result for our thinking about effective HIV prevention interventions?

Cultural norms and poverty act as barriers to the adoption of safe sex behaviour. “Sugar daddies” in sub-Saharan Africa illustrate both barriers: a tradition of sexual reciprocity in which young girls have sex with older men in exchange for money and gifts. Young girls in these relationships have little scope to negotiate safe sex.

Can structural interventions addressing poverty be a viable approach for HIV prevention? Two recent randomised controlled trials of conditional cash transfers show a significant decline in four sexually transmitted infections (de Walque et al. 2012), and a reduction in HIV infections among adolescent school girls (Baird et al. 2012).

But gender is an important factor in determining impact. Financial rewards can have a negative impact on men: in rural Malawi, conditional cash transfers offered to men led them to engage in riskier sexual behaviour (Kohler and Thornton 2011). But when conditional cash transfers were combined with individual and group counselling in Tanzania, the incidence of curable sexually transmitted infections fell among both young men and women (de Walque et al. 2010).

This initial evidence suggests that structural interventions like conditional cash transfer programmes should be tried and rigorously evaluated to assess if they are viable complements to biomedical interventions. Given that behaviour that is rooted in culture may be harder to change, it is imperative to find interventions that work better.

Field notes on implementing impact evaluations


3ie is currently funding 100 impact evaluations in low and middle-income countries spread across Africa, Asia and Latin America. We are now in a unique position to learn a lot about what’s working well in designing and conducting impact evaluations and what can be done better to ensure that research produces reliable and actionable findings.

But as grant makers we usually ‘see and experience’ our projects only on paper. We miss out on the voices and perspectives of field workers, project staff and junior research staff. To get a sense of what has been happening on the ground, we recently carried out a field monitoring visit to four 3ie-supported projects in diverse sectors in one African country. Most of the visits were to field sites, and the meetings were mainly with implementing agency staff.

And we did learn a lot through this field trip, particularly about the relationship between implementing agencies and researchers, challenges involved in implementing a project and an impact evaluation, and the work being carried out to engage stakeholders and disseminate research findings. Some of the lessons we learned raise further questions.

Local researchers listed as ‘Principal Investigators’ on the 3ie grant application had little engagement in the impact evaluation.
This finding was true for all the projects visited. At 3ie, research teams that include developing country researchers receive higher scores in their grant applications. Not surprisingly then, many of the grant applications we receive usually have researchers from the country of the evaluation listed as Principal Investigators. But on the ground it was a different story.

Local ‘Principal Investigators’ may have been involved in determining the main evaluation questions and giving inputs on the context. However, their involvement was certainly not substantial. In one case, the local ‘Principal Investigator’ was significantly involved in the implementation of the intervention but not the impact evaluation.

So why were the local researchers not involved? Were their names hurriedly added just to meet 3ie’s requirements? Did the lead Principal Investigators think the local researchers lacked the capacity to contribute to the impact evaluation? Funders of impact evaluations like 3ie need to address these questions so that they can adjust their own requirements for grantees. We need to reflect more on the ways in which lead Principal Investigators can involve local researchers and build their capacity to conduct impact evaluations.

Researchers need to work with implementing agencies/governments to address challenges related to the implementation of a Randomised Controlled Trial.
Implementing a Randomised Controlled Trial was quite challenging for one implementing agency. During implementation, some participants assigned to the control group felt that they were being ‘discriminated’ against. In one instance, the discontent among participants led to clashes with project staff. Overall, the implementing agency staff felt that they had to compromise their own integrity to safeguard the integrity of the research.

So what are the ways researchers can assuage the fears and concerns of both beneficiaries and implementing agency staff?

Getting the buy-in and involvement of implementing agencies is important for generating actionable evidence from an impact evaluation.

One of the projects visited was a striking illustration of how a disconnect between the researchers and the implementing NGO may well have been the main reason the impact evaluation findings were never taken up. The impact evaluation did not have a clear theory of change, and the intervention being evaluated was poorly designed. What makes it worse is that the evaluation did not pick up on the fact that the intervention was unsuccessful. The NGO has since changed track and moved on to a new programme. The end result: an impact evaluation with no actionable or credible evidence.

If many implementing agencies and stakeholders are involved in an impact evaluation, getting them to agree on all aspects of the project can delay it.
Getting a project off the ground can be a serious challenge if it requires a considerable amount of time and diplomatic effort to get stakeholders to agree on the design of the intervention. In one case, the delay in implementation reduced the duration of the programme, which has implications for the findings of the impact evaluation.

Delays in getting an impact evaluation article published can be an obstacle to using evidence

The long wait (as long as a year or more) to get published in academic journals can be an impediment for implementing agencies since there is an embargo on releasing the findings. Implementing agencies want to cut to the chase. They want to go all out, discuss, disseminate and use the evidence from an impact evaluation.

And finally, some implementing agencies think that by conducting an impact evaluation, they will appear as accountable and credible organisations.
An impact evaluation is seen as a gateway to more project funding. While this is not necessarily bad news, the benefit of conducting an impact evaluation should ideally also extend to the production of evidence that is used for designing more effective policies and programmes.

Impact of daycare interventions in Latin America

Urbanisation and increased female labour market participation have led to increased demand for daycare services, which in developing countries is partly met by government daycare programmes. Some of these programmes offer subsidised community daycare services, in which women from the community provide full-time childcare in their homes, along with food and some recreational or educational activities for the children. Other programmes offer public preschool education to children between 3 and 5 years of age. But do these daycare interventions benefit children’s development?

Impact evaluations of these programmes have assessed their effectiveness by comparing the wellbeing of children cared for at daycare (or preschool) with that of children cared for at home. To synthesise the evidence, researchers at the National Institute of Public Health in Mexico (myself and Jef Leroy, currently at the International Food Policy Research Institute) and the Center for Research and Teaching in Economics (Maite Guijaro) undertook a systematic review. The study, The impact of daycare programmes on child health, nutrition and development in developing countries: a systematic review, examined the effects of daycare interventions (formal out-of-home care) on the health, nutrition and development of children under five years of age in low- and middle-income countries.

The systematic review identified 13,190 studies, but only six, based in Latin America, met the inclusion criteria in terms of scope, type and quality. Four studies evaluated community-based interventions and two looked at preschool interventions.

The findings showed that attending daycare had positive effects on children’s language skills and social and emotional development in the short run. In the medium term, there were positive effects on school attendance, student behaviour and test scores. The effects were more pronounced the longer children were exposed to the programme. For example, the Bolivia daycare programme had a positive effect (a 2-11 percent increase) on gross and fine motor, language and psycho-social skills for children with more than seven months of exposure to the programme. On medium-term outcomes, the Argentina study found that one year of preschool increased mathematics and Spanish test scores in the third grade of primary school by eight percent. In Uruguay, children who had attended at least one year of preschool had completed nearly one additional year of schooling by the age of 15.

On child health outcomes, only one study, from Colombia, evaluated the impact on the prevalence of diarrhoea and acute respiratory infections. Although this study found reductions in the prevalence of both diseases with longer exposure to the programme, it is not clear whether the results reflect a true health effect of the programme or whether the comparison group of children, with less than one month of exposure, might have suffered a steep increase in infections right after joining a daycare centre.

However, no conclusions could be drawn with respect to the nutrition outcomes. One study from Guatemala analysed child dietary intake and found positive impacts, a study from Bolivia found no impact on child growth, and two additional studies from Colombia found inconsistent results on child anthropometrics, such as height and weight.

Finally, the reviewed studies did not provide a good description of the type and quality of care children receive in the absence of the programme. This represents an important limitation of the reviewed studies since the potential impact a daycare programme might have is determined by the “net” treatment, which is the difference in the type and quality of care between daycare interventions and the alternative forms of child care in the absence of the programme. For instance, a positive “net” treatment effect can be expected if daycare interventions provide a high quality childcare alternative to mothers who take care of their children while working. However, a negative “net” treatment effect could be anticipated if children who receive adequate family care are enrolled into a low-quality daycare programme.

Policy implications

The evidence shows that daycare interventions in Latin America, whether community-based or school-based, have had a positive impact on child development. However, there is not enough evidence to conclude that these programmes have improved child health and nutrition. Based on this information, should policymakers decide not to implement daycare interventions until there is conclusive evidence about their impacts?

Considering the proven impact daycare interventions can have on improving child development in the short and medium term and the increasing demand for out-of-home care, these programmes should be implemented if they provide a high quality alternative to the care children normally receive.

However, it is crucial that new programmes are evaluated and closely monitored, not only to add to the very limited knowledge base of programme effectiveness and pathways of impact, but also to guarantee that unintended negative effects are identified and corrected.

(Paola Gadsden is the Coordinator of Analysis and Evaluation of Public Policies for the State of Morelos, Mexico)