Monthly Archives: December 2013

M&E: A tale of two brothers

Monitoring and Evaluation (M&E) are always mentioned together, but in practice the two disciplines largely evade each other. This is despite the fact that they could be highly beneficial to each other and, if carefully combined, to the intervention as well.

Howard White in his opening remarks at 3ie’s recent Measuring Results conference emphasised the need for the ‘harmonisation’ of M&E.  Evaluators, he argued, can use monitoring data to verify the theory of change of an intervention. An impact evaluation could do this by using monitoring data to identify internal implementation bottlenecks. But in reality, impact evaluators rarely go beyond using monitoring data on take-up or attendance.

So, how can we step up our monitoring game? Monitoring information systems could go beyond capturing information on standard outputs and collect data that tests whether the intervention is working.

Let’s look at an intervention aimed at teaching children in India about gender equity. There are a number of ways that monitoring and evaluation could complement each other here. The NGO already uses its monitoring system to capture children’s enrolment and attendance. But it could also capture relevant information on the field staff. The teachers could be tested for their familiarity with the training material. The monitoring system could also use vignettes to assess whether the attitude and behaviour of the teachers are in line with the prescribed code of conduct. (Most of the discussion on the importance of mixed methods has focused on impact evaluation, but monitoring could also benefit from a little more qualitative investigation, as Bamberger et al. claim.)

By expanding the set of questions that the monitoring system asks, the NGO could gather a great deal of relevant quantitative and qualitative information. This information would enable monitoring staff to identify the teachers who require extra training and the areas in which they require it. From an impact evaluator’s perspective, this information can also feed into the investigation of the causal chain of the intervention. And any impact evaluator would be happy to access data on possible heterogeneity across teachers.

The collaboration between monitoring and impact evaluation thus does not have to be a one-way street. An impact evaluation can also offer supplementary data that can be used to validate the monitoring information system’s data. But for this to happen, monitoring and impact evaluation need to use identical indicators.

Naturally, Howard White is not the only one who is convinced that an end to the brotherly quarrel will help advance both disciplines. But there are also several who point out the practical problems preventing this harmonisation between monitoring and impact evaluation. Duvendack and Pasanen argue that monitoring does not provide sufficient information on intermediary outcomes. They also say that the lack of collaboration between M&E can be blamed on the poor quality of monitoring data and the lack of insights that the data produce on causality.

I agree and disagree with them. Yes, monitoring does not always produce high quality data. But several implementing agencies are setting up reliable computer systems and overcoming this hurdle. This means that data is becoming available in an easily digestible format for impact evaluators. Although one should not overstate the contribution of digitization to data quality, it is still a big leap from storing files in boxes.

While it may be true that the lack of exogenous variation in monitoring data prevents us from making any causal statements, it can be argued that this is not part of monitoring’s mandate. We can, however, still use monitoring data to draw interesting conclusions. Another presenter at 3ie’s conference, Prof. Laxminarayan from the Public Health Foundation of India, gave an example that illustrates this. The study he presented featured a monitoring system for the storage of vaccines. Temperature loggers are placed in vaccine cold storage units to report on temperature fluctuations and send warning messages if the temperature rises above or drops below the range that is safe for the vaccines; excessive or insufficient cooling damages them.
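As a rough illustration of how such a threshold alert might work, here is a minimal sketch in Python. The temperature bounds, identifiers and function names are assumptions for illustration; the study does not specify the actual system’s configuration.

```python
from typing import Optional

# Assumed example bounds based on standard cold-chain guidance (roughly 2-8 degrees C);
# these are illustrative, not the thresholds used by the actual logger system.
SAFE_MIN_C = 2.0
SAFE_MAX_C = 8.0

def check_reading(storage_id: str, temp_c: float) -> Optional[str]:
    """Return a warning message if a reading falls outside the safe range."""
    if temp_c < SAFE_MIN_C:
        return f"{storage_id}: {temp_c:.1f} C is below the safe minimum (over-cooling)"
    if temp_c > SAFE_MAX_C:
        return f"{storage_id}: {temp_c:.1f} C is above the safe maximum (insufficient cooling)"
    return None  # within the safe range, no alert

# Hypothetical readings from one logger over a day
readings = [("fridge-07", t) for t in (4.5, 6.1, 9.3, 3.8, 1.2)]
for storage_id, temp in readings:
    warning = check_reading(storage_id, temp)
    if warning:
        print("SEND WARNING:", warning)  # e.g. an SMS to the facility manager
```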

The data produced by this system is highly reliable and the results of the study will be important. The loggers will reveal that the vaccines undergo large fluctuations in temperature – a fact that would otherwise have remained unrecognised.

So what is the way forward in bringing about harmony between M&E? In their working paper ‘It’s all about MeE’, Pritchett et al. suggest hammering out an operational plan that combines monitoring (M), rigorous impact evaluation (E) and experiential learning (e), and feeds the results back into the intervention design. It remains to be seen, and tested, whether this approach will be applied and whether it will succeed in bridging the gap between M&E.

There are also ways that 3ie can contribute: For example, we could require grant applicants to describe the monitoring data that is available to them for the intervention they want to assess. We could also ask how that data will be used and what additional questions the monitoring system will pursue in order to support and improve the evaluation. For all of this to happen, the study team and the intervention staff would need to sit down together and think up a strategy of how to integrate monitoring and rigorous impact evaluation.

Does development need a nudge, or a big push?

Sending people persuasive reminder letters to pay their taxes recovered £210 million of revenue for the UK government. Getting the long-term unemployed to write about their experiences increased their chances of getting a job. Placing healthy food choices, like fruit instead of chocolate, in obvious locations improves children’s eating habits. These are all success stories from the UK’s ‘nudge unit’, which were recently featured in the New York Times. Set up under David Cameron’s government, the ‘nudge unit’, whose proper name is the Behavioural Insights Team, draws on behavioural psychology to introduce small changes that prompt people to make the right choices.

I have nothing against this idea. Indeed, I strongly support the approach by which all ideas are rigorously evaluated. The fact that most nudges can be randomised at the individual level means that evaluations of these ideas are relatively quick and easy to do. But what is new about this approach? And what are its limitations?
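As an aside, here is a minimal sketch of what individual-level random assignment for a nudge trial could look like. The taxpayer IDs, seed and fifty-fifty split are hypothetical; this is not the Behavioural Insights Team’s actual procedure.

```python
import random

# Hypothetical list of taxpayers eligible for the reminder-letter trial.
taxpayer_ids = [f"TP{i:05d}" for i in range(1, 10001)]

rng = random.Random(2013)  # fixed seed so the assignment is reproducible

# Individual-level randomisation: each person has a 50% chance of receiving
# the persuasive reminder letter (treatment) rather than the standard one (control).
assignment = {
    tid: "treatment" if rng.random() < 0.5 else "control"
    for tid in taxpayer_ids
}

treated = sum(1 for arm in assignment.values() if arm == "treatment")
print(f"{treated} of {len(assignment)} taxpayers assigned to the new letter")
# Outcomes such as tax paid within 30 days can then be compared across the two arms.
```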

Ever since governments have existed, they have been seeking to change behaviour. All development policies, programmes and projects seek to change behaviour. If people’s welfare is to improve, then the beneficiaries of the welfare measure have to behave in a different way. Even if we give the poor cash, they have to go out and spend it in welfare-enhancing ways. If we give them food parcels, they have to consume the food.

Some interventions are more explicit about their intentions, behaviour change communication programmes being the most obvious. But providing people with information on how they should behave is just one way in which we seek to change behaviour. We also do it through prohibition and exhortation. We do it through taxes, subsidies and other financial incentives. We do it by changing the environment, for example, by building infrastructure. So behaviour change has always been at the heart of development. The failure to recognise this, and so to consider seriously whether people will indeed change their behaviour in the way we hope and expect, has been behind many development failures.

Nudges are different in their scale: small nudges are intended to bring about big changes. But can development be achieved by nudges? Doesn’t development require a big push rather than a nudge?

This isn’t really an ‘either/or’ choice. There is room for both. But we should realise that the major structural changes that development requires call for something bigger than a nudge. Writing about their experiences may help the long-term unemployed get into work, but it doesn’t create jobs. Only policy reforms and technological developments that enhance opportunities and productivity can do that.

And there is room to debate the right balance. A J-PAL study in rural Rajasthan, India, looked at the impact of offering non-financial incentives on the immunisation of children. The intervention involved offering parents a kilo of raw lentils for each immunisation, and a set of metal plates for completing the full course. The results of the study show that offering such modest incentives can significantly increase the uptake of immunisation services.

At a presentation of this study in Delhi, Esther Duflo was criticised for just giving away ‘a few plates to parents’ and ignoring the bigger problem of fixing the dysfunctional public health system. Her response was that such reform would take years, so what was wrong with giving away a few plates now to save some lives? There is nothing wrong with that. But the risk is that focusing on this approach alone will draw attention and resources away from fixing the underlying problem.

Nudges have their place. And that place is in identifying and piloting small-scale interventions which, if they prove cost-effective, can be institutionalised on a sustainable basis. It is simply a matter of better public administration. As for the insights from behavioural psychology, many of them simply say that economists’ assumptions about homo economicus are mostly wrong. I think most of us knew that already. If people don’t behave according to theory, you need to fix the theory, not the people. That’s a topic for another blog.

So, back to the question:  a nudge or a big push? The answer is both. Both have their place, and both need to be subject to rigorous impact evaluation.

Evidence Matters and so does blogging


Why does 3ie have a blog site? And why should 3ie staff spend their time writing blogs?

Although we are launching this newly redesigned blog site this week, 3ie staff have been blogging for some time now, covering topics as diverse as vampires in Africa, why the UK police are right not to investigate crimes and why systematic reviews are not sausages. Behind these provocative titles lie serious blogs that explore all that we need to do to move towards evidence-based policymaking.

Our role as a grant maker has given our staff a unique and well informed view on the best ways to evaluate the impact of development interventions, and to promote the use of evidence in policy and practice.

3ie is not just a grant-making institution. As a knowledge broker, we promote theory-based and policy-relevant impact evaluations and systematic reviews. Blogs are an increasingly important way for 3ie to communicate its messages more widely. Our methods blogs have covered the importance of mixed methods and participatory approaches, various perspectives on causal chain analysis (see here, here and here), and how to promote randomised controlled trials effectively.

As the producer of global public goods, such as the evidence database, registry and replication studies, 3ie has a lot to contribute to the international evidence infrastructure. Our emphasis on quality drives us to promote high standards in the conduct of research, including ethical research practice.

Given all the areas of work we are involved in, there is a lot that 3ie staff can and should say. 3ie has a small, dedicated staff that is fully committed to its vision and mission, in part because of the organisation’s participatory management style that promotes staff involvement in developing 3ie’s strategy and policies.

We are keen to promote conversations and debates with the research and policymaker communities, as well as other key policy influencers, about the production and use of evidence, and would very much like to know how 3ie can play a more supportive role.

Not least, as befits an evidence-based policymaking agency, we did check what evidence there is on the benefits of blogging. David McKenzie and Berk Özler use a range of rigorous methods to find that bloggers are better known than non-bloggers, the papers they blog about get more downloads and, most importantly for us, bloggers can help change opinions about the subject of their piece.

I look forward to the debates which 3ie will continue to bring to our growing and relevant field of evidence-based development and welcome your comments.

The HIV/AIDS treatment cascade

One of the reasons we appreciate international days is that they prompt us to pause and reflect on what we’ve been doing in the past year, as well as think about what the next year will bring. On this World AIDS Day, our first reflection is realising how much 3ie’s HIV/AIDS programming has grown in 2013.

This time last year, we were about to launch two new evidence programmes for HIV prevention. One funded six formative research studies on HIV self-testing; the other funded seven impact evaluations of innovations in demand creation for voluntary medical male circumcision (VMMC). Both of these programmes are now well underway.

This year we can again mark this important anniversary by announcing a new HIV/AIDS evidence programme. Our new programme is designed to produce evidence on effective interventions along the HIV/AIDS treatment cascade.  The thinking behind this new programme was summed up nicely by a speaker at 3ie’s recent Measuring Results conference in Delhi, India, who posited that we don’t need new ideas.

In line with her point, our new evidence programme is very different from 3ie’s self-testing and VMMC programmes. Self-tests are a new technology for HIV prevention and treatment, so interventions incorporating them involve innovation by design. Medical male circumcision is not a new technology, but VMMC, as an HIV prevention strategy, has hit a roadblock. High uptake in the beginning has given way to flagging demand in the long run. This means that reductions in HIV prevalence are not coming close to national targets, as 3ie’s scoping report discusses. We need innovation in this case to reinvigorate uptake of this quick, simple and cost-effective means of reducing HIV transmission.

But along the HIV/AIDS treatment cascade, many ideas are currently being implemented that have never been tested. The Bill & Melinda Gates Foundation has responded by awarding two grants that aim to identify, test, and catalogue the good ideas that we’ve already had for addressing the treatment cascade. One is a best practices research project being implemented by Pangaea Global AIDS Foundation. The other is 3ie’s new HIV/AIDS treatment cascade evidence programme.*

The World Health Organization depicts the treatment cascade and explains the challenges along it in its 2013 update. We are focusing on three of the main challenges it identifies: linkage to care, loss to follow-up and adherence. We want to find the best ways to get people treated, have them continue their treatment and make sure they take their treatment correctly. For each of those three challenges, the new 3ie evidence programme asks three questions: (1) what are we already doing; (2) of those interventions, which work best; and (3) for those that work best, how can we replicate them most effectively in other HIV/AIDS programmes?

To answer the first question, 3ie will conduct a variety of stocktaking activities, and we’ll be reaching out to many of you to help us. We need to disentangle the activities within the large HIV/AIDS projects in different countries to find the interventions targeted specifically at the three challenges. Once we’ve found them, we’ll award grants to conduct impact evaluations of those interventions so that we can learn which ones work best. By best we don’t just mean good results; we also mean cost-effectiveness. The studies we fund will need to measure both.

To turn that evidence into practice, we also need to understand how the good interventions are implemented and what implementation factors contribute to their success. Development researchers call this implementation science.  Evaluators call this process evaluation. Either way, the objective is making good ideas work, which is what our 3ie colleague, Heather Lanthorn, talks about in her recent blog.

The studies we fund under this newest 3ie evidence programme will also need to examine implementation. To assist our grantees, we will produce a concept paper in the next few months on integrating impact evaluation with implementation science (and we are looking for a good implementation science co-author in case any readers are interested).

We hope to involve many different stakeholders along the way so that the lessons from this programme (what to do and how to do it) can be turned into practice right away. Let us know if you’d like to be involved. If you will be at the International Conference on AIDS and STIs in Africa (ICASA) next week in Cape Town, please do stop by our reception and share your thoughts. (Email ndiaz@3ieimpact.org for an invitation.)

*We haven’t figured out a good name for the programme yet.  We welcome your suggestions for it, along with your comments.

 

Upcoming 3ie seminars on HIV/AIDS

2 December 2013, London: 3ie-LIDC seminar

What we know and don’t know about HIV/AIDS intervention effectiveness: A systematic review evidence gap map by Martina Vojtkova, Research Officer, 3ie Systematic Reviews Programme.

To view 3ie’s systematic review gap map of HIV/AIDS interventions, click here.

4 December 2013, London: 3ie-LIDC seminar

Male circumcision for HIV prevention: from evidence to action by Helen Weiss, Reader in Epidemiology and International Health and Head of the Tropical Epidemiology Group, London School of Hygiene & Tropical Medicine.