During a meeting at the Inter-American Development Bank (IADB) last week, I mentioned the UK Department for International Development’s moves toward recognising failure, and the role recognising failure plays in learning (see Duncan Green’s recent blog on this). Arturo Galindo, from IADB’s Office of Strategic Planning and Development Effectiveness, responded by picking up a copy of their latest Development Effectiveness Overview and opening it to show the word FAILURE emblazoned across the page in large yellow letters. The word failure appears 27 times in the report, compared to just 22 times for the word success. This doesn’t mean that IADB is in any way a failing institution. Quite the opposite. They are putting into action the same principle emphasised by Ian Goldman from South Africa’s Department of Performance Monitoring and Evaluation: incentives should penalise not failure itself, but the failure to learn from failure.
Impact evaluations have an important part to play in learning from failure. Eighty per cent of new businesses fail in their first five years, a pattern that holds broadly across the world. Do we really think that public sector and NGO programmes do any better? Failing development programmes survive because they don’t face the bottom line in the way unsuccessful businesses do. So how does one estimate the bottom line for development programmes? Traditional process evaluations are subject to various biases which make them less likely to point to the harsh reality of failure (as discussed in my 3ie working paper with Daniel Phillips). Impact evaluations are not subject to these biases. Impact evaluations are the bottom line for development programmes.
In the past, development agencies have shied away from acknowledging failure. They have cherry-picked projects and programmes to learn from best practice. In doing so, they ignore the fact that we should also learn from our mistakes. But now some agencies are doing impact evaluations on a serious scale. A few years ago, Oxfam GB instituted a new results system which includes 30 new impact evaluations a year on a random sample of their projects. Last year, close to 60 new IADB projects – that is, nearly half of their total projects – included impact evaluations. The systematic collection of results from these studies as they are completed will start to give a more accurate picture of each agency’s performance.
The intention to learn from failure signals an important step in the use of impact evaluation. Agencies have started producing impact studies, but have not really thought about how these fit into their overall learning and accountability frameworks. In an earlier blog I wrote about the unhappy marriage between results frameworks and impact evaluation. In fact, they have not even been dating. So systematic attempts to draw lessons from impact evaluations – as IADB does in the 2013 Development Effectiveness Overview – should be lauded. These attempts take us further down the road of improving lives through impact evaluation.