Governments want results. Taxpayers want results. Beneficiaries want results.
The results agenda gained momentum in development circles during the 1990s, becoming firmly established with the widespread adoption of the Millennium Development Goals. This focus on results is welcome. Simply measuring success by the volume of spending, or even the number of teachers trained, kilometres of road built and women’s groups formed, is not a satisfactory approach. Input monitoring does not ensure that development spending makes a difference to people’s lives. Spending that makes a difference: that is what we mean by a result. So we would expect this agenda to go hand in hand with impact evaluations. But that has not been the case.
The response of the development community to the results agenda has largely been outcome monitoring, tracking indicators like infant mortality, business profits, and female empowerment. USAID was the first to adopt this approach, in the mid-1990s. It was also the first to abandon it, after the Government Accountability Office (GAO) objected that such outcome monitoring did not tell us anything about whether observed changes in outcomes were the result of the interventions supported with US dollars. Yet the use of outcome monitoring remains widespread amongst those claiming to be interested in results. There remains a view that ‘attribution is difficult’. But attribution is precisely what impact evaluation is about.
This is not to suggest that impact evaluations be carried out for all development programmes. But they should be in place for pilot projects and other innovative interventions, for large-scale and flagship programmes, and for a representative sample of the programmes an agency typically supports.
Only with the widespread adoption of impact evaluation across development agencies can we truly demonstrate results. And, at the same time, create the evidence base about what works and why to get even better results in the future.
Brazil, under the leadership of President Lula, has already made some headway, demanding evidence to stop spending taxpayers’ money on programs that don’t work and committing to evaluation.
In the case of the flagship social safety net program Bolsa Familia, now reaching around 40 million poor Brazilians with a budget of over USD 6 billion, evaluation has been an integral part of the program since its inception. The establishment of a monitoring and evaluation system was one of the main pillars of the program. The evaluation effort succeeded in legitimizing the intervention, so that it was no longer seen as Lula’s program. It was owned by Brazilians, and most of those originally opposed to its implementation began advocating instead for its continuation.
There have also been encouraging signs of a growing focus on evaluation in other BRICS countries. India is establishing an Independent Evaluation Office to assess the impact of Indian government flagship programs. China is taking a more experimental learning approach, testing innovative policies in selected districts before launching them at national scale. South Africa has adopted a government-wide mandatory framework for monitoring and evaluation.
In every decade there is a breakthrough, and we see something new that makes a difference to poor people’s lives. Used correctly, impact evaluation has proven that it can revolutionize the way we do development. Mexico was the first country to introduce mandatory impact evaluations for all its federal social programs, spurred on by its success with evaluating the conditional cash transfer program Progresa. The legitimacy the program gained through evaluation allowed it to survive a change in administration, although it was renamed Oportunidades. It was scaled up and fine-tuned based on solid evidence, and now helps improve the lives of one in four Mexicans.
Between them, the BRICS are home to nearly half of the world’s poor people. By championing evaluation, these countries could move to the forefront of evidence-based development. There is no single solution for strengthening and institutionalizing a monitoring and evaluation system; it all depends on political will and the championing of evaluation. “Mind the Gap: from Evidence to Policy Impact” is the opportunity to work together to build on what we have learned so far. Let’s make evidence matter.