In 2014, global humanitarian assistance totalled US$24.5 billion. The World Humanitarian Assistance Report (2015) noted that there was still a shortfall of 38 per cent in terms of unmet need. The UN Secretary General’s new report for the World Humanitarian Summit finds that this gap has increased to 47 per cent. Put another way, humanitarian assistance needs to double to meet current needs.
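The "needs to double" framing follows from simple arithmetic. A minimal back-of-envelope sketch (my own calculation, assuming the 47 per cent gap is the share of total assessed need that goes unmet):

```python
# Back-of-envelope check of the "needs to double" claim.
# Assumption (mine, not the report's): the 47 per cent gap is the
# share of total assessed need that currently goes unfunded.
shortfall = 0.47                 # unmet share of assessed need
coverage = 1 - shortfall         # share currently funded (0.53)
scale_up = 1 / coverage          # factor by which funding must grow
print(f"Funding would need to grow {scale_up:.2f}x")  # about 1.89x
```

A 47 per cent shortfall implies current funding covers only 53 per cent of need, so closing the gap requires funding to grow by a factor of roughly 1.9 – close to doubling.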
With the current refugee crises, this funding gap is likely to widen even as resources shift from traditional official development assistance to humanitarian aid. Arguably, this shortfall is unlikely to be met in the next decade. It is therefore not surprising that aid and donor agencies are concerned about the effectiveness and impact of their assistance. Currently, 93 per cent of people in extreme poverty live in countries affected by humanitarian crises. Clearly, each dollar of aid needs to help alleviate the suffering of people affected by these crises.
The need for evidence – and its relative absence so far – is a concern voiced not just by 3ie but by a host of organisations across the humanitarian sector. Evidence of different kinds can help answer the question of what is effective. And if an intervention is effective, is the effect large or small? Are there different ways in which the same effect can be achieved?
The state of evidence in the humanitarian sector
Doesn’t the humanitarian sector already support a whole host of tools and initiatives to support good evidence?
Yes and no.
Currently, the humanitarian sector uses a variety of tools and methods that help inform programming: rapid assessments are conducted; routine programme data is collected; and perception surveys and real-time monitoring and evaluation are undertaken. All these contribute critically to understanding and implementing humanitarian assistance activities.
But this is not enough.
Let me illustrate. What if we want to know what works to provide immediate relief while also improving food security and nutritional outcomes among conflict-affected populations? There are a host of alternatives available to agencies. These include cash transfers, non-food item programmes, food vouchers and in-kind food distribution. So, which of these should be used?
A systematic review by Shannon Doocy and Hannah Tappis, quality assured by 3ie, shows that cash transfers and vouchers are effective in both improving and maintaining food security among conflict-affected people. But transfers and vouchers differ in the kinds of effects they have. Unconditional cash transfers lead to greater improvements in dietary diversity and quality than food transfers. Food transfers, on the other hand, are more successful in increasing per capita calorie intake than unconditional cash transfers and vouchers.
The evidence also shows that cash transfers and vouchers are more cost efficient than in-kind food distribution. That is, unconditional cash transfer programmes achieve the same effect at a lower cost per recipient than either vouchers or in-kind distribution.
This is indeed the kind of evidence that 3ie and other organisations are rooting for: high-quality, programme- and policy-relevant evidence that not only measures causal change but also helps us learn about the limitations and challenges of different programmes, compare them and, hopefully, inform future directions for programming.
Is humanitarian aid doing the right things?
Evidence to inform these questions for humanitarian assistance has been coming in very slowly. In 2014, when 3ie conducted a scoping study, we found fewer than 50 studies that we could call robust, high-quality evaluations of humanitarian assistance programmes. This was after the world had spent more than US$100 billion in just over a decade!
To fill this gap, 3ie has taken on the challenge.
In collaboration with USAID, UKaid, Danida and the World Food Programme (WFP), 3ie is supporting new evidence to inform programmatically important decisions. We are working with teams in countries including Chad, the Democratic Republic of Congo (DRC), Mali, Niger, Pakistan, Sudan and Uganda to assess which measures are effective in preventing and treating moderate and acute malnutrition, and how these measures interact with each other. In DRC, for instance, we are working with teams to assess whether and to what extent multi-sectoral programmes implemented by UNICEF and by Mercy Corps are helping to reduce the vulnerability of displaced populations as well as host families. In the other studies, evaluations are assessing the impact of programmes implemented by WFP, the World Health Organization, ACTED and other agencies.
A call to action
I contend, though, that given the opportunity the zeitgeist presents, we need to be more ambitious and push the frontiers to leverage the power of data and evidence. To paraphrase Newton, we are still collecting pebbles on the shore of this sea (of data and evidence). Here’s my four-point action plan.
First, data for rapid action. We need to rapidly collect data for answering several important questions. What are the needs we are trying to fulfil with humanitarian assistance? Who are the most vulnerable? What capacities exist locally? Where should assistance be directed in a humanitarian crisis? What can help inform the scale and scope of the crisis?
During the Ebola crisis in West Africa, there was a big push to change the speed and quality of data collection to improve the quality of response. However, this focus on collecting baseline post-emergency data was ad hoc, and we have not sustained that push so the data can help address vulnerability and resource requirements. We need to make this systematic and par for the course.
Second, collecting and standardising data for measuring impact. We need to start measuring if and how much humanitarian aid is making a difference. This needs to be a standard ask of large programmes and of programmes dealing with protracted crises. But impact evaluation study teams also need to talk to each other, so that there are standards for data and comparability in the indicators used for measuring impact.
Third, data for improving delivery. For assistance to reach the most vulnerable, we need data to inform and assess field realities and last-mile delivery challenges. This means we need data and evidence on the effectiveness of different logistics models and coordination mechanisms.
Fourth, we need to start creating incentives for better data quality. Sandefur and Glassman note that in Africa there is a large and systematic divergence between administrative, state-collected data and survey data. They argue that one reason this may have happened is that governments and government-supported agencies have an incentive to exaggerate performance, particularly in health and education, because donors have started to link aid with performance.
It is clear that we need to offer incentives for data collection agencies to collect and report data in a way that is honest, transparent and not influenced by government mandates. The examples of how Demographic and Health Survey and Living Standards Measurement Study data were and are collected provide some important insights: standardise questions but focus on training – training for field teams, training for analysis teams and, I’d add, training for policymakers on how to use the data.
We are at a sweet spot of our times. There are policies on one side and there is the intent to make them effective on the other. Data is the missing middle. We now have the opportunity to provide this middle and make a difference.
Note: The title of this blog has been adapted from the Secretary General’s new report, which calls for an ‘Istanbul moment’.
Listen to Jyotsna Puri’s podcast, where she talks about the need for rigorous, high-quality evidence in the humanitarian sector.