International humanitarian assistance reached a record high in 2014, with global contributions totalling US$24.5 billion, almost a 19 per cent increase since 2013. However, this is not good news for the sector, as demand for humanitarian assistance continues to outstrip supply. Humanitarian financing is now fraught with new challenges, as the nature and number of emergencies that fall within the realm of humanitarian action are constantly changing. This has resulted in a shift in planning and resources for response and resilience efforts across different contexts.
This World Humanitarian Day, the question that we need to ask is: Is there rigorous evidence (from impact evaluations and systematic reviews) available on the impact of a multitude of interventions aimed at addressing the multi-dimensional needs of over 58 million people displaced by conflicts and over 107 million affected by disasters caused by natural hazards?
The reality looks a bit grim. A recent 3ie scoping study on the evidence landscape in the humanitarian sector showed that there is a lack of reliable and robust evidence from impact evaluations of humanitarian assistance. In other words, we lack sound evidence about which programmes and policies are working, how and why they are working or not, and at what cost. Filling these evidence gaps is particularly difficult, however, because several challenges need to be tackled in humanitarian contexts. We identify a few of these challenges here and explore ways to address them.
Balancing the need to deliver aid speedily versus generating evidence
The need to act quickly and deliver aid to those affected by a crisis often overshadows the need for evidence-informed decisions. But rigorous evidence can help answer crucial questions such as: Is the assistance reaching the target populations in the right doses at the right time? Is it being delivered through the right channels in cost-effective ways and with the resources available to an implementing agency? And has the intervention achieved the outcomes described in a programme’s theory of change?
At the recent Global Forum for Improving Humanitarian Action, ALNAP flagged the significant shortfall of high-quality evidence on what works, and argued that the absence of evidence inhibits the delivery of the most effective and efficient interventions in response to particular needs and particular forms of crisis.
Tackling the notion that impact evaluations are unethical
Impact evaluations are often equated with randomised controlled trials, where it is assumed that a certain section of the population is denied access to programmes and assistance. This is a mistaken notion: rigorous studies can be designed and applied ethically, without denying anyone the relief they would otherwise receive from a programme. For example, most humanitarian agencies deliver assistance packages that comprise multiple interventions across various sectors. Rolling out programmes with small changes while keeping the basic assistance package the same (a factorial design), or phasing the roll-out of interventions, can help assess differences in outcomes and welfare associated with a particular intervention without denying individuals the basic assistance.
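To make the phased roll-out idea concrete, here is a minimal sketch (in Python, with hypothetical names such as `phased_rollout` and the village identifiers) of how an evaluation team might randomly assign communities to roll-out phases. Every community eventually receives the add-on intervention; only the timing is randomised, so early and late recipients can be compared without withholding the basic assistance package from anyone.

```python
import random

def phased_rollout(unit_ids, n_phases, seed=0):
    """Randomly assign units (e.g. villages or camps) to roll-out phases.

    Hypothetical helper for illustration: every unit is assigned a phase,
    so all units eventually receive the intervention. The random order of
    phases is what allows an unbiased early-vs-late comparison.
    """
    rng = random.Random(seed)          # fixed seed for a reproducible assignment
    shuffled = unit_ids[:]
    rng.shuffle(shuffled)
    # Deal the shuffled units into phases round-robin, so phase sizes
    # differ by at most one unit.
    return {uid: (i % n_phases) + 1 for i, uid in enumerate(shuffled)}

# Example: 12 hypothetical villages, rolled out in 3 phases of 4.
villages = [f"village_{i:02d}" for i in range(12)]
assignment = phased_rollout(villages, n_phases=3)
```

Because each phase keeps the basic assistance package throughout, the design answers "what does the add-on intervention change?" rather than "aid versus no aid", which is the ethical concern the text addresses.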
In contrast, delivering interventions that are unproven or ineffective can pose a great risk to the lives of people in contexts where humanitarian assistance is required. In extreme cases, it can even be unethical, if those interventions leave communities or individuals worse off instead of helping them rebuild their lives.
Impact evaluations address questions of effectiveness and efficiency, unlike other forms of evaluation, which may not be able to assess attributable impact. Impact evaluations can also answer questions about the differential impact of humanitarian assistance on vulnerable populations, such as women, children or chronically poor people, which can provide valuable insights into whether aid is reaching those most in need of this assistance.
Dealing with the dearth of baseline data in humanitarian contexts
In most fragile contexts, census data at the local level is often unavailable or unreliable. Even if humanitarian agencies collect data, it is often not available to researchers for analysis. Collecting information in humanitarian contexts is challenging: issues around accessibility, ethical constraints, security concerns and the changing nature of baseline populations constantly present themselves. It thus becomes necessary to make the best possible use of any administrative and monitoring data that are routinely collected by humanitarian agencies. In the absence of baseline information, research teams may use data collected by interviewing respondents, but such data, too, are prone to recall error. Information bias can be reduced by using mixed methods that help triangulate and validate results.
Impact evaluations that use a range of data sources, such as satellite imagery, geographic information systems and data from other surveys, can also help address the ethical constraints of collecting information during emergencies.
Lack of incentives and support for generating evidence
Donors can play a key role in incentivising the generation and use of evidence. The International Rescue Committee recently called on donors to incentivise the generation of evidence by ‘fund[ing] only those programmes that are based on the best available evidence or, in cases where impact evaluations (or their equivalent) have yet to be conducted, that are supporting the generation of evidence.’
3ie launched the Humanitarian Assistance Thematic Window in 2014 with the aim of bridging the evidence gap by funding feasibility studies, impact evaluations and systematic reviews across a range of areas in humanitarian contexts.
In the run-up to the 2016 World Humanitarian Summit, 3ie, along with seven major actors in the humanitarian sector, is calling for financial and resource investment in processes for producing and using high-quality, rigorous evidence.
While discussions are now focused on increasing and diversifying funding for humanitarian aid, generating evidence on what works in complex humanitarian situations will mean that scarce resources are invested in programmes and policies that can save millions of lives.
Video lecture: Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation at 3ie, uses case studies to show how data and methods can be used innovatively to get around the challenges associated with conducting impact evaluations of humanitarian relief programmes.