How qual improves quant in impact evaluations

Bridging divides, be they across ethnicities, religions, politics or, indeed, genders, is never easy. Many books have been written about such divides, including some that made millions: John Gray's idea that men and women come from different planets, Mars and Venus respectively, is apparently the best-selling hardcover non-fiction book ever. One shouldn't begrudge the authors, because the payoffs – domestic or planetary peace – are high indeed. I don't think that essays on quantitative and qualitative techniques in evaluation will ever make anyone rich or resolve existential problems. But they can improve evaluations. How do they do so?

In a paper just published by CEDIL, my co-authors and I ask: what are the characteristics of quantitative impact evaluations that have successfully integrated qualitative research into their analysis? And how does the integration improve analysis? To answer these questions, we selected recently completed impact evaluations and developed a tool to assess the rigour of their quantitative and qualitative analyses. We identified 57 studies from impact evaluation repositories (DFID, 3ie, World Bank and J-PAL) spanning 20 countries, including five studies in fragile and conflict-affected contexts.

Developing a rigour assessment tool for integrating qualitative and quantitative analyses

Our rigour assessment tool drew on evaluation criteria from a number of sources including, for quantitative analyses, 3ie's own quality assurance standards. For the qualitative work, the criteria covered the domains of confirmability, credibility, transferability and utilisation, which are well articulated in several existing studies. Criteria for integration are less well established. Indeed, some studies do the quantitative and qualitative analyses well but do a poorer job of showing how the two can be blended for a better result. A common approach to integration in impact evaluations is to use qualitative data to triangulate quantitative results on the effects or mechanisms described in an intervention's causal pathway, checking for mechanisms that are harder to capture through quantitative measurement and documenting any unintended consequences of the intervention. We therefore focused the tool on triangulation, complementarity, development, initiation and expansion. Before finalisation, the tool was reviewed by subject-matter experts on mixed-method research, tested by independent reviewers and refined in response to feedback from experts in the field.

What characterises well-integrated, mixed-method evaluations?

While there was great diversity across the studies, we found that the well-integrated impact evaluations shared four characteristics.

First, studies that scored highly on quantitative and qualitative rigour also scored highly on integration. Moreover, when qualitative rigour was high, it was easier to discern how well a study had integrated its qualitative and quantitative components. In contrast, studies that scored highly only on quantitative rigour did not necessarily integrate the two methods well.

Second, it helps if the study presents a clear rationale for integrating qualitative and quantitative methods.

Third, a multi-disciplinary team working from a shared framework, with a clear delineation of tasks that transcends individual disciplines, can help bridge gaps and produce more robust, fully integrated mixed-method research.

Fourth, a common element among our exemplar studies is the provision of adequate documentation, whether within the main report or through supplementary reports and appendices.

How does successful integration lead to better evaluations?

Successful integration led to better evaluations in four distinct ways.

  1. Integration of qualitative and quantitative lines of enquiry begins with the use of different methods of data collection and the ways these inform study design and findings. For example, a study evaluating the impact of humanitarian cash transfers in the Democratic Republic of Congo used participatory data collection techniques in conflict-affected communities to identify target beneficiaries; without the qualitative data, the findings would have been far less nuanced.
  2. Successfully integrated studies enabled the teams to validate their results. In some cases, this simply meant 'ground-truthing' the quantitative findings. In others, divergence between the two sets of findings led to more nuanced interpretations than a single method alone might have afforded.
  3. The use of qualitative methods can enhance the understanding of quantitative results by providing the context or background necessary to situate the findings. For instance, a handwashing intervention targeting women successfully reduced child diarrhoea. However, the triangulated qualitative findings highlighted an important negative impact of the intervention: the ultra-poor in the sample were not only unable to take up the intervention, but also suffered social censure from those who did participate.
  4. Successful integration can help inform contextually relevant policy recommendations. For example, in an evaluation of a nutrition programme in Bangladesh, when the quantitative methods were not able to detect significant impacts of the intervention, the qualitative evidence pointed to specific nodes in the intervention pathway that were problematic. The policy recommendations focused on resolving those issues.

Did these studies resolve all the divides between quants and quals in evaluation? Of course not. But they did show that these 'methodological tribes' can resolve their differences in mutually productive ways. 3ie has been advocating for mixed-method studies for some time, as in this blog published five years ago. Relative to then, there are now more examples of what characterises successful integration, which I hope will be helpful for those planning such evaluations. 3ie will continue to encourage and support the production of well-integrated, mixed-method studies that provide useful and policy-relevant evidence.

Not lost in translation: ethical research communication to inform decision-making

When the authors of a huge study with a sample of 1.1 million people of European descent were asked what policy lessons could be gleaned from it, they said: 'None whatsoever'. At a time when funders insist that researchers show the impact of their studies on policies and programmes, this blunt answer seems rather baffling. That the exchange appears in a list of frequently asked questions (FAQs) longer than the study itself is another surprise.

The genome-wide association study (GWAS) examined genetic factors associated with educational attainment, and the results, drawing on that very large sample, showed a significant positive association. So, why did the researchers say that there were no policy lessons?

The authors themselves answer the question best in FAQ 3.7: 'In our view, responsible behavioral genetics research includes sound methodology and analysis of data; a commitment to publish all results, including any negative results; and transparent, complete reporting of methodology and findings in publications, presentations, and communications with the media and the public, including particular vigilance regarding what the results do – and do not – show (hence, this FAQ document).'

Knowing what to communicate, and what not to

The researchers were clearly wary of the repercussions if their findings were misinterpreted as showing that genetic factors predict educational attainment. Such a misguided interpretation could lead to discrimination through genetic screening of students. Heading off misleading media headlines their study might produce, the authors clarify that the study was not about 'genes for educational attainment'. They write: 'most of the variation in people's educational attainment is accounted for by social and other environmental factors, not by additive genetic effects'.

Such care in linking findings to policy implications is the ethical and responsible thing to do, no matter what the area of research. We also need to be careful not to compare apples and oranges: the type of research, the questions asked and the study design all affect policy relevance.

But not all researchers are careful. At 3ie, we have had varied experiences. There are research reports that, rightly, avoid making policy recommendations when none seem to arise from the data analysis. However, we also see studies in which findings and policy implications don’t necessarily link to or follow from the data that has been collected and analysed.

The value of a good research communication plan

Interestingly, although the GWAS authors insisted that no policy lessons should be drawn from their study, they made sure not to underplay its significance for designing future social science research. An accessible, easy-to-read article on the study in the monthly magazine The Atlantic brings in the perspectives of the authors and sector experts on how the findings could be used. The article shows that preventing the misuse of research findings does not mean tucking them away. The research communication strategy included carefully phrasing findings and implications to prevent misuse and using appropriate communication tools to convey the message. This robust communication strategy is what makes the GWAS a compelling example for others to emulate.

Using the most appropriate tools to reach your audience

Policy briefs are a popular and effective research communication tool, as are infographics and other products that communicate key messages in plain language and in an accessible way. However, in some cases, brief messages can leave the door open for misinterpretation. Using multiple tools for different audiences, the GWAS authors communicated research findings and implications effectively.

FAQs as an effective communication tool

A list of FAQs explaining study results could seem like asking for more of readers' limited time and attention. The advantage is clearer communication of the takeaways, including how not to interpret the findings for future research, policy or programming. Questions and answers around the findings put them in perspective without the baggage of headlining a direct impact on programmes or policy. However, extremely dense and technical language in FAQs could defeat the whole purpose of translating findings in ways that aid understanding without overreaching. With the GWAS, a news article that quoted heavily from the authors and the FAQs was the most effective translation of the study for a general audience.

At 3ie, we are committed to monitoring and measuring the impact of the studies we support. We also strongly believe that high-quality research studies should carefully and correctly draw and convey the implications of their findings for decision-making. Using the most appropriate communication tools to translate technical research information accurately and accessibly for different audiences is an integral part of our communication and advocacy efforts. The GWAS shows us an underused tool that helped researchers prevent misunderstanding of their findings while promoting interest in them.

With input from Radhika Menon.