This thought-provoking question was the highlight of the opening plenary of the Dhaka Colloquium of Systematic Reviews in International Development.
Systematic reviews summarise all the evidence on a particular intervention or programme and were first developed in the health sector. Health reviews have a specific audience: doctors, nurses and health practitioners. This audience is also easily able to find the systematic reviews it needs.
But there seems to be a big difference in the accessibility of evidence between the health and development sectors. Systematic reviews in international development are targeted at policymakers as well as other researchers. However, policymakers are a diverse group and do not routinely look for evidence when making decisions. And even if policymakers attempted to read systematic reviews, they may well see them as technical documents that do not apply to their particular context.
By highlighting some of the challenges in using systematic reviews in international development, the two plenary speakers, David Myers (President and CEO, American Institutes for Research) and Julia Littell (Bryn Mawr College), offered some useful tips for researchers.
An area where we can learn from the health sector is how to develop a well-defined systematic review question. Currently, our reviews are too broad, our scope is too ambitious and often we do not really address the concerns of policymakers and practitioners. If we ask whether an intervention works or not, we will inevitably come to the conclusion that everything works sometimes and not at other times. We therefore need to look beyond average impact and evaluate for whom the intervention works, when and in what context.
We need to not only ask the right question but also answer it in a way that makes sense to those who need to know. “Policymakers do not care about effect sizes,” David Myers said. They want to know, for instance, whether the education intervention they implemented is keeping girls in school and how many more girls they can keep from dropping out.
We need to put our efforts into translating our research findings into plain language and ensure our messages are short and clear but also accurate. At the same time, we as researchers need to manage expectations and educate our audience. Findings of large-scale studies are rarely definitive. To reach our audience, we need to help them become comfortable with less-than-full answers that make incremental progress toward alleviating problems. Are we asking for too much? Julia Littell believes that we are better served by knowing a lot about a little than a little about a lot. We need to be realistic. There are high expectations and many issues that need to be explored and addressed, but also limited resources to do so.
Making research relevant to the end user is quite difficult. We grapple with issues of generalisability and applicability of research to different contexts. This is made even more complicated by the fact that systematic reviews pool evidence from a range of different settings and contexts. The jury is still out on how we can overcome this challenge. The panel provided the useful suggestion of building capacity not just for conducting systematic reviews but also for using them. Systematic reviews and their findings need to be interpreted by those who really understand the context on the ground.
Finally, accessibility of evidence remains a significant challenge in international development. There are hundreds of NGOs conducting some kind of evaluation, but we do not know where this evidence is stored or how it can be accessed. 3ie has been working on overcoming this challenge by creating a database of impact evaluations, which currently has around 750 records. We will also soon launch a registry of impact evaluations in international development, where researchers can register their ongoing evaluations. We now need to move a step further and work on making sure that researchers and institutions in low- and middle-income countries have access to these evidence libraries.
Tags: databases, policymakers, registries, systematic reviews