Monthly Archives: August 2016

Is HIV self-testing passing the test?

At the 21st International AIDS Conference (AIDS 2016), a biennial conference held in Durban, South Africa last month, there was a loud and clear call for HIV self-testing to play a key role in reaching the goal of 90 per cent of people living with HIV knowing their status – the first of UNAIDS’ 90-90-90 goals. Two years ago, at AIDS 2014 in Melbourne, Australia, the tone was different. HIV self-testing was discussed hopefully but cautiously. Several of the presented studies looked at acceptability, including this one from Kenya, but only a few provided evidence from the use of HIV self-tests, and none were impact evaluations. At this year’s conference, however, several studies, including three funded by 3ie, presented exciting new evidence from impact evaluations showing that HIV self-tests can indeed significantly increase HIV testing rates.

What is an HIV self-test?

An HIV self-test is a kit that people can use at home, collecting a blood or oral fluid sample and reading the result themselves. Package instructions and health providers advise that if the test returns an HIV-positive result, the individual should get re-tested at a health facility. The self-test process generally takes about 20-30 minutes. In a few countries, HIV self-test kits are available over the counter at pharmacies, through the internet, or at health facilities. The hope is that self-testing will appeal to people who have been hesitant to access services through health professionals, even those using door-to-door methods. But widespread availability of self-test kits is still rare, as many country governments remain wary.

What’s the new evidence?

At AIDS 2016, researchers presented the results of four separate impact evaluations estimating the effect of making HIV self-tests available, and all showed significant increases in HIV testing. 3ie funded three of these studies, all of which were implemented in Kenya. 3ie has been working closely with the Government of Kenya’s National AIDS and STI Control Programme, which is using this evidence to develop its new self-testing guidelines.

Two of the Kenyan evaluations assessed whether providing HIV self-test kits to women attending postnatal and/or antenatal care clinics to give to their partners would increase partner testing. Both studies used individual randomisation to create comparable groups and reported large increases in partner testing. In one study, researchers found that, in the group where partners were offered self-test kits, overall HIV testing was 39 percentage points higher (91% vs. 52%). In the other, the offer of self-testing increased testing by 54 percentage points compared to control (83% vs. 28%). Some of the strongest results were among partners who had never tested before or had not tested recently. The results of these two impact evaluations suggest that secondary distribution through a partner can reach individuals, particularly men, who have been reluctant to access standard HIV testing.

The third study, also conducted in Kenya, reported that self-tests increased uptake of HIV testing among truck drivers, a traditionally hard-to-reach population. Truck drivers were offered the options of supervised HIV self-testing or self-testing later (taking a test kit with them), in addition to standard testing; their uptake was compared with that of drivers offered standard testing only. While the difference was smaller than in the other studies (14 percentage points, or 2.8 times the odds), it is worth noting that these were men who were already at a physical clinic.
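The percentage-point difference and the odds ratio quoted here describe the same comparison in different ways. A minimal sketch of the arithmetic, using illustrative testing rates chosen to roughly match the reported effect (the study's actual baseline rates are not given here):

```python
# Illustrative rates only -- hypothetical, not the study's actual figures.
p_intervention = 0.24  # share of drivers tested when self-testing was offered
p_control = 0.10       # share tested when offered standard testing only

# Absolute effect: difference in percentage points
pp_difference = (p_intervention - p_control) * 100  # 14 percentage points

# Relative effect: ratio of the odds of testing in each group
odds_intervention = p_intervention / (1 - p_intervention)
odds_control = p_control / (1 - p_control)
odds_ratio = odds_intervention / odds_control  # roughly 2.8

print(f"{pp_difference:.0f} percentage points, odds ratio {odds_ratio:.2f}")
```

The same absolute gap of 14 percentage points would yield a very different odds ratio at different baseline rates, which is why studies often report both.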

The fourth study was the FORTH study. Mohammed Jamil of the Kirby Institute in Australia presented results from an intervention that offered HIV self-tests to gay and bisexual men in Australia at higher risk of HIV. The intervention led to a two-fold increase in testing rates overall, and close to a four-fold increase among previously infrequent testers.

What’s next for self-testing policies and programming?

The results of these pilot programmes should inform the efforts of governments and other implementers as they begin to design programmes at scale. The Daily Nation reports that Kenya will be rolling out HIV self-testing by December. Nduku Kilonzo, head of the National AIDS Control Council, noted that HIV self-tests are one way to help close the testing gap and reach the first 90 of the UNAIDS goals, and that self-tests target “people who feel stigmatised and discriminated against, when they go to health facilities.”

There is still a need for more evidence, however. Some stakeholders continue to express concern about linkage to care after HIV self-tests and about the possibility of self-harm. Linkage to care is a challenge for all HIV testing, and currently there is no evidence that linkage is worse for self-testing than for other testing modalities.

The concern about social harm or self-harm is often expressed by those who are hearing about self-testing for the first time. Brown, Djimeu and Cameron’s (2014) review of studies covering several kinds of self-tests found no evidence of social harm. In the few cases of intimate partner violence that have been reported, it is difficult to disentangle behaviour arising from testing positive from behaviour arising specifically because of self-testing.

Demonstration projects that carefully roll out HIV self-testing at scale and more impact evaluations of different implementation designs will be crucial for helping governments to approve and adopt HIV self-testing.

Forthcoming evidence

Several studies are underway now that will provide additional valuable evidence for policy and programming around HIV self-testing. Population Services International and partners are running demonstration projects and conducting research in Malawi, Zimbabwe, and Zambia, as part of the STAR project. They will expand to South Africa in the project’s second phase. 3ie is currently funding four additional HIV self-testing impact evaluations, two in Uganda and two in Zambia. In each country, one of the studies targets female sex workers, with an emphasis on reaching them in the places they frequent. In Uganda, the second study targets partners of antenatal-care clients. In Zambia, the second study targets the general population, but through a door-to-door community health worker programme that can reach many hard-to-reach populations.

Prince Harry’s recent live-streamed HIV test resulted in a five-fold increase in demand for HIV self-tests from the Terrence Higgins Trust, an organisation in the United Kingdom that is piloting a testing initiative to reach men who have sex with men and black African people. We are, however, going to need more than that to reach the first 90 of the 90-90-90 goals. Supported by the exciting new evidence on HIV self-testing presented at AIDS 2016, many believe that HIV self-testing has the potential to access key populations at higher risk of HIV and other hard-to-reach populations and to boost HIV testing rates world-wide.

Is impact evaluation still on the rise?

Since 2014, 3ie’s impact evaluation repository (IER) has been a comprehensive database of published impact evaluations of development programmes in low- and middle-income countries. We call the database comprehensive because we build it from a systematic search and screening process that covers over 35 indexes and websites and screens for all development programme evaluations or experiments that use a counterfactual method for estimating net impact. The version of the IER available since 2014 was comprehensive through the end of 2012 (hence referred to here as IER 2012).

Today we announce the full update of the IER, which is now comprehensive through the end of September 2015 and holds 4,260 records of published development impact evaluations.

Those who use the IER on a regular basis know that we have been adding studies on an ad hoc basis since the release of the first comprehensive repository in 2014. In October 2015, we started the second systematic search and screening process. For the update, we used a revised protocol based on the lessons learned from the first such exercise. For these revisions, we were greatly aided by those who responded to our challenge and found many articles we had missed.

We have also added screeners to the team who speak several languages, so we now include studies written in Spanish, French, and Portuguese. The repository is not comprehensive with respect to studies in these languages, but some are included.

Estimating the rise in impact evaluations

The number of studies in the updated comprehensive database (IER 2015) is 4,124, an increase of 1,889 from IER 2012. Not all of that increase comes from studies published since the beginning of 2013, though, as shown in figure 1. Our new search uncovered many studies published earlier that we did not find in the original search. We also cleaned out some records incorrectly included in IER 2012, so, in fact, the number of new studies included is slightly more than 1,889.
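The bookkeeping behind that last sentence can be made explicit. A small sketch, in which the number of records removed during cleaning is a hypothetical value (the post does not report it):

```python
# Figures reported in the post
ier_2015_total = 4124
net_increase = 1889
ier_2012_total = ier_2015_total - net_increase  # implies 2,235 records in the 2014 release

# The number of incorrectly included records removed is not reported;
# an illustrative, hypothetical value:
records_removed = 50

# New studies added must cover both the net increase and the removals,
# which is why the true number of additions exceeds 1,889.
new_studies_added = net_increase + records_removed

print(ier_2012_total, new_studies_added)
```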


It is clear that the total number of impact evaluations continues to rise. The real question is whether the number published each year continues to increase, as Cameron, Mishra, and Brown (hereafter CMB) find in IER 2012. Figure 2 presents the numbers by year for IER 2012 and IER 2015.


Figure 2 suggests that annual publication of impact evaluations peaked in 2012 and has plateaued since. Note, however, that there is a big gap between the number of impact evaluations published in 2012 that we were able to find in 2013 and the number for 2012 that we found in 2015. This gap is consistent with a known lag between when articles are published and when they make it into all the relevant indexes. We expect that the next update will uncover more studies for 2013 and 2014 than we found this time around. Regardless, the number we found for 2015 is greater than for 2013 or 2014, even though the 2015 figure reflects only the first three quarters of the year. Thus, the answer to the question of whether impact evaluation is still on the rise turns out to be a bit uncertain, but the current numbers suggest the trend is flattening.

Examining the rise in randomised controlled trials

We can also see whether the trend for randomised controlled trials (RCTs) alone, reported by Dean Karlan in his testimony to the U.S. Congress, persists. Figure 3 presents the number published per year by method and shows that the trend for RCTs is similar to the trend for all impact evaluations and thus also appears to be flattening.


When we revised the original protocol, we expanded the search terms to better capture quasi-experimental (QE) studies. However, as shown in figure 3, more RCT-based evaluations than QE evaluations are still published each year, and in the last five years that gap has widened. There are currently 2,645 RCTs in the repository, compared with 1,615 QE studies.
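As a quick consistency check, the two method counts sum to the 4,260 records announced for the updated repository, and the RCT share can be read off directly:

```python
rcts = 2645        # RCT-based evaluations in the repository
qe_studies = 1615  # quasi-experimental evaluations

total_records = rcts + qe_studies      # 4,260 records
rct_share = rcts / total_records       # roughly 62 per cent of records are RCTs

print(total_records, round(rct_share, 2))
```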

Notable trends in development impact evaluations across countries

The prevalence of impact evaluations by country looks about the same as it did in IER 2012. Figure 4 shows the global heat map of development impact evaluations for IER 2015. (See figure 5 in CMB for the map for IER 2012.) The top 10 countries are India (390), China (281), Mexico (247), Kenya (233), Bangladesh (197), South Africa (194), Brazil (193), Uganda (173), Pakistan (105), and Peru (105). For some countries, the number of impact evaluations has more than doubled from what we found in IER 2012. These include Brazil, Cambodia, Cameroon, Costa Rica, Malawi and Mozambique. We also found impact evaluations for a few countries that did not have any in IER 2012: Antigua and Barbuda, Dominica, Saint Lucia, Somalia and Yemen. Meanwhile, we continue to notice a lack of impact evaluations in regions like Central Asia and Central Africa.


Keeping the momentum going

There is much more we can learn from analysing the data in IER 2015. We are working on a full-length article now, which we will present at the What Works Global Summit in September 2016. Also, this fall we will revise the search and screening protocol again based on lessons learned this time around. The protocol for IER 2015 will be available on the 3ie website in the next couple of months. We will start the search and screening for the next comprehensive update in January 2017. For the next update, we also plan to more systematically search for studies published in other languages.

Contest alert: Have you already looked for your study and found that it is not there? Fret not. We are announcing the IER ‘What did we miss?’ contest today, so that you can submit entries for a gift card. And anyone can submit a study for consideration in the IER at any time.

Implementing impact evaluations: trade-offs and compromises


In June this year, 3ie and the International Fund for Agricultural Development organised a workshop where we had several productive discussions around two key questions: Are impact evaluations answering policy-relevant questions and generating useful evidence? What are the challenges faced in designing and implementing impact evaluations of cash transfers and agricultural innovation programmes?

The workshop brought together a fairly diverse group of participants: researchers, implementing agencies and donors involved in impact evaluations of different kinds of development programmes. What was clear from the conversations was that many stakeholders are invested in impact evaluations, and many of them differ in their motivations for, and expectations of, those evaluations. Coordinating among several stakeholders to address these different interests is therefore a challenge at every stage of the impact evaluation cycle. There are several lessons to be learned.

Balancing rigour, feasibility and policy relevance

There are several trade-offs to be made when choosing the method and the research questions of an impact evaluation. How does one balance methodological rigour and implementation feasibility? And how does this balance translate into the evaluation questions? Should an impact evaluation answer only ‘sexy’ research questions, or focus only on policy-relevant ones?

If we take the example of a farmer field school programme, it may well be that only farmers with basic knowledge or entrepreneurial skills end up participating. Such selection bias threatens the quality of the impact evaluation. Controlling for unobserved characteristics that affect the decision to participate is more challenging when quasi-experimental methods are used. However, the use of experimental methods is not always feasible. To align the interests of researchers and implementers, or to ask policy-relevant questions, impact evaluators may need to be flexible and choose methods other than randomised controlled trials. There are thus tough choices to be made about rigour, feasibility and policy relevance.

Balancing interests of stakeholders

The choice of evaluation method and research questions is a complex exercise, as it requires aligning the interests of different stakeholders: researchers, implementing agencies, donors and so on. Interventions can also be complex, with multiple components and different activities within those components. This means that, at the time of designing an impact evaluation, evaluating the programme as a whole may not be feasible; particular components need to be chosen. This, in turn, restricts the types of research questions that can be answered and limits the evaluator’s ability to make recommendations about scaling up the programme.

Managing multiple timelines may come at a cost

The timeline for programme implementation may be subject to delays, which naturally also change the timeline of the evaluation. In the case of agricultural programmes, timelines must also be adjusted to agricultural seasons. Constraints of this sort can shorten the follow-up period of the impact evaluation, which may mean that evaluators need to change the outcome measures used to assess impact.

Short-term outcome measures may not give a true picture of the programme’s impact as several interventions tend to take time to produce results. Cash transfers, for instance, may improve children’s enrolment in schools in the short term. However, it is unclear whether the increase in enrolment will translate into improved learning and labour market outcomes in the long term.

Dealing with the challenge of measuring long-term impacts

While it may be important to assess the long-term impacts of programmes, this may not align with the priorities of donors or implementing agencies. Assessing long-term impacts can be costly, and designing an impact evaluation that lasts longer than four years may not be appealing to donors or implementers. Sometimes evidence is needed quickly to inform development programmes. Another hurdle to assessing long-term impacts is attrition among programme participants. In the case of conditional cash transfers, impact evaluations have often focussed on short-term impacts, possibly because evaluators are reluctant to deal with high attrition rates. For example, in the Mexican cash-transfer programme Oportunidades, the attrition rate after 10 years was 60 per cent. Attrition therefore needs to be an important consideration when assessing the impact of a programme.
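To see why 60 per cent attrition is so damaging for long-term evaluation, a small sketch with a hypothetical baseline sample (the programme's actual enrolment is not reported here):

```python
# Hypothetical enrolment for illustration -- not Oportunidades' actual sample size.
baseline_sample = 10_000

# Attrition rate after 10 years, as reported in the post
attrition_rate = 0.60

# Participants still observable for the long-term follow-up
remaining = int(baseline_sample * (1 - attrition_rate))

print(remaining)
```

Beyond the loss of statistical power, the bigger worry is that the 40 per cent who remain may differ systematically from those who left, biasing the long-term estimates.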

Are impact evaluations still worth it?

Given the number of challenges outlined in this blog, are impact evaluations still worth the time and effort invested? The answer is yes. An impact evaluation can be challenging: it takes time and effort to align the interests of stakeholders and reach agreement among the different parties involved. However, the evidence that impact evaluations generate can inform many different aspects of policy and programme design. Impact evaluations can also improve the efficiency of projects, influence the scale-up of good projects and improve the quality of data systems. Balancing the interests and expectations of implementers, researchers and donors requires effort, commitment and compromises. The engagement of implementers through the entire cycle of the impact evaluation is a decisive factor that influences not just the quality of the impact evaluation but also the relevance and use of the evidence generated.