
Evidence-based development: lessons from evidence-based management

Evidence-based development is treading in the footsteps of evidence-based medicine: innovating, testing, and systematically pulling together the results of different studies to see what works, where and why. Disciplines as diverse as sports science and management have been going down the same route. Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management by Jeffrey Pfeffer and Robert Sutton contains valuable insights for practitioners of evidence-based development.

A striking parallel is their emphasis on examining the underlying assumptions of any new idea, just as we do in theory-based impact evaluation. Their first example is from education, not business: paying teachers for pupil performance, a popular policy with a very weak evidence base. As they point out, a clear assumption here is that teachers are motivated by financial incentives. But someone mainly interested in money won't pick teaching primary (elementary) school as their career of choice. This example also nicely illustrates the importance of context. In Accra, if not Austin, and in Delhi, if not Detroit, financial motivation is in fact a strong factor in becoming a teacher. As high rates of teacher absenteeism show, the pleasure of bringing knowledge to young minds seems not to matter so much for a significant proportion of teachers.

A very common question is, "How long does an impact evaluation take?" Or programme managers just complain that impact studies take too long. Well, they take as long as the intervention takes to have an impact. So many have to wait two, three years or more to carry out the endline survey. And post-endline surveys to check sustainability and longer-run benefits are also a good idea. But REAP's study of iron fortification to reduce anemia in rural schools in China found positive impacts on learning outcomes in just two months. And, as Pfeffer and Sutton describe, Yahoo gets millions of hits an hour, so it directs a couple of hundred thousand of them to a version of the site with some design modification, such as a change in ad placement, getting results on the impact of the change in an hour or less. So an impact evaluation can take just an hour… if your intervention works that quickly.
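The Yahoo case is essentially an online A/B test: randomly assign a slice of traffic to the modified page and compare outcomes against the original. A minimal sketch of how such a comparison might be judged, using a standard two-proportion z-test on entirely hypothetical click counts (the function name and all numbers are invented for illustration):

```python
from math import sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: is variant B's click-through rate
    different from control A's? Returns the z statistic."""
    p_a = clicks_a / n_a
    p_b = clicks_b / n_b
    # Pooled click rate under the null hypothesis of no difference
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# One hypothetical hour of traffic: 200,000 users see the modified
# page, 200,000 the original; counts are made up for illustration.
z = two_proportion_z(clicks_a=9_800, n_a=200_000,
                     clicks_b=10_400, n_b=200_000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```

With samples that large, even a small shift in click-through rate yields a decisive z statistic within the hour, which is exactly why high-traffic sites can evaluate design changes so quickly.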

Evidence-based management works, as examples from DaVita, which operates kidney dialysis clinics, to Harrah's casinos show. Like many other successful businesses, such as McDonald's, 7-Eleven, Intel and Amazon, Harrah's ran field experiments (randomized controlled trials) of different business practices to see what worked, and then scaled up or dropped each approach according to the evidence. An approach still resisted in many quarters of the development community has been wholeheartedly embraced by some of the world's most successful managers.

One thing the evidence tells us is that most things in business don't work. Nearly two-thirds of new firms fail in the first five years. A study of 700 firms found that 46 percent of the money spent on product development resulted in products that failed. Mergers are another thing that mostly doesn't work: around 70 percent or more of all mergers deliver no benefits and reduce the economic valuation of the firm. A review of 93 studies covering 200,000 mergers found that the negative impact on company value appears within a couple of months and persists. But the response to this evidence is not to say "let's not merge". In 30 percent of cases mergers do work. So we should ask: what do we learn from the successes and failures about how to do a successful merger? This was precisely the route taken by Cisco, which has built up profitability through nearly 60 mergers.

The same is true for development interventions. We shouldn't say "behavior change communication doesn't work", which is indeed what a lot of the evidence suggests, but rather ask, "when, where and how can we make it work?" And here is a lesson for donors rushing to fund systematic reviews: look at the evidence base Pfeffer and Sutton draw on, evidence from over 200,000 mergers. A typical development systematic review can draw on evidence from just a dozen interventions or even fewer. We need more primary studies, and lots of them, of the same intervention in different settings. This is of course why we have 3ie.