Making good ideas actually work

| October 28, 2013

At 3ie’s recent Measuring Results conference in Delhi, one of the key speakers proposed a ban on new ideas in India. This speaker was not a Luddite, nor was she on a crusade against creativity, nor did she represent the tourism industry to claim that India had already achieved perfection in all dimensions. Rather, she was echoing a sentiment that has been bouncing around classrooms and corridors and cafes and corner-markets in which people discuss improving human welfare: we need to figure out how to implement the ideas we already have.

Sometimes a rigorously evaluated programme is found to have no impact, and not because of sample-size woes or because researchers were too hasty in measuring impact. There is no impact because the programme was simply never implemented in some places, or was not implemented properly.

Issues related to implementation and operations, like procurement, are often not seen as interesting enough to receive much attention. This may be changing. The World Bank is extolling delivery; Obamacare is showing the world what implementation failures look like; and critics in countries like India are asking why government writes new initiatives into law when so few existing ones have been implemented. At the conference, Manisha Verma, representing India’s National Advisory Council, noted that the implementation of existing programmes and policies is the major challenge facing India today.

The speaker who was pushing for a national new-idea ban called upon all available creative and calculating brain cells in India to figure out how to implement programmes and policies that currently operate in name only. Recognising that a single blog post cannot adequately address this question, I offer a few thoughts from recent conversations on implementation that may show the way forward.

  • “Policies are not self-implementing.” This comment, made by one of the speakers, drew laughs at the conference because it sounds absurdly obvious. But, reading through many existing plans of programmes and policies, one could be forgiven for thinking that the designers did, to an extent, believe in self-implementation. Details are left vague, out of political necessity or oversight or lack of planning capacity. Architect and designer William McDonough says that design is the first signal of human intention. If the plan for a programme lacks implementation details, it does seem like the designer of the programme had no intention of seeing it actually implemented.
  • Realistic design. Good implementation (and good evaluation) starts in the programme design stage. Planners should make explicit the hypothesised causal chain, working backwards from the intended impacts to the activities planned to bring about improvements in the well-being of intended beneficiaries. The planner must explicate all the assumptions about how inputs and activities will translate into outputs, outcomes and impacts. In their recent working paper “It’s all about MeE,” authors Pritchett, Samji and Hammer stress that planners must make clear their assumptions about why the implementing agents will actually use inputs for the planned activities. Similarly, planners need a clear concept of whether beneficiaries actually value the programme’s intended impact and why they will make the effort required to participate, rather than ‘fall off’ along the causal chain in a funnel of attrition.
  • Monitoring & Evaluation based on programme theory. Until we start diligently collecting and sharing information on implementation process and challenges, it will be hard to figure out what is working or not working and examine why or why not. This means bringing rigorous mixed methods – in terms of data sources and analytic techniques – to bear on evaluating the assumptions at each step of the results chain. These assumptions must inform the development of monitoring systems that are, in turn, harmonised with rigorous evaluation plans.
  • Learning from implementation. To learn how to address specific implementation challenges in specific contexts, we need to report on these processes with candour and detail. Too often, details of implementation, let alone implementation challenges, do not feature in academic publications. However, not all implementers are created equal, even in the same geographic context (e.g. here). Those who implement during a trial phase may be quite different from those who would be tasked with taking a promising programme to scale. Woolcock (e.g. here), among others, has called for more details about implementation to be reported, so that we can better consider how the experience in one situation may translate to another.
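The ‘funnel of attrition’ along a hypothesised causal chain can be made concrete with a small back-of-the-envelope calculation. The step names and retention rates below are entirely invented for illustration; the point is only that even moderately leaky links compound quickly.

```python
# Illustrative funnel of attrition along a hypothesised causal chain.
# Step names and retention rates are invented for illustration only.
steps = [
    ("inputs delivered to implementers", 0.90),
    ("activities actually carried out", 0.80),
    ("outputs reach intended beneficiaries", 0.85),
    ("beneficiaries take up the programme", 0.70),
    ("take-up translates into outcomes", 0.75),
]

reached = 1.0
for name, retention in steps:
    reached *= retention
    print(f"{name}: {reached:.0%} of the target population remains")
```

Even with fairly optimistic retention at every link, under a third of the target population reaches the intended impact, which is why planners need to make each assumed step, and its likely leakage, explicit.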

In the call for a new-idea ban, the speaker at the conference recognised that we have not only good intentions but good ideas about improving human welfare. Now we need to exercise our brain cells on generating evidence about taking the good ideas we already have off paper and into practice. What are the critical incentives and safeguards to ensure that implementers implement, participants participate and beneficiaries benefit? How can we use mixed methods to investigate along the hypothesised, complex results chain (as required in 3ie-supported research) to elucidate when, where and why incentives, enabling factors and safeguards break down? How can we design good monitoring systems, which are critical for ensuring good impact evaluations? What information about implementation processes and challenges should researchers report (as they are asked to do in 3ie final reports) in order to allow others to learn from what actually happened on the ground? By giving attention to these questions, we may begin to build the evidence needed to improve implementation of the good ideas we already have.

