Monthly Archives: April 2013

Placing economics on the science spectrum

Where does economics fit on the spectrum of sciences? ‘Hard scientists’ argue that the subjectivity of economics research sets it apart from biology, chemistry, and other disciplines built on strict laboratory experimentation. Meanwhile, many economists try to separate their field from the ‘social sciences’ by lumping sociology, psychology, and the like into a quasi-mathematical abyss reserved for ‘touchy-feely’ subjects they deem unable to match the rigor required of economics research. Yet the dismal science’s poor (and thin) replication record does little to lend credence to the claim that economics is more rigorous than the other ‘social sciences’.

The blogosphere and media are abuzz with news of the most recent case of flawed economics research. Herndon, Ash, and Pollin’s replication study uncovered coding errors, unconventional weighting, and selective data inclusion in Reinhart and Rogoff’s influential work on the effect of national debt on GDP growth. Krugman explains here and here how these errors undermine the original study’s finding of a causal relationship between debt and growth. As others (Fox here; Konczal here; Krugman here and here) debate the validity of the original study, a bigger question should be asked of the field in general: when will we economists rigorously hold ourselves accountable for what we publish?
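The weighting issue is worth making concrete. One of the criticisms was that growth was averaged by country episode rather than by country-year, and that choice matters. The toy sketch below uses entirely hypothetical numbers (not drawn from the actual dataset) to show how the two weighting schemes can yield very different headline averages:

```python
# Toy illustration (hypothetical numbers) of how a weighting choice can
# swing a headline average: weighting each country equally vs. weighting
# each country-year observation equally.

# Growth observations (%) in high-debt years, per country.
growth = {
    "A": [2.0, 2.2, 1.8],  # three high-debt years with solid growth
    "B": [-7.0],           # a single bad year
}

# Scheme 1: equal weight per country, i.e. average the country means.
country_means = [sum(v) / len(v) for v in growth.values()]
by_country = sum(country_means) / len(country_means)  # about -2.5

# Scheme 2: equal weight per country-year, i.e. pool all observations.
pooled = [g for v in growth.values() for g in v]
by_year = sum(pooled) / len(pooled)  # about -0.25
```

Country B’s one bad year counts for half the total under the first scheme but only a quarter under the second, a tenfold difference in the headline figure from the same data.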

This latest controversy shines a bright light on a major difference between economics and the ‘hard sciences’. The scientific method demands replication and accountability from ‘hard science’ research. Replication, or the independent reexamination of data and code to ensure that original results are reproducible, deserves an equally important place in economics. Economists directly advise policymakers, who make life-altering decisions based on the results of sometimes fallible research. Shouldn’t these researchers be held to the same replication standard?
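In code, the core of a replication exercise is mechanical: re-run the published analysis on the shared data and check the reproduced figures against the reported ones. A minimal sketch, with hypothetical numbers and a made-up statistic name standing in for a real study’s estimates:

```python
# Minimal sketch of a replication check (all figures hypothetical):
# recompute a study's headline statistic from its raw data and compare
# it with the published value within a numerical tolerance.

# Reported results, as published (hypothetical).
published = {"mean_growth_high_debt": -0.1, "n_obs": 4}

# The raw observations the authors shared: (country, GDP growth %).
data = [("A", -0.5), ("B", 0.2), ("C", 0.1), ("D", -0.2)]

def reanalyze(observations):
    """Recompute the headline statistic from the raw observations."""
    growth = [g for _, g in observations]
    return {"mean_growth_high_debt": sum(growth) / len(growth),
            "n_obs": len(growth)}

def replicates(reported, reproduced, tol=1e-6):
    """True if every reported figure is reproduced within tolerance."""
    return all(abs(reported[k] - reproduced[k]) <= tol for k in reported)

reproduced = reanalyze(data)
```

A failed check (a figure outside tolerance, or a mismatched observation count) is exactly the kind of discrepancy, such as a coding error or a dropped row, that a replication study is designed to surface.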

Every couple of years a major economic finding is questioned because of a coding error, limited data ranges, or inaccurate datasets. In response, there is usually a call for increased data sharing and replication in economics (see the famous Journal of Money, Credit, and Banking example). But eventually the commotion dies down, the attention fades, and we return to the insular world of empirical economics research.

There are some notable exceptions to the dearth of data documentation and replication in economics. The American Economic Review, for example, requires authors to publish data and code along with their research (with mixed results). But the list of journals requiring data and code to be submitted as a criterion for publication in economics is a fairly short one – Econometrica and The American Journal of Agricultural Economics immediately come to mind. And unearthing raw data, which have not gone through a (typically undocumented) cleaning process, is still extraordinarily rare. Until providing raw data and code becomes a publication requirement, replication will remain the exception rather than the rule in economics.

Efforts are underway to increase replication of influential, innovative, and controversial economic research. The University of Göttingen’s Replication in Economics program warehouses a large number of replication studies in a single repository. The International Initiative for Impact Evaluation’s Replication Programme (small plug for the currently open Replication Window) incentivizes researchers to begin filling the replication gap in development economics by reexamining existing studies. But unless economics broadly embraces the importance of replication, we will continue to stumble from one occasional retraction to another, never reaching the standards required of the ‘hard sciences’.

Home Street Home


On 12th April 2013, street children, NGOs, celebrities, policymakers, businesses and individuals will observe the International Day for Street Children. This year’s theme is Home Street Home – highlighting that for many children across the world the street is their home.

The International Day for Street Children provides a platform to call on governments to act and to support the rights of street children across the world. But a recent study shows that we have very little evidence about which actions would most effectively address the needs of street children. We need such studies: they are the only way to avoid spending on ineffective programmes and to channel funding to programmes that do work. We call for more impact evaluations to inform programmes aiming to improve the lives of street-connected children.

Programmes for street-connected children

Programmes for street-connected children aim at reducing the risks these children face and improving integration with mainstream society. But which of these programmes work?

A recent 3ie-supported systematic review, based on an extensive search of published and unpublished literature, identified 11 studies evaluating 12 different interventions that met the inclusion criteria. But despite the existence of many relevant programmes in low- and middle-income countries, all the included studies were from high-income countries (Korea, the UK and the US). No sufficiently rigorous studies from low- and middle-income countries could be identified.

The studies from high-income countries compared new therapeutic interventions, such as group-based cognitive behavioural therapy and behavioural family therapy, with the usual services offered at drop-in centres or shelters, such as rooms, free meals, clothes, health services and counselling. On the whole, these studies were of low to moderate quality. Overall, the “new” programmes did not prove better at helping street-connected children and young people than the usual services. One reason may be that young people who use services have already chosen to accept support, and have therefore already decided to change their lives. We therefore need to understand more about the strategies used to promote services to children and young people, and about what makes them take up the services that are available. So the main finding from this systematic review is that there is a big gap in our knowledge.

Nevertheless, by reviewing the literature systematically the research team identified a significant body of primary research in low- and middle-income countries focused on who the children and young people are, their views of the world around them, in addition to some process and participatory evaluations focusing on interventions. To utilise the insights from this existing research, 3ie is funding a follow up review to synthesise some of this evidence, particularly on strategies for promotion of interventions and engaging young people, which will be available later this year.

Lack of evidence

In the past decade, there has been a rapid increase in the number of impact evaluations of a range of different and often complex development programmes. But the increased funding and demand for impact evaluations and high-quality evidence has not yet reached the sector working to support street-connected children around the world. The lack of evidence in this area is a significant challenge. By spending money on potentially unhelpful programmes, we deny street-connected children access to the services that are most effective in realising their rights to food, education and security. Poorly planned or forced interventions may also have negative effects. This highlights the importance of evaluations that look at a range of potential outcomes – both positive and negative.

We recognise that the relevant interventions in this area are complex and that evaluating such programmes rigorously can present methodological, practical and ethical challenges. But this does not justify failing even to try. And solutions to many similar challenges are already being implemented in other sectors.

An example with transferable lessons is an evaluation (Berry and Linden 2009) that assessed the effect of an active recruitment intervention on the attendance of out-of-school children in a community-based programme in urban Gurgaon, India. The authors used a randomised controlled trial design, selecting 25 of every 60 children from a household census to receive active recruitment to the programme, while the remaining children stayed eligible to attend.
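The assignment step in that design can be sketched in a few lines. The snippet below is an illustrative reconstruction under stated assumptions (blocks of 60 with 25 treated per block, hypothetical ID numbers), not Berry and Linden’s actual code:

```python
# Sketch of block randomisation: from a census list, randomly select
# 25 of every 60 children for active recruitment, leaving the rest as
# an eligible comparison group. IDs and block framing are illustrative.
import random

def assign_recruitment(census_ids, per_block=60, treated_per_block=25, seed=0):
    """Return (treated, control) ID lists using block randomisation."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    treated, control = [], []
    for start in range(0, len(census_ids), per_block):
        block = census_ids[start:start + per_block]
        k = min(treated_per_block, len(block))
        chosen = set(rng.sample(block, k))  # random draw within the block
        treated.extend(i for i in block if i in chosen)
        control.extend(i for i in block if i not in chosen)
    return treated, control

ids = list(range(120))  # a hypothetical census of 120 children
treated, control = assign_recruitment(ids)  # 50 treated, 70 control
```

Randomising within blocks, rather than over the whole census at once, keeps the treated share constant across neighbourhoods or enumeration areas, which simplifies comparison between the two groups.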

A similar evaluation design could be adopted to evaluate educational projects, vocational training or drop-in centres targeting street-connected children. We could also conduct a process evaluation and participatory research involving beneficiaries to enhance the usefulness of the research and include a perspective from programme participants. Only with many more such studies can we really undertake programmes to help street-connected children around the world.

Listen to the podcast of the Systematic Review on Interventions for promoting reintegration and reducing harmful behaviour and lifestyles in street-connected children and young people.

(Esther Coren is a Senior Researcher at Canterbury Christ Church University and was Principal Investigator on the systematic review discussed in this blog.)

Uganda shows its commitment to evaluation

© Amos Gumulira/Save the Children

Uganda’s cabinet has just approved a new monitoring and evaluation policy, which will be officially endorsed and disseminated next month. It comes as a positive signal after several donors suspended aid to the Government and provides a solid foundation to boost the country’s commitment to evidence informed policy-making.

While Uganda reports significant gains in terms of human development indicators since 2000 and was praised for its success story in turning the tide in the fight against HIV, there remain major constraints in terms of economic development, corruption and weak accountability. To address these constraints, it is important for the Government to be able to assess how its services are being delivered, and to receive feedback on where changes and improvements need to be made, and where resources need to be allocated.

The Office of the Prime Minister recently hosted its first evaluation week under the theme of “Enhancing evidence-based evaluations for policymaking: the need and role of effective evaluation systems for service delivery”. Much of the discussion centred on the need to recognise that evaluations are useful, and worth their cost, as a means of improving policies. In fact, during the event the Minister of General Duties, Professor Tasis, said that “evaluation is an investment for transformation, especially if you are engaging with people”. He also acknowledged that “there are tensions in being transparent and creating demand from citizenry when there is capacity to respond”.

The Minister also launched a new evaluation database. This is an important step in increasing transparency and in making evidence available for use – for example, country-specific systematic reviews of programmes in particular sectors.

As part of this four-day event, which was inaugurated by the Prime Minister, 3ie supported a peer learning event where international delegates from Benin, Colombia, Fiji, Pakistan and South Africa, as well as organisations including DfID, Giz, the World Bank’s CLEAR initiative, PSI, HED and others, were invited to learn from the Uganda experience and share good practices in monitoring and evaluation. The question guiding the discussion was how to stoke the demand for evaluation while ensuring that the supply is relevant and of high quality.

The main lessons have been captured in a report by the peer learning team, of which the following are some highlights:
• A strong monitoring system but weak evaluation practice: The Government has introduced a new “traffic light system” to identify the best and worst performing ministries, based on indicators from the budget framework. The data are published twice a year in the Government Performance reports, which are discussed at Cabinet retreats. The challenge now is to rebalance the E in M&E, unpack the new M&E policy, and sensitise all stakeholders and development partners to it.
• To stoke the demand for evaluation, the team also identified the need for: a mechanism to engage stakeholders throughout the evaluation cycle to ensure feasibility and policy relevance; a comprehensive dissemination strategy, including tracking of implementation milestones and follow-up on recommendations; and integration of evaluation with budgeting and planning.
• On the supply side, the team noted that the government needs to demonstrate its commitment to financing its M&E agenda; improve coordination between OPM’s M&E functions and those of other departments; focus on a broad-based capacity development agenda; and establish national standards and guidelines.

This was the first pilot peer learning event for 3ie members, part of our commitment to evaluation project. The next pilot will be hosted by the South African Presidency. Read more about the project at: