Is independence always a good thing?

How the bid for independence can compromise the quality, relevance and use of evaluation
May 1, 2014

Evaluation departments of development agencies have traditionally guarded their independence jealously. They are separate from the operational side of agencies, sometimes entirely distinct, as in the case of the UK’s Independent Commission for Aid Impact or the recently disbanded Swedish Agency for Development Evaluation. Staff from the evaluation department, or at least its head, are often not permitted to stay on in any other department of the agency once their term ends. The reasoning is that their work should not be compromised by a desire to please future colleagues or bosses in other departments. The main argument for independence is that it is necessary to ensure high-quality evaluations that tell it like it is.

But the argument for independence is overstated, for several reasons.

Institutional independence does not necessarily safeguard against biases toward positive evaluation. These can stem from poor-quality evaluation designs, contract renewal bias (an evaluator goes easy on a programme in the hope of being hired again) and ‘friends’ bias (an evaluator forms friendships with those responsible for the programme). These biases are best overcome by having a good quality assurance system that includes external peer reviewers.

And independence comes at a cost. If the evaluation agency sits entirely outside the agency it is evaluating, it can lack both access and influence. It will not have the same understanding of how the agency works, and it will not have complete access to internal project documentation for the purposes of the evaluation. I have evaluated World Bank programmes from both inside and outside the World Bank and can testify that there is a huge difference in the documentation to which you have access under these different circumstances. Influence is also greater for an internal evaluation department, which can keep up sustained communication during and after the study. And all agencies tend to take reports from their own evaluation department more seriously than those from outsiders, who can readily be written off as not really understanding the programme or even the institution.

And finally, what agency evaluation departments do is only a small part of the evaluation story. The vast majority of evaluations in all governments and agencies are those commissioned and managed directly by line ministries and operational departments.  If independence means having nothing to do with these ‘operational’ or ‘self’-evaluations, then that is possibly a great loss for the quality of evaluations.

Luckily, some agencies see this differently. At the Inter-American Development Bank, the Office of Evaluation and Oversight reviews the proposed monitoring and evaluation (M&E) systems of projects at the design stage, rather than waiting around to criticise them once the project is over. At the UK’s Department for International Development, the role of the evaluation department has evolved considerably in the last few years through the creation of an accredited evaluation cadre across the agency and the running of a quality assurance panel for operational evaluations.

Going further still, Mexico’s CONEVAL (National Council for the Evaluation of Social Development Policy) is responsible for reviewing the M&E plans of all government agencies implementing social programmes. CONEVAL sits institutionally in the Ministry of Social Development, thus seeming to compromise the principle of independence. When I asked its director, Gonzalo Hernández Licona, where he stood on the independence versus influence issue, he replied that it is far better to be inside to get access and possible influence. I agree with him.




5 Comments on “Is independence always a good thing?”

  1. Matt Smith

    Interesting timing of your article as I see the Australian Government have just produced a review that argues for increasing the independence of the old AusAID Office for Development Effectiveness by transferring it to another department:

    http://www.ncoa.gov.au/report/phase-one/part-b/7-14-reforming-foreign-aid.html

  2. Howard White (Post author)

    Thanks for sending this link. Indeed, I think there are more important things for ensuring quality evaluation of Australian aid than chasing independence.

    I was struck by the statement that “an important gap is the absence of a single, easily comprehensible scorecard on the effectiveness of the Australian aid programme as a whole”. Many agencies are seeking a better way of assessing results, having at last seen the fallacy of outcome monitoring as “results”.

    Impact evaluation can and should play an important part in any results system. This doesn’t mean that we need to measure impact of every programme and project, in the same way we don’t measure the output of every farm and factory to measure GDP. But agencies need to be doing more impact evaluation than most currently are.

  3. Nigel Thornton

    Howard, thanks for the blog. I’d say there are reasons for independence other than effectiveness and quality. In the case of the UK’s Independent Commission for Aid Impact, the primary purpose of the Commission is, in reality, political. ICAI was created to demonstrate to the UK public that there was a dedicated safeguarding agency for the 0.7% of GNI spent by the UK on ODA. In spite of some public comments to this effect, it was not intended to replace the internal evaluation function; indeed DFID itself now spends more on evaluations than it did prior to ICAI’s creation, both in countries and from the centre. On the evaluation spectrum between accountability and learning, ICAI sits very much towards the former.

    On safeguarding bias, ICAI also has a different model from other agencies; there is peer review (of methodology and findings). The key difference of the ICAI model is the independently appointed Commissioners. It is the Commissioners who own, direct and eventually present the reports to the UK Parliament, through the International Development Committee. They are appointed as knowledgeable but independent representatives of the UK public and their positions are time limited. As a result they are not beholden to anyone in the aid industry. Technical teams support them, gathering evidence on their behalf, but in the end it is the Commissioners who score the reports, draw conclusions and make recommendations. That’s unique, I think.

    You’ll also be interested to note that ICAI has access to all DFID’s files and information and that, where possible, it has a policy of using DFID-collected data first, including DFID’s own evaluations and reports. ICAI has sought to validate the quality of DFID’s own reporting in several of its enquiries, often with positive results.

    At this point I declare the all too obvious interest that I’ve been part of the team delivering these reports for ICAI over the last three years, leading several reviews (including the recent one on How DFID Learns…).

  4. heather

    Thanks for this post, Howard:

    I would think we would want to strive for something like ‘analytic independence’ rather than ‘institutional independence’. It seems that the main things we *do* want to maintain independence on are (a) to whom and when the evaluators can speak and the questions they can pose, (b) the actual analysis, and (c) the reporting. In addition, I suspect it *is* important that participants in the evaluated intervention see the evaluators and implementers as separate and independent, no?

    Three questions:
    1. On (b), pushing for blinded analysis should help, no?
    2. Maybe you can do another post on how Policy Windows (and similar Thematic Windows) strive for the right balance?
    3. At what stage will external peer review be particularly helpful? In approving the design and scope of the evaluation? It’s not clear to me that most academic papers provide enough information to a reviewer on how an evaluation team and implementation team selected each other, let alone how chummy they were throughout the process. Conflict of interest statements usually cover monetary relationships but not other types of relationship. Absent a pre-analysis plan or similar, a reviewer of the final paper doesn’t know the original questions or the scope of data collected, so it’s hard to assess what the evaluators opted to leave out.

  5. Kate

    Thanks for an interesting blog. Just wondering how you see this with individual evaluations (rather than evaluation departments) – I think independence is listed as a principle to be followed in applications for 3ie funding? I’d understood that meant 3ie preferred having an external organisation/researcher leading (or at least taking a strong role in) the evaluation, rather than it being done by the implementers. Wouldn’t that independence have some of the same costs you outline above?
