Wednesday, 14 September 2011

Exploring evaluation approaches: Are there any limits to theories of change?

By Chris Barnett

I’m in Brussels co-facilitating a course on evaluation in conflict-affected countries, with Channel Research. We are exploring new and alternative approaches to evaluation, building on recent experiences of multi-donor evaluations in South Sudan and the Democratic Republic of Congo (DRC). The South Sudan and DRC evaluations are part of a suite of evaluations that sought to test the draft OECD Guidance on Evaluating Conflict Prevention and Peacebuilding Activities.

While the context is very specific, I’m hoping that the discussions will raise some interesting issues around the way we approach evaluation, and particularly how we use theories of change. “Theory of change” is a much-overused phrase at the moment, and one that seems to mean different things to different people. In this case it is defined as “the set of beliefs [and assumptions] about how and why an initiative will work to change the conflict” (OECD Guidance, page 35). Duncan Green, in his blog, also helpfully distinguishes a theory of change (a classic, linear intervention logic, or results chain, used as a basis for logical frameworks) from theories of change (such as a range of theories about the political economy of how and why change occurs).

Photo courtesy of Jon Bennett
Working in conflict-affected states poses many challenges for evaluation, not least the changing context, instability and insecurity. In most cases it is not feasible to set up a controlled experiment and maintain it over a reasonable period of time. There are not only the cost and ethical issues of distributing benefits randomly, but also the sheer technical difficulty of maintaining a robust counterfactual in a context where there is so much change. It is not impossible, of course (e.g. IRC’s evaluation of Community Driven Reconstruction in Liberia); it is just often not appropriate or feasible.

Hence, the OECD Guidance focuses on a theory-based approach to evaluation (NB: Henry Lucas and Richard Longhurst, IDS Bulletin 41:6, provide a useful overview of different evaluation approaches). At the heart of the OECD Guidance is the need to identify theories of change, against which to evaluate performance.

But in South Sudan and DRC we found a number of limitations to this approach:

1. Firstly, we found it challenging to apply a theory of change approach at the policy or strategic level. Most donors did not articulate a transparent, evidence-based rationale for intervening – sometimes intentionally so, given the dynamic and sensitive context. This meant that reconstructing theories of change for evaluation purposes became highly interpretive and open to challenge – particularly when drawing out differences between stated and de facto policies.

2. Secondly, we found that different theories of change existed at different levels. As one moved down from the headquarters level to the capital city, and on to local government and field levels, views differed about the drivers of conflict and the theories of change necessary to address them. This presented the evaluator with a dilemma – and the evaluator was sometimes wrongly cast as arbiter between these different perspectives and realities.

3. Thirdly, while lots of activities contribute to conflict prevention and peacebuilding, many were not explicit about such objectives. Again, the reconstruction of the de facto theories of change against which to assess performance becomes highly interpretive and more open to challenge.

So what do we hope to do this week? We will be exploring alternatives to such Objective- or Goal-Based evaluations, which assess performance against the stated (or reconstructed) theories behind an intervention. Instead, we’ll explore some Goal-Free alternatives – where data is gathered to compare outcomes with the actual needs of the target audience, using reality as the reference point rather than a programme theory. After all, in many walks of life we do not “evaluate” performance against stated objectives: when we assess whether a car is good or not, we do not consider whether the design team fulfilled its objectives! Rather, we are interested in whether it fulfils our needs.