Monday, 24 October 2011

Beyond happy sheets: outcome-focused event evaluation

By Penelope Beynon

Since joining the knowledge for development sector in June last year, I have participated in no fewer than two international conferences, three regional workshops and a host of cross-organisational meetings (and sent apologies for three times as many of each). Some cost money (for international or intercity travel), all have opportunity costs (being here instead of there) and all cost time.

As a participant, I find there is something innately attractive and energising about being together in a room with experts and peers that just cannot be simulated through online alternatives; but as a taxpayer I can’t quite shake that uncomfortable question – was it worth it?

In my role as M&E advisor I am occasionally asked how to evaluate events – while I haven’t yet found a tried and tested method that fits every event, I thought I’d share a few things I have learnt along the way.

With a few notable exceptions (e.g. A Process and Outcomes Evaluation of the International AIDS Conference, Lalonde et al 2007), most organisers fail to evaluate their events beyond a cursory feedback form that gauges audience satisfaction (commonly referred to as a ‘happy sheet’). But, if an organiser did want to push their evaluation to a new level and address the ‘uncomfortable’ question of worth – where would they begin?

In its most simplistic form, I propose that a worthwhile event evaluation needs to gather three types of information:
  • Costs 
  • Outcomes 
  • Reasonable alternatives

The full financial cost of events is rarely included in evaluation

The table below summarises some of the areas where events incur costs. Unsurprisingly, few organisers publish the full financial costs of their events (grey box) or add up their own financial and time costs (grey + purple boxes) for the purposes of evaluation, let alone begin to consider the sectoral costs of their event to participants and contributors.
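To make the point concrete, here is a minimal sketch of what "adding up the boxes" might look like. All of the figures and cost categories below are invented for illustration; they do not come from the post or its table.

```python
# Hypothetical illustration of totalling event costs across the three
# groups named in the post: organisers, contributors and participants.
# All figures and category splits are invented, not taken from the post.
costs = {
    "organiser":    {"financial": 5500, "time": 3000},
    "contributors": {"financial": 2500, "time": 1200},
    "participants": {"financial": 9000, "time": 15000},
}

organiser_financial = costs["organiser"]["financial"]                  # the 'grey box'
organiser_total = sum(costs["organiser"].values())                     # grey + purple boxes
sectoral_total = sum(sum(group.values()) for group in costs.values())  # all groups, all costs

print("Organiser financial cost:", organiser_financial)
print("Organiser full cost:     ", organiser_total)
print("Full sectoral cost:      ", sectoral_total)
```

The point of the exercise is simply that the sectoral total is usually several times the figure organisers report, because participants' travel and time dominate.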

Focusing on desired outcomes
Learning events may benefit all of these groups (P. Beynon, IDS)

1.    Spread your net wide when looking for outcomes

A common shortcoming of most outcome-focused event evaluations that I have unearthed (of which there are few to begin with) is a narrow concept of where benefits will occur and an almost exclusive focus on participants as the subjects for evaluation. Just as there are at least three groups who can incur costs for an event, these same groups could feasibly accrue benefits (see diagram).

2.    Tailor your evaluation tools to match desired outcomes

Like all interventions, face-to-face events do not happen in isolation; they are usually part of a wider set of strategies intended (implicitly or explicitly) to contribute in some way to a programme's overall theory of change. Unfortunately, more often than not this link is not properly explored, and event objectives read like either a) a less-than-ambitious list of activities, or b) an overly ambitious set of development aspirations well beyond anything the event could possibly deliver. Work closely with organisers to flesh out their theory of change and to situate the conference objectives within the wider programme context; then you will be able to tailor your evaluation tools to match the desired outcomes.

While some organisers are coming up with interesting tools and approaches for outcome-focused event evaluation (e.g. network mapping (PDF), 3-test self-assessment), which I explore along with a few of our own attempts in a forthcoming ILT Practice In-Brief paper, most still limit their data sources to attendance records and the standard ‘happy sheet’.

3.    Follow through on your follow up!

The biggest limitation of most event evaluations is a lack of meaningful follow-up. Change takes time, and unless you follow up with participants once they are back in their workplace you will only capture intended behaviour change or the initial step towards an extended network. Be disciplined: schedule event follow-ups for 3, 6, even 12 months after the fact.

Is there a cheaper way to achieve the same outcomes?
Well, this really is the million-dollar question, and without a clear picture of our costs and benefits it simply cannot be answered. But once you do have this level of information for one event, you can start comparing that event with another, and perhaps even progress to comparing all your face-to-face events with other strategies that use different tactics to achieve similar aims: ongoing rather than one-off events; online rather than face-to-face convening; one-to-one rather than convened events...
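Once cost and outcome data exist for each strategy, a simple cost-per-outcome ratio is one (admittedly crude) way to compare them. This sketch uses invented figures and an invented outcome measure ("new collaborations"); it illustrates the arithmetic, not a recommended metric.

```python
# Hypothetical sketch: comparing delivery strategies by cost per outcome.
# Both the figures and the outcome measure ('new collaborations') are
# invented for illustration.
strategies = [
    {"name": "face-to-face conference", "total_cost": 36200, "new_collaborations": 18},
    {"name": "online convening",        "total_cost": 4500,  "new_collaborations": 7},
    {"name": "one-to-one meetings",     "total_cost": 2000,  "new_collaborations": 3},
]

for s in strategies:
    s["cost_per_outcome"] = s["total_cost"] / s["new_collaborations"]

# Rank strategies from cheapest to most expensive per outcome achieved
for s in sorted(strategies, key=lambda s: s["cost_per_outcome"]):
    print(f"{s['name']}: {s['cost_per_outcome']:.0f} per new collaboration")
```

Even a ratio this crude forces the evaluation to name both a cost total and an outcome measure, which is precisely the information most event evaluations never collect.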

To conclude
As the saying goes, “If it’s worth doing at all it is worth doing properly” - so I urge organisers to go beyond ‘happy sheets’ and really scrutinise the worth of their events for their own sake and for the sake of the sector.

Friday, 14 October 2011

Early headlines from research on policy makers and ICTs: "persistent and curious enquirers" (with smartphones)

By Simon Batchelor

Just to keep you up to date on the country studies that I mentioned in my first blog (in which I spoke about research we were conducting on policy makers and their use of ICTs): a lot of the data is now in. Some countries found it easier than others to get interviews with senior policy makers, so some countries have still to deliver their full quota.

However, we have now begun the analysis and are finding some interesting headlines. As I write, my colleagues Jon Gregson and Manu Tyagi are presenting some headlines back to a portion of the intermediary sector in India and Nepal, and Chris Barnett presented last week in Ghana. I would like to acknowledge the work of our partners ODC Inc in Nepal and Delink Services in Ghana.

So what are some of those headlines?

We will upload the slides to SlideShare soon, but in brief here are some of the things that attracted my attention:

Policy actors have access to ICT, and a considerable number of them have smartphones and, to my mind more importantly, know how to use them!

Of course, almost all of them have access to computers, the internet and cellphones. But 52% of the sample in Ghana, 49% in Nepal and 35% in North India have smartphones. In Ghana, 25% had more than one smartphone! And of those who have a smartphone, almost all in Ghana and Nepal have explored sending emails, surfing the internet on the phone, recording video and instant messaging. Only in North India did a significant portion of people (about 50%) have a smartphone yet not explore these ‘features’.

What does this mean to us in the intermediary sector? It suggests that if you are developing an app to push research into the policy environment, then the baseline of smartphone use is there.

Policy actors are surfing the internet themselves – the idea that policy makers wait for an assistant to brief them seems to be diminishing.  

In all three countries, the majority of policy makers agreed with statements about their own use of ICT and surfing the internet. They described themselves as ‘a "persistent and curious" enquirer’ and noted that they ‘often "discover" other relevant information when searching’ (phrases used by the Pew Internet studies in the USA). They also agreed, to a lesser extent, with ‘I tend to get my briefings face to face officially, in meetings’. In Ghana, where there was a significant proportion of private sector executives, a significant number actually disagreed with the idea that they got their information from ‘official briefings’.

What does this mean to us in the intermediary sector? It suggests that policy actors are looking for information themselves and therefore, I presume, need to find it easily, in an accessible form and, I guess, quickly.

Yes, I know that searching for information online is evolving, and that social networks now tend to push information within the network. This changes the way those of us who are well connected get our information. We did investigate whether the policy actors are connected to social media networks, and to some extent looked at their searching behaviours, but the analysis is not yet at a point where we can comment on this. Watch this space.

Policy actors do have an appetite for research – or at least they say they do 

There was consistently strong agreement on the need for facts and figures, and that these need to be up to date. We explored what information they were actually looking for, and whether they trust the sources and channels for that information. Again, these details will come out as the analysis proceeds. However, there was an interesting difference between the three countries: in India there is strong trust in ‘local research’ (as opposed to international research), whereas in Ghana and Nepal policy actors rate international research much higher than local research.

What does this mean to us in the intermediary sector? In our MK4D programme, we are working on the idea that local intermediaries understand the context of research and policy in their location, and therefore have strong grounds from which to communicate research to policy makers. However, we also work with the idea of ‘co-construction’: working alongside and with our colleagues in the South. If ‘local research’ is trusted less by policy actors, that would seem to endorse the approach of co-construction, where local and international bodies work together to provide quality insights. It also suggests that our programme to support the exposure of research published in the South onto the global internet is heading in the right direction.

Anyway, those are some insights from the first week of analysis.  More to come.

Friday, 7 October 2011

Getting serious about the evidence in policy making

By Nick Perkins
Earlier this year, the International Initiative for Impact Evaluation - better known as 3ie - convened a conference in Mexico called Mind the Gap: From Evidence to Policy Impact.

I liked the idea of dedicating 3 days, dozens of presentations and hundreds of blog posts to that little ‘leap of faith’ which characterises so many theories of change about what research can do for development.

The problem we face is that the normative idea of how policy should be made – based on objective evidence – is seldom the reality; instead, policy is made through political expediency. Political expediency is understood here as a range of contextual influences on the decision-making process. Described this way, there is something inevitable about it.

Current thinking is that this expediency can be addressed through the mediation of research knowledge. This has given rise to the research mediation sector: institutions, and individuals within institutions, who seek to frame research in a way that makes it accessible and relevant to people working in key policy spheres.

What this reveals is a kind of contradiction at the heart of the development knowledge sector. While we call for evidence-based policy making, there is also increasing investment in the complex process that shapes decision making. A way through this may lie in a closer look at what research mediation actually entails.

A couple of years ago, IDS held a series of ‘influencing seminars’ which revealed how different disciplinary communities nuanced their approaches to policy influence depending on how they understood change to happen. None of them declared disdain for the value of quality evidence. Instead, they all expressed differing views of what constitutes ‘quality’ evidence and how to gain traction with those who might need it.

What emerged was a framework of four different ways of building an effective relationship between research and quality policy making.

The first is about generating as many policy options as possible. This emphasises the use of repositories to allow users to sift through the options for themselves.

The second is evidence-based and prioritises the familiar idea that the quality of the research evidence is what will best inform the quality of the decision. Systematic reviews are seen as crucial in the research mediation process here.

Third is the value-led idea of policy-making. There are many examples of this leading to bad science, but it is by far the most common type of public policy making. Networks and epistemic communities are critical to the mediation process in this case.

Finally, we have the relational model of influence, which maintains that no amount of research will influence a policymaker if there is not a relationship that reflects equity and a balance of power, where researchers or mediators are themselves subject to some influence.

Clearly, though, none of these frames is mutually exclusive. Perhaps the point is that we can support the complex reality of policy influence, which draws on all of them, without losing sight of where we ultimately need to get to. In fact, using a little political expediency ourselves can go a long way towards crossing what is too often seen as a small gap.