Monday, 30 January 2012

Change is hard but not impossible with a little help from ELFs (part 2)

Guest post: We welcome back Elise Wach, Evaluation Consultant at the Institute of Development Studies (IDS)

To follow on from my post last week on how change is hard to achieve, even when you know an approach isn't working, here's an update on the Learning Retreat that the Impact and Learning Team facilitated for the IRC (International Water and Sanitation Centre) Triple-S Initiative.

We kicked off the 2-day retreat by hearing reports back on the variety of learning streams that had taken place so far and assessing progress.  This led into discussions about the approach of the initiative as well as the approach of the evaluation and learning.  Were course corrections needed in order to achieve the desired outcomes of the initiative?  Were adjustments to the evaluation and learning methodologies needed in order to better capture initiative progress?

In terms of the approach of the initiative, one of the primary concerns was related to external communication - getting the right information out to the right people in a timely manner.  IRC’s key staff immediately made commitments to improve this:
  1. Set up an organisational blog to get discussions started and put information out on the web (see Emilie Wilson’s entry about the merits of blogging).  
  2. Post more information and resources on the website, even if not yet polished.  A key phrase throughout the retreat was to ‘not let the perfect be the enemy of the perfectly good’.  Resources and information may take the form of videos, slide decks, and less formal reports.
  3. Improve the layout of the website to make it more user-friendly, and improve the search engine tags to make the site easier to find.
Other commitments were also made to improve the approach of the initiative, including revisiting the Theory of Change.  This will surely be discussed in detail at the next Learning Retreat (scheduled for the end of April), which will go into the data and its synthesis in more depth and also include the initiative's advisory group.

In terms of the evaluation and learning approaches, a few changes will be made to the timing and scope of data collection and analysis.  For example, given the disconnect between policy and practice discussed in last week’s entry, it was decided that in addition to analysing the policy documents of key agencies and organisations in the water sector, IRC and ILT will also analyse documents that might indicate that shifts in policies are being reflected in practice, such as calls for proposals and project reports.  That information will of course be posted on the Triple-S website in an effort to give these agencies a little extra nudge towards sustainable services that last.  

It was also determined that impact weighting for different outcomes and milestones might prove useful, along the lines of DFID’s revised approach to logframes (PDF) (though Triple-S is looking to move away from a linear/tabular format and towards more mind mapping and video).  

What was interesting to me about the retreat was the fact that none of the data discussed revealed anything incredibly new or surprising to the Triple-S team.  But for some reason, getting a group of people in a room together and setting aside time to specifically discuss progress and obstacles can be extremely effective for getting decisions made…especially if External Learning Facilitators (endearingly referred to as ELFs) are there to help the process along.

IRC has committed to starting their blog in the next week and will soon be posting more resources on their website, including the full report from the Learning Retreat (which will cover much more than I’m able to include here). As for making it easier to find the Triple-S website, you can try to Google it for yourself, but I think they’re still working on this one (unfortunately there’s a clothing company that goes by the same name!), so just in case, the "Water Services that Last" website is here.

Tuesday, 17 January 2012

Change is hard

By Elise Wach

Elise Wach is currently working with the Impact and Learning team to support the IRC International Water and Sanitation Centre through a learning process around its Triple-S Initiative.

Penelope Beynon’s blog about failure and learning brings up some interesting and very valid points about the recent attention to failure, evaluation and learning in development. While it is essential (and quite difficult) for the development community to know when their programming is unsuccessful, and to admit this, it doesn’t do any good if we don’t then learn from our failures and change our approaches so as not to repeat them.

But how does change happen?

The IRC International Water and Sanitation Centre is in the middle of a six-year initiative which is attempting to shift the rural water sector away from a one-off infrastructure-based approach towards a Service Delivery Approach, which they term Sustainable Service Delivery at Scale (Triple-S). How to enact change is exactly what they’re trying to figure out.

For decades, most development organisations and agencies in the rural water sector (like most sectors) did not know about, or did not want to know about, their failures. They were oblivious to the fact that the majority of their boreholes fell into disrepair within five to seven years, or that many were never even used at all.  Without this knowledge, it is easy to see why development organisations and agencies charged ahead with the same unsuccessful approaches.  

However, in a recent round of interviews I conducted with key stakeholders in the sector (as part of the Impact and Learning Team's support to IRC on this initiative), it was overwhelmingly apparent that everyone in the water sector now knows that the standard approach to rural water supply has been ineffective and unsustainable. It is common knowledge that the sector has been failing.

So the rural water sector has overcome that essential but difficult first hurdle of finding out about and acknowledging failure.

And in the most recent round of interviews, key stakeholders in the sector generally agreed that the discourse at the top – the policy-level – is starting to reflect these revelations.  But funding practices and implementation on the ground seem to have continued relatively unchanged: the same infrastructure-focused, unsuccessful approaches continue to dominate. Why?

Because change is hard.

One interviewee explained, ‘We’ve been engineered to do small-scale piecemeal interventions…so of course shifting to more of a sustainable approach at scale (vis-à-vis financial flows, regulations, norms, and standards) is going to take time.  There will be resistance to change.’

Changing approaches to realising change

To date, IRC’s Triple-S initiative has been attempting to accelerate changes within the sector through three main approaches:
  • Relationship-led (i.e. using champions to mobilise change)
  • Value-led (i.e. leveraging peer pressure and creating coalitions for change)
  • Evidence-led (i.e. providing proof that the current approaches don’t work and proof that other ones do)
The initiative has also been exploring the relationships between policy, funding and practice.

This week, IRC and the Impact and Learning team are holding a learning retreat to go over the findings from the most recent round of stakeholder interviews and other evaluation data.  Based on this, IRC may refine its Theory of Change and tweak its approach to help maximise the efficacy of the initiative moving forward. 

We’ll report back on the outcomes of that learning retreat next week.

In the meantime, a final thought. If more development actors followed a similar approach to IRC (i.e. if they thought through Theories of Change for their approaches and periodically revised them based on real-time evaluations and analysis), it’s not unrealistic to think that the way we work in ‘development’ would be quite different. That is, the phenomenon that Penelope termed as ‘reinventing broken wheels’ might not be as common. Change is hard, but not impossible, and it is certainly needed.

Wednesday, 11 January 2012

Are we reinventing broken wheels? Let’s talk about the ‘F’ word

By Penelope Beynon

A common saying goes "The only real failure in life is the failure to try."  I disagree.

I think the worst failure in life (and in knowledge brokering) is the repetition of an established mistake. That is to say, the worst failure is the failure to learn.

In recent months, I have come across an increasing number of websites, discussions and articles that almost celebrate failure, in an effort to foster a culture of sharing and learning from others’ mistakes. The Engineers Without Borders (EWB) website Admitting Failures is a good example. In their own words:

"By hiding our failures, we are condemning ourselves to repeat them and we are stifling innovation. In doing so, we are condemning ourselves to continue under-performance in the development sector.

Conversely, by admitting our failures – publicly sharing them not as shameful acts, but as important lessons – we contribute to a culture in development where failure is recognized as essential to success."

While I agree with the premise, oftentimes it is not fully realised.

Ironically, perhaps, several of the ‘failures’ admitted on the EWB website are, in fact, examples of people’s failure to learn from past mistakes – their own and those of others. That is, they are reinventing broken wheels, sometimes under the guise of 'innovation'.

Innovation is important for progress, and with innovation comes a certain level of risk. But I think these risks need to be calculated, and one of the key considerations should be a thorough investigation of whether a particular experiment is truly an innovation or whether it has already been tested elsewhere. That is, an honest commitment to learning before doing as well as learning after doing. I hear the echo of Catherine’s recent blog, where she challenges knowledge brokers to practice what they preach.

Lessons identified or lessons learnt? 

Learning is a big theme for the Impact and Learning Team at IDS, and we have recently been thinking a lot about the difference between a lesson identified and a lesson learned.

In our view, a lesson is only really 'learned' when the implications of the lesson are acted upon. Far too often we see After Action Reviews and evaluation documents that recite from their own experience ‘lessons’ that are insights long established internally and already documented in the experience of others (e.g. developing partnerships takes time, communication matters, etc.). Very seldom does anyone pick up that the worst failure here was not the failure to communicate but the failure to identify ahead of time that communication matters and to learn from others’ experiences about how to do it well.

One outstanding example of a lesson that was learned (albeit the hard way) is retold by Lieven Claessens, a researcher from the International Potato Centre (CIP), in two short videos produced by the Consultative Group on International Agricultural Research (CGIAR)'s ICT-KM programme.

In the first video, Claessens identifies the lessons by bravely telling a rather sobering story about his failure to communicate research findings in a way that people likely to be affected could understand and use for decision making. Had the findings of his 2007 research been acted on, the devastating effects of the 2010 mudslides in Eastern Uganda could have been mitigated, potentially saving the lives of hundreds of people and the livelihoods of hundreds more.  In his second video, Claessens evidences his learning by telling how he has changed his approach and commitment to communicating research to ensure he does not repeat this same mistake.

I find Claessens' story deeply moving for two reasons.

Firstly, I take my hat off to anyone who owns up to their part in a failure with such devastating consequences. Especially where that failure could as easily have been passed off to someone else.

Secondly, I find the story unique in its clarity about the link between research communication and wellbeing outcomes. Or, in this case, between failure to communicate research and negative outcomes. Often that link is much less clear for knowledge brokering. In fact, just as it is difficult (if not impossible) to attribute development outcomes to knowledge brokering work, it is equally difficult to attribute negative development outcomes to failures in the same area. Perhaps this provides something of a safety net that allows us to distance ourselves from consequences, or maybe it is one of the reasons that it is apparently so hard to talk about failure in the knowledge brokering arena.

Tuesday, 3 January 2012

Buzzing about brokers: knowledge brokers reach across silos

By Catherine Fisher

Early in December I found myself in the unusual situation of being in a room full of people talking about knowledge brokering at a conference entitled "Bridging the gap between research, policy and practice: the importance of intermediaries (knowledge brokers) in producing research impact"* organised by the ESRC Policy and Research Genomics forum.

The event brought together people from UK universities, NGOs, public bodies ranging from health to education and a sprinkling of upbeat Canadians. The development sector was well represented, with DFID the best represented of UK government departments, perhaps reflecting the emphasis placed on evidence-based policy and research impact by DFID itself and within the development sector more broadly.

It was the first time I had attended a conference of this kind in the UK so I was unsure what to expect. We know that knowledge about knowledge brokering seems to be silo-ed, not crossing between sectors. There are also differences in terms used to describe this kind of work. So as a presenter I was nervous I would be stating the obvious to a crowd who knew far more than I did. As conversation and coffee flowed, my fears were allayed: I had a lot to learn but, as I reflect below, the debates in the development sector I have been involved in are not miles away from debates elsewhere and in fact have something to add.

I presented as part of a panel exploring Knowledge Brokering in Development Contexts, alongside Kirsty Newman from INASP, Ajoy Datta from ODI and Matthew Harvey from DFID (all presentations are available on the conference webpage; our session was 3E).

Here I share 5 of my reflections from the event:

The term "knowledge brokering" encompasses a wide range of action
I was not the only person to reflect that the term "knowledge brokering" was being used differently by different people.  Many people were using "knowledge brokering" to describe what I understand to be “research communication”, that is, trying to ensure a piece of research is effectively communicated so that it has impact. This is in contrast to how I understand knowledge brokering, which I see as helping to ensure that people are able to access research when they need it and that decision-making processes are informed by a wide range of research.  Put simply, it's the difference between seeking to change a policy or practice to reflect the findings of a piece of research (research impact) as opposed to seeking to change the behaviours of those in policy processes so that they draw on a wide range of research (evidence-informed policy). There are of course grey areas between these extremes; for example, knowledge brokers within universities who seek to ensure that the knowledge of that university is mobilised for the community in which it is located: the Knowledge Mobilisation Unit at York University in Canada is a great example of practice that effectively sits between the extremes I have described.

Why we need labels (even if we hate talking about them)
Which brings me to my next point! People resent the term "knowledge brokering" as much as they resent talking about labels: for an interesting debate about the value of a label, see the KMBeing blog. Personally, I feel that without a term to describe this kind of work we would be unable to come together to discuss it (what would you call the conference/network?!). Conversely, if we use the same term to discuss totally different things, we risk confusing rather than clarifying our work.  The summary of the Knowledge Brokers Forum discussion about terms and concepts is a good attempt to clarify and understand terms.  I still feel that language is the main tool we have to communicate our ideas, and that it matters!

Consideration of power and politics: development sector has something to add
I was a little nervous that the debate about knowledge brokering would be very advanced, and that the insights I shared in my presentation would be stating the obvious. Yet this did not seem to be the case: many of the issues raised during plenary and earlier sessions were familiar (e.g. the pros and cons of the policy brief as a communications tool, how to motivate researchers to communicate their work, etc.). The presentations from the development sector raised two areas in particular that did not appear in other presentations I attended. Firstly, an attempt to understand politics with a big and small “p”: looking at the contexts and motivations around decision-making. Secondly, a consideration of power and equity within knowledge brokering, asking “whose knowledge counts?”

What is a good knowledge broker? A fleet-footed, cheerleading, creative therapist! 

A highlight for me was the presentation by David Phipps (York Uni) and Sarah Morton (Centre for Research on Family and Relationships) exploring the qualities of a good knowledge broker (pdf). From their experience, it is someone who is fleet-footed, a cheerleader, creative, and a therapist. That is, they have soft skills or competencies rather than specific technical capacities (although they will need these too!), plus a passion for the area, tact, negotiation skills and commitment. Like David and Sarah, I think the soft skills of knowledge brokers are key; a paper I wrote last year entitled Five Characteristics of Effective Intermediary Organisations (PDF) explored how these soft skills can be supported and enabled at an organisational level.

Why don’t knowledge brokers practice what they preach?
As part of a devastating critique of the ESRC “Pathways to Impact” toolkit, Dr Simon Pardoe pointed out how little reference it made to evidence from social science that is relevant to the art and science of effective knowledge brokering. This observation, that knowledge brokering somehow has no need to be evidence-based itself, has emerged a number of times; for example, in the summary of the Knowledge Brokers Forum discussion, which recognised the need for “greater linking of theory and practice”. I wonder whether the hybrid nature of the role means there are so many potential bodies of knowledge to draw on that people don’t draw on any! Sarah Morton and David Phipps talked of their practical ways of addressing this: a “practice what you preach” Community of Practice and “learning days”, respectively. They have a forthcoming paper to watch out for.

Any of these areas could be a blog posting, a paper or indeed a PhD themselves – I have just skimmed the surface of a great day. I hope the enthusiasm generated and connections formed will build towards greater understanding of the theory and practice of knowledge brokering.

An archive of tweets posted from the conference contains some interesting thoughts and links to resources.

* The long titles of these events reflect the difficulty of describing them and the lack of shared language – check out the conference I organised in collaboration with HSRC in 2008, which laboured under the title “Locating the Power of In-between: how research brokers and intermediaries support evidence-based pro-poor policy and practice”.