With my colleagues from evaluation5.0 we wrote a paper for the upcoming INTRAC conference on monitoring and evaluation, entitled “Fifth generation evaluation of an HIV/AIDS prevention programme among LGTBI in 15 countries in Africa and Latin America”. In the paper we describe how we used constructivist evaluation methodologies to facilitate the evaluation of the International Programme of the Schorer Foundation, Amsterdam, generating a process that produced real learning among all involved.
Comments on a very useful brief on theories of (policy) change on evaluation5.0: check it out!
Evaluation5.0 is an initiative of people who work together to promote innovation in evaluation. Recently Evaluation5.0 has been working on its business model, using the approach that I used on an earlier occasion with WHEAT to develop their business model. The evolving Evaluation5.0 model looks like this.
I have been too busy for blogging lately. Actually, since September I have worked and traveled, supporting three non-governmental organizations in Nicaragua, Bolivia and Brazil that work with commercial sex workers, helping them learn from what they do. As part of an evaluation process I also wrote case stories on the prevention of HIV/AIDS among young gays in Honduras and among transsexuals in Ecuador. And I am involved in the evaluation of the women’s rights component of a programme of grants covering several countries in Latin America, a major evaluation project with eight other professionals from Costa Rica, Honduras, Haiti, Uruguay and the Netherlands.
It is very frustrating when you spend more time on report writing for clients than on blogging, if only because some of those reports will never be read by more than five or six people. But I did get to meet some interesting and courageous people: activists for LGTB rights and other human rights defenders, and a variety of street workers and activists who relentlessly fight for the interests of those who are marginalized and excluded. Courageous, because some of them actually received threats. And there was a lot of learning:
– in Nicaragua we tried to set up a simple monitoring system that actually informs us about how the clients are doing, and not only about what the organization is doing with the grant money. From the reports and evaluations I have read over the last few months – piles of them – what the receiving organizations do with the grant money seems to be the overriding concern. Surprisingly few pages are actually dedicated to how all this affects and changes the lives of the beneficiaries.
– it was also nice to work with a Dutch organization that makes grants to organizations in developing countries and that actually invited a group of them to tell it face-to-face how it is doing as a funder. They were really very open. Based on some case stories written by external people, we looked in the group at the concerns and issues that arose from that funding experience. All the organizations got a lot of “food for thought” out of that open and frank debate, which they will most probably use to improve what they are doing: something that is not the standard for evaluations in the context of development coöperation.
– and I also learned a lot myself: about the need to improve my skills as an interviewer and about my strengths in facilitation, about the situation of women’s rights and women’s rights defenders in Latin America, and about a neat technique to “disassemble” controversial statements with a group: with some colored cards and some magic you can bring in different experiences and look at the essence of the problem, instead of having a traditional exchange of views.
And before you know it fall is over and we are all heading for winter… Brussels is freezing cold, but with some luck my strawberries will come back in the spring…
It actually snowed tonight:
If you Google for development coöperation (just over 1,000,000 hits) you will find that some 132,000 of those hits refer to both “evaluation” and “behavio(u)r(s)”. So behaviour is clearly an issue when you look at the evaluation of development coöperation.
Out of the one million hits on development coöperation, 4,730 give a hit for “evaluation tools” and 2,180 for “evaluation tool”. For “evaluation methodology” the result is 5,160, and for its plural 2,430. If you look within the one million hits for development coöperation for “behavio(u)r(s) of evaluators” you will get exactly two hits.
So much reflection on behaviour and on evaluation methodologies and tools, but the behaviour of evaluators is hardly ever studied, at least not in the context of development coöperation.
Most evaluations primarily aim to satisfy accountability requirements. Tax money is allocated and channeled down a chain towards public or private actors and finally to the ultimate beneficiaries. Actors in the chain need to inform the actors one step above them. Besides the information coming from the actor itself, external audits ensure that resources are handled in accordance with certain standards and are not misused for purposes other than those they were intended for. External evaluation normally looks at whether the “use according to the originally intended purposes” actually has any of the originally intended effects on the ultimate beneficiaries.
But do we actually learn from evaluations? Learning and accountability are not the same. Accountability normally has a legal and institutional dimension; it involves negotiation, judgment and possibly punishment. Learning is a much more fluid process: there are few rules, it can be an individual or an organizational dynamic, and it can be quite irrational, involving emotions.
It happened to me again a few months ago: the evaluator comes in, hired by the provider of our project’s financial resources. We are to be reviewed “as part of standard accountability procedures”. After having told us what will happen to us – we and our work are the ‘object’ of evaluation, the evaluator and the donor are the subjects in this evaluation grammar – the evaluator cheerfully adds that he hopes we will also learn from the process. Intimidated, I nod: of course we hope to learn; it would actually be inappropriate to say you do not intend or expect to learn. But you know you will not. The fact that you are treated as an object, and respond that way, seems to play a role in this, but how?
Weisbord and Janoff use a four-room model for learning. Home base is the room labeled “contentment”. Inevitably, at some stage we find ourselves in “denial”: reality has changed, our assumptions were wrong, things do not go as well as expected. In a learning process you are able to move on, acknowledge you have a problem and focus on finding a solution: you may linger for some time in the room labeled “confusion”. That uncertainty can lead to growth and change and move you back into “contentment”; or, if learning does not take place, it hurls you back into the room of denial, where you wait for someone or something to draw you back into the uncomfortable room of confusion.
What influences whether an evaluation actually helps us overcome denial and confusion and generates growth and learning? When I commission an evaluation, or when I am being evaluated in order to be accountable, will I actually also learn? Besides methodologies and tools, the behaviour of all those directly involved will very much determine whether I actually make my round through those four rooms of learning. But if Google is any measure, there does not yet seem to be any serious confusion about the behaviour of evaluators.