Recently I have been involved with a team of researchers in researching so-called ‘transformational change’ in the not-for-profit sector. I suspect the research has been commissioned on the understanding that transformational change is something which senior managers choose, and can, to a degree, control. We are at the beginning of the research, but the process itself has already thrown up interesting insights into research methods, and also into how the idea of transformation is framed and understood by our commissioners and by the respondents. This helps us researchers understand the term anew too, but makes it no easier to think and write about.
I was sitting in a meeting with a social development organisation listening to the kinds of requirements that have been placed upon it by a governmental body in order to trigger the full funding for a grant that it had successfully bid for. 10% of the grant is ‘performance related’. In other words, and on a sliding scale of reward for performance, the social development organisation has to prove that it has helped educate a certain number of girls in a developing country to a predicted level of attainment, and that these girls will have stayed in school for the three-year duration of the project and not dropped out. Additionally, money is released against the achievement of pre-reflected project milestones. ‘Results’ are validated by ‘rigorous research methods’, which turned out to mean quasi-experimental methods. In other words, the rubric insists that the project sites be compared with communities where there has been no such intervention, and which are ‘similar in every way’. The organisation will only be fully rewarded if it achieves exactly what it said it would, and precisely to the timetable it set out in the proposal.
This particular social development organisation I am visiting is one amongst a dozen or so others which have received similar or much bigger grants, some of which amount to the low tens of millions. All of them have proposed highly complex interventions in very different developing countries, involving the girls themselves, their families, teachers, head teachers, community groups, religious and community leaders, sometimes even boys. As with most social development these days, the intervention is highly ambitious and leaves the impression that the organisation, working through a local social development organisation in the country concerned, will be intervening in particular communities at breakfast, lunch and dinner, and in a variety of different and incalculable ways. This combination of interventions may be necessary, but the extent and range of them make the question of causality extremely problematic, experimental methods or no.
The other thing that struck me is that the dozen or so social development organisations receiving this money all have to use the same project management tools and frameworks, so that the government department can aggregate progress and results across all countries and all projects. Quantification and standardisation are necessary, then, in order to render the projects commensurable, and in order to make a claim that the government has made a quantifiable contribution to the Millennium Development Goals (MDGs), which it can ‘prove’. The kind of assertion that the government would like to make is that it has improved X tens of thousands of girls’ education to Y degree through its funding of a variety of organisations. These results, the claim will continue, will have been rigorously demonstrated through scientific methods and will therefore be incontestable.
During the last 10-15 years there have been repeated appeals, in books and journals about evaluation, to the complexity sciences to inform evaluative practice. This partly reflects the increased ambition of many social development and health programmes, which are configured with multiple objectives and outcomes, and the perceived inadequacy of linear approaches to evaluating them. It could also be understood as a further evolution of the methods vs theories debate, which has led to theory-based approaches becoming much more widely taken up in evaluative practice. It is now very hard to avoid using a ‘theory of change’ both in programme development and in evaluation. What kind of theory informs a theory of change, however?
Although the discussion over paradigms has clearly not gone away, the turn to the complexity sciences as a resource domain for evaluative insight could be seen as another development in producing richer theories, the better to understand, and make judgements about, complex reality. However, some evaluators are understandably nervous about the challenge of what they perceive as the more radical implications of assuming that non-linear interactions in social life may be the norm, rather than the exception. In a variety of ways they try to subsume these implications under traditional evaluative orthodoxies, which is just how one might expect any thought collective to respond.
One of the main themes of Mats Alvesson and Hugh Willmott’s new edition of their book Making Sense of Management is that management, and the ubiquitous tools and techniques that accompany the practice, are widely taken for granted as neutral, technical and helpful. In detail, and at length, they call these assumptions into question. Further, in a forthcoming article in the Journal of Management Studies, Alvesson, with his co-author André Spicer, goes on to accuse organisations of practising both knowledge and stupidity management. By stupidity management they mean the way that many organisations rush into adopting the latest management fad simply because everyone else is taking it up. They point to an absence of critical reflection and questioning in many organisations.
It is this process of endlessly rushing towards the next big idea, provoked by an anxiety about keeping up with ‘the latest thinking’, or perhaps because of (self-imposed) coercion from peers or scrutinising boards and other agencies, that keeps the management shelves of bookshops filled to overflowing, and management academics and popular writers busy (and sometimes rich).
In an INGO where I was working recently, one of the newer members of staff proudly told me that he was Prince2-trained. This was mentioned in relation to a conversation we were having about what he considered to be the ‘lack of systems’, implying, I think, a lack of rigour, that he perceived in the organisation he had just joined. As someone who once worked as a systems analyst, operating at the interface between software developers and end users, I was prompted to think about why my colleague might believe that a project management method originating in software development, and contested even there as to its usefulness, might also be suitable for managing social development projects. One would hardly look to the domain of IT for examples of projects delivered on time and to budget, even before considering the other, obvious differences between the two fields of activity. Nevertheless, Prince2 is a good example of the kinds of tools, frameworks and methods which increasingly pervade the management of social development, and which are taken to be signs of professionalization in the sector.
Evaluation is a domain of activity which the French sociologist Pierre Bourdieu referred to as a field of specialised production. In other words, it is a highly organised game, extended over time, with its own developing vocabulary, in which a wide variety of players have a heavy investment in continuing to play. Because the game is complex, and played seriously, and those who want to play it must accumulate symbolic and linguistic capital, it is very hard to keep up. To influence the game one must be recognised as a legitimate player, as one worth engaging with, and this requires speaking with the concepts and vocabulary that are valued in the game. To call the game into question, then, paradoxically requires using the vocabulary of the game to criticise the game, and this is no easy thing.
However, a number of evaluation practitioners have begun to question the linearity of development interventions, and therefore the evaluation methods which are commonly used to make judgements about their quality. Since most social development interventions are construed using propositional logic of an if-then kind, it is no surprise that most evaluation methods follow a similar path. As a recent call for papers for an international conference put it, evaluation is understood as being about developing scientifically valid methods to demonstrate that a particular intervention has led causally to a particular outcome. In calling into question the reductive linear logic framing both social development and evaluation, a number of scholars have found themselves turning to the complexity sciences as a resource domain for a different kind of thinking, but have done so with varying degrees of radicalism in calling the evaluation game into question.
I was recently sent a proposal by the designers of a project who intended to demonstrate a particular approach to undertaking development work in a geographical district in a developing country and, if it was successful, then to ‘scale up’ the model to other districts. This was, they said, in order to overcome the piecemeal approach of just working at village level, which led to uneven development. The models embraced both the technical and the social: technical in terms of engineering solutions, and social in the way that they intended to work with different groups to encourage them to commit to the engineering solutions. The idea of modelling assumed that the same outcomes were possible with standardised approaches to both objects and people.
One of the difficulties that this presents is that of assessing the effectiveness of the models in their own right, as distinct from the effectiveness of the organisation’s staff taking up these models with other, local people. The premise seems to be that if the models ‘work’ then anybody can take them up elsewhere with the same effect. This, of course, is the basis of scientific thinking as it applies to the natural world: a method is generalisable if anyone can apply it with the same results. However, if effectiveness is in good part due both to the quality of thinking about method (models) and to the calibre of the people doing the work and the quality of the relationships they form with others, then there is no separating out the contextual from the generalisable. Success will arise from a whole host of local and national factors, while the idea of ‘scaling up’ implies that it is the generalisable factors which are the most important. What is emphasised, then, is abstraction from the context and the privileging of the general over the particular.
A couple of years ago I was contracted to support a programme where peace workers were engaged in action to try and prevent human rights abuses towards a vulnerable population, and I have just been reengaged for reasons I will explore below. Every day in this particular country is different, since in times of military emergency there is no predicting quite how things are going to kick off. The peace workers do plan and undertake certain activities on a more or less regular basis, however.
Two years ago, their funders were dissatisfied with the quality of reporting on these activities and wanted to know what impact the peace workers were having. This is a reasonable question: why spend the money if no one could tell whether it was making a difference? Of course, there was already plenty of anecdotal evidence that it was making a difference to the local population, who understood the peace workers’ efforts to be a kind of solidarity with their suffering, and who relayed their thanks in lots of different ways. But were the peace workers making any material difference, and was this difference worth the money being spent?
The managers of the project thought it would be a good idea to shape it using the logical framework approach (LFA), a planning tool widely used by donors to disaggregate projects into causally derived objectives. Every project has an overall objective, which in this case was construed as being ‘to bring about peace’ in this particular country. Thereafter, the sub-objectives were logically derived from this overarching objective, and the necessary tasks and activities were supposed to tip out from these. If X, then Y. In each placement the peace workers were encouraged to report against these objectives.
So construed, the planning and reporting caused quite widespread frustration, and no better reporting. From the perspective of the peace workers, the objectives they were obliged to report against were often not the things that they ended up doing, because of the exigencies of the war they found themselves caught up in. They were obliged to respond to whatever was happening, which might even prevent them from leaving home if there was a curfew. Moreover, the overall objective was absurd, given this particular programme’s tiny size. The programme was a contribution to bringing about peace, but not in any causally identifiable way. The nature of the LFA also implies progress towards a specified end point: we are here in a situation of war, and through our activities we will bring about a situation of peace, or fewer human rights abuses, or fewer attacks on vulnerable communities, at a certain point in the future. There will be improvement, which we can demonstrate to have had a large hand in bringing about.
Seasoned observers of this particular conflict would probably say that the situation has got worse rather than better over the last 10-15 years. At most, and in the few locations where they operate, the peace workers might have contributed to preventing things deteriorating.
Together we decided to abandon the log frame, and to construe the 10 objectives much more broadly and simply. We developed methods of reflecting on what peace workers were actually doing in their placements and, using narrative and structured questioning of beneficiaries, a more systematic way of reporting on the impact of their work and the attitudes of the beneficiaries. Wherever possible, we also counted things, such as how many people were helped through a particular checkpoint. The intention was to get better at describing what peace workers were doing, rather than what they could or should be doing. We also put forward a proposal that at some point in the future the programme would employ some local researchers to ask local people what they thought of the peace workers, and how effective they were being.
And, two years later, the report from the local research organisation was the occasion of my being brought back into the project. One of the recommendations of the evaluation report by the local organisation was that the project should set much clearer medium to long term objectives, and use project cycle management techniques. These techniques, like the log frame, imply setting a goal for the programme to achieve, in say, three years, and then working back in logical steps from there. Perhaps we might even find ourselves being invited to set the overarching objective of bringing about peace in this particular country.
I was struck by how resilient and persistent these ways of understanding the work have become across all development agencies, irrespective of the context and the type of work being undertaken. Not to use a log frame to plan a project becomes a badge of unprofessionalism, as though the managers of the programme had not realised the inadequacy of their approach.
Project management, log frames and project cycle management arose out of funders’ desire to control progress, cost and effectiveness at a distance. Originally they were used for logistical projects, such as bridge-building, but now they are applied in every aspect of human endeavour, as though social development could also be reduced to goals and milestones. Donors have a legitimate right to scrutinise the spending of their money, but in situations as complex as countries at war, and/or extreme hardship and poverty, how can any of us know what input will lead to what outcome? Logical if-then thinking breaks down in situations of extreme complexity. Rather than encouraging peace workers to respond creatively to the situations they find themselves in, oriented by the broad purpose of the organisation sending them, such methods may instead leave them trying to fulfil an unfulfillable plan that meets donors’ needs more than it meets the needs of vulnerable populations.