Category Archives: project management

Researching ‘transformational change’

Recently I have been involved with a team of researchers in researching so-called ‘transformational change’ in the not-for-profit sector. I suspect the research has been commissioned on the understanding that transformational change is something which senior managers choose and can, to a degree, control. We are at the beginning of the research, but the process itself has already thrown up interesting insights into research methods, and also into how the idea of transformation is framed and understood by our commissioners and by the respondents. This helps us researchers understand the term anew too, though it makes it no easier to think and write about.

Payment by results: research methods and disciplinary power

I was sitting in a meeting with a social development organisation listening to the kinds of requirements that have been placed upon it by a governmental body in order to trigger the full funding for a grant it had successfully bid for. Ten per cent of the grant is ‘performance related’. In other words, and on a sliding scale of reward for performance, the social development organisation has to prove that it has helped educate a certain number of girls in a developing country to a predicted level of attainment, and that these girls will have stayed in school for the three-year duration of the project and not dropped out. Additionally, money is released against the achievement of pre-reflected project milestones. ‘Results’ are validated by ‘rigorous research methods’, which turned out to mean quasi-experimental methods. In other words, the rubric insists that the project sites be compared with communities where there has been no such intervention, and which are ‘similar in every way’. The organisation will only be fully rewarded if it achieves exactly what it said it would, and precisely to the timetable it set out in the proposal.
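To make the arithmetic concrete, here is a minimal sketch of how such a sliding-scale payment might be computed. The 10% performance-related share comes from the arrangement described above; the linear scale, the function name and all the figures are assumptions for illustration, not the donor’s actual formula.

```python
# Hypothetical sketch of a sliding-scale 'payment by results' calculation.
# The 10% performance-related share is from the post; the linear scale and
# the example figures are invented for illustration.

def payment_released(grant: float, achieved: int, target: int,
                     performance_share: float = 0.10) -> float:
    """Return how much of the grant is actually released.

    `achieved` and `target` could be, say, numbers of girls reaching a
    predicted attainment level after staying in school for three years.
    """
    base = grant * (1 - performance_share)   # released regardless of results
    at_risk = grant * performance_share      # the 'performance related' 10%
    # Linear sliding scale, capped at full payment for meeting the target.
    ratio = min(achieved / target, 1.0) if target else 0.0
    return base + at_risk * ratio

# Example: a 1,000,000 grant where 4,200 of a targeted 5,000 girls reach
# the predicted attainment level -> 984,000 released.
print(f"{payment_released(1_000_000, 4_200, 5_000):,.0f}")
```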

This particular social development organisation I am visiting is one amongst a dozen or so others which have received similar or much bigger grants, some of which amount to the low tens of millions. All of them have proposed highly complex interventions in very different developing countries involving the girls themselves, their families, teachers, head teachers, community groups, religious and community leaders, sometimes even boys. As with most social development these days, the intervention is highly ambitious and leaves the impression that the organisation, working through a local social development organisation in the country concerned, will be intervening in particular communities at breakfast, lunch and dinner, and in a variety of different and incalculable ways. This combination of interventions may be necessary, but the extent and range of them makes the question of causality extremely problematic, experimental methods or no.

The other thing that struck me is that the dozen or so social development organisations receiving this money all have to use the same project management tools and frameworks so that the government department can aggregate progress and results across all countries and all projects. Quantification and standardisation are necessary, then, in order to render the projects commensurable, and in order to make a claim that the government has made a quantifiable contribution to the Millennium Development Goals (MDGs) which they can ‘prove’. The kind of assertion that the government would like to make is that it has improved X tens of thousands of girls’ education to Y degree through its funding of a variety of organisations. These results, the claim will continue, will have been rigorously demonstrated through scientific methods and will therefore be incontestable.
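As a toy illustration of the commensurability point (not the department’s actual tooling): results can only be summed into a single claim of the ‘X girls to Y degree’ kind once every project reports the same standardised fields. All names and numbers below are invented.

```python
# Invented, standardised reports from three hypothetical country projects.
projects = [
    {"country": "A", "girls_reached": 12_000, "attainment_gain": 0.4},
    {"country": "B", "girls_reached": 8_500,  "attainment_gain": 0.3},
    {"country": "C", "girls_reached": 21_000, "attainment_gain": 0.5},
]

# Aggregation is only possible because the fields are identical across
# projects; anything local or particular has already been stripped out.
total_girls = sum(p["girls_reached"] for p in projects)
avg_gain = sum(p["girls_reached"] * p["attainment_gain"]
               for p in projects) / total_girls

print(f"Improved {total_girls:,} girls' education "
      f"by {avg_gain:.2f} attainment levels on average")
```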

Complex, but not quite complex enough

During the last 10-15 years there have been repeated appeals to the complexity sciences to inform evaluative practice in books and journals about evaluation. This partly reflects the increased ambition of many social development and health programmes, which are configured with multiple objectives and outcomes, and the perceived inadequacy of linear approaches to evaluating them. It could also be understood as a further evolution of the methods vs theories debate, which has led to theory-based approaches becoming much more widely taken up in evaluative practice. It is now very hard to avoid using a ‘theory of change’ both in programme development and in evaluation. What kind of theory informs a theory of change, however?

Although the discussion over paradigms has clearly not gone away, the turn to the complexity sciences as a resource domain for evaluative insight could be seen as another development in producing richer theories, the better to understand, and make judgements about, complex reality. However, some evaluators are understandably nervous about the challenge of what they perceive to be the more radical implications of assuming that non-linear interactions in social life may be the norm rather than the exception. In a variety of ways they try to subsume them under traditional evaluative orthodoxies, which is just how one might expect any thought collective to respond.

Management fads and the importance of critical thinking

One of the main themes of Mats Alvesson and Hugh Willmott’s new edition of their book Making Sense of Management is that management, and the ubiquitous tools and techniques that accompany the practice, are widely taken for granted as neutral, technical and helpful. In detail, and at length, they call these assumptions into question. Further, in a forthcoming article in the Journal of Management Studies, Alvesson and his co-author André Spicer go on to accuse organisations of practising both knowledge and stupidity management. By stupidity management they mean the way that many organisations rush into adopting the latest management fad simply because everyone else is taking it up. They point to an absence of critical reflection and questioning in many organisations.

It is this process of endlessly rushing towards the next big idea, provoked by an anxiety about keeping up with ‘the latest thinking’, or perhaps by (self-imposed) coercion from peers, scrutinising boards and other agencies, that keeps the management shelves of bookshops filled to overflowing, and management academics and popular writers busy (and sometimes rich).

Complexity and project management – exercising practical judgement in conditions of uncertainty

In an INGO where I was working recently, one of the newer members of staff proudly told me that he was Prince2 trained. This came up in a conversation about what he considered the ‘lack of systems’, implying, I think, a lack of rigour, that he perceived in the organisation he had just joined. As someone who once worked as a systems analyst, operating at the interface between software developers and end users, I was prompted to think about why my colleague might believe that a project management method originating in software development, and contested even there as to its usefulness, might also be suitable for managing social development projects. One would hardly look to the domain of IT for examples of projects delivered on time and to budget, even before considering the other, obvious differences between the two fields of activity. Nevertheless, Prince2 is a good example of the kinds of tools, frameworks and methods which increasingly pervade the management of social development, and which are taken to be signs of professionalisation in the sector.

Complexity and evaluation

Evaluation is a domain of activity which the French sociologist Pierre Bourdieu referred to as a field of specialised production. In other words, it is a highly organised game, extended over time, with its own developing vocabulary, in which there are a wide variety of players who have a heavy investment in continuing to play. Because the game is complex, and played seriously, and those who want to play it must accumulate symbolic and linguistic capital, it is very hard to keep up. To influence the game one must be recognised as a legitimate player, as one worth engaging with, and this requires speaking with the concepts and vocabulary that are valued in the game. To call the game into question, then, involves the paradoxical requirement of using the vocabulary of the game to criticise the game, and this is no easy thing.

However, a number of evaluation practitioners have begun to question the linearity of development interventions, and therefore the evaluation methods which are commonly used to make judgements about their quality. Since most social development interventions are construed using propositional logic of an if-then kind, it is no surprise that most evaluation methods follow a similar path. As a recent call for papers for an international conference put it, evaluation is understood as being about developing scientifically valid methods to demonstrate that a particular intervention has led causally to a particular outcome. In calling into question the reductive linear logic framing both social development and evaluation, a number of scholars have found themselves turning to the complexity sciences as a resource domain for a different kind of thinking, but have done so with varying degrees of radicalism in calling the evaluation game into question.
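The if-then logic at issue can be made explicit in a few lines. The sketch below is a deliberate caricature of a logframe-style chain of reasoning; the steps and probabilities are invented, and the point is only that a linear chain compounds uncertainty at every link while saying nothing about interactions between the links.

```python
# A caricature of a linear, if-then 'theory of change': each step is assumed
# to cause the next. Steps and probabilities are invented for illustration.
theory_of_change = [
    ("teachers are trained", 0.9),  # if we train teachers...
    ("teaching improves",    0.8),  # ...then teaching improves...
    ("girls attend more",    0.7),  # ...then attendance rises...
    ("attainment rises",     0.6),  # ...then attainment rises.
]

# Even granting the chain, the joint probability of the final outcome
# shrinks multiplicatively at every link.
p = 1.0
for step, prob in theory_of_change:
    p *= prob
    print(f"{step}: cumulative probability {p:.2f}")
```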

On models and scaling up

I was recently sent a proposal by the designers of a project who intended to demonstrate a particular approach to undertaking development work in a geographical district of a developing country; if it was successful, they then intended to ‘scale up’ the model to other districts. This was, they said, in order to overcome the piecemeal approach of just working at village level, which led to uneven development. The models embraced both the technical and the social – technical in terms of engineering solutions, and social in the way that they intended to work with different groups to encourage them to commit to those engineering solutions. The idea of modelling assumed that the same outcomes were possible with standardised approaches to both objects and people.

One of the difficulties this presents is assessing the effectiveness of the models in their own right, as distinct from the effectiveness of the organisation’s staff taking up these models with other, local people. The premise seems to be that if the models ‘work’ then anybody can take them up elsewhere with the same effect. This, of course, is the basis of scientific thinking as it applies to the natural world: a method is generalisable if anyone can apply it with the same results. However, if effectiveness is in good part due both to the quality of thinking about method (models) and to the calibre of the people who are working and the quality of the relationships they are forming with others to help them work, then there is no separating out the contextual from the generalisable. Success will arise from a whole host of local and national factors, while the idea of ‘scaling up’ implies that it is the generalisable factors which are the most important. What is emphasised, then, is abstraction from the context and the privileging of the general over the particular.