Evaluation is a domain of activity which the French sociologist Pierre Bourdieu referred to as a field of specialised production. In other words, it is a highly organised game, extended over time, with its own developing vocabulary, in which a wide variety of players have a heavy investment in continuing to play. Because the game is complex and played seriously, and because those who want to play must accumulate symbolic and linguistic capital, it is very hard to keep up. To influence the game one must be recognised as a legitimate player, as one worth engaging with, and this requires speaking with the concepts and vocabulary that are valued in the game. To call the game into question therefore involves the paradoxical requirement of using the vocabulary of the game to criticise the game, and this is no easy thing.
However, a number of evaluation practitioners have begun to question the linearity of development interventions, and therefore the evaluation methods which are commonly used to make judgements about their quality. Since most social development interventions are construed using propositional logic of an if-then kind, it is no surprise that most evaluation methods follow a similar path. As a recent call for papers for an international conference put it, evaluation is understood to be about developing scientifically valid methods to demonstrate that a particular intervention has led causally to a particular outcome. In calling into question the reductive linear logic framing both social development and evaluation, a number of scholars have found themselves turning to the complexity sciences as a resource for a different kind of thinking, but have done so with varying degrees of radicalism in calling the evaluation game into question.
A number of general themes repeat themselves in the struggle to make sense of the complexity sciences and to think about how they might be useful to evaluators. Although a wide variety of approaches are demonstrated in articles about evaluation (simple rules, wicked problems, complicated vs. complex, system dynamics, complex adaptive systems), complexity is often adduced without scholars ever taking a view on which of its manifestations is more helpful, or exploring the theoretical assumptions behind each of them. This indiscriminate borrowing limits how useful insights from the complexity sciences can be in problematising evaluation as a discipline.
One very popular way of taking up insights from the complexity sciences is by reference to the idea of ‘simple rules’, drawing on Reynolds’ Boids simulation (1987). Reynolds developed a graphical computer simulation of birds flocking, where the programmed agents, called Boids, followed three rules: maintain a minimum distance from other objects in the environment, including other Boids; match velocities with other Boids in the neighbourhood; move towards the perceived centre of mass of Boids in the neighbourhood. Programmed thus, the interacting agents demonstrate flocking behaviour. A number of scholars have seized on this insight to suggest that all that is required for a manager to ‘encourage’ emergence, which they conflate with flocking behaviour in organisations, is to set a few simple rules. There are a number of difficulties with the direct application of the Boids simulation to organisational life, which have been comprehensively rehearsed by Ralph Stacey. One of the principal objections to the idea that managers can apply simple rules is that the Boids simulation is a deterministic model in which all the interacting agents are identical and behave in exactly the same way: the model is incapable of evolving over time but simply fluctuates around one attractor. In this sense there is only a limited form of emergence, since the model never evolves beyond flocking. This is a good example of the way in which the idea of simple rules can be introduced, along with a variety of other insights from the complexity sciences, without any critical assessment of the limitations of taking up different manifestations of complexity in different ways.
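Reynolds’ three rules can be expressed in a very short sketch. The weights, neighbourhood radius and separation distance below are illustrative assumptions, not Reynolds’ original parameters; the point of the sketch is simply to show that the agents are identical, rule-bound and deterministic.

```python
import random

def simulate_boids(n=10, steps=200, seed=0):
    # Minimal 2-D sketch of Reynolds' three rules. Weights and radii
    # are illustrative choices, not Reynolds' original parameters.
    rng = random.Random(seed)
    pos = [[rng.uniform(0.0, 100.0), rng.uniform(0.0, 100.0)] for _ in range(n)]
    vel = [[rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)] for _ in range(n)]
    radius = 30.0  # neighbourhood within which other Boids are perceived
    for _ in range(steps):
        new_vel = []
        for i in range(n):
            align = [0.0, 0.0]  # rule: match velocities with neighbours
            coh = [0.0, 0.0]    # rule: move towards neighbours' centre of mass
            sep = [0.0, 0.0]    # rule: maintain a minimum distance
            neighbours = 0
            for j in range(n):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                dist = (dx * dx + dy * dy) ** 0.5
                if dist < radius:
                    neighbours += 1
                    align[0] += vel[j][0]
                    align[1] += vel[j][1]
                    coh[0] += dx
                    coh[1] += dy
                    if dist < 5.0:  # too close: steer away
                        sep[0] -= dx
                        sep[1] -= dy
            vx, vy = vel[i]
            if neighbours:
                vx += 0.05 * (align[0] / neighbours - vx) + 0.005 * coh[0] / neighbours + 0.05 * sep[0]
                vy += 0.05 * (align[1] / neighbours - vy) + 0.005 * coh[1] / neighbours + 0.05 * sep[1]
            new_vel.append([vx, vy])
        vel = new_vel
        for i in range(n):
            pos[i][0] += vel[i][0]
            pos[i][1] += vel[i][1]
    return pos, vel
```

Running the sketch twice with the same seed produces identical results, which is precisely Stacey’s objection: every agent follows the same rules in the same way, so nothing genuinely novel can emerge beyond the flocking pattern itself.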
Most scholars never stray from the idea that organisations are systems. Sometimes they might argue that organisations or development projects are complex systems and sometimes that they are just like them. In thinking of development initiatives as complex systems, some scholars find it hard to relinquish the idea that they are somehow outside the systems they are describing and modelling: this makes it possible for them to suggest that these models might be mapped, directed or ‘tipped’ in one ‘direction’ or another. As with the idea of simple rules, this understanding of complexity and emergence still allows the potential for managers of social development projects, or perhaps evaluators, to control complex social processes by standing apart from them and ‘moving’ them from one state to another, similar to the detached and objective observer of the realist perspective on social development. Managers, or indeed evaluators, are still the principal instigators or investigators of semi-predictable change. This may be why the Boids simulation so appeals: the manager or evaluator is like the simulation programmer who can set the parameters for what is happening in the development intervention. To this extent the role of the evaluator is relatively unproblematised. It is for the evaluator to decide whether the phenomena they are dealing with are simple, complicated or complex, to find different ways of collecting data, whatever we might mean by that term, and to ‘analyse’ it together with the project staff they are working with.
Many evaluation scholars are working with what they call logic models, by which they mean a summarised theory of how the social intervention works, usually in diagrammatic form, which gives an overview of how change occurs and thus what data an evaluator might collect. Scholars may call on the complexity sciences as a means of making their logic models more complex: they are largely intent on subsuming complexity within a pre-existing systemic framework where there is an assumed detachment of the evaluator from the reality they purport to be modelling, having first decided whether what they are modelling is simple, complicated or complex. Some scholars equate complexity with the number of actors involved in a social development programme, or with the scale of the programme: increased scale means increased complexity. Usually they are still looking for a grand, aggregating theory of change. ‘Emergence’ may be taken to mean being flexible, or perhaps not having too much planning, or perhaps doing things ‘bottom up’.
In setting out my own understanding of what I consider some of the more radical implications of insights from the complexity sciences, I will draw principally on evolutionary complex adaptive systems models. In complex adaptive systems simulations where diverse agents demonstrate non-average behaviour in their interactions with each other, novel global patterns emerge which have not been pre-programmed or planned in any way. In other words, agents acting locally and responding to the amplification of small differences between themselves and other diverse agents produce novel patterning that has not been programmed or planned. The emerging global patterns constrain what it is possible for agents to do in their local interactions – they cannot just do anything – but at the same time the local interactions are forming the global patterning. There is a paradoxical forming and being formed both at the same time. Computer simulations of complex adaptive systems are temporally bound, as patterning leads to further patterning over time – the simulations are modelling non-linear equations which have no analytical solution, but simply iterate and reiterate. Taking a temporal view of the patterning of interaction would help us understand retrospectively that one phenomenon led to another, and we would be able to say something about the patterning which has emerged, but we would never be able to know all the causes. In the words of the sociologist Peter Hedström: ‘There is no necessary proportionality between the size of a cause and the size of its effect… Aggregate patterns say very little about the micro-level processes that brought them about’. Self-organising emergence, then, is not a free-for-all, but nor has it been pre-planned. Individual agents are both constraining and enabling each other to bring about the dynamically changing pattern of order and disorder.
The patterning emerges as a result of what every agent is doing and not doing, and none of this is predictable in advance or reducible to the history of interaction. Complex adaptive systems models are able to take on a life of their own in a way that Boids simulations can do only in a much more limited fashion.
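The point about non-linear equations which have no analytical solution but simply iterate can be illustrated with the logistic map, a standard textbook example of non-linear iteration; the starting values and parameter below are arbitrary illustrative choices, not drawn from the evaluation literature.

```python
def logistic_step(x, r=4.0):
    # One iteration of the logistic map x -> r * x * (1 - x).
    # At r = 4.0 the map is chaotic: there is no closed-form solution,
    # the equation simply iterates and reiterates.
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    # Iterate the map from a starting value, recording every state.
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

# Two trajectories whose starting points differ by one part in a billion.
a = trajectory(0.3)
b = trajectory(0.3 + 1e-9)
divergence = max(abs(x - y) for x, y in zip(a, b))
```

Early in the iteration the two trajectories are indistinguishable; within a few dozen steps the billionth-part difference has been amplified until they bear no resemblance to one another. This is Hedström’s point in miniature: there is no proportionality between the size of a cause and the size of its effect, and the aggregate pattern says very little about the tiny difference that produced it.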
If we were to think of these insights in organisational terms then it would be impossible to propose that emergence means ‘just letting things emerge’, i.e. anything goes and we don’t need to make plans. We cannot take emergence to mean being flexible or being the opposite of being in control, or having a plan, or allowing people to be creative. There is nothing to allow, permit, unleash, guide, tip, steer or encourage. We might take emergence to mean the complex interplay of human intentions as we constrain and enable each other, whether we have controlling plans or not. Whatever happens in all social development interventions can never be entirely accurately captured or described, and will never be reducible to even a highly detailed account purporting to show what led to what.
In thinking about the consequences of the emergence of novelty in non-linear systems simulations, and moving by analogy to make comparisons with organisational and social life, we might conclude that our plans for change have limited predictability, since whatever we plan will continue to emerge in novel ways in local interactions irrespective of what we intended. These insights also have implications for thinking about the models we might build of complex reality, should we be tempted to do so. This would problematise the idea of a fixed logic model, since we could never say with complete certainty that X input led to Y outcome. Any model we were tempted to build would be permanently evolving, with the danger that it would take on a life of its own and cease to reflect the reality it purported to model.
I will set out some of what I understand to be the implications of the above for evaluative methods by comparing and contrasting them with some of the general themes of other scholars’ attempts to draw on complexity in their writing on evaluation. I am suggesting abandoning the idea that evaluators are objective observers of a reality which they have come to evaluate in a detached way; rather, they are stakeholders in any development programme. Their role could be to help co-create interpretations of what it is everybody thinks is going on. This might involve using different techniques to collect data, but would also involve problematising and discussing this data to work out and interpret what it might mean. I am suggesting that an evaluator should not be naïve about the power of the role they have and the way it affects the interpretations that they make with others, and the way that evaluators, in turn, are affected by their participation in the social development programmes they come to evaluate with others. I share with the more radical scholars the view that an evaluator’s role is to question and then question further, and I understand this to have an iterative effect: the evaluator learns from programme participants and in turn ‘teaches back’ what they think they have learned in order for this to be relearned and retaught. In this way meaning emerges iteratively: paradoxically, evaluators shape meaning and are shaped by it at the same time. I would argue against scholars where they put forward managerialist ideas that social projects arise from unity, shared values and vision, and would argue instead, on the basis of my own understanding of some insights from complex adaptive systems theory, that what contributes to social change is the exploration of similarities and differences which amplify into larger population-wide changes, and it is from these that genuine novelty emerges.
What is most interesting for me about social development programmes is the way in which actors, including evaluators, negotiate order in local contexts at a particular time and place. I am putting forward the idea that reflexivity, particularly the evaluator’s own reflexivity, is helpful in giving an account of how change is occurring. In contradistinction to some scholars, I do not think I am producing research about programme stakeholders, but rather, with them. This distinction between ‘about’ and ‘with’ is the difference between an epistemological position which assumes separation from the objects of research, metaphorically conveyed by the image of taking up complexity as a ‘lens’ for example, and a position that assumes no separation between researcher and researched. A radical interpretation of complex adaptive systems theory might problematise the idea that logic models are any more than highly abstract, fixed and thin simplifications of reality, which can never produce the infinite level of detail required to approximate causality. If interaction is non-linear, then small interventions can have a large effect, and the opposite is also true. This makes the search for causality which forms the basis for most evaluations a highly uncertain exercise.