Evaluation scholars abstract to varying degrees from the social programmes they are invited to evaluate. Perhaps the highest degree of abstraction is demonstrated by those evaluators using experimental methods, who are concerned to draw statistical distinctions between a ‘treatment’ group and a randomly assigned comparator group. Experimentalists are generally uninterested in social theory and think of causality in terms of independent and dependent variables. Meanwhile, adherents of Theories of Change (ToCs), made popular by the Aspen Institute (1997), draw on propositional logic and represent social change in the form of entity-based logic models showing the linear development of social interventions towards their conclusions. Additionally, however, they will often point to the importance of the participation and involvement of a programme’s target population as a way of inspiring motivation. In this sense ToCs are a hybrid of functionalism and emancipatory social theory, which encourages participants in social programmes to be active in the change process.
Less abstract still are ‘realist’ evaluators, who claim to be interested in ‘generative’ theories of causality, i.e. ones which open up the ‘black box’ of what people actually do to make social programmes work or not. Realistic evaluation draws on Bhaskar’s critical realism (1978) as taken up and developed by Pawson and Tilley (1997) and Pawson (2006), and is the evaluation theory most often linked to the complexity sciences, particularly complex adaptive systems theory (CAS). In trying to reconcile realistic evaluation and CAS, however, realist evaluators adopt a functionalist, systems-based understanding as a default position and argue that interactions between human beings take place as ‘mechanisms’ and have an effect at different ‘levels’ of reality. The conceptual link between CAS and realistic evaluation is that both understand that stability and change do not arise because of ‘variables’, the staple of experimental methods, nor proceed by propositional logic as in ToCs, but as a result of what people are doing in their local interactions with other people. CAS are relational models demonstrating how patterns emerge over time from ensembles of interacting agents. So from a realist perspective, and in the words of Pawson and Tilley:
Realists do not conceive that programmes ‘work’, rather it is the action of stakeholders that makes them work, and the causal potential of an initiative takes the form of providing reasons and resources to enable programme participants to change. (1997: 215)
So both CAS and realist evaluators are most interested in local interaction as the basis for developing more general observations about the success or otherwise of social interventions. Realistic evaluators argue that interventions do or do not achieve what they set out to because of a combination of context, mechanism and outcome (CMO). The perspective is concerned with finding what works, for whom and in what circumstances, and then extrapolating a detailed and evolving explanation to other contexts. In Pawson’s words it is predicated on the ‘steady accretion of explanation’ (2006: 176) about a reality which exists independently of the evaluators who are enquiring into it.
It is easy to see the appeal of the link between a realistic evaluator’s interest in what people are doing to make a project work, through negotiated order or rule-following, and CAS. Realistic evaluation has much to recommend it in its insistence on the importance of the particular history and local context of social interventions, and in its recognition that prediction, and questions of validity for different contexts, are highly problematic. However, some of the more arcane aspects of critical realism are in danger of covering over what we might think of as the radical implications of CAS. Rather than opening up the black box of causality, realistic evaluators seem, in Norbert Elias’ words (1978: 73), to use a mystery to explain a mystery when they draw on the concepts of systems to describe the way that contexts and mechanisms work. For example, Pawson argues that social interventions are ‘complex systems thrust amid complex systems’ (2006: 168), and that: ‘A sequence of events or a pattern of behaviour are explained as being part of a system and the mechanism tells us what it is about that system that generates the uniformity’ (2006: 23). In my understanding of CAS models there is nothing to suggest that they are open, nested, or have multiple levels. The global patterning that emerges may tell us very little about the local interaction that has brought it about: even if we were able to identify local ‘rules’ conditioning people’s behaviour, or ‘generative mechanisms’, they would not necessarily help us, since there may be no obvious connection between local rules and global ‘uniformity’. Additionally, in so far as the term ‘rules’ is helpful in thinking about human interaction at all, social ‘rules’ would themselves be evolving according to the contingencies of each social programme.
Introducing functional abstractions such as ‘system’, ‘levels’ and ‘mechanisms’ covers over as much as it reveals about what may be happening in a social development intervention, and promises more than it can deliver if we are to take the insights from CAS seriously. Rather than being concerned with static, entity-based and spatial representations of complex reality in which causal powers are attributed to machine-like mechanisms, CAS models are helpful in understanding qualitative changes in ensembles which change over time. It is true that in CAS the rules, in the form of algorithms, are deterministic and are set by the programmer. In a social setting there is no equivalent to the programmer, and in the last post we noticed how Taylor, drawing on Wittgenstein, argued that rule-following has to be contextual and adaptive: in other words, in a social setting the ‘rules’ of engagement are themselves constantly evolving and changing.
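The contrast drawn above, between deterministic, programmer-set rules in CAS models and the evolving ‘rules’ of social life, can be made concrete with a standard toy model from the complexity sciences: an elementary cellular automaton. This is purely an illustrative sketch, not anything taken from the realist evaluation literature; Rule 110 is simply a well-known example in which each cell follows a fixed local rule, yet the global pattern that emerges cannot easily be read back from the rule itself.

```python
def step(cells, rule=110):
    """Apply an elementary cellular automaton rule to one row of 0/1 cells.

    Each cell's next state depends only on its immediate neighbourhood
    (left, centre, right), looked up as a bit of the rule number. The
    rule is fixed in advance by the 'programmer' and never changes.
    Wrap-around boundaries are used.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right   # neighbourhood as 0..7
        out.append((rule >> idx) & 1)               # look up the rule bit
    return out

def run(width=31, steps=15, rule=110):
    """Iterate the rule from a single seed cell and return all rows."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

if __name__ == "__main__":
    # Print the emerging global pattern: irregular, hard to anticipate
    # from the eight-entry local rule alone.
    for row in run():
        print("".join("#" if c else "." for c in row))
```

The point of the sketch is the asymmetry it exhibits: the local rule is tiny and fully deterministic, yet the global patterning is surprising and, run in reverse, underdetermines the rule. In a social programme there is no analogue of the fixed rule table at all, which is the stronger claim made in the paragraph above.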
Of course, realistic evaluators are not the only evaluation scholars to understand what they are doing in systemic terms, no matter how much the idea of a system is problematized: scholars often claim that, despite using the term, they do not think it is easy to know where the boundary of a system lies, or they claim that systems are open, or nested, or intersecting with other systems. It is only a short step to begin thinking that if the idea of a system in social terms is so problematic, then perhaps it would be preferable not to use it at all, but to find some other way of paying attention to, or describing, what happens when social development interventions occur. Part of the explanation for the persistence of systemic abstractions may be that they protect the discipline of evaluation by separating the evaluator from the object to be evaluated. In this sense, and despite the encouragement of a variety of evaluation scholars to value reflection, reflexivity and multiple views of reality, this decentring radicalism rarely extends to the discipline of evaluation itself, with some exceptions. This is not to argue that evaluators, particularly those in the realistic school of evaluation, are unaware of the way that they influence social interventions, by learning and then ‘teaching back’, as Pawson and Tilley (1997) express it. To a degree, then, evaluation scholarship takes refuge behind its abstractions and takes what the philosopher Thomas Nagel (1986) described as ‘a view from nowhere’, by which I understand him to mean that by abstracting away from ourselves as subjective thinkers we leave out precisely what we need to explain. Even those evaluation scholars who problematize more positivistic perspectives on their discipline go only so far in exploring how much these non-linear sciences apply to them and to what they are doing in the practice of evaluation.
Aspen Institute (1997) Voices from the Field: Learning from the Early Work of Comprehensive Community Initiatives, Washington, DC: Aspen Institute.
Bhaskar, R. (1978) A Realist Theory of Science, 2nd Edition, Brighton: Harvester Press.
Elias, N. (1978) What is Sociology?, London: Hutchinson.
Nagel, T. (1986) The View from Nowhere, Oxford: Oxford University Press.
Pawson, R. (2006) Evidence-based Policy, London: Sage.
Pawson, R. and Tilley, N. (1997) Realistic Evaluation, London: Sage.