Complex, but not quite complex enough

During the last 10–15 years there have been repeated appeals, in books and journals about evaluation, to the complexity sciences to inform evaluative practice. This partly reflects the increased ambition of many social development and health programmes, which are configured with multiple objectives and outcomes, and the perceived inadequacy of linear approaches to evaluating them. It could also be understood as a further evolution of the methods vs theories debate, which has led to theory-based approaches becoming much more widely taken up in evaluative practice. It is now very hard to avoid using a ‘theory of change’ in both programme development and evaluation. What kind of theory informs a theory of change, however?

Although the discussion over paradigms has clearly not gone away, the turn to the complexity sciences as a resource domain for evaluative insight could be seen as another step towards producing richer theories with which to understand, and make judgements about, complex reality. However, some evaluators are understandably nervous about what they perceive as the more radical implications of assuming that non-linear interactions in social life may be the norm rather than the exception. In a variety of ways they try to subsume these implications under traditional evaluative orthodoxies, which is just how one might expect any thought collective to respond.

Take those authors who suggest, more or less strongly, that the complexity sciences may offer a perspective applicable only in particular circumstances and at particular times, according to the evaluator’s assessment. Programmes which need evaluating, these authors claim, are either simple, complicated or complex, or complex programmes may have simple or complicated parts. Complexity is treated as a perspective that can somehow be grafted onto more conventional approaches depending on circumstances. In a domain replete with a dizzying array of tools, techniques and perspectives, all offered with propositional (if, then) logic, a complexity perspective then becomes just another weapon in the rational evaluator’s armoury, but only if circumstances allow.

Evaluators who have an interest in the complexity sciences have an understandable need to define what they are talking about, both for themselves and their readers, and this has no doubt motivated them to draw on what has become known as the Stacey matrix (Stacey, 1992). Stacey’s matrix represents a contingency theory of organisations understood as complex adaptive systems and suggests that the nature of the decision facing managers depends on the situation facing them. In situations of great uncertainty and high disagreement, Stacey argues, conventional linear/rational methods of decision-making are dangerous. It behoves managers to fit their decision-making methods to the circumstances they analyse as being in one category or another, according to the inevitable two-by-two grid.

Variations on Stacey’s idea of presenting complexity as contingent decision-making have been reproduced by others, most notably Glouberman and Zimmerman (2002), whose framework seems to have gained purchase amongst a number of prominent evaluation scholars. Glouberman and Zimmerman propose that social problems are of three kinds: simple, complicated and complex. Simple problems require following a recipe which, once mastered, carries with it a very high assurance of success. Complicated problems ‘contain subsets of simple problems but are not merely reducible to them. Their complicated nature is often related … to the scale of a problem like sending a rocket to the moon’ (2002: 1). Complex problems are ones like raising a child, where there is no formula to follow, and success with one does not guarantee success with the next.

This is the kind of formulation which may look helpful on first reading but does not stand up to much careful investigation. Nor does it become more credible because it is widely taken up and endlessly repeated. It is hard to conceive of sending a rocket to the moon, except in the very narrow sense of being able to see whether one has landed on the moon or not, as being anything other than a complex undertaking. Inevitably, and on each occasion, it will have involved widespread mutual adaptation and improvisation, disagreements, lacunae, the unexpected and the contingent, with occasional catastrophic interludes (Apollo 13 and the Challenger disaster), which surely bear out the idea that even in highly disciplined scientific contexts the unexpected and the unwanted happen. Even following rules like a recipe, to draw on the Canadian philosopher Charles Taylor (1999), is a highly social process where the rules inform practice and practice informs the rules. There is no recipe so clear that it is completely obvious what to do in every situation, and rules are ‘islands in the sea of our unformulated practical grasp of the world’ (1999: 34). Following a recipe implies a rich background of unreflected beliefs and taken-for-granted assumptions about the world, which only become evident in the practical application of what James C. Scott (1998) has referred to as a ‘thin simplification’ in often uncertain circumstances.

If, as this post claims, the heuristic cannot support the weight of expectation freighted upon it, how might we account for its continued appeal? Although there seems to be some agreement that insights from the complexity sciences may help us understand why social activity is unpredictable, to consider evaluation practice, which is also a social activity, in the same light radically decentres it: it can no longer be grounded in the certainties of the rational, designing evaluator. That is, if you accept the contention that a programme cannot be evaluated without an explicit model laying out goals and measurable objectives, then theory of change methods, all based on propositional logic models, immediately become problematic once the idea of non-linearity is taken seriously. Each of the proponents of Glouberman and Zimmerman’s framework acknowledges this to a greater or lesser extent. They may do so by arguing that an evaluator needs to make their logic model more ‘flexible’, which appears to mean developing a series of logic models and being prepared to evaluate what are sometimes termed the ‘emergent’ aspects of the programme. Or they may simply conclude that although complexity may potentially offer a valuable framework for understanding ‘complex systems’, a complexity perspective cannot be applicable across all evaluation settings. In general, then, the heuristic allows evaluators in the mainstream to maintain what John Dewey referred to as a ‘spectator theory of knowledge’, which in this case means the evaluator can decide when the insights do and don’t apply. And they seem only loosely to apply to the evaluator’s own activity.


Dewey, J. (2005) The Quest for Certainty: A Study of the Relation of Knowledge and Action, New York: Kessinger Publishing.

Glouberman, S. and Zimmerman, B. (2002) Complicated and Complex Systems: What Would Successful Reform of Medicare Look Like?, Discussion Paper No. 8: Commission on the Future of Health Care in Canada, Plexus Institute.

Scott, J. C. (1998) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, New Haven: Yale University Press.

Stacey, R. (1992) Managing the Unknowable, San Francisco: Jossey-Bass.

Taylor, C. (1999) ‘To Follow a Rule’, in Shusterman, R. (ed.) Bourdieu: A Critical Reader, Oxford: Blackwell.

