In his most recent novel, The Fear Index, Robert Harris tells the story of a mathematical genius, Alex Hoffman, who, frustrated by his job at the CERN laboratory, leaves to set up his own business, a hedge fund. Hoffman’s innovation in the complex computer modelling of financial trading is not just that he can model the many variables in the constant fluctuations of international markets, but that he can model the human emotions which contribute to these fluctuations. The novel speaks to the critique offered by ex-quants like Nassim Nicholas Taleb in his book The Black Swan: that the mathematical models developed in the financial sector are highly idealised abstractions which do not do justice to the complexity and unpredictability of human life. The conceit of the novel is that there is an algorithm for human emotion, although in this case the only emotion which seems to count is fear, hence the title.
Although it is unclear from the book at what level of aggregation Hoffman’s computer simulation is operating, wittingly or unwittingly it addresses a number of concerns of social theory. That is to say, Harris sets out a theory of social action, i.e. financial traders are driven as much by fear as by rational calculation, as well as a theory of stability and change: global social phenomena arise from the complex interweaving of the daily activities of large numbers of traders, each acting from an amalgam of calculation and fear. As with agent-based models of complex social processes, agents are forming and being formed by the population of which they are part, both at the same time. Fearful micro-decisions can stampede markets, which in turn drive further fearful micro-decisions. In this way Harris undercuts some of the principal assumptions of classical economics: that actors in an economy are rational atoms acting to maximise their own utility according to clearly articulated preferences. Nonetheless the novel still sustains the fantasy that the non-rational, even the irrational, can be modelled with efficient causality.
Of course there are currently many researchers working with agent-based, non-linear models of complex social phenomena, but I know of none who would claim that their models are particularly helpful at prediction; rather, they offer retrospective insights into the ways in which particular global social patterns have arisen. They have much stronger explanatory than predictive power. They may show trends and describe probabilities, but there will always be a margin of error. Small changes in the model can amplify into dramatic, population-wide changes in patterns, just as seemingly large interventions may result in not much change at all. Everything will depend on the history of interactions, the context and the way the agents self-organise.
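The feedback described above, fearful micro-decisions feeding an aggregate pattern which in turn drives further fearful micro-decisions, can be sketched as a toy agent-based model. This is a minimal illustration of my own, not Harris’s algorithm or any published model; the parameter names and numbers are arbitrary assumptions, chosen only to show how a small change in one parameter can amplify into a population-wide shift.

```python
import random

def simulate_market(n_traders=200, steps=50, fear_contagion=0.0, seed=1):
    """Toy agent-based market: each trader buys (+1) or sells (-1).

    A trader's chance of selling rises with its own fear, and its fear is
    nudged upward whenever its two neighbours sold on the current step,
    so micro-decisions and the population-wide pattern form each other.
    Returns the cumulative net demand (a crude 'market index').
    """
    rng = random.Random(seed)
    fear = [rng.random() * 0.1 for _ in range(n_traders)]  # small initial fear
    index = 0.0
    for _ in range(steps):
        # Each trader decides: a 50/50 baseline, tilted towards selling by fear.
        actions = [-1 if rng.random() < min(0.95, 0.5 + fear[i]) else 1
                   for i in range(n_traders)]
        # Fear contagion: neighbours' selling raises my fear; fear also decays.
        for i in range(n_traders):
            sold = (actions[i - 1] == -1) + (actions[(i + 1) % n_traders] == -1)
            fear[i] = max(0.0, fear[i] + fear_contagion * sold - 0.01)
        index += sum(actions) / n_traders
    return index

# A small change in a single parameter tips the same population from a
# roughly stable market into a self-reinforcing rout.
calm = simulate_market(fear_contagion=0.0)
stampede = simulate_market(fear_contagion=0.05)
```

With contagion switched off the index drifts near zero; with a small contagion term the same population stampedes. The point is the one made above: the outcome depends on the history of interactions, and the model explains how such a pattern can arise rather than predicting when it will.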
In much organisational theory, however, and in the proliferation of tools and techniques, the emphasis is still on developing methods which are assumed to be able to predict and control human behaviour. They aspire to Robert Harris’s dream. For example, in previous posts I have been talking about the preponderance of logic models in strategy development, project development and evaluation methods. In order to sustain the logic, these models must be construed in a highly abstract, deductive and reductive way, covering over uncertainties, messy contingencies and the mixed motivations of the human beings who will be carrying out the work. For managers sitting at a distance, a strategy plan with milestones and targets, a log frame or a theory of change (ToC) is an idealised and static representation of the work as it should be done. Such methods aspire to being law-like generalisations which trace causality of an if-then kind and describe what will happen rather than how it will happen. In previous posts I have discussed how these methods allow managers sitting remotely to ‘see like a state.’
Staff in organisations may then apply statistical techniques to determine how closely interventions can be correlated with outcomes, provided that they can control for the many variables that impinge on complex human interactions. In doing so they may draw on quasi-experimental methods and look for counterfactuals or control groups. The results of these techniques are often interpreted in dualistic terms: such and such an intervention has or has not been successful, it proved the logic model or disproved it. In its extreme form, this kind of research can produce conclusions which appear highly unrealistic. For example, an academic I know who had been involved in an experimental evaluation of a nationwide government intervention aimed at families and young children claimed that it had apparently demonstrated that the multi-million pound intervention ‘hadn’t achieved anything’. It may not have proved the original hypothesis, but I doubt that it had achieved nothing. The interesting question, perhaps suggesting a review of the original hypothesis and more research using different methods, would be to ask what it had achieved. By focusing on the predictive power of the hypothesis, much else can be lost. This is particularly significant if future funding decisions turn on evaluations of a particular programme and are influenced by reductive conclusions resting on simplifications such as successful/not successful.
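The worry about dualistic successful/not-successful verdicts can be illustrated with a small simulation: a two-arm evaluation that finds no effect on the hypothesised outcome while a real effect on a different, unmeasured outcome goes unnoticed. This is a made-up sketch assuming Gaussian outcomes and invented effect sizes; it describes no actual programme or evaluation.

```python
import random
import statistics

def evaluate(effect_primary, effect_secondary, n=400, seed=2):
    """Simulate a two-arm evaluation (treated vs control) and return the
    mean treated-minus-control difference on two outcomes."""
    rng = random.Random(seed)
    # Primary outcome: the one named in the original hypothesis.
    control_primary = [rng.gauss(0, 1) for _ in range(n)]
    treated_primary = [rng.gauss(effect_primary, 1) for _ in range(n)]
    # Secondary outcome: something the programme changed but nobody measured.
    control_secondary = [rng.gauss(0, 1) for _ in range(n)]
    treated_secondary = [rng.gauss(effect_secondary, 1) for _ in range(n)]
    return (statistics.mean(treated_primary) - statistics.mean(control_primary),
            statistics.mean(treated_secondary) - statistics.mean(control_secondary))

# No effect on the hypothesised outcome, a substantial effect elsewhere:
# judged only on the primary outcome, the programme 'achieved nothing'.
diff_primary, diff_secondary = evaluate(effect_primary=0.0, effect_secondary=0.8)
```

Here diff_primary hovers near zero while diff_secondary is clearly positive; a verdict that turns only on the first number loses the second, which is precisely why it is worth asking what the intervention did achieve.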
When construed at such a highly abstract level, theories of social change inevitably discount the uncertain contingency of human life because they are concerned with averages and logical causality, and provide a snapshot of a situation at a point in time. Robert Harris has his central character deal with just one factor of uncertainty in his computational model, which, unlike most statistical methods, runs over an extended period of time: Dr Alex Hoffman is able to programme in a degree of uncertainty which affects the behaviour of actors over time.
How might we otherwise think about the uncertainties that affect our interactions with others, and how ‘computable’ might they be? To what extent would it be possible to build uncertainty into a predictive model?
To get an idea of the scale of the problem we could turn to the thinking of the Norwegian political philosopher Jon Elster, who started out as an advocate of rational choice theory but over time became disaffected with its inability to explain social interaction adequately. However much we would like to think of the world in idealised terms, he argued, we face at least five types of uncertainty in social life. The first he described as ‘brute’ factual uncertainty, also identified by the pragmatist philosopher Charles Sanders Peirce, by which he means the way that nature, fog, earthquakes, snow, the inflexibility of things, may confound our plans and expectations. The second type of uncertainty is related to the first: uncertainty about the cost and manner of resolving the first kind. The third type he refers to as strategic uncertainty: that is to say, in a competitive environment there are many determining factors. How might my competitors behave? Tit for tat, or sudden death? His fourth type of uncertainty is due to asymmetric information, where we may not know what our counterparts or competitors know; we are then obliged to anticipate and adapt, just as our competitors will be doing. The fifth uncertainty he ascribes to incomplete causal understanding: ‘will tyrannical measures imposed by the dictator make the subjects more compliant or less?’ Elster argues that the compound effect of these uncertainties will, in most complex situations, be overwhelming.
Uncertainty arises not just from the way we are influenced by our own emotions, such as fear, but by the contingency of the world in which we live and the unpredictability of the behaviour and motivations of others. We are constantly anticipating and adapting to the adaptations of others. Modelling this would be a huge task, even for the likes of Dr Alex Hoffman, and even if it were so modelled, the model itself would take on a life of its own just as the programme does in The Fear Index.