Payment by results: research methods and disciplinary power

I was sitting in a meeting with a social development organisation listening to the requirements that a governmental body has placed upon it in order to trigger the full funding for a grant it had successfully bid for. 10% of the grant is ‘performance related’. In other words, and on a sliding scale of reward for performance, the social development organisation has to prove that it has helped educate a certain number of girls in a developing country to a predicted level of attainment, and that these girls will have stayed in school for the three-year duration of the project and not dropped out. Additionally, money is released against the achievement of pre-specified project milestones. ‘Results’ are validated by ‘rigorous research methods’, which turned out to mean quasi-experimental methods: the rubric insists that the project sites be compared with communities where there has been no such intervention, and which are ‘similar in every way’. The organisation will only be fully rewarded if it achieves exactly what it said it would, and precisely to the timetable it set out in the proposal.

The particular social development organisation I am visiting is one of a dozen or so which have received similar or much bigger grants, some amounting to the low tens of millions. All of them have proposed highly complex interventions in very different developing countries involving the girls themselves, their families, teachers, head teachers, community groups, religious and community leaders, sometimes even boys. As with most social development these days, the intervention is highly ambitious and leaves the impression that the organisation, working through a local social development organisation in the country concerned, will be intervening in particular communities at breakfast, lunch and dinner and in a variety of different and incalculable ways. This combination of interventions may be necessary, but their extent and range make the question of causality extremely problematic, experimental methods or no.

The other thing that struck me is that the dozen or so social development organisations receiving this money all have to use the same project management tools and frameworks so that the government department can aggregate progress and results across all countries and all projects. Quantification and standardisation are necessary, then, in order to render the projects commensurable, and in order to make a claim that the government has made a quantifiable contribution to the Millennium Development Goals (MDGs) which it can ‘prove’. The kind of assertion the government would like to make is that it has improved X tens of thousands of girls’ education to Y degree through its funding of a variety of organisations. These results, the claim will continue, will have been rigorously demonstrated through scientific methods and will therefore be uncontestable.

Placing strict requirements on the timetabling of the work, the recording of progress and the measurement of results has led the government agency to pay millions of pounds to two large commercial consultancy companies to assist with the project management of the project management, and to support the social development organisations with technical questions about how to evaluate rigorously. The second greatest investment, after the social development fund itself, is therefore in mechanisms of standardisation and verification. One of the consultancy companies employs project managers to give technical advice who have previously worked for other government or international bodies requiring similar ways of working, such as USAID and the World Bank. This is one of the ways in which particular ways of working come to proliferate and become mutually reinforcing.

This is a very good example of the kinds of trends and pressures that all areas of the public and not-for-profit sectors are currently subject to, and the example is so rich in lines of enquiry that I only intend to deal with some of them in this post.

The first thing to say is that the funding of the projects, and the elaborate mechanisms of scrutiny and control bundled with it, exemplifies what James C. Scott (1998) was referring to in his book Seeing Like a State. The need for the bureaucracy to make seemingly uncontestable claims about the efficacy of the money it has set aside to improve the lives of girls in the developing world has led to much over-simplification and the insistence on the adoption of reductive tools and frameworks. This simplification is necessary if projects are to be legible, controllable and commensurable at a distance: as in the landscaped garden of an eighteenth-century stately home, civil servants feel they need a view down long straight avenues of trees. Staff in bureaucracies such as the civil service are struggling with a variety of different bureaucratic values. They aspire to working in ways which are objective and even-handed, and to being professional, but they are also answerable to political masters who may have a limited attention span and a short time-frame. One of the paradoxes here is that, because of the political pressure governments feel they are under to justify spending or cutting spending, they develop complex and expensive apparatuses of scrutiny to put an end to political contestation and uncertainty, claiming that the money is well spent and has ‘worked’ uncontestably, and perhaps even that if the same projects were run again elsewhere they would work again. They turn to numbers to inspire trust, but the kinds of numbers they produce are only intelligible alongside an explanation of the great number of assumptions about what is included and excluded. It is debatable how credible the claim will sound even to a semi-informed audience. Expense is justified only at great expense.

At the same time, these reductive tools and frameworks do not just represent reality, as Scott observes; they shape it as well. The time and attention of all those involved in the projects are invested in keeping the bureaucratic beast fed. Whatever transpires has to be reframed in the pre-given categories, which come to constitute the project’s pre-eminent meaning. Although the staff employed in the social development organisation have a variety of worries, about the broad quality of the work, about unintended consequences, about the relationships with Southern partners which make these projects possible, these concerns have to take second place to the disciplinary regime of scrutiny and control.

The second thing to notice is the coincidence of quantitative methods of research and bureaucratic control, a convergence noticed by the moral philosopher Alasdair MacIntyre in an essay entitled ‘Social Science Methodology as the Ideology of Bureaucratic Authority’[1]. Quantitative social science methods lend themselves to reinforcing bureaucratic authority, he claims, because the two mirror each other; they share the same partial view of the social world. MacIntyre argues that both quantitative social scientists and administrators are concerned with classificatory schemes which suit their purposes and often make no reference to rival arguments about alternative forms of classification; they aspire to producing evaluatively neutral variables and assume that change is brought about causally by them – for the quantitative social scientist this is necessary in order to draw on statistical methods, and for the bureaucrat in order to claim that their policy intervention ‘works’. Both quantitative social scientists and bureaucrats believe that the social world is manipulable, that they can engineer change in social structures in predictable ways. But they do so by simplifying the world, and the selection of hypotheses about it, so that they attend only to contexts where regularity can be assumed.

“Methodology then functions so as to communicate one very particular vision of the social world and one that obscures from view the fundamental levels of conceptualisation, conflict, contestability, and unpredictability as they constitute and operate in the world.”

In this way, MacIntyre argues, the entwining of bureaucratic authority and quantitative methods acts ideologically, by presenting a particular conception of the world not as a partial view, but as the way things are, as ‘the facts’. MacIntyre’s view is of course counter to what experimental social scientists say about themselves: for example, in their book Poor Economics[2], Banerjee and Duflo set out the argument that ideology is one of the three ‘I’s that get in the way of social development (along with ignorance and inertia). From their perspective, everyone else is ideological except them, as they try to steer a neutral path between left and right in the debate about development, drawing on methods which produce objective and uncontestable evidence.

Whatever the case for and against experimental methods in social contexts, the third thing to notice about this particular funding relationship is an underlying assumption about human motivation that links ‘performance’ to payment. It is something of a category error to yoke the two. In a natural science setting the result of an experiment, whether positive, negative or null, is equally important and helpful. The experiment to educate a given number of girls to a particular standard over a given number of years using specific approaches might or might not be successful. That’s the point of testing the hypothesis. To link payment to the outcome one has predicted in advance of carrying out the experiment is to punish people for getting their hypothesis wrong. This seems to me to combine a veneer of scientific rhetoric with a much cruder underlying theory of human motivation: that people will always work better if they are coerced, impelled by punishment or reward. The danger, of course, is that this disciplinary pressure creates all kinds of forced ways of working which are precisely not reproducible without increased coercion, as well as the potential for bullying and for gaming the results.

What I see in this particular case is a good example of the way that ideology, reductionism and coercion come together to constrain the way that people are able to work. It is a method almost entirely predicated on meeting the needs of the bureaucracy, which aspires to ‘seeing like a state’. It is based on an impoverished understanding of the complex contexts in which people are working, and it endangers relationships of equality and cooperation. Despite any claim to the contrary, the particular conditionality of the grant creates a very constricting relationship of power and domination, and carries with it an implicit theory of human motivation: that staff are unlikely to do their best unless threatened with financial penalties.


[1] MacIntyre, A. (1979) Social Science Methodology as the Ideology of Bureaucratic Authority, in Falco, M. (ed.) Through the Looking Glass: Epistemology and the Conduct of Enquiry, New York: University Press of America.

[2] Banerjee, A. and Duflo, E. (2011) Poor Economics, London: Penguin Books.
