How Government Is Turning into Algocracy
As governments collect vast amounts of data on their citizens, decision-making is increasingly outsourced to algorithms – in the hope that behaviour can be modified and sites of democratic resistance diminished.
In the early 1970s, the Chilean government attempted to pioneer technological socialism. Project Cybersyn, a networked system that strove to manage the national economy, was dreamt up through an unlikely alliance between the left-wing administration of Salvador Allende and the British management consultant Stafford Beer.
Although the project was abandoned following a CIA-backed coup, the aims of Cybersyn linger in many dimensions of contemporary public policy. Now another British advisor wishes to use large quantities of data to fine-tune social coordination.
Those familiar with Dominic Cummings will be aware that Boris Johnson’s chief advisor prefers data scientists to ‘Lacan-spouting humanities graduates.’ His roadmap for overhauling the civil service in the mould of the data-mining technocrat is part of a wider trend in policymaking: applying big data to nudge theory. The result is governance by ‘hypernudging.’
The law professor Karen Yeung has mapped out this development in algorithmic design in detail. Nudge theory was initially put forward by Cass Sunstein and Richard Thaler as a cheap way for states to exercise ‘libertarian paternalism.’
They recognised that human behaviour was not always ‘rational,’ and that the ‘choice architecture’ within which individuals make decisions could be tweaked through subtle policy changes that stopped short of outright bans or tax changes.
This sent shockwaves globally, but nudge theory found its home in Britain. David Cameron set up the Behavioural Insights Team (or ‘Nudge Unit’) in 2010, embarking on effective policy changes such as making workplace pensions opt-out through automatic enrolment and sending social-norm messages to combat late tax payment.
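The power of a default is easy to see in a toy simulation. A minimal sketch, assuming a fixed 10% chance that any given person overrides whatever default they are handed (the figures are illustrative, not BIT’s):

```python
import random

random.seed(0)

# Toy model of choice architecture: most people never act on the form,
# so whichever state the designer makes the default becomes the outcome.
# The 10% action rate is an illustrative assumption, not BIT data.
ACTION_RATE = 0.10
POPULATION = 100_000

def enrolled(default_in: bool) -> bool:
    """Enrolled if that is the default and the person does nothing,
    or if it is not the default but they actively switch."""
    acts = random.random() < ACTION_RATE
    return not acts if default_in else acts

for label, default_in in [("opt-in ", False), ("opt-out", True)]:
    rate = sum(enrolled(default_in) for _ in range(POPULATION)) / POPULATION
    print(f"{label} scheme: {rate:.0%} enrolled")
```

Nothing about anyone’s underlying preferences changes between the two schemes; only the default does, and participation roughly inverts.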
The Behavioural Insights Team (BIT) has since expanded into a private company that offers its services worldwide. Many have attributed to the Unit the unverified notion of ‘behavioural fatigue,’ the assumption that Britons were ‘too liberal’ to adhere to strict lockdown measures. But the ubiquitous availability of data points has transformed the BIT’s role.
Nowadays, policymakers do not need to guess citizens’ preferences. Instead, complex networks of millions of ‘smart devices’ in constant communication will give them access to solid information on what drives citizens, with a degree of precision no behavioural experiment can match.
‘Smart regulation’ and policy are thus increasingly informed not only by clinical predictions based on academic research, experience or anecdote, but also by data collected and processed through predictive analytics and decision-guiding systems.
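What separates a hypernudge from a static nudge is the feedback loop: each response feeds back into the model, which re-personalises the next intervention. A minimal sketch of that loop, using a simple epsilon-greedy bandit and nudge options invented for illustration (nothing here is drawn from BIT’s actual systems):

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical nudge variants; the names are invented for this sketch.
NUDGES = ["social_norm_message", "default_option", "timely_reminder"]

class Hypernudger:
    """Toy epsilon-greedy loop: observe each response, update per-nudge
    success estimates, re-personalise the next intervention. Real systems
    use far richer models; the shape of the feedback loop is the point."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.successes = defaultdict(int)
        self.trials = defaultdict(int)

    def success_rate(self, nudge: str) -> float:
        return self.successes[nudge] / self.trials[nudge] if self.trials[nudge] else 0.0

    def choose(self) -> str:
        if random.random() < self.epsilon:   # keep exploring occasionally
            return random.choice(NUDGES)
        return max(NUDGES, key=self.success_rate)

    def observe(self, nudge: str, complied: bool) -> None:
        self.trials[nudge] += 1
        self.successes[nudge] += complied

# A simulated citizen whose true response rates the system never sees directly.
TRUE_RATES = {"social_norm_message": 0.3, "default_option": 0.6, "timely_reminder": 0.2}

system = Hypernudger()
for _ in range(5_000):                        # one iteration per data point
    nudge = system.choose()
    system.observe(nudge, random.random() < TRUE_RATES[nudge])

print(max(NUDGES, key=system.success_rate))   # converges on 'default_option'
```

The loop reliably converges on whichever intervention produces compliance. It carries no representation of why the intervention works, or of whether it should be used at all, which is exactly the gap discussed below.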
Such an approach has been increasingly adopted by the BIT. Hypernudging is now being touted as a technique to reduce violence in London, while the team’s research on smart technology calls for large datasets to understand consumer preferences.
Hypernudging isn’t an inherently damaging policy package. Part of the problem lies in the philosophy its advocates subscribe to: hypernudging is currently grounded in behaviourism.
This is a worldview which posits that human behaviour can be predicted and ultimately shaped; free will recedes from view. For example, the standard model of AI does not aim to satisfy preferences so much as to make them more predictable.
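That claim can be made concrete with a toy model. Assume, purely for illustration, a click-maximising recommender and a user whose tastes are reinforced by whatever they are shown (a stylised dynamic, not any production system). Optimising clicks then narrows the user’s preferences, measured here as falling entropy:

```python
import math
import random

random.seed(2)

ITEMS = 5
prefs = [1.0] * ITEMS            # user starts indifferent between 5 topics

def probs():
    total = sum(prefs)
    return [p / total for p in prefs]

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

print(f"before: entropy = {entropy(probs()):.2f} bits")

for _ in range(500):
    # Click-maximising policy: always show the current favourite topic.
    shown = max(range(ITEMS), key=lambda i: prefs[i])
    # Stylised assumption: a click reinforces taste for what was shown.
    if random.random() < probs()[shown]:
        prefs[shown] += 0.1

print(f"after:  entropy = {entropy(probs()):.2f} bits")
# Entropy falls: the user's behaviour is now easier to predict,
# though no one asked whether the narrowed preferences serve them.
```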
If merely optimising a given preference at a given time is the key objective of these technologies, hypernudging appears to lack a real theory of justice. There was not a single mention of data ethics in the recent job advertisement for the UK government’s Chief of Data Science role. The absence of such a critical consideration has worrying implications for emerging techno-social realities.
The field of predictive analytics is currently gaining a foothold in public policy. One of Vote Leave’s AI firms, Faculty AI, has been awarded a series of contracts by the Johnson administration and has been engaged in ‘unprecedented data-mining operations’ during the pandemic.
Examples of these techniques include smart cities, where real-time data can be used to monitor everything from the spread of disease to the traffic flows that drive pollution. Furthermore, many on the Right claim that predictive policing is a preferable alternative to current police brutality. But ‘fewer cops, more computers’ would amount to what the Princeton sociologist Ruha Benjamin calls the ‘new Jim Code.’
Despite Michael Gove’s Ditchley House speech lauding data as objective, it is clearly nothing of the sort. Even some of the most powerful natural language processing (NLP) systems display racial bias, failing, for example, to recognise African American Vernacular English (AAE).
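Such bias is typically surfaced by a disparity audit: run matched samples of Standard American English and AAE through the same classifier and compare the flag rates. A minimal sketch of that measurement, using a crude stand-in scorer invented for this example (real audits, such as those of commercial toxicity detectors, apply the same structure to trained models):

```python
# Toy disparity audit: compare how often the same classifier flags
# text in two dialects. The scoring function below is a stand-in
# invented for this sketch, not a real model.

SAE_SAMPLES = ["I am so happy for you all", "that party was very good"]
AAE_SAMPLES = ["I'm finna head out y'all", "that party was bussin fr"]

def toy_toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier. Its 'bias' is that it treats
    out-of-vocabulary words as suspicious, mimicking a model trained
    mostly on Standard American English."""
    known = {"i", "am", "so", "happy", "for", "you", "all", "that",
             "party", "was", "very", "good", "i'm", "head", "out"}
    words = text.lower().split()
    unknown = sum(w not in known for w in words)
    return unknown / len(words)

def flag_rate(samples, threshold=0.3):
    return sum(toy_toxicity_score(s) > threshold for s in samples) / len(samples)

print(f"SAE flag rate: {flag_rate(SAE_SAMPLES):.0%}")   # 0%
print(f"AAE flag rate: {flag_rate(AAE_SAMPLES):.0%}")   # 100%
# A real audit would report this gap with confidence intervals; here
# the point is only the shape of the measurement.
```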
GPT-3, OpenAI’s most powerful NLP model to date, was trained on text scraped from the internet and produces disturbingly racist and antisemitic content when prompted on current affairs. Discrimination carries a real risk within nudging too, as profiling techniques may embed personalised nudges within stereotypes. The lack of concern over ethics and governance means that new political programmes run the risk of perpetuating these same problems.
A further issue with the narrative that data is objective concerns what it means for modern politics. Cybernetic systems treat disagreement and intellectual conflict as unnecessary. Framed as algorithmic optimisation problems, disputes become a form of inefficiency to be solved by aggregating more data.
The notion of the politicised ‘self’ will therefore become far less prevalent, replaced by ‘dividuals’: individuals broken down into sub-units such as healthcare data or a credit score.
Coupled with nudge techniques, this form of governance reduces any deviation from the ‘objective truth’ to an outlier that can be shaped to fit the required code.
The automation of decision-making means that sites of democratic resistance will decline. The anti-political character of these technologies will significantly reduce the freedom of modern citizens.
Alongside biased data and reduced liberty, hypernudging assumes that we are already relatively close to the optimal state of affairs: just a tweak here or there will carry society to the desired outcome.
Because nudging limits the scope of action, it also makes behaviour easier to predict. This depiction of public policy is an extension of Shoshana Zuboff’s description of firms desiring ‘regimes of certainty.’
With these developments gaining momentum, governance is set to become more technocratic, algorithmic, automated and predictive in nature. Uncoupled from constitutional controls within government programmes, hypernudges become the basis for a new science of large-scale human behaviour modification.
How knowledge is produced, and who has the authority to arrive at these new ‘objective’ truths, will be determined by the role of data science and the behaviourist rationale of hypernudging. Algocracy, or ‘rule by the algorithm,’ will enable more pervasive forms of control in ways that may generate unjust, damaging outcomes.