When the Algorithm Is Your Boss
At workplaces like Amazon, algorithms have become the worst kind of boss – one who watches you constantly, makes impossible demands and then sacks you without explanation.
In the first half of the twentieth century, thinkers like John Maynard Keynes and Bertrand Russell predicted that the advance of technology would, by this point, have left us working something like fifteen- or twenty-hour weeks, liberated from the all-consuming demands of toil. As anyone will tell you, that future has not come to pass. Instead of easing the burden of work, the algorithms now commonplace in our workplaces have taken on a different role: that of the professional managerial class.
Stephen Normandin, a sixty-three-year-old army veteran from Phoenix, Arizona, was an Amazon delivery driver who last year reported being fired by the company’s algorithmic Human Resources Management (HRM) system with no explanation. ‘I depend on this job to survive,’ Normandin explained to Bloomberg back in June. ‘I have a consistent rating of always getting everything delivered. I have never missed a block. I always show up early or on time. I’ve never cancelled late. This just doesn’t make any sense.’
Stories like Stephen’s are growing more common. Like Uber, whose algorithmic face-ID process has been accused of racism by couriers it sacked, Amazon outsources responsibility for key HR decisions almost entirely to nonhuman agents—from enforcing productivity quotas through widespread surveillance to hiring, firing, and training workers. And although Amazon might be the most evangelical convert, it’s far from the only one. A recent study found that forty percent of HR departments in international companies use AI-based tools.
The results have been predictably disastrous. A report from UNI Global, a union federation that includes more than twenty trade unions representing Amazon workers worldwide, found that using algorithmic HRM systems to endlessly drive efficiency ‘eliminat[ed] downtime’ and placed ‘enormous psychological stress on the human workers.’
One UK-based Amazon worker, represented by the Union of Shop, Distributive and Allied Workers (USDAW), said the technology watching over them had affected their mental health. Another said Amazon’s ‘systems of control’ made them feel ‘monitored and analysed as if I were a machine.’ At the time the UNI report was written, the company was trialling wearable haptic feedback devices for warehouse workers, which use targeted vibrations to guide arm movements as quickly as possible to ‘maximise efficiency’.
Algorithms aren’t only used to surveil workers—they’re also used to discipline them. Where Normandin and his fellow Flex drivers were subject to a mechanised rating scale—Fantastic, Great, Fair, or At Risk—Amazon warehouse workers have described a points system directly governed by algorithmic management. Each point is a demerit, issued for something as simple as requesting a shift, or half a shift, off. Rack up nine points as a seasonal worker, or thirteen as a regular employee, and you’re fired.
Expectations set by the AI become impossible to meet or appeal: human managers, themselves managed by the system, are unable to use empathy or common sense to intervene in decision-making, often leaving the final word on sanctions or dismissal to a string of code. US delivery drivers like Normandin who find fault with the algorithm’s decision, for example, have ten days to launch an appeal to yet another bot, during which time they’re banned from working. If they lose, which they usually do, it costs $200 to take the dispute to arbitration—often an insurmountable sum.
Rather than dealing with the problems of algorithmic management, Amazon weaponises its stupidity. Workers at as many as 179 Amazon warehouses across the United States were paid incorrectly over the past year because of a glitch in Amazon’s leave system that shorted pay cheques whenever employees applied for paid or unpaid leave. Arbitrary terminations based on facial recognition glitches are commonplace, keeping workers conscious of their replaceability. All the while, the potential for discrimination based on race, gender, or disability, especially in hiring and firing software, remains obscured behind the black box of AI.
All this means that algorithmic HRM systems present a unique challenge for the Left. On the rare occasions courts or legislatures across the world decide to roll up the red carpet for Amazon’s frothy-mouthed union busters—as with December’s landmark settlement between the company and the US National Labor Relations Board, which prevents Amazon from interfering with labour organising on any of its properties after hours—union reps risk hitting a wall because of just how opaque the system is. How do you bargain with a robot?
For Alec MacGillis, author of Fulfilment: Winning and Losing in One-Click America, the company goes one step further to achieve that opacity by ‘segment[ing] its workforce into classes… spread across the map’. By extension, that means human managers, engineers, and software developers are more likely to be physically separated from precarious workers and their AI overlords, ensuring those workers’ opportunities to encounter real humans with real influence are limited.
There’s also the problem of leverage. Amazon has a virtually limitless labour pool to choose from: for a company that hires towns at a time, worker retention is an afterthought. So what if your digital boss fires you because it can’t recognise you with your new haircut? There are plenty of others.
Appraisals of the algorithm’s efficacy are often boosterish. A report on algorithmic management and app-work in the gig economy—which includes delivery drivers hired through Flex—published in Human Resource Management Journal in 2019 claimed algorithmic HRM systems play ‘a key role in accomplishing the fast and efficient market transactions’ valuable for ‘controlling workers at scale’. Even so, a liberal critique of algorithmic HRM has emerged: business ethicists argue these systems can inhibit workers’ ability to ‘manage or market their capabilities’ by keeping their data hidden from them, which, for liberals, amounts to a cardinal sin.
Recent victories against Amazon’s union busters nonetheless offer glimmers of hope. UNI Global has begun setting out ‘algorithmic use agreements’ built around key demands for ‘ethical algorithmic management’, chief among them greater data transparency to show workers exactly how and why algorithms make decisions. This, many feel, is key to prying open Amazon’s black box, ensuring proper records—showing what decisions have been made, when, and why—exist the next time a challenge is launched.
But there’s scope for a modern and technically literate labour movement to go further, too. The principle of ‘human in command’—championed by the European Trade Union Confederation, among others—would stop algorithms from having the final say over a worker’s fate, giving workers the right to appeal to a human authorised to override the algorithm without fear of suspension or dismissal. Tighter regulation could also see algorithms like Amazon’s HRM independently audited for bias and discrimination, with the results freely available to anyone affected by algorithmic decisions—whether they’re a worker, a manager, or a union representative.
The ultimate goal, however, should be the democratisation of technology in the workplace: technology developed and applied only with the input of those who actually understand and carry out the work. Only this basic principle—as much as some may deem it utopian—can help untangle the web of injustice still being woven by companies like Amazon, and move us toward the world Keynes and Russell once envisioned, in which technology works for the benefit of all.