Editorial 

The Guardian view on digital injustice: when computers make things worse

Editorial: Software can make bigger mistakes, and make them faster, than humans can. It should not be trusted with vital decisions
  
  

HAL from Stanley Kubrick’s 2001: A Space Odyssey. ‘Handing over life-changing decisions to machine-learning algorithms is always risky.’ Photograph: Allstar/MGM/Sportsphoto Ltd

The news that the Home Office is using secret algorithms to sort online visa applications is a reminder of one of Theresa May’s more toxic and long-lasting legacies: her immigration policies as home secretary. Yet even if the government’s aims in immigration policy were fair and balanced, there would still be serious issues of principle involved in digitising the process.

Handing over life-changing decisions to machine-learning algorithms is always risky. Small biases in the data become large biases in the outcome, but these are difficult to challenge because the use of software shrouds them in clouds of obfuscation and supposed objectivity, especially when its workings are described as “artificial intelligence”. This is not to say they are always harmful, or never any use: with careful training and well-understood, clearly defined problems, and when they are operating on good data, software systems can perform much better than humans ever could.

But they cannot manage everything. It is much easier to define success, and to train the programs towards it, when this is a simple, one-dimensional numerical measure such as the reduction of credit card fraud. If instead the software is set to maximise something as hard to define as justice, it is much less likely to be successful. And if it is set targets by an ambitious politician who is determined to bring down the numbers of immigrants admitted to this country, it is almost certain to be biased and unjust.

This isn’t just a problem of software. There is a sense in which the whole of the civil service, like any other bureaucracy, is an algorithmic machine: it deals with problems according to a set of determinate rules. The good civil servant, like a computer program, executes their instructions faithfully and does exactly what they are told. The civil servant is supposed to have a clearer grasp of what their instructing human means and really wants than any computer could. But when the instructions are clear, the machinery of government – a telling metaphor – is meant to put them into action.

Digital algorithms make it easy to make bigger mistakes, faster, and with less accountability. One key difference between the analogue bureaucracy of the traditional civil service and the digitised bureaucracy of artificial intelligence is that it is very much easier to hold a human organisation to account and to retrace the process by which it reached a decision. The workings of neural networks are usually opaque even to their programmers.

Yet the technology promises so much to governments that they will certainly deploy it. This does not mean we are powerless. There are simple, non-technical tests that can be applied to government technology: we can easily ask whether digitising any particular service makes life better for the public who must use it or merely more convenient for the civil service. In some cases these aims are directly opposed. Forcing applicants for universal credit to apply online directly harms anyone without an internet connection, and such people are likely to be most in need of it. Similarly, the digitisation of immigration services means it is more difficult and more expensive to get a face-to-face interview; again, something that is likely to damage those who need it most.

Software is never deployed in isolation. Even when its inner workings are impossible to scrutinise, the motives of those who deploy it can be examined and must be criticised – and they must be held responsible for the effects of their algorithms.

 
