Lizzie O'Shea 

Tech has no moral code. It is everyone’s job now to fight for one

From racist ads to Cambridge Analytica, technology’s ethical deficit needs disrupting fast, says the human rights lawyer Lizzie O’Shea
  
  

Humanoid robots in the film I, Robot. ‘Coders who write the programs need to anticipate the needs of others, their vulnerabilities and circumstances.’ Photograph: Allstar/20th Century Fox

It has been a tough two years for the technology industry. The 2016 US election was a turning point for what was formerly the face of upbeat, self-actualising capitalism. Today the common view is that a tiny minority has been making money by disrupting things at the expense of the majority. Technology companies are out of control because law-makers have been neglectful, indifferent or – worse – baffled by the prospect of regulation. But in the wake of the Cambridge Analytica/Facebook data-harvesting scandal, there is new interest in the role of ethical considerations in the work of technology companies, and the programmers who build their machinery.

Engineers have traditionally not been required to reflect on their work in its broader social context. The technological design process rarely lends itself to consideration of power dynamics, even though they are never absent. I once heard a coder talk about scratching his itches, by which he meant building computer programs to solve the problems that he confronted. Coding is about writing the most elegant program to get a computer to complete a task, which can be anything from fixing a bug or creating a plugin to swapping pictures of politicians with cats. Ethical questions are often seen as outside the engineers’ remit or above their pay grade – something for others to ponder while they build stuff.

The problem with this outlook is that digital technology is now so widely used that the coders who write the programs need to anticipate the needs of others, their vulnerabilities and their circumstances: a significant and complex task. We have seen the unintended consequences of earlier failures to undertake such work – places where code has gone wrong. Five years ago, the Harvard professor Latanya Sweeney produced convincing research showing that Google searches for African-American names were more likely than searches for white-sounding names to generate adverts for companies offering background checks – raising the suggestion that the person might have a criminal record. Google Photos has tagged photos of African-American people as “gorillas”. Until recently, Facebook had an algorithm that generated targeted-advertising categories including people who had expressed interest in topics such as “Jew hater”. Google has had a similar problem with racist targeting categories.

It is not that the people who coded these programs are racist, and individual workers are not to blame for these bad outcomes. Rather, the outcomes arise from many factors, including the profit motive, rushed design, and a lack of diversity among technology workers, which reduces the chances that programs will be written with sufficient attention to issues of bias and discrimination. If company executives command their workers to move fast and disrupt things, the clear presumption is that other people will be forced to pick up the pieces.

Martin Lewis, the UK consumer advice and money-saving expert, is now suing Facebook over fake adverts that use his name and face to lure people into costly scams. The company’s algorithms have allowed particular people to be targeted with these false adverts, without systems of checks and balances that might catch such problems. Similarly, more human moderation of misogynistic “incel” (involuntarily celibate) forums, and of warnings of rebellion on Facebook profiles that might be linked to the Toronto van attack, could help prevent atrocities.

The future of technology is not fixed. Addressing the problems facing us will require all kinds of efforts. But one thing that needs to happen is that ethical considerations must be brought to the fore. Lawyers owe their highest duty to the court. Doctors owe theirs to their patients. Software engineers have traditionally had no equivalent. Such frameworks can be a way for technology professionals to articulate the conditions necessary to do their jobs well. They also force us to ask to whom software engineers should be accountable – their chief executives, themselves, or the public?

We are already starting to see how ethical considerations can serve as a focal point for industrial and political organising in tech companies. Thousands of Google employees recently signed a letter protesting at the company’s involvement in a Pentagon project to apply artificial intelligence to footage collected from drones. Dozens of scientists from numerous countries have called for a boycott of a South Korean university that wants to create autonomous robots for defence purposes. Thousands of engineers in the US signed a pledge refusing to build an immigrant database. The Tech Workers Coalition helped organise a protest outside Palantir, the US company creating an enormous technological spy brain used by police and spooks.

Such actions by individuals and groups require bravery and deserve public support. They also remind us what is possible, and give us the chance to talk about where such activism might lead. Ethical design can serve as a bulwark against the relentless pursuit of profit and power. Builders of digital technology need space and encouragement to discuss the resources they require if they are to work in ways that are respectful and focused on the needs of users. Proper regulation of these industries by governments is well overdue, but it is not the whole answer. Workers need the power to resist the business model that creates these problems, and they often have good ideas about how to solve them.

What might such problem-solving look like? It will start with more ethical training, both as part of every computer engineer’s education, and on the job. It could include ideas such as allowing users to know what data is collected about them, and giving them the choice of what and how they want to share. It might also require actively considering the experience of a diverse cohort of users, and involving them in the development and beta testing of new programs and tools.

Oversight by humans may be more expensive than automation, but it is essential and unavoidable if we want to improve our digital environment. Over time, such work might broaden into questions about political power. Might we foster a culture that lionises technology for peaceful purposes, and start to challenge technology in the service of prisons and the military? In short, create a culture that is the opposite of “move fast and disrupt things” – one that takes the time to build things respectfully, in ways that empower people.

• Lizzie O’Shea, a human rights lawyer, is writing a book on the politics of technology

 
