John Naughton 

Google might ask questions about AI ethics, but it doesn’t want answers

The departure of two members of the tech firm’s ethical artificial intelligence team exposes the conflict at the heart of its business model
  
  

Google employees have protested against the company’s treatment of Dr Timnit Gebru. Photograph: Kimberly White/Getty Images

If I told you that an academic paper entitled “On the Dangers of Stochastic Parrots” had caused an epochal row involving one of the most powerful companies in the world, you’d ask what I’d been smoking. And well you might: but stay tuned.

The paper has four co-authors, two from the University of Washington, and two from Google – Dr Timnit Gebru and Dr Margaret Mitchell. It provides a useful critical review of machine-learning language models (LMs) like GPT-3, which are trained on enormous amounts of text and are capable of producing plausible-looking prose. The amount of computation (and associated carbon emissions) involved in their construction has ballooned to insane levels, and so at some point it’s sensible to ask the question that is never asked in the tech industry: how much is enough?

Which is one of the questions the authors of the paper asked. In answering it, they identified “a wide variety of costs and risks associated with the rush for ever-larger LMs, including: environmental costs (borne typically by those not benefiting from the resulting technology); financial costs, which in turn erect barriers to entry, limiting who can contribute to this research area and which languages can benefit from the most advanced techniques; opportunity cost, as researchers pour effort away from directions requiring less resources; and the risk of substantial harms, including stereotyping, denigration, increases in extremist ideology, and wrongful arrest”.

These findings provide a useful counter-narrative to the tech industry’s current Gadarene rush into language modelling. There was, however, one small difficulty. In 2018, Google created a language model called BERT, which was so powerful that the company incorporated it into its search engine – its core (and most lucrative) product. Google is accordingly ultra-sensitive to critiques of such a key technology. And two of the co-authors of the research paper were Google employees.

What happened next was predictable and crass, and there are competing narratives about it. Gebru says she was fired, while Google says she resigned. Either way, the result was the same: in English employment law it would look like “constructive dismissal” – where an employee feels they have no choice but to resign because of something their employer has done. But whatever the explanation, Gebru is out. And so is her co-author and colleague, Mitchell, who had been trying to ascertain the grounds for Google’s opposition to the research paper.

But now comes the really absurd part of the story. Gebru and Mitchell were both leading members of Google’s Ethical AI team. In other words, in co-authoring the paper they were doing their job – which is to critically examine machine-learning technologies of the kind that are now central to their employer’s business. And although their treatment (and subsequent online harassment by trolls) has been traumatic, it has at least highlighted the extent to which the tech industry’s recent obsession with “ethics” is a manipulative fraud.

As the industry’s feeding frenzy on machine learning has gathered pace, so too has the proliferation of ethics boards, panels and oversight bodies established by the same companies. In creating them they have been assisted by entrepreneurial academics anxious to get a slice of the action. In that sense, lucrative consultancies to advise on ethical issues raised by machine learning have become a vast system of out-relief for otherwise unemployable philosophers and other sages. The result is a kind of ethics theatre akin to the security theatre enacted in airports in the years when people were actually permitted to fly. And the reason this farcical charade goes on is that tech companies see it as a pre-emptive strike to ward off what they really fear – regulation by law.

The thing is that current machine-learning systems have ethical issues the way rats have fleas. Their intrinsic flaws include bias, unfairness, gender and ethnic discrimination, huge environmental footprints, theoretical flakiness and a crippled epistemology that equates volume of data with improved understanding. These limitations, however, have not prevented tech companies from adopting the technology wholesale, and indeed in some cases (Google and Facebook, to name just two) from betting the ranch on it.

Given that, you can imagine how senior executives respond to pesky researchers who point out the ethical difficulties implicit in such Faustian bargains. It may not be much consolation to Gebru and Mitchell, but at least their traumatic experience has already triggered promising initiatives. One is a vociferous campaign by some of their Google colleagues. Even more promising is a student campaign – #recruitmenot – aimed at dissuading fellow students from joining tech companies. After all, if they wouldn’t work for a tobacco company or an arms manufacturer, why would they work for Google or Facebook?

What I’ve been reading

Evolution or revolution?
There’s a thoughtful post by Scott Alexander on his new Astral Codex Ten blog about whether radical change is better than organic development.

Online extremism
Issie Lapowsky asks why social media companies were better at deplatforming Isis than at moderating domestic terrorists in an interesting Protocol essay.

The one certainty
Doc Searls has written an astonishingly vivid meditation on Medium about the planetary impact of our species, and why death is a feature, not a bug.

 
