John Naughton 

Google’s image-scanning illustrates how tech firms can penalise the innocent

New technology helps to track child abuse images, but when the scans throw up false positives, companies don’t always rescind the resulting suspensions

Google has refused to reinstate the account of Mark, a father living in San Francisco who sent images of his son to a doctor. Photograph: Jeff Chiu/AP

Here’s a hypothetical scenario. You’re the parent of a toddler, a little boy. His penis has become swollen because of an infection and it’s hurting him. You phone the GP’s surgery and eventually get through to the practice’s nurse. The nurse suggests you take a photograph of the affected area and email it so that she can consult one of the doctors.

So you get out your Samsung phone, take a couple of pictures and send them off. A short time later, the nurse phones to say that the GP has prescribed some antibiotics that you can pick up from the surgery’s pharmacy. You drive there, pick them up and in a few hours the swelling starts to reduce and your lad is perking up. Panic over.

Two days later, you find a message from Google on your phone. Your account has been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal”. You click on the “learn more” link and find a list of possible reasons including “child sexual abuse and exploitation”. Suddenly, the penny drops: Google thinks that the photographs you sent constituted child abuse!

Never mind – there’s a form you can fill out explaining the circumstances and requesting that Google rescind its decision. At which point you discover that you no longer have Gmail, but fortunately you have an older email account that still works, so you use that. Now, though, you no longer have access to your diary, address book and all those work documents you kept on Google Docs. Nor can you access any photograph or video you’ve ever taken with your phone, because they all reside on Google’s cloud servers – to which your device had thoughtfully (and automatically) uploaded them.

Shortly afterwards, you receive Google’s response: the company will not reinstate your account. No explanation is provided. Two days later, there’s a knock on the door. Outside are two police officers, one male, one female. They’re here because you’re suspected of holding and passing on illegal images.

Nightmarish, eh? But at least it’s hypothetical. Except that it isn’t: it’s an adaptation for a British context of what happened to “Mark”, a father in San Francisco, as vividly recounted recently in the New York Times by the formidable tech journalist Kashmir Hill. And, at the time of writing this column, Mark still hasn’t got his Google account back. It being the US, of course, he has the option of suing Google – just as he has the option of digging his garden with a teaspoon.

The background to this is that the tech platforms have, thankfully, become much more assiduous at scanning their servers for child abuse images. But because of the unimaginable numbers of images held on these platforms, scanning and detection have to be done by machine-learning systems, aided by other tools (such as the cryptographic hashing of known illegal images, which makes copies of them instantly detectable worldwide).
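
For the curious, here is a minimal sketch in Python of how that kind of hash-matching works. Everything in it is illustrative: the hash registry is a placeholder, and it uses an exact SHA-256 digest for simplicity, whereas production systems such as Microsoft’s PhotoDNA rely on perceptual hashes that still match after cropping or re-compression.

    import hashlib
    from pathlib import Path

    # Placeholder registry of digests of known illegal images, of the kind
    # distributed to platforms by clearing houses. Not real data.
    KNOWN_IMAGE_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def sha256_of_file(path: Path) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_known_image(path: Path) -> bool:
        """True if the file's digest matches the registry of known images."""
        return sha256_of_file(path) in KNOWN_IMAGE_HASHES

An exact digest like this only catches byte-identical copies, which is why platforms pair hash-matching with machine-learning classifiers for images that have never been seen before – and that is exactly where false positives creep in.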

All of which is great. The trouble with automated detection systems, though, is that they invariably throw up a proportion of “false positives” – images that flag a warning but are in fact innocuous and legal. Often this is because machines are terrible at understanding context, something that, at the moment, only humans can do. In researching her report, Hill saw the photos that Mark had taken of his son. “The decision to flag them was understandable,” she writes. “They are explicit photos of a child’s genitalia. But the context matters: they were taken by a parent worried about a sick child.”
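
A back-of-the-envelope calculation shows why this matters at platform scale. Both numbers below are illustrative assumptions, not Google’s figures: even a detector that wrongly flags only one innocent image in a million produces a steady stream of false alarms when billions of uploads are scanned each day.

    # Back-of-the-envelope arithmetic; both figures are illustrative assumptions.
    daily_uploads = 2_000_000_000   # images scanned per day (assumed)
    false_positive_rate = 1e-6      # one innocent image in a million misflagged (assumed)

    false_alarms = daily_uploads * false_positive_rate
    print(f"{false_alarms:,.0f} innocent images flagged per day")  # prints: 2,000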

Accordingly, most of the platforms employ people to review problematic images in their contexts and determine whether they warrant further action. The interesting thing about the San Francisco case is that the images were reviewed by a human, who decided they were innocent, as did the police, to whom the images were also referred. And yet, despite this, Google stood by its decision to suspend Mark’s account and rejected his appeal. It can do this because it owns the platform and anyone who uses it has clicked on an agreement to accept its terms and conditions. In that respect, it’s no different from Facebook/Meta, Apple, Amazon, Microsoft, Twitter, LinkedIn, Pinterest and the rest.

This arrangement works well as long as users are happy with the services and the way they are provided. But the moment a user decides that they have been mistreated or abused by the platform, they fall into a legal black hole. If you’re an app developer who feels that you’re being gouged by Apple’s 30% levy as the price for selling in that marketplace, you have two choices: pay up or shut up. Likewise, if you’ve been selling profitably on Amazon’s Marketplace and suddenly discover that the platform is now selling a cheaper comparable product under its own label, well… tough. Sure, you can complain or appeal, but in the end the platform is judge, jury and executioner. Democracies wouldn’t tolerate this in any other area of life. Why, then, are tech platforms an exception? Isn’t it time they weren’t?

What I’ve been reading

Too big a picture?
There’s an interesting critique by Ian Hesketh in the digital magazine Aeon, titled What Big History Misses, of how Yuval Noah Harari and co squeeze human history into a tale for everyone.

1-2-3, gone…
The Passing of Passwords is a nice obituary for the password by the digital identity guru David GW Birch on his Substack.

A warning
Gary Marcus has written an elegant critique of what’s wrong with Google’s new robot project on his Substack.
