Your face has a defender: the ICO. On Monday, the UK’s data protection regulator issued its third-largest fine ever against facial recognition provider Clearview AI. From our story:
The UK’s data watchdog has fined a facial recognition company £7.5m for collecting images of people from social media platforms and the web to add to a global database.
US-based Clearview AI has also been ordered to delete the data of UK residents from its systems by the Information Commissioner’s Office (ICO). Clearview has collected more than 20bn images of people’s faces and data from Facebook, other social media companies and from scouring the web.
Picking up a thread from last week’s newsletter: plenty of startups are built around the basic idea of quietly breaking the rules until you’re large enough to afford to comply (or, better still, large enough that the rules get rewritten in the name of innovation and what you were doing becomes legal anyway). But it’s rare for a company to be as belligerent about that approach as Clearview.
The company burst into public awareness in early 2020, after a New York Times exposé described it as “the secretive company that might end privacy as we know it”. Clearview’s product was simple: facial recognition technology, marketed to law enforcement in particular, that could take a picture of a suspect and return a solid guess as to their name.
As technology, it wasn’t actually hugely novel. Comparable tech was, of course, already built into Facebook, and the Russian company FindFace had offered a similar service domestically since 2016. The impressive part of Clearview’s work was, instead, in building up the database required to make the system functional.
Any facial recognition system like that needs at its heart a massive collection of people’s faces, linked with their names. And Clearview gathered that in the most brazen manner possible: it just took it all from social media. As the New York Times’ Kashmir Hill wrote:
The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
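To make the mechanics concrete: a system like this boils down to two steps. Map every face photo to a numeric “embedding” vector, then identify a new photo by finding the nearest stored vector and returning the name attached to it. Here is a minimal sketch of that lookup in Python; `embed_face` is a toy stand-in for a real face-embedding model, and none of this reflects Clearview’s actual (unpublished) code.

```python
import numpy as np

def embed_face(image_bytes: bytes) -> np.ndarray:
    """Toy stand-in: hash the image bytes into a pseudo-random vector.
    A real system would run a learned face-embedding model here, so that
    photos of the same person land close together in vector space."""
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    return rng.standard_normal(128)

class FaceIndex:
    """Names paired with face embeddings -- the 'massive collection'
    a system like Clearview's needs at its heart."""

    def __init__(self) -> None:
        self.names: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, name: str, image_bytes: bytes) -> None:
        # Ingest one scraped face: store the identity with its embedding.
        self.names.append(name)
        self.vectors.append(embed_face(image_bytes))

    def identify(self, image_bytes: bytes) -> tuple[str, float]:
        # Nearest-neighbour search by cosine similarity: the stored
        # face most similar to the query wins.
        query = embed_face(image_bytes)
        matrix = np.stack(self.vectors)
        sims = (matrix @ query) / (
            np.linalg.norm(matrix, axis=1) * np.linalg.norm(query)
        )
        best = int(np.argmax(sims))
        return self.names[best], float(sims[best])
```

The lookup logic is the trivial part; as Hill’s reporting makes clear, what set Clearview apart was populating that index with billions of name–face pairs scraped without consent.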
It would have been impossible to get that information with Facebook’s permission, even pre-Cambridge Analytica, and so Clearview just didn’t bother. Instead, it took the gamble that even if it did get caught, it was unlikely to be forced to delete data it had already collected.
The gamble paid off. In the furore following the NYT’s publication, Facebook, YouTube, Twitter and LinkedIn all demanded that the company stop collecting images from their sites. But their power to enforce those demands is limited: scraping data from public sites is legal under US law, and American data protection regulations are slim and mostly bound up in contract law – by which Clearview is unencumbered, because it never made any agreements with the people whose data it processed.

A few state regulators have taken action against Clearview, and Illinois, which has the strongest biometric privacy law in the country, secured a ban this month on the company working with the private sector in the state (it had already voluntarily ceased such deals in 2020, Clearview says). But, other than that, the company continues to operate more or less unchallenged in the US.
That’s not the case, thankfully, in the UK – as the ICO’s fine demonstrates. You have rights over the use of your data that remain relevant even if it gets scraped, passed around, repackaged and deanonymised.
Unfortunately, that might not help much. Clearview’s response has been vituperative, and a spokesperson for its lawyers suggested that the company simply will not comply: “While we appreciate the ICO’s desire to reduce their monetary penalty on Clearview AI, we nevertheless stand by our position that the decision to impose any fine is incorrect as a matter of law. Clearview AI is not subject to the ICO’s jurisdiction, and Clearview AI does no business in the UK at this time.”
The ICO’s position is that any company handling the data of UK residents is bound by UK law; Clearview disagrees. And the company isn’t the only one raising such questions. Take the Chinese dataset released as WebFace260M, built for training facial recognition AI using faces and names scraped from IMDb and Google Images. The dataset is governed – or not – by Chinese law, but the faces in it certainly aren’t exclusively those of Chinese citizens.
This precedent matters far beyond the narrow question of facial recognition. Large public datasets scraped from the internet are the fuel powering the latest burst of progress in the AI sector. Text generator GPT-3 is built on posts and links harvested from Reddit; Dall-E 2, the groundbreaking visual AI that we discussed here a few weeks ago, is built on images scraped from the web. Its maker, OpenAI, is very aware that Dall-E 2 can be enticed into generating real people’s faces, to the point that beta testers are banned from publicly posting any realistic human faces it produces.
It would be a bold move for the ICO to turn its eye to policing such models, in the face of a UK government that is already discussing watering down data protection law to make Britain a hub for innovation. But what if the thing that distinguishes Clearview from the competition is less the substance of its data harvesting and more the style with which it has defended it?
If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Wednesday.