Veena Dubal 

San Francisco was right to ban facial recognition. Surveillance is a real danger

Civil rights advocates are right to be leery of the technology, given the US’s history of political and racial surveillance

The government’s embrace of facial recognition technology has red flags all over it, argues Veena Dubal. Photograph: Ian Davidson/Alamy Stock Photo

San Francisco’s recent municipal ordinance banning the use of facial recognition technology by city and county agencies has received international attention. The first of its kind anywhere in the US, the law is a preemptive response to the proliferation of a technology that the city of San Francisco does not yet deploy but which is already in use elsewhere. Since the passage of the ordinance, a debate has erupted in cities and states around the country: should other localities follow San Francisco’s example?

The answer is a resounding yes. The concerns that motivated the San Francisco ban are rooted not just in the potential inaccuracy of facial recognition technology, but in a long national history of politicized and racially biased state surveillance.

Detractors who oppose the ordinance in the name of “public safety” acknowledge the technology’s current limitations (recent studies have shown that facial recognition systems are alarmingly inaccurate at identifying racial minorities, women, and transgender people). But they argue that as machine learning becomes less biased, the technology could actually upend human discrimination. These critics — mainly corporate lobbyists and law enforcement representatives — maintain that an absolute ban (rather than the limited regulations advocated by Big Tech) is a step backwards for public safety because it leaves surveillance to people rather than machines.

After years of working as a civil rights advocate and attorney representing Muslim Americans in the aftermath of September 11th, I recognize that the debate’s singular focus on the technology is a red herring. Even in an imaginary future where algorithmic discrimination does not exist, facial recognition software simply cannot de-bias the practice and impact of state surveillance. In fact, the public emphasis on curable algorithmic inaccuracies strips the concerns that motivated the San Francisco ban of their historical and political context.

This ordinance was crafted through the sustained advocacy of an intersectional grassroots coalition driven not just by concerns about hi-tech dystopia, but by a long record of overbroad surveillance and its deleterious impacts on economically and politically marginalized communities. As Matt Cagle, a leader in this coalition and an attorney at the ACLU of Northern California, told me, “The driving force behind this historic law was a coalition of 26 organizations. Not coincidentally, these Bay Area groups represented those who have been most harmed by local government profiling and surveillance in our city: people of color, Muslim Americans, immigrants, the LGBTQ community, the unhoused, and more.”

Indeed, while San Francisco is known across the world as an “incubat[or] of dissent and individual liberties,” the local police department — like many across the United States — has a decades-long, little-known history of nefarious surveillance activities.

A reported 83% of domestic intelligence gathering for J Edgar Hoover’s notorious Counter Intelligence Program (commonly known as Cointelpro) took place in the Bay Area — much of it at the hands of local police. From the 1950s well into the 1970s, the information gathered through this covert state program — which, when discovered, shocked the conscience of America — was used to infiltrate, discredit, and disrupt the now-celebrated civil rights movement.

After Cointelpro was congressionally disbanded and procedural safeguards were put in place, community members in the 1980s and early 1990s learned that some San Francisco police officers had continued to spy surreptitiously — without any evidence of criminal wrongdoing — on individuals and groups based on their political activities. In at least one instance, information gathered by local police officers on law-abiding citizens was alleged to have been sold to foreign governments.

San Francisco subsequently passed additional local procedural safeguards, limiting intelligence-gathering on First Amendment-protected activities to instances where reasonable suspicion of criminal activity could be articulated. Yet in the years following September 11th, members of the city’s Muslim American community again found themselves under unjust surveillance without any criminal predicate.

These past and present chronicles of injustice highlight how face recognition systems — like other surveillance technologies before them — can disproportionately harm people already historically subject to profiling and abuse, including immigrants, people of color, political activists, and the formerly incarcerated. And they demonstrate that even when legal procedures and oversight are thoughtfully put into place, these safeguards can be rolled back (especially in times of hysteria) or simply violated.

As the debate about facial surveillance technologies and “public safety” continues to rage, policymakers (and corporate decision-makers) should deliberate not just over the technology itself but also over these shameful political histories. In doing so, they should remember (or be reminded) that gathering more information — while certainly lucrative and occasionally comforting — does not always make communities safer.

Even if face surveillance were 100% neutral and devoid of discriminatory tendencies, humans would still determine when and where the surveillance takes place. Humans — with both implicit and explicit biases — would make the discretionary decisions about how to use the gathered data. And humans — often the most vulnerable — would be the ones disproportionately and unjustly affected.

Amid the seemingly inevitable conquest of our everyday lives by new forms of technological surveillance, San Francisco’s ban — and the diverse coalition-based movement that achieved it — proves that local democracy can still be leveraged to shift power- and decision-making into the hands of the people. The real, chilling histories and impacts of past surveillance on freedom of association, religion, and speech — and not imagined fears about information collected through machine-learning systems — motivated the broad coalition of community groups to push for the San Francisco face surveillance ban. Their example could — and should — spark a movement that spreads across the country.

  • Veena Dubal is an associate professor of law at the University of California, Hastings

 
