
A police officer stands at the corner of a busy intersection, scanning the crowd with her body camera. The feed is live-streamed into the Real Time Crime Center at department headquarters, where specialized software uses biometric recognition to determine whether any persons of interest are on the street.
Data analysts alert the officer that a man with an abnormally high threat score is among the crowd; the officer approaches him to deliver a “custom notification”, warning him that the police will not tolerate criminal behavior. He is now registered in a database of potential offenders.
Overhead, a light aircraft outfitted with an array of surveillance cameras flies around the city, persistently keeping watch over entire sectors, recording everything that happens and allowing police to stop, rewind and zoom in on specific people or vehicles …
None of this is techno-paranoia from the mind of Philip K Dick or Black Mirror, but rather existing technologies that are already becoming standard parts of policing.
The police department of Fresno, California, is just one of many in the US already using a software program called “Beware” to generate “threat scores” for an individual, address or area. As reported by the Washington Post in January, the software works by processing “billions of data points, including arrest reports, property records, commercial databases, deep web searches and the [person’s] social media postings”.
A brochure for Beware uses the hypothetical example of a veteran diagnosed with PTSD, indicating that the software also takes health-related data into account. Scores are color-coded so officers can see at a glance what the threat level is: green, yellow or red.
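Beware’s actual model is proprietary and has never been disclosed, so any concrete example can only be a guess. Purely as illustration, the sketch below invents data fields, weights and cutoffs to show the general shape of such a system: weighted record counts summed into a number, then banded into a color.

```python
# Hypothetical illustration only: Beware's real scoring model is proprietary,
# so every field name, weight and threshold here is invented.

# Invented per-source weights: how heavily each record type counts.
WEIGHTS = {
    "arrest_reports": 25,
    "property_records": 5,
    "social_media_flags": 10,
    "commercial_db_hits": 3,
}

# Invented cutoffs for the green/yellow/red bands the brochure describes.
BANDS = [(40, "green"), (75, "yellow"), (float("inf"), "red")]

def threat_score(record_counts):
    """Sum weighted counts of records tied to a person or address."""
    return sum(WEIGHTS.get(source, 0) * n for source, n in record_counts.items())

def color_band(score):
    """Map a numeric score onto a color band."""
    for upper_bound, color in BANDS:
        if score < upper_bound:
            return color

person = {"arrest_reports": 2, "social_media_flags": 3, "commercial_db_hits": 4}
score = threat_score(person)
print(score, color_band(score))  # -> 92 red
```

Every number in this sketch is arbitrary, and that is the point: in a real system, the weights and cutoffs are equally consequential choices, hidden inside the software.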
Beware is just one of many new technologies facilitating “data-driven policing”: paired with analytics programs, the mass collection of information allows police to gather data from just about any source, for just about any reason.
The holy grail is ‘predictive policing’
“Soon it will be feasible and affordable for the government to record, store and analyze nearly everything we do,” writes law professor Elizabeth Joh in Harvard Law & Policy Review. “The police will rely on alerts generated by computer programs that sift through the massive quantities of available information for patterns of suspicious activity.”
The holy grail of data-fueled analytics is called “predictive policing”, which uses statistical models to tell officers where crime is likely to happen and who is likely to commit it.
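Vendors keep the statistical models behind predictive policing proprietary, so the following is a minimal sketch under the simplest possible assumption: score each map grid cell by its recorded incident history, discounting older incidents, and call the top-scoring cells the forecast. The cell names, the decay constant and the data are all invented.

```python
# Illustration only: commercial predictive-policing models are not public.
# Assumption: the simplest hotspot method -- score each map grid cell by its
# recorded incident history, weighting recent incidents more heavily.

from collections import defaultdict

DECAY = 0.9  # invented constant: how fast an old incident stops mattering

def hotspot_scores(incidents):
    """incidents: iterable of (grid_cell, days_ago) pairs -> cell scores."""
    scores = defaultdict(float)
    for cell, days_ago in incidents:
        scores[cell] += DECAY ** days_ago  # yesterday ~0.9, a month ago ~0.04
    return scores

# Invented history: three recorded incidents in one cell, one in another.
history = [("cell_3_7", 1), ("cell_3_7", 2), ("cell_3_7", 30), ("cell_5_1", 1)]
ranked = sorted(hotspot_scores(history).items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # the "forecast" is mostly a map of past recorded incidents
```

Because the only input is where incidents were previously recorded, a model like this largely hands back a map of past enforcement – exactly the feedback loop that critics of these systems describe.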
In February 2014, the Chicago police department (CPD) attracted attention when officers pre-emptively visited residents on a computer-generated “heat list”, which marked them as likely to be involved in a future violent crime. These people had done nothing wrong, but the CPD wanted to let them know that officers would be keeping tabs on them.
Essentially, they were already considered suspects for crimes not yet committed.
From Fresno to New York, and Rio to Singapore, data analysts sit at the helm of futuristic control rooms, powered by systems such as IBM’s Intelligent Operations Center and Siemens’ City Cockpit. Monitors pipe in feeds from hundreds or even thousands of surveillance cameras in the city.
Is this really the promise of a ‘smart city’?
These analysts have access to massive databases of citizens’ records. Sensors installed around the city detect pedestrian traffic and suspicious activities. Software programs run analytics on all this data in order to generate alerts and “actionable insights”.
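What an “actionable insight” actually is depends entirely on rules someone wrote. As a toy illustration only (the sensor names, window size and threshold factor below are invented), an alert generator can be as crude as flagging any feed whose latest count spikes above its recent average.

```python
# Toy illustration only: real vendors' alert rules are proprietary.
# Assumption: a crude spike detector over per-sensor pedestrian counts,
# with an invented window size and threshold factor.

def alerts(readings, window=5, factor=3.0):
    """readings: sensor_id -> list of counts over time; flag sudden spikes."""
    flagged = []
    for sensor, series in readings.items():
        if len(series) <= window:
            continue  # not enough history to form a baseline
        baseline = sum(series[-window - 1:-1]) / window  # average before latest
        if series[-1] > factor * max(baseline, 1):
            flagged.append((sensor, series[-1], baseline))
    return flagged

feeds = {
    "plaza_cam_2": [4, 5, 6, 5, 4, 40],         # sudden spike
    "station_cam_9": [20, 22, 19, 21, 20, 23],  # normal variation
}
for sensor, latest, baseline in alerts(feeds):
    print(f"ALERT {sensor}: {latest} vs baseline {baseline:.1f}")
```

The factor of three that decides what counts as “suspicious” here is a design choice; in commercial systems, such choices are made by the vendor, out of public view.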
Neither IBM nor the Chicago Police Department responded to a request for comment. But is this the new model of how policing should be done in an era of “smart cities”?
These analytics can be used to great effect, potentially enhancing the ability of police to make more informed, less biased decisions about law enforcement. But they are often used in dubious ways and for repressive purposes. It is unclear – especially for analytics tools developed and sold by private tech companies – how exactly they even work.
What data are they using? How are they weighing variables? What values and biases are coded into them? Even the companies that develop them can’t answer all those questions, and what they do know can’t be divulged because of trade secrets.
So when police say they are using data-driven techniques to make smarter decisions, what they really mean is that they are relying on software that spits out scores and models without any real understanding of how those outputs were produced. This demonstrates a tremendous faith in the veracity of analytics.
It is absurd for police not to know what decisions, weights, values and biases are baked into the analytics they use. That ignorance obscures the factors and decisions that influence how police operate – even from the police themselves – and means these methods are not open to the scrutiny of public oversight and accountability.
Yet even in the face of controversy, police departments continue to use and defend these technologies when questioned by the public, the media and politicians.
Contrary to the common defense trotted out by technocrats, the algorithms of big data analytics are not objective arbiters of discretion and punishment. Rather, they easily become ways of laundering subjective, biased decisions into ostensibly objective judgments.
Many researchers and advocates have argued that techniques like predictive policing don’t so much see the future as entrench and justify longstanding practices of racial profiling in policing certain urban neighborhoods, aggressively targeting already disempowered groups.
Technology tools are not created without politics or prejudice
Those affected lose their right to individualized treatment and understanding, as technical systems treat people as a mere collection of data points.
Moreover, by integrating these technologies deeper into their operations, police departments are opening the way for corporations to have disproportionate influence over what policing means in society. Technologies are not just neutral tools, and they are not divorced from politics; they are designed with certain values and goals in mind. And when large corporations and entrepreneurial startups establish those values and goals, you are likely to end up with policing modeled after technocratic motivations for total efficiency and control.
It is telling that the companies creating these analytics programs sell modified versions of their software to more than just police departments. As the Intercept reported in October 2015, the same types of analytics are used everywhere from marketing to the military; in essence, the Pentagon uses a data program to monitor drones in the same way businesses monitor customers.
IBM has its own vision of an internet-connected world, and its “Smarter Planet” concept happens to align with the goals of business, the military and the police. While its program has already been sold to law enforcement, it is also developing systems to process refugees and even predict terrorist attacks.
Many researchers have documented how police forces are increasingly technologized and militarized. Enthusiastic and rapid adoption of data-driven analytics has happened at the expense of meaningful public input, safeguards against misconduct and other basic procedures of accountability.
The question remains: are these technologies accurate, appropriate and legitimate?
Our governments and police departments are already deploying this technology, and need to be held to account. This technology encourages police officers to view the public as a threat, as criminals in waiting. These opaque procedures and questionable outcomes need to be scrutinized, before the efficiencies of connected data are used to justify taking power away from citizens and further concentrating it in police forces.
