A Double-Edged Sword for Racial Equality: The Impact of Defunding the Police on Technology Usage
By Kamran Kara-Pabani. July 10, 2020.
The Black Lives Matter movement has been reignited in the American consciousness after the brutal killings of George Floyd and other African Americans were exposed to the public. Much of this movement centers on the fundamentally flawed model of policing in the United States and the propensity of police officers to engage in racially biased action. One of the most prominent phrases capturing the renewed impetus for police reform is “defund the police”.
What “defunding the police” actually means, however, seems to be an open question. For some, it means reallocating part of police budgets to invest in local communities and social services, as well as halting police acquisition of military-grade equipment. For others, it means the wholesale abolition of the police as an organ of the state permitted to use force to maintain the law.
Although “defunding the police” is meant to reduce racial bias in policing, there is a real risk that hastily cutting police budgets without clear guidance on the police’s new mandate could lead to over-reliance on digital-policing technologies, which have been shown to be biased against marginalized communities. As Dr. Ruha Benjamin notes in an interview, defunding the police could become an excuse for police departments to employ technologies that “take up the job that [officers] are currently doing.” Worse, technology could not only perpetuate the existing problems with policing but prove far more dangerous.
Communities across the United States have seen a dramatic increase in the use of technology in police departments over the past few years, and serious problems with such technologies have come to the fore.
Most recently, Kashmir Hill of The New York Times exposed how a flawed facial recognition algorithm led to the arrest of Robert Williams, an African American man in Detroit, for a crime he did not commit. Had the police bothered to check, they would have realized that he had an alibi for the time in question: he was driving home from work to be with his family. A similar incident in Detroit was reported on July 10. The concerns with facial recognition technology are threefold: (1) facial recognition algorithms are less accurate at recognizing darker skin tones and women, resulting in false matches; (2) police officers using facial recognition are untrained in how and when to use it; and (3) facial recognition poses a significant threat to the privacy and anonymity of (predominantly Black and brown) citizens engaging in lawful behavior, such as protesting.
Although Microsoft, IBM, and Amazon have each pulled back from the law enforcement facial recognition market in the past few weeks, this move is largely devoid of impact: most facial recognition suppliers to law enforcement are smaller companies that most people don’t recognize. In fact, many of these companies, such as the recently notorious Clearview AI, have doubled down on the law enforcement market after Microsoft, IBM, and Amazon dropped out.
The danger of defunding the police while keeping their mandate and metrics of success (e.g., arrest and fine numbers, crime rates) the same is that departments may replace costly police officers with less costly (and more easily scaled) technology, including facial recognition. Although facial recognition does not actually seem to make police more effective, departments might find it an attractive way to replace some of the investigative work of conventional officers. It can also inflate traditional measures of police success, such as the number of arrests and fines issued, by generating many investigative leads of dubious quality. Hastily adopting these technologies can therefore not only endanger the public’s anonymity and privacy but also lead to false accusations based on racially biased algorithms. By perpetuating racially biased policing and optimizing for traditional metrics of police success, these technologies exacerbate two of the main problems with policing today: (1) different communities are policed differently, and (2) the very goals of the police, and of the criminal justice system more broadly, are misguided.
Digital policing technologies not only mimic existing human police bias; they are also less accountable, less transparent, and more easily scaled than human officers. This should be a major concern given that a culture of impunity and secrecy is what fuels racial bias in policing in the first place. The “black-box” nature of facial recognition (and other deep learning systems) means they are difficult to audit and the basis of their decisions is unclear. It is also unclear who is responsible when a facial recognition system fails: the company making the software, the person using it, the police chief, or someone else?
By contrast, humans are more easily held accountable for their actions (though accountability is still woefully lacking). At least in today’s political environment, an officer is more likely to be fired for improper or racially charged policing than an algorithm is to be replaced or altered.
Ultimately, the way to reduce police budgets without precipitating a rush toward flawed digital policing is to fundamentally rethink the metrics of public safety. Arrest numbers and fines issued are outdated measures of a community’s wellbeing. Instead of focusing on a narrow and punitive definition of public safety, police success should be measured by a more holistic set of metrics on the health of a community. Determining such measures is best left to individual communities, but they might include the percentage of children who attend school regularly, rates of depression and mental health trauma, recidivism rates, public perception of the police, and others.
Of course, simply changing the metrics of police success is not enough to overhaul American policing – such an initiative must be supported by a rethought approach to prisons, sentencing guidelines, and the rest of our flawed justice system. Dr. Benjamin goes on to note in her interview the contrast between medical professionals cutting up garbage bags for protection while police stand armed to the teeth – just one indication of how badly America needs to reprioritize its values.
Kamran Kara-Pabani (@KaraPabani) is a Duke junior double-majoring in computer science and political science. He is the Co-President of Ethical Tech; a member of the American Grand Strategy Council; and a Cyber Policy Researcher at the Sanford School of Public Policy under Professor David Hoffman. He is the 2020-2021 Communications Director for the Duke Cyber Team.