(Bloomberg Opinion) -- When people talk about defunding the police, they usually mean reallocating money toward programs that will better achieve justice, racial and otherwise. But how can governments figure out where to put the resources and energy that are now being misused in punitive and self-defeating ways?
I have an idea: Let’s repurpose the same algorithms that authorities have used to profile potential criminals.
Crime-risk scoring algorithms -- such as LSI-R, developed in the 1980s -- are supposed to remove some of the human subjectivity from high-stakes decisions, such as how harshly to sentence a person and whether to grant parole or bail. They typically compare a long list of attributes to a database of previous offenders to predict the chance that a person will be re-arrested within two years of leaving custody.
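The comparison-to-prior-offenders logic can be sketched in a few lines. This is a toy illustration only: the attributes, the database, and the nearest-neighbor matching are all invented here, and real tools such as LSI-R use dozens of proprietary items and their own weightings.

```python
# Toy sketch: estimate a person's "risk" as the re-arrest rate among the
# prior offenders in a database whose attributes most resemble theirs.
# All data is synthetic and illustrative.

def similarity(a, b):
    """Count how many attributes two profiles share."""
    return sum(1 for key in a if a[key] == b[key])

def risk_estimate(person, database, k=3):
    """Re-arrest rate among the k most similar prior offenders."""
    nearest = sorted(database, key=lambda r: -similarity(person, r["attrs"]))[:k]
    return sum(r["rearrested"] for r in nearest) / k

# Hypothetical database of prior offenders.
database = [
    {"attrs": {"employed": 0, "diploma": 0}, "rearrested": 1},
    {"attrs": {"employed": 1, "diploma": 1}, "rearrested": 0},
    {"attrs": {"employed": 0, "diploma": 1}, "rearrested": 1},
    {"attrs": {"employed": 1, "diploma": 0}, "rearrested": 0},
]

print(risk_estimate({"employed": 0, "diploma": 0}, database, k=3))
```

Even this toy version makes the core problem visible: the score is only as fair as the attributes chosen and the history baked into the database.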
They’re demonstrably unfit for purpose. A 2016 statistical investigation by ProPublica found that COMPAS, a widely used crime-risk algorithm, was biased against Black men: They were, for example, twice as likely as white men to be given a high risk score but not get re-arrested. This means that thousands of people, predominantly Black, have been sentenced to longer prison terms, denied parole, and kept incarcerated before trial, on the incorrect assumption that they posed a heightened risk to public safety.
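The kind of check ProPublica ran amounts to comparing false positive rates -- people flagged high-risk who were never re-arrested -- across groups. A minimal sketch, on synthetic records (this is not the COMPAS data):

```python
# Compare false positive rates across groups on synthetic, illustrative data.

def false_positive_rate(records):
    """Share of people NOT re-arrested who were nonetheless labeled high risk."""
    not_rearrested = [r for r in records if not r["rearrested"]]
    flagged = [r for r in not_rearrested if r["high_risk"]]
    return len(flagged) / len(not_rearrested)

# Hypothetical records: group label, score outcome, actual outcome.
records = [
    {"group": "A", "high_risk": True,  "rearrested": False},
    {"group": "A", "high_risk": True,  "rearrested": True},
    {"group": "A", "high_risk": False, "rearrested": False},
    {"group": "B", "high_risk": False, "rearrested": False},
    {"group": "B", "high_risk": True,  "rearrested": True},
    {"group": "B", "high_risk": False, "rearrested": False},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(subset))  # A: 0.5, B: 0.0
```

When that rate is twice as high for one group as another, the algorithm is making its mistakes unequally -- which is exactly what ProPublica reported.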
That said, there’s one thing the algorithms are good for: providing a sense of the vast and complex problems that society has pushed onto the police and the criminal justice system. Consider some of the questions that, in most such tools, contribute to a person’s crime-risk score:
Do you have, or have you ever had, a mental health problem? Do you have an addiction problem with drugs or alcohol? Did you finish high school? Did you have a job? Did you live in a “high crime neighborhood” or have “gang friends”? Did your father go to prison?
Many of these are completely out of a person’s control, and the rest have at least something to do with the environment in which a person grows up. Black people, for example, are more likely to live in high-crime, low-opportunity neighborhoods because a litany of factors put them there — including overtly racist federal lending policies and exclusionary zoning rules. As a result, they are less likely to graduate from high school or be employed, and more likely to have all kinds of problems.
In other words, the algorithms implicitly recognize deep and long-running racial injustices. But they are used to criminalize the victims.
It doesn’t take a mathematics Ph.D. to realize we could be doing this differently. Let’s turn the tool around, and use it to set priorities. Which elements contribute the most to a person’s crime risk score? Neighborhood? Education? Mental health? Taken together, what share of the risk score do they determine? This could serve as a starting point for deciding how much of our enormous law enforcement and incarceration budgets should be diverted to addressing these issues more directly — most likely in combination, given how they are all linked. Instead of using data to profile people for punishment, we could employ them to help people realize their potential, and to break intergenerational feedback loops of imprisonment.
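"Turning the tool around" could be as simple as aggregating, across a population, how much each input drives the scores rather than scoring individuals one at a time. A sketch, assuming a linear score with invented weights and data (real tools have their own items and weightings):

```python
# Aggregate each factor's contribution to the population's total risk score.
# Weights and records are hypothetical, for illustration only.

WEIGHTS = {
    "no_diploma": 2.0,
    "unemployed": 1.5,
    "mental_health": 1.0,
    "high_crime_neighborhood": 2.5,
}

people = [
    {"no_diploma": 1, "unemployed": 1, "mental_health": 0, "high_crime_neighborhood": 1},
    {"no_diploma": 0, "unemployed": 1, "mental_health": 1, "high_crime_neighborhood": 1},
    {"no_diploma": 1, "unemployed": 0, "mental_health": 0, "high_crime_neighborhood": 0},
]

# Total contribution of each factor, summed over everyone.
totals = {item: sum(p[item] * w for p in people) for item, w in WEIGHTS.items()}
grand_total = sum(totals.values())

for item, contribution in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {contribution / grand_total:.0%} of total score")
```

A ranking like this -- neighborhood first, education second, and so on -- is the kind of starting point the column describes for deciding where diverted budgets would do the most good.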
The algorithms aren’t evil in themselves. They merely reflect how Americans have chosen to contend with their failures. We must do better.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Cathy O’Neil is a Bloomberg Opinion columnist. She is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”
©2020 Bloomberg L.P.