
Yes, Government Should Regulate Automated Decision-Making

Cathy O'Neil

(Bloomberg Opinion) -- A backlash against big tech has sent lawmakers all over the world scrambling for ways to restrain the influence of computers over daily life. Now, congressional Democrats are offering up an Algorithmic Accountability Act of 2019, an expansive and ambitious new take on how to regulate automated decision-making. Whether or not it becomes law, it’s a necessary effort to reassert human control as opaque algorithms take over bureaucratic processes.

Algorithms are being used everywhere: in credit decisions, mortgages, insurance rates, who gets a job, which kids get into college, and how long criminal defendants go to prison, to name a few proliferating examples. Messy, complicated human decisions are being made, typically without an explanation or a chance to appeal, by artificial intelligence systems. These systems provide efficiency, profitability, and, often, a sense of scientific precision and authority.

The problem is that this authority has been bestowed too hastily. Algorithms are increasingly found to be making mistakes. Whether it’s a sexist hiring algorithm developed by Amazon, conspiracy theories promoted by the Google search engine or an IBM facial-recognition program that didn’t work nearly as well on black women as on white men, we’ve seen that large companies that pride themselves on their technical prowess are having trouble navigating this terrain.

And if that’s what we know about, imagine what we don’t. Most of the critically important algorithms in use have not been opened up for scrutiny, in large part because of laws protecting intellectual property.

The Democratic bill, introduced in the Senate and House of Representatives last week, would give the Federal Trade Commission power to require and monitor procedures by big companies to keep track of their algorithms and audit them for fairness and accuracy. It would apply only to companies with at least $50 million in annual revenue and would pertain even if intellectual property rights are involved, although it looks like the companies would have leeway in terms of whether they make the audits publicly available.

The idea is that obvious mistakes, or indeed subtle detours around existing anti-discrimination law, should be caught before they’re embedded in computer programs for deployment. (To its credit, Amazon.com Inc. didn’t use its sexist hiring algorithm.) Instead of assuming the best, in other words, companies would be required to provide evidence to the FTC that they follow relevant laws against discrimination. The inquiries would be done via third-party auditors. (Disclosure: I run an algorithmic auditing company.) Companies would need to demonstrate that new algorithms are fair and accurate before being allowed to use them. This would be a huge step forward in accountability, and it is far from the case today.

Whatever the legislative fate of the Democratic bill, it’s an indication of what is to come. Evidence is mounting for the idea that algorithms should be subjected to public policy tests, and political will is gaining momentum. Even Facebook chief executive Mark Zuckerberg is asking for federal regulation. The alternative is to let black-box algorithms control people’s lives and subvert the popular will.

To contact the author of this story: Cathy O'Neil at coneil19@bloomberg.net

To contact the editor responsible for this story: Jonathan Landman at jlandman4@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cathy O’Neil is a Bloomberg Opinion columnist. She is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”

For more articles like this, please visit us at bloomberg.com/opinion

©2019 Bloomberg L.P.