Microsoft (MSFT) sent one of its top executives to Washington with an unusual request: to ask the government to regulate the use of the company’s automated facial-recognition software. In fact, Microsoft president Brad Smith went beyond past calls for a conversation about appropriate rules and suggested a new law should be put in place now.
“The world needs to have confidence that this technology will be used well,” Smith said in a speech at the Brookings Institution Thursday afternoon. “Then we’ll be able to innovate in ways that benefit society.”
But if nothing happens and facial recognition remains a rules-free space, commercial pressures will push companies to “choose between being societally responsible and gaining market share,” Smith said. In other words, a race to the bottom.
Rules for recognition
Beyond eroding privacy, facial-recognition technologies, if left unchecked, could allow governments to crack down on dissidents, serve as a form of mass surveillance and, when inaccurate, wrongly identify innocent citizens as criminals.
In the speech and in a post on Microsoft’s corporate blog, Smith outlined a few key planks of Microsoft’s desired regulatory framework over automated systems that identify people via their facial characteristics.
Transparency constitutes the first big plank. Companies offering facial-recognition services should clearly document how they work and when they can fall short. And they should allow independent testing of the accuracy of these systems by third parties via web interfaces.
Microsoft also wants to stop facial recognition from reinforcing existing discriminatory patterns. It would require companies to provide for human review of decisions that would affect a person’s privacy or freedom or subject them to possible harm and ban the use of facial-recognition services for illegal discrimination.
The idea of adequate notice plays another big part. Places and online services using facial-recognition systems to identify customers must post conspicuous notices warning people about that use. In turn, customers continuing through to those places or services after seeing those notices would be regarded as consenting to their facial-recognition systems.
Finally, the government must get a court order to use facial recognition for any ongoing surveillance of people in public, unless death or serious injury is about to result.
Now is the time
Smith emphasized that while facial recognition has been a topic of conversation since the 1960s, advances in the deployment of connected cameras and cloud-based software to analyze their feeds have been rapidly turning it into a reality.
He accentuated the positive by noting such ventures as the New Delhi police’s use of this technology to trace 3,000 missing children, Delta’s (DAL) recent rollout of facial-recognition authentication throughout the international terminal at Atlanta’s Hartsfield-Jackson International Airport and the Microsoft Seeing AI app’s spoken descriptions of whatever’s in front of your phone’s camera (a frequent source of amusement to our eight-year-old).
“The facial-recognition genie is just beginning to leave the bottle,” Smith said, calling this “a time for action.”
But, he added, some companies and countries in this space place less value on ethics and transparency.
“We’ve turned down deals because we didn’t believe that the technology would be used well,” he said. “We’ve turned down deals because we worried that the technology would be used to put people’s rights at risk.”
Smith didn’t name any of these problematic actors, although since his statement came in response to a question about China’s use of facial recognition, it didn’t leave much to the imagination: “There are definitely countries where we are and will not be comfortable providing our artificial-intelligence technology to governments.”
Smith’s call for allowing systemic and independent audits of facial-recognition services addresses a real problem, one that often seems to get worse when this technology is used on non-white populations.
The American Civil Liberties Union dramatized that this summer when it tested Amazon’s (AMZN) Rekognition service on photos of members of Congress and saw 28 representatives, disproportionately people of color, incorrectly matched with mugshots of criminal suspects.
A report released hours before Smith’s speech Thursday by New York University’s AI Now Institute included a similar endorsement of requiring that facial-recognition services allow third-party auditing.
But the prospect of public shaming by outside researchers may not motivate a company to tweak a profitable service. Nicol Turner-Lee, a fellow with Brookings who watched Smith’s talk, offered a pithy summary of her expectations that these audits would compel meaningful improvement: “None at all!”
The AI Now report noted how organizational and economic structures can discourage progress and endorsed instituting “protections for conscientious objectors, employee organizing and ethical whistleblowers.”
Microsoft’s proposal is silent on those factors. It also doesn’t address such questions as how long a company or organization should keep facial data after using it for a match, when people can compel the deletion of that data and whether they should be able to refuse particular uses of facial data.
There’s also the lingering problem that Congress has been particularly apathetic about addressing privacy. But Smith said getting a sufficiently important state or even city to pass facial-recognition regulations would represent an adequate start.
“It’s a decade that’s proven generally difficult to get things passed,” he said of Washington. “It may happen in a state capital or even a municipality, and that’s okay.”