Policing AI: Is it a task for government, industry, consumers or all of the above?

Social-media entrepreneur Sean Langhi poses a question for panelists during a discussion of AI bias. From left are University of Washington law professor Ryan Calo, ChatMode’s Chad Oda, EqualAI’s Miriam Vogel, Microsoft’s Navrina Singh and LivePerson’s Alex Spinelli. (GeekWire Photo / Alan Boyle)

It may not yet be clear how societies will guard against the potential downside of artificial intelligence — including algorithmic bias, invasions of privacy and unjustified profiling — but it’s already abundantly clear that safeguards are needed.

That’s the bottom line from Wednesday night’s panel discussion on AI bias, presented in Seattle by EqualAI and LivePerson.

Both of the panel’s presenters have a stake in figuring out how to address AI’s downsides: LivePerson is interested in how chatbots and other AI-enabled tools can smooth interactions between companies and the customers they serve, while EqualAI is an initiative supported by the likes of Arianna Huffington, Wikipedia’s Jimmy Wales and LivePerson CEO Robert Locascio to reduce AI bias.

“Companies are creating AI to change the world,” said EqualAI executive director Miriam Vogel, who focused on equal-pay issues and bias training for law enforcement during her time at the Obama White House and the Justice Department.

“They’re trying to do good, they’re trying to reach people who have not been reached, start conversations that haven’t happened otherwise — knowing that [implicit bias] is not necessarily coming from a malicious act. It’s coming from human actions,” she said. “So it’s about starting the conversation from a place of understanding and respect for the mission.”

There are plenty of examples showing where AI can go wrong.

Mindful of the potential abuses, Microsoft set up a high-level internal group called the Aether Committee (where “Aether” stands for AI and Ethics in Engineering and Research) to decide how its AI software should or shouldn’t be used.

Just this week, Microsoft President Brad Smith provided further details about the workings of the Aether Committee. He said Microsoft turned down a California law enforcement agency’s request to use its facial-recognition software to check people pulled over for traffic stops, as well as a request from a foreign country to use the software with surveillance cameras in its capital city.

Navrina Singh, principal/director project manager for Microsoft AI, said the Aether review system is “just a great example of what companies can do.”

Vogel agreed that AI companies can do themselves a favor by policing themselves.

“There’s actually enough evidence now that it’s a business case if you are not thinking through implicit bias,” she said. “If you are baking into your AI, unconsciously, this implicit bias, you could be alienating potential customers — and you could be excluding potential consumers from being able to buy and use your products.”

But she’s also hearing tech executives say that governments will need to regulate AI.

“I find that interesting, because that’s not a talking point that I’ve heard technology companies say at any other point in history,” Vogel said. “I think there’s a lot of merit in that statement, but I also think it’s a little too easy to say right now — knowing the other recurrent theme I’m hearing, which is that D.C. is unable to have this conversation.”

Alex Spinelli, LivePerson’s chief technology officer, said safeguards against AI bias have to cover the algorithms used for data analysis, as well as the data that’s used to train those algorithms.

“I think the gold-standard goal would be, if you’re building a conversation, you need to train that on data that’s representative of the population that your conversation will serve. I think that’s obvious. It’s not easy.”

Data privacy is another bugaboo for AI, and University of Washington law professor Ryan Calo said the privacy debate is sparking multiple calls for tighter regulation.

“I do worry a little bit about the ‘p-word,’ which is ‘pre-emption,’ ” Calo said. “If too many states, like California, pass too many laws that are onerous for industry, sometimes what you see is this groundswell of support for federal legislation. But one of the goals of that legislation is to pre-empt all the things that the innovative states are doing.”

Spinelli said consumers and voters should get themselves up to speed about what’s at stake in the debate over privacy, AI and other issues on the data frontier.

“I would say, ‘inform yourself’ would be the first thing,” he said.
