Microsoft argues facial-recognition tech could violate your rights

Microsoft president Brad Smith says facial-recognition technology needs to be regulated.

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members of Congress were falsely matched with people in the mugshot database.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon. The nuances are complex, while Congress remains as reluctant as ever to regulate privacy. We may find ourselves stuck struggling to agree on norms well after the technology has redefined everything from policing to marketing.

My face is my passport

The underlying problem is simple: the proliferation of connected cameras, databases of facial images and software linking the two has made this technique not just cheap but increasingly unavoidable.

Or as Nicol Turner-Lee, a fellow at the Brookings Institution in Washington, put it: “We live in an economy of images.”

Of course, this can be good. Encrypted, on-device facial-recognition systems such as Apple’s (AAPL) Face ID and Microsoft’s Windows Hello let you sign into a phone or laptop easily without putting your facial characteristics in a cloud-hosted database.

Or facial recognition may come as a choice subject to your own calculus of convenience versus privacy. If you value streamlining air travel enough, you can join such experiments in biometric identification as a test of facial-recognition boarding at LAX.

Or it can be beyond your control. Maybe your state shares its driver’s-license database with other government agencies—a 2016 study by the Georgetown Law Center’s Center on Privacy and Technology found that at least 26 states had opened those databases to police searches, covering the faces of more than 117 million American adults. Or the passport-control checkpoint at an international border—where you already have minimal rights—may demand you look into an automated camera.

What rules should we have?

The problem is that you can’t know when a camera and its software identify you, whether their recognition algorithms are accurate, or whether the underlying databases are secure.

Facial recognition is often done clandestinely. Its accuracy is iffy, especially for non-white populations: some 39% of the false matches in the ACLU test involved legislators of color, who account for only 20% of Congress. What’s more, companies can’t seem to stop data breaches.
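The disparity behind those two percentages can be checked with simple arithmetic. (The count of 11 false matches involving legislators of color is an assumption derived from the reported 39% share of 28 total false matches; it is not stated in this article.)

```python
false_matches = 28     # total false matches in the ACLU's Rekognition test
matches_of_color = 11  # assumed: ~39% of 28 false matches, per the reported share
congress_size = 535    # House plus Senate

members_of_color = round(0.20 * congress_size)  # ~20% of Congress, per the article

share_of_matches = matches_of_color / false_matches  # share of errors hitting legislators of color
baseline_share = members_of_color / congress_size    # their share of Congress overall
disparity = share_of_matches / baseline_share        # how over-represented they are in the errors

print(f"{share_of_matches:.0%} of false matches vs {baseline_share:.0%} baseline "
      f"(≈{disparity:.1f}x over-representation)")
# → 39% of false matches vs 20% baseline (≈2.0x over-representation)
```

In other words, under these assumed counts, legislators of color were falsely matched at roughly twice the rate their numbers alone would predict.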

As Microsoft’s Smith noted, those conditions usually lead to government intervention. What should that look like?

Turner-Lee and Michigan State University professor Anil K. Jain agreed on some basic principles.

  • Neither a police department nor a private company should use facial-recognition software to put names to random faces passing by. As Jain said, “comparing you to social media is a no-no.”

  • Using the same software to look for specific people could be okay. But a police department should need some level of judicial permission, while a company should get your opt-in.

  • That commercial opt-in shouldn’t be a blanket approval. Both Turner-Lee and Jain, for instance, agreed that Facebook (FB) should let users separately choose whether the social network uses facial recognition to spot fake accounts and whether it uses the technology to find you in friends’ photos.

(In a December corporate blog post, Facebook deputy chief privacy officer Rob Sherman wrote that “most people would find it easier to manage one master setting.”)

  • You should see some notice that you’re in an area where facial-recognition technology can be used, although whether you’d spot it in a distraction-filled business such as a casino is another matter.

But those principles still allow for enormous flexibility. For instance, does a government or company have to identify what databases it uses in its facial-recognition regime? An announcement last week from Gov. Andrew Cuomo (D-N.Y.) of facial-recognition scanning at some New York bridges and tunnels said nothing about that.

And how long should a camera feed be kept around? Some police departments already set surprisingly short limits for footage not classified as crime evidence: 30 days in New York, 10 business days in Washington. And for now, processing power limits long-term archiving; as Jain said, “the cost of storing is not that much, but the cost of searching could become prohibitive.” But history suggests those limits will keep expanding.

And who will write them?

Because the consequences of a false positive can be so much higher in a law-enforcement context, we may see rules for government use first.

That Georgetown Law Center study, for instance, proposed model legislation that would, among other things, require arrest-photo databases to remove images of people found innocent, and demand court approval for police queries of driver’s-license databases or real-time searches of a camera feed.

That proposed bill, however, doesn’t cover commercial facial recognition.

Individual cases brought by the Federal Trade Commission could help set some norms, Turner-Lee said—but she doesn’t expect comprehensive rules.

“I don’t think we’re anywhere near any type of regulatory or legal framework,” she concluded.

And considering how ineffectual Congress has been on privacy issues, even a bill covering just police use might be too difficult.

Meanwhile, the cameras will keep getting plugged in. Jain pointed to China as one example of what technology left unchecked could make possible.

“They say by 2020, there will be 600 million surveillance cameras in China,” he said. “You will be covered no matter where you go.”


Email Rob at rob@robpegoraro.com; follow him on Twitter at @robpegoraro.
