We all talk back to our TVs. Most of the time, they don’t bother to listen.
It would be terrible, actually, if our TVs really were tuned in to us, listening to what we said and uploading our conversations to the Internet.
And that’s just what you probably think they do, if you’ve read the breathless coverage concerning Samsung TVs’ voice-recognition features and the privacy policies governing them.
Neither the technology nor the policy is new. But on Sunday, Electronic Frontier Foundation copyright-activism director Parker Higgins tweeted about the uncanny resemblance of a line in the policy to a description of the “telescreen” in George Orwell’s Nineteen Eighty-Four. The contract clause: “If your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party.”
Boom, viral story. See: “Watch out: Samsung’s TV is watching YOU as you watch it.”
First, Press The “Voice” Button
But most of this rewrite coverage has neglected how the feature in question works — a topic Higgins’ tweet didn’t cover. And as he wrote in an email Monday: “I haven’t been able to take a look at one of the TVs in question.”
I haven’t either, beyond brief inspections at CES and other tech events. But Samsung’s documentation and promotional videos indicate that the microphone on the TV stays off until you command it to listen. You do that by pressing a large “VOICE” button on the remote and waiting to see a microphone icon appear on the screen before speaking your query.
Then the TV’s software uploads your speech for processing and displays the results on its screen.
Samsung at CES. (Photo: Rob Pegoraro)
A Samsung spokesperson confirmed that speech recognition only happens when the microphone icon appears on the screen.
The rep didn’t name the third party who does the voice recognition, but a BBC post has Samsung identifying the firm as Nuance, in Burlington, Mass. In 2012, that company took credit for bringing this feature to Samsung sets, and it has provided similar services for the voice-recognition features in LG smart TVs, Ford’s Sync and even Apple’s Siri.
How Other Gadgets Do This
Now, it’s possible that Samsung is engaged in a monstrous, diabolical conspiracy to archive our living-room banter — a massive privacy violation that would be instantly caught the moment somebody plugged a Samsung TV into a packet sniffer to monitor its transmissions. That hasn’t happened yet.
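That check is not exotic. Given per-second upload byte counts for the TV — say, exported from a tcpdump or Wireshark capture — a few lines of code can distinguish an always-on audio stream from occasional button-triggered queries. This is a sketch with made-up numbers, not measurements from any actual Samsung set:

```python
# A hypothetical sanity check on sniffer output: given per-second upload
# byte counts for the TV, decide whether the traffic looks like constant
# audio streaming or short on-demand bursts.

def looks_like_constant_stream(bytes_per_second, threshold=4_000):
    """Return True if the device uploads more than `threshold` bytes in
    nearly every sampled second -- the signature of an always-on audio
    stream rather than occasional button-triggered queries."""
    busy = sum(1 for b in bytes_per_second if b > threshold)
    return busy / len(bytes_per_second) > 0.9

# On-demand behavior: silence except for two brief voice queries.
on_demand = [0] * 50 + [42_000] * 3 + [0] * 40 + [39_000] * 2 + [0] * 25
# Always-on behavior: a steady ~32 kB/s of uploaded audio.
always_on = [32_000] * 120

print(looks_like_constant_stream(on_demand))   # False
print(looks_like_constant_stream(always_on))   # True
```

The threshold and the 90 percent cutoff are arbitrary illustrative choices; the point is that continuous upload is trivially visible on the wire.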
Other speech-recognition systems work in the same on-demand manner, although the companies behind them do a better job of explaining how they work.
The Amazon Echo, for instance, doesn’t go online until you address it by its “wake word”, and then it “streams audio to the Cloud, including a fraction of a second of audio before the wake word.”
In Google’s Android, the same basic rule applies: Nothing gets sent to a speech-recognition server until you ask the phone to listen, either by tapping a microphone icon or saying “OK Google.”
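The design Amazon describes — keep a brief rolling buffer locally, and upload only once the wake word is heard, pre-roll included — can be sketched as a toy model. The class and names here are invented for illustration, not anyone’s actual implementation:

```python
from collections import deque

class WakeWordGate:
    """Toy model of on-demand voice capture: recent audio frames sit in
    a short local ring buffer, and nothing is released for upload until
    the wake word is detected (plus a little pre-roll, as Amazon
    describes for the Echo)."""

    def __init__(self, preroll_frames=2):
        self.buffer = deque(maxlen=preroll_frames)  # local-only ring buffer
        self.listening = False
        self.uploaded = []                          # stands in for the cloud

    def hear(self, frame, wake_word_detected=False):
        if wake_word_detected:
            self.listening = True
            self.uploaded.extend(self.buffer)  # ship the brief pre-roll
            self.buffer.clear()
        if self.listening:
            self.uploaded.append(frame)
        else:
            self.buffer.append(frame)  # older frames get overwritten

gate = WakeWordGate()
for f in ["chatter-1", "chatter-2", "chatter-3"]:
    gate.hear(f)  # idle chatter: cycles through the local buffer
gate.hear("alexa", wake_word_detected=True)
gate.hear("what's the weather")
print(gate.uploaded)  # chatter-1 was overwritten and never left the device
```

Everything said before the wake word, minus a couple of buffered frames, is continuously discarded on the device — which is exactly the property a packet capture could confirm or refute.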
You still have the issue of a database being built from your speech, but there are ways to address that concern. Jules Polonetsky, executive director of the Future of Privacy Forum, noted that Amazon lets Echo owners delete some or all of their voice recordings, while Apple’s Siri automatically anonymizes clips.
There are sound engineering rationales, to say nothing of the privacy issues, for keeping network-based voice recognition systems strictly off by default. Having a mobile device constantly uploading your speech would kill its battery life, and having any system constantly stream, record, and analyze voice would require enormous storage, plus software capable of sifting through all that junk data.
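A quick back-of-the-envelope calculation shows the scale. Assuming telephone-grade capture — 16 kHz, 16-bit mono, my assumption rather than any vendor’s spec — an always-on microphone would push out:

```python
# Back-of-the-envelope cost of always-on audio upload, assuming
# 16 kHz, 16-bit mono capture (an illustrative assumption, not a spec).
sample_rate_hz = 16_000
bytes_per_sample = 2
seconds_per_day = 86_400

bytes_per_second = sample_rate_hz * bytes_per_sample       # 32,000 B/s
gb_per_day = bytes_per_second * seconds_per_day / 1e9      # ~2.8 GB/day

print(f"{bytes_per_second:,} bytes/s, ~{gb_per_day:.1f} GB per device per day")
```

Multiply roughly 2.8 GB a day by millions of TVs, almost all of it silence and background noise, and the business case for constant surveillance evaporates on bandwidth and storage costs alone.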
And, yeah, it would also get companies sued into oblivion.
Tech Companies, You Are Terrible at Privacy Policies
Samsung is arguably only ripping off other tech companies with this move. Both established firms and startups make the mistake of having lawyers write privacy policies that only other lawyers can parse. Such verbiage is liable to get read in the worst possible light, especially when journalists are rushing to crank out rewrites of somebody else’s privacy-scare scoop.
Then Samsung dug itself deeper with that strange but true caution about “personal or other sensitive information” going to the cloud.
But you could put the same text into the privacy policies for Siri, Google Now, and the Amazon Echo. Because if you start talking about your bank balance or your weird rash while they’re listening, that confession’s going to the cloud too.
Samsung and the rest of the tech industry would do their customers and themselves a huge favor if they focused first on answering three questions before they got all lawyerly in their terms of service statements. Those questions: What data do we collect? Where does it go? How long does it stay there?
You don’t need a 4,000-word document to answer. A flow chart, a Venn diagram, or even words written by real humans would do the trick. And with those data flows clearly outlined, any half-decent network engineer could prove or debunk the company’s claims.
Instead, we’re left guessing. As Higgins told me: “It’s a black box, and people are obviously freaked out from a privacy perspective.”