(Photo: Phil Romans/Flickr)
Soon, possibly even later today, you will read a story about some terrifying thing a big-name corporation is doing with your devices or your data. The story will sound convincing, name-check the right jargon, and even link to some intriguing evidence. You will be appropriately terrified.
Remember when we found out Samsung TVs were recording all our living-room conversations, Microsoft’s Windows 10 was sharing everyone’s Wi-Fi passwords, and Facebook would make commercial use of our data unless we posted the correct privacy notice to our profiles?
Do you also remember when all of these claims fell apart a few days later, once people returned to their senses?
Techno-panic stories nearly always splinter on closer inspection. You can’t stop them from running across the Internet, but you can avoid having them fill you with pointless anxiety. The cure is almost always a liberal application of critical judgment.
Don’t believe everything you read, No. 347 in a series
The problem usually starts when different sites try to jump on a tantalizing report of some massive privacy violation or security risk. That’s when the tech-news universe becomes a giant game of Telephone.
If the first post to break the news uses conditional verbs like “may” or “could” to convey uncertainty, you can bet those qualifiers will become unconditional declarations in the fifth or tenth iteration of the story. Subtlety is almost always the first casualty of the race to rewrite.
The solution here is usually easy: Follow the links in the story to the original source, so you can measure how solid the claim probably is. (Assuming, of course, that the news source you’re reading includes those links; many do not.)
And that’s the other problem: The Internet rewards speed above all else. Getting the story right usually involves independent reporting, which requires more time. So avoiding panic also means withholding judgment until all the facts are in.
By then, of course, the poorly reported and largely inaccurate rewrite has hit Facebook and been shared by all of your friends. I like Facebook, but I have never seen a more efficient mechanism for spreading urban myths.
Do believe the documentation you read
Or you could just skip the rushed rewrites on Facebook and read the documentation. That’s where the Windows 10 Wi-Fi-sharing scare, as seen under headlines like “Windows 10 will share your Wi-Fi key with your friends’ friends,” fell apart—because by default Windows 10 does no such thing.
Microsoft’s explanation is clear enough about the layers of permission required before you can share your Wi-Fi login. (You must select a network and re-type its password to share it with friends, and Facebook sharing requires a second confirmation.) But anybody could verify this for themselves by simply trying it out.
Samsung’s manual also stated pretty clearly that the microphone on its smart TVs stays off until you activate it yourself. A security researcher later confirmed that—although he did find that voice commands captured while the microphone was active were sent unencrypted to Nuance, the company that provides Samsung’s voice-recognition technology.
Horrifying? Depends. You know what else goes over the Web unencrypted? The vast majority of the news stories you read, which can reveal far more about your tastes and opinions than voice commands to a TV. (This is one of those bits of context about digital life that I wish more people realized.)
Legalese and other foreign languages
You’d think that after the last 350 rounds of privacy-scare stories started with badly written “ToS” and privacy-policy documents, “how to write terms of service without looking like a jerk” would be first-year material in law schools. Apparently, it’s not.
That means the safest course for a company’s lawyers is to preserve its freedom of action by reserving as many rights over your data as possible. That strikes me as an even more likely outcome with startups that must allow for the possibility of a pivot—dot-com-speak for “our original idea was a loser, so we’ll try its technology in a new market.”
In some cases, that pivot may involve selling the personal information the original startup once vowed to protect.
Mistrust never sleeps
All that said, let’s not throw a pity party for the giant multinational corporations that find their good names besmirched in episodes like this. These stories wouldn’t take hold if the companies involved didn’t already have trust issues.
Take Facebook, for example. I have followed this company closely—its first chief privacy officer was a friend of mine from college—and think it means well. But it’s also tried to expand its reach into everything from e-mail to digital currency, while for years it showed little hesitation about changing privacy settings retroactively to make once-private data more public.
(That move got the Federal Trade Commission interested, resulting in a settlement in which Facebook agreed to make future privacy changes opt-in, with 20 years of third-party audits to verify its compliance.)
The Facebook I see today seems more considerate and is readier to defend its users’ rights against government curiosity. It’s even spending its own money to help fix defects in other people’s apps—last week, it awarded $100,000 to developers working on a fix for a flaw that had left Mozilla Firefox and other software vulnerable. The point: to make the Internet as a whole more secure, so that we’ll feel comfortable spending more time on it.
And yet I can already see many of you chuckling bitterly at the thought of Facebook as a force for good. It may take a long time for Facebook to overcome that reputation—and meanwhile, bogus but viral stories will continue to distract us from widespread, dangerous mistakes, such as the Heartbleed Web-encryption flaw or grotesquely insecure voting machines, that don’t provide a big-name company as an obvious villain.