Facebook didn’t get the memo about fake news. Of course it didn’t.

Facebook CEO Mark Zuckerberg in his trademark gray tee. (Photo: Getty)

It shouldn’t be news that fake news is a problem on Facebook and other social networks: We’ve had decades of practice with hoaxes, urban legends and other fraudulent “facts” floating around the Internet.

It’s been 21 years since Snopes.com founder David Mikkelson began busting myths online, seven years since FactCheck.org had to debunk fake reports about Snopes’ funding, and two and a half years since the Washington Post launched a “What was fake on the Internet this week” blog… which it abandoned in frustration just over a year ago.

But this election season saw fake news run amok. A BuzzFeed analysis found that over the last three months of the campaign, the 20 most-shared fake election stories generated slightly more Facebook engagement (shares, reactions and comments) than the top 20 election stories from legitimate news sites.

Commence the blamestorming

Now some people are wondering whether Facebook’s role as a vector for fake news played a part in Donald J. Trump’s shocking Election Night win.

Gizmodo’s recent report that Facebook (FB) quashed a News Feed update to call out hoaxes because it would have nailed too many conservative sites — something Facebook denies — amped up that angst. So did the Washington Post interview with fake-news entrepreneur Paul Horner featuring this quote: “I think Trump is in the White House because of me.”

Facebook has responded with a stages-of-denial sequence: first saying the problem is small and couldn’t have made much of a difference, then taking steps to address it.

Last Thursday, founder Mark Zuckerberg called the fake-news issue “small” and inconsequential, then expanded on his thoughts in a Facebook post.

“Of all the content on Facebook, more than 99% of what people see is authentic,” Zuckerberg said. “The hoaxes that do exist are not limited to one partisan view, or even to politics.”

He added that “we don’t want any hoaxes on Facebook” and said the company is helping users to flag fake content.

A few days later, Facebook (along with Google) said it would ban fake-news sites from using its advertising systems. It’s unclear how they will decide which sites deserve this exile.

Friday night, Zuckerberg posted another note, saying that Facebook was working on automatic classification and third-party verification of stories and was “exploring” adding warning labels for fake news.
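It’s not hard to imagine what the simplest version of that labeling could look like. Here’s a minimal sketch in Python, assuming a hypothetical list of hoax domains maintained by outside fact-checkers and a hypothetical count of user flags; none of these names reflect Facebook’s actual systems.

```python
# Minimal sketch of a warning-label check. KNOWN_HOAX_DOMAINS stands in
# for a hypothetical third-party fact-checker list; the flag threshold
# is likewise illustrative, not anything Facebook has disclosed.
from typing import Optional
from urllib.parse import urlparse

KNOWN_HOAX_DOMAINS = {"endingthefed.example", "hoax-news.example"}

def warning_label(url: str, user_flags: int, threshold: int = 50) -> Optional[str]:
    """Return a label to show next to a shared link, or None."""
    domain = urlparse(url).netloc.lower()
    if domain in KNOWN_HOAX_DOMAINS:
        return "Disputed by third-party fact-checkers"
    if user_flags >= threshold:
        return "Reported as false by many readers"
    return None

# A link from a listed domain gets labeled even with few flags.
print(warning_label("http://endingthefed.example/story", user_flags=3))
```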

Zuckerberg is probably right that fake news didn’t sway the election. It almost certainly had less of an effect than the Clinton campaign’s decisions about allocating resources in the Midwest or voter-ID laws suppressing turnout in some of those same states.

But fake news can still hurt Facebook. When I see obvious nonsense overrunning the site — like when a click on a link about Trump’s possible tech policy led Facebook to suggest a story from a fake-news factory called “Ending the Fed” — I have to wonder how stupid it thinks I am.

If a site can’t screen out this kind of junk (see also: Google Now featuring Breitbart News in its election-stories spot this summer, or Google search results for a query about election totals leading off with an amateurish, inaccurate blog post), why should we trust its other recommendations?

Stop being surprised about trolling

If Facebook is serious about making itself less of an accelerant for lies, it’s got no shortage of advice about what to do next.

Thursday, a group of 20 fact-checking organizations posted an open letter to Zuckerberg urging that the social network “strengthen users’ ability to identify fake posts and false news by themselves.”

Journalism professor Jeff Jarvis and Betaworks CEO John Borthwick posted a 15-item to-do list Friday that starts with making it easier for users to flag fake news. A team of college students has already shipped code that automatically marks news items as verified or not.

Meanwhile, Facebook’s existing tools for flagging fake stories are too obscure: You must click or tap the arrow in the top right corner of a News Feed post, select “Report post,” select “I think it shouldn’t be on Facebook” and then choose “It’s a false news story.”

Then your vote apparently vanishes. If friends see the same link, they won’t know you called it out.

(This function doesn’t work on the related-stories links that Facebook displays after you follow a shared link. Zuckerberg’s Friday-night note said the site was “raising the bar” for those links.)
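The obvious fix is easy to describe: tally those reports per link and show the tally to anyone who encounters the same URL. A rough sketch follows, with entirely hypothetical names; Facebook has announced nothing resembling this interface.

```python
# Rough sketch of flag aggregation: reports stop vanishing and instead
# accumulate per URL, so friends who see the same link see the signal.
# FlagLedger and its methods are hypothetical, not a Facebook API.
from collections import Counter
from typing import Optional

class FlagLedger:
    def __init__(self) -> None:
        self._reports = Counter()  # url -> number of "false news" reports

    def report_false_news(self, url: str) -> None:
        self._reports[url] += 1

    def banner_for(self, url: str, min_reports: int = 10) -> Optional[str]:
        """What a friend would see beside the same shared link."""
        n = self._reports[url]
        return f"{n} people reported this story as false" if n >= min_reports else None

ledger = FlagLedger()
for _ in range(12):
    ledger.report_false_news("http://hoax.example/story")
print(ledger.banner_for("http://hoax.example/story"))  # "12 people reported..."
```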

But Facebook and other social-media sites need to take a proactive approach to fake-news publications that exploit their rules and norms.

So far, their reactive posture stands in stark contrast with how their security professionals go about their business.

They don’t watch for network intrusions and then research what happened — they assign people to try to break in. They pay bug bounties to researchers who find new vulnerabilities. They make their own staffers think they’re being hacked, then grade how they respond to the drill.

But when it comes to the sort of social engineering that can make a network unfriendly or unlivable, the managers of our social networks remain shocked, shocked, to find that trolling is going on in here.

It’s past time for them to start doing the same sort of aggressive searching for vulnerabilities that their “infosec” experts already undertake.
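In code terms, that could be as simple as a drill: plant stories known to be fabricated, run them through whatever detector the network uses, and measure how many slip through. The sketch below assumes a stand-in classify() function; the real detector would be whatever Facebook builds.

```python
# Sketch of a content "red team" drill, modeled on a staged phishing
# exercise: seed the pipeline with known fabrications and measure the
# miss rate. classify() is a stand-in for the network's real detector.

def classify(story: dict) -> bool:
    """Stand-in detector: True means the story gets flagged as likely false."""
    return story["domain"] in {"endingthefed.example"}

def miss_rate(planted_hoaxes: list) -> float:
    """Fraction of deliberately planted hoaxes that went unflagged."""
    missed = sum(1 for s in planted_hoaxes if not classify(s))
    return missed / len(planted_hoaxes)

drill = [
    {"domain": "endingthefed.example", "headline": "Planted hoax A"},
    {"domain": "brand-new-hoax.example", "headline": "Planted hoax B"},
]
# The known domain is caught; the new one slips through, so this prints 50%.
print(f"Miss rate: {miss_rate(drill):.0%}")
```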

“I’ve long thought that this is something they should be doing,” said Sarah Jeong, author of “The Internet of Garbage,” a book chronicling how social networks treat abuse as an outsource-able cleanup problem. She has yet to see any such efforts.

Facebook says it’s factored in the possibility of abuse when launching features like Facebook Live. But in too many other cases, it’s shown itself to be as reactive as every other social network — and the way it’s gotten played by fake-news sites and now must pledge that it’s “committed to getting this right,” as Zuckerberg wrote Friday, fits into that pattern.

We now have yet another example of how optimizing for users of goodwill doesn’t work. The trolls and the con artists and the fraudsters aren’t going to stop coming, any more than the hackers will let up. We need to stop being surprised by their arrival.

Email Rob at rob@robpegoraro.com; follow him on Twitter at @robpegoraro.
