If the Russian-bought election interference ads hadn't been bought by fraudulent accounts, "most of them would be allowed to run," Facebook COO Sheryl Sandberg said this morning. "The responsibility of an open platform is to allow people to express themselves," she said during the first of an Axios interview series with Facebook execs.
"The thing about free expression is when you allow free expression you allow free expression," Sandberg said, noting that "we don't check what people post" and that she doesn't think people should want Facebook to.
The linchpin quote of the interview came when Sandberg said, "The question is should divisive, political, or issue ads run … our answer is yes, because when you cut off speech for one person, then you cut off speech for all people."
The perspective maintains Facebook's neutrality across the political spectrum and absolves it of being the truth police. But it also means the company is knowingly operating a platform where people can misinform each other.
That raises the question of how free speech scales to user-generated content sharing networks that lack the curation and editorial oversight of traditional news distribution systems. Sandberg dodged Axios editor Mike Allen's question about whether Facebook is a media company, and wasn't pressed about how it accepts money for ads like other media companies.
Facebook plans to hire 1,000 more human moderators to protect election integrity, make all ads transparent to everyone rather than visible just to those targeted, and increase scrutiny on political ad buys. But can the fake news issue ever be solved if Facebook actually permits fake news under the banner of free speech?
During her talk, Sandberg also confirmed that Facebook will support the plan of congressional investigators probing election interference to release the Russian-bought ads to the public. She said she met with Congress yesterday, that Facebook is fully cooperating, and that it will provide Congress with any content investigators want. That includes non-ads. "A lot of them, if they were run by legitimate people, we would let them run," Sandberg explained.
She also said targeting information about the ads will be released to the public as well. "We have a responsibility to do everything we can do to prevent this kind of abuse," said Sandberg. "We're hoping to set a new standard in transparency in advertising." At the same time, though, she blatantly dodged a question about whether the Russian-bought ads and Donald Trump's campaign ads had matching targeting.
As for the accusation that Facebook causes filter bubbles by surrounding us with information shared by our social graph instead of a more impartial news source, Sandberg said Facebook actually broadens our perspective through exposure to our weak ties and acquaintances. She cited studies showing we see a wider view of the news through the lens of Facebook than traditional sources.
You can watch the full talk with Sandberg below:
Sandberg's comments come alongside newly exposed information about the effectiveness of Facebook's fight against fake news. In an email obtained by BuzzFeed, Facebook's manager of news partnerships Jason White wrote to one of the company's third-party fact checkers:
"Once we receive a false rating from one of our fact checking partners, we are able to reduce future impressions on Facebook by 80 percent . . . we are working to surface these hoaxes sooner. It commonly takes over 3 days, and we know most of the impressions typically happen in that initial time period."
But while Facebook is willing to demote the News Feed prominence of a news story that's unequivocally established as false by third parties, it still allows this content on its platform.
A Slippery Slope Worth Navigating
This all boils down to the fact that Facebook's News Feed is sorted by engagement. Normally, low-quality content simply receives too few Likes or comments to be seen by many people. But fake news is so tantalizing in how it stokes our biases and political leanings that it breaks this system. People will click through, Like, and share this content because they agree with or are entertained by it, not because it's high quality.
This in turn incentivizes publishers of false news hoaxes. Facebook demotes hoaxes when they're identified, and is blocking monetization and ad buys from these publishers. But these mechanics also incentivize the publishing of highly polarized opinion, exaggeration, and sensationalism. And when advertisers pay to boost the reach of fake news, its click-baityness earns these ads a level of engagement that wins them a lower price in Facebook's auction system.
That's how Facebook profits from fake news and polarization, even as it vows to work harder to protect us from it. While Facebook might want to offer an open platform where it's not the opinion police or even the truth police, it's simultaneously earning money from some of the most malicious uses of free expression.
It's all a slippery slope. One person's fake news busting is another's censorship. But at the same time, Facebook is not legally obligated to maintain a free speech platform. Its rules prohibiting nudity, hate speech, and graphic imagery for the sake of "safety" already show it's willing to make judgment calls about when free speech crosses the line. But fake news is unsafe too.
Some critics take a cynical approach, saying Facebook cracks down harder on that stuff because it scares away advertisers, while fake news actually brings in dollars. It's certainly true that it's easier to detect those banned content types at scale with algorithms searching for nipples, racial slurs, and blood. Yet even if Facebook could reliably spot not just unabashedly false news but sensationalized content, its current policy is to allow it as long as it doesn't preach violence or pure hate.
Something has to change. Sandberg said the public deserves "Not just an apology, but determination" to fix the problem. Now it's time to see that determination in action. In my opinion, either:
- Facebook must evolve its policy to more broadly and forcefully define and delete fake news, whether that means taking the political heat of vetting content in-house and being accused of bias or massively funding third-party fact-checkers to staff up so they can handle the volume of moderation Facebook requires.
- Facebook must continue to technically allow fake news, but add overt "Report as fake news" buttons, strictly and swiftly demoting links that are reported so they're hardly visible. This would require protection against abuse of the Report button, and again either in-house policing or funding for third-party fact checking at Facebook's scale.
Or, at the least:
- Facebook should set a much higher bar for legitimacy of advertisers that buy ads promoting news articles. That might mean restricting ad buys from anyone who isn't the original news publisher or that hasn't been verified by Facebook. Or limiting ad buys promoting any content that's been flagged as fake.
All of these enforcement options could potentially ensnare legitimate news, be misused by trolls, or prevent innocent ad buys. But if Facebook commits to minimizing these false positives, the result could be better protection for democracy and civil society compared to the alternative of knowingly allowing false news to proliferate in the name of free speech.