Despite Likely Section 230 Cover, Removing 'Fake News' Can Be Tricky

[Image: Logos for Facebook and Twitter.]

With new technology making it easier to create fraudulent video content and growing revelations about Russia’s efforts to influence the 2016 U.S. presidential election, the clarion call to deal with the scourge of “fake news” has grown ever louder. Dealing with fraudulent content, however, is not as simple as one might think. In many cases, such content is protected under the First Amendment and poses little to no liability for the internet publishers who host it on their websites or social media platforms. But the law that protects publishers from legal action also likely enables them to remove fraudulent content at their discretion. In practice, however, removing such content can be quite tricky.

Section 230 of the Communications Decency Act grants certain types of immunity to “interactive computer services,” a term that has been defined broadly to include internet publishers such as social media companies and websites. And though the statute is likely to be changed in the near future to allow prosecutors to go after internet services that enable or support sex trafficking, such changes do not affect the law’s immunity with regard to fraudulent content.

Under Section 230(c)(1), internet publishers are not liable for any content created and posted on their service by third-party content providers, such as users on Facebook or people posting on an online forum. Nor are they liable, under Section 230(c)(2), for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

There is ample case law on Section 230, including Zeran v. America Online, Blumenthal v. Drudge and Milo v. Martin, affirming that internet publishers are not liable for fraudulent content posted by third parties, nor required to verify the accuracy of such content. Still, Section 230 doesn’t address fraudulent content directly. “The fact is, it doesn’t fall within any of those categories” in Section 230(c)(2), said Thomas Cafferty, director of commercial and criminal litigation and leader of the media law team at Gibbons. However, he added that “you could curate fake news under the terms 'otherwise objectionable.'”

To be sure, even though there is no “fake news” category, there is little question that internet publishers could remove fake news without losing Section 230 immunity. “An internet company would be in their rights and within the immunity that Section 230 provides to take down and remove speech of a fraudulent nature,” said Markham Erickson, partner at Steptoe & Johnson and chair of the firm’s internet, telecom and technology practice group.

Over the years, courts have interpreted Section 230(c)(2) broadly, giving publishers flexibility to run their services as they choose. The content internet publishers can remove “is anything that in good faith a service might deem to be improper, offensive or otherwise undesirable, so it's important to recognize those criteria it spells out are not exclusive,” said Jeffrey Hermes, deputy director at the Media Law Resource Center, a trade organization for media attorneys.

As an example, Hermes pointed to the recent decision in Dawn Bennett v. Google, where the U.S. Court of Appeals for the District of Columbia Circuit ruled that Google was protected under Section 230 from having to de-index a blog post the plaintiff alleged was defamatory. What’s more, the court rejected the argument that because Google enforces a “Content Policy” for its bloggers, it is influencing, and thereby creating, the content it publishes and therefore does not qualify for Section 230 immunity.

That Google’s content policy goes beyond the wording of what internet publishers can remove under Section 230(c)(2) underscores the law’s malleability in granting publishers wide editorial discretion. “Internet service providers are allowed to enforce content guidelines without losing immunity, and allowing for this kind of control is one of the reasons Section 230 was created,” said Desiree Moore, partner at K&L Gates. But, she added, “platforms should be careful that content guidelines are applied fairly across all types of content.”

Indeed, if an internet publisher is found not to act in “good faith,” such as when it removes third-party content because it “competes with something they themselves are publishing, or removes it in order to advance another third party’s interest,” the publisher may lose its Section 230 immunity, Hermes said. That doesn’t mean the publisher will automatically incur liability, just that it is more exposed without the immunity. “The important thing to remember here is that you don’t trigger liability under Section 230(c)(2) because you remove content. You only trigger liability if there is some other reason why the removal is wrongful,” Hermes said.

Removing Fake News? Good Luck

While internet publishers are likely protected when removing fraudulent content, how to go about it is a matter of much debate. “I think we are in this phase where different companies are experimenting with different ways to try to go after content that is fraudulent or violates their terms of service in some way,” Erickson said.

Twitter, for example, has begun to restrict users' ability to perform coordinated actions, such as posting or liking tweets simultaneously from multiple accounts, and has moved to take down automated “bot” accounts that post fake news.

Facebook has also taken a slew of actions over the years to combat the spread of fake news on its platform. In late 2016, the social media company partnered with fact-checking organizations and reconfigured its news feed algorithm to flag fraudulent content. In January 2018, it further changed its algorithm so that users would see more posts from friends and family and fewer from news or video publishers. The move, however, has been panned by some as exacerbating the fake news problem rather than solving it. Facebook has also since released a two-question survey to some of its users to determine which news sources they trust. Yet some have questioned how the social media company can combat fraudulent content by polling its audience with a simple survey, and Facebook hasn’t been clear on how it intends to use the survey data.

Of course, social media companies like Facebook are likely sensitive to how they handle fake content, lest they alienate different communities. Facebook has already come under fire for allegedly suppressing news stories that expressed conservative views. The company has also been rocked by criticism over how it handled fake accounts linked to Russian actors trying to influence the 2016 U.S. presidential election, and over the harvesting of personal data by Cambridge Analytica, a firm linked to Donald Trump’s 2016 presidential campaign.
For most social media companies, there is likely also the concern that exercising editorial discretion too broadly may stifle the free flow of communication and ideas. As Hermes explained, “It can be difficult to tell the difference between what’s a false statement of fact and what is an opinion in terms of what is colloquially labeled as fake news.”

What’s more, though Section 230 offers social media companies broad discretion in curating or removing third-party content, they may still want to hedge against losing that protection, for instance by defining “fake news” in such a way that, even if they lost Section 230 immunity, they would still face no liability for removing it. “There is a very strong First Amendment opinion from the U.S. Supreme Court in U.S. v. Alvarez, which talks about whether or not falsity in and of itself can be constitutionally restricted, and [the court] eventually says that if there [is] no specific identifiable concrete harm that flows from it, then the First Amendment still protects falsity,” Hermes said. “And so you really need to define what you mean by fake news; you need to figure out what harm is flowing from it. It’s not enough to just say this stuff is deleterious to our culture,” he noted.