The FBI is calling conspiracy theories a terrorism threat — Here's what that means for Facebook, others

Erin Fuchs
Deputy Managing Editor
Kori and Danielle Hayes at a Pizzagate demonstration, outside the White House in Washington, DC on March 25, 2017. (Photo by Michael E. Miller/The Washington Post via Getty Images)

The FBI has pegged “conspiracy theory-driven domestic extremists” as a burgeoning terrorist threat for the U.S., raising questions about the legal implications for websites where fringe groups sow those conspiracies.

In an exclusive article for Yahoo News published Thursday, contributor Jana Winter reported on an FBI bulletin dated May 30, 2019 that, for the first time, named fringe conspiracy theorists as a domestic terrorist threat.

That bulletin specifically pointed to QAnon, a far-right theory claiming there’s a plot against President Donald Trump, as well as Pizzagate, a now-debunked claim that Hillary Clinton and her aides were operating a human-trafficking ring out of a Washington, D.C., pizza restaurant.

Neither of these theories could have spread as rampantly as they did without the internet — specifically, Google’s (GOOG, GOOGL) YouTube, Facebook (FB), Reddit, and Twitter (TWTR). Now that the FBI is officially calling fringe theories that spread on these platforms a terrorist threat, do these sites have to worry about the FBI or state prosecutors coming after them?

The short answer: Probably not. “It is unlikely that a company would fall afoul of ... criminal statutes if found just hosting some crazy person’s conspiracy theories,” said Alan Rozenshtein, a cybersecurity expert and associate professor at the University of Minnesota Law School.

‘People talking crazy on websites’

Under Section 230 of the Communications Decency Act, websites like Facebook enjoy broad immunity from civil lawsuits and from prosecution at the state or local level for content posted by their users. Congress passed the CDA in 1996, eight years before Facebook even existed, primarily to police obscenity on the then-nascent internet. It added Section 230, as one judge stated, “to encourage the unfettered and unregulated development of free speech on the internet.”

Section 230 says, in part, “If people are talking crazy on websites, that’s not the website’s legal responsibility,” said Eric Goldman, a professor of law at Santa Clara University School of Law in Silicon Valley and a leading expert on internet law.

While Section 230 doesn’t protect websites from federal criminal prosecution, U.S. prosecutors would still be loath to prosecute mainstream websites for conspiracies spread on their platforms. The First Amendment would likely protect big websites like Facebook and Twitter from prosecution because they’re “almost certainly not involved in the conspiracy,” Goldman noted.

Mark Zuckerberg, founder of Facebook, speaks as he attends the unveiling ceremony of the new Samsung S7 and S7 edge smartphones at the Mobile World Congress in Barcelona, Spain, February 21, 2016. REUTERS/Albert Gea

But the story might be different for “sketchy, small sites” operated by individuals who might actively be involved in propagating conspiracies, Goldman pointed out. “The venue for the conspiracy theory discussions might matter to the legal liability of the operator. Were they trying to foment problems or was that an unfortunate misuse of tools for other purposes?”

‘Caught between a rock and a hard place’

Regardless of whether they face federal criminal liability, Facebook, Twitter, and other websites still have an interest in doing what they can to stop conspiracy theorists from exploiting their websites to spread fake news — especially fake news that could do real damage.

As Jana Winter points out in Yahoo News, the Pizzagate conspiracy spurred a 28-year-old man to fire a rifle inside a pizza restaurant in an attempt to rescue children he believed were being held there. The conspiracy that led to that action proliferated in more than 30 Facebook groups, NBC News reported in February.

In a July 2018 interview with Recode’s Kara Swisher, Facebook CEO Mark Zuckerberg sparked controversy when he said he didn’t believe the social network should remove content posted by Holocaust deniers. But he did contend that Facebook would remove content that incited violence.

Facebook and other sites are “caught between a rock and a hard place” when it comes to fake news, according to Goldman. On the one hand, he noted, the government wants them to do more to combat terrorist content on their sites. But at the same time, Trump and others have accused websites like Google and Facebook of “conservative bias.” That could provide a disincentive for removing content that’s arguably on the fringe, especially considering that the president of the United States himself has shared Twitter accounts tied to QAnon — the very conspiracy theory the FBI bulletin warned about.

“The internet has allowed the craziest theories to flourish, and even worse, our own government has been validating and endorsing those theories, which has emboldened the purveyors of the worst views to feel like they’re engaged in credible activity,” Goldman noted. “It’s a crazy, crazy time.”

When asked to comment on the FBI bulletin, Twitter pointed to its web pages detailing its policies on terrorist content and on platform manipulation. Facebook, Google, and Reddit did not immediately respond to requests for comment.

Erin Fuchs is deputy managing editor at Yahoo Finance.
