The fact that Facebook runs political ads that it knows are untrue is crazy.
Mark Zuckerberg and his minions have triangulated themselves into an intellectual corner which is illogical and, I believe, ultimately untenable.
Their loopy policy has huge implications, as it speaks to an outdated and now irrelevant distinction between social media and legacy media, and even more to the point, Facebook’s dominance in the business of news and information.
I should pause to point out that Facebook has just introduced a new Facebook News section where it will pay publishers for content. Lots to talk about there, but I want to focus on the political ads because I don’t think we fully understand their consequences and because election season is nigh. It’s also the case that Facebook has tried to woo news organizations before with promises of a commitment to journalism and news, only to change its mind shortly thereafter. In other words, I’ll believe it when I see it.
So let’s stick with the political ads.
The logic behind Facebook’s stance on political ads goes something like this: “We know the ad that says Elizabeth Warren has six heads and was birthed by dragons is untrue, but we choose to let our users [ponder that term for a moment] decide. You don’t want us [Facebook] telling you what’s true or not true, that would impede free speech.”
Meanwhile, Facebook makes some money off these ads, and more importantly collects data from all of us and cements its position as the world’s primary forum for news and discourse in the process.
Zuckerberg acknowledges that he could simply ban all political ads if he wanted to, which is not a bad idea. And in fact the company does prohibit some political ads.
Here are Zuck and Alexandria Ocasio-Cortez during hearings in Washington this week:
REP. ALEXANDRIA OCASIO-CORTEZ: “So you won’t take down lies or you will take down lies? I think it’s a pretty simple yes or no. I’m not talking about spin. I’m talking about actual disinformation.”
MARK ZUCKERBERG: “In a democracy, I believe people should be able to see for themselves what politicians that they may or may not vote for are saying, and judge their character for themselves.” [So, no AOC. He won’t.]
REP. ALEXANDRIA OCASIO-CORTEZ: “But you said you’re not going to fact check my ads.”
MARK ZUCKERBERG: “...if anyone, including a politician, is saying things that can cause, that is calling for violence or could risk imminent harm, or order census suppression when we roll out the census suppression policy, we will take that content down.”
So, no ads that incite violence or suppress census-taking allowed on Facebook. (Lies are OK though.)
What about traditional media like TV, radio, billboards and print? What kind of political ads can they run or not run? Generally speaking old-school news outlets abide by legal rulings anchored by the landmark 1964 Supreme Court decision, New York Times Co. v. Sullivan.
In that case, a group of ministers in Alabama ran an ad in the Times supporting Martin Luther King Jr. and criticizing the local police. The ad contained some inaccuracies, which the Montgomery police commissioner L. B. Sullivan used as a pretext for a defamation suit even though the ad didn’t name him. Local courts ruled for Sullivan, but ultimately the Supreme Court sided with the Times, establishing a new, higher threshold for defamation: not only must the information be untrue, the publisher must have acted with “actual malice,” meaning it knew the statement was false or published it with reckless disregard for the truth.
New York Times Co. v. Sullivan is mostly understood as a protection for news organizations against libel lawsuits. But it also serves as a red line determining what content a publisher will not accept: if a publisher knows that content is malicious and untrue, it will likely not publish it, even if that means passing on an otherwise juicy story or, in the case of ads, sacrificing commercial gain.
Interestingly, in a recent speech at Georgetown University, Zuckerberg used New York Times Co. v. Sullivan to defend his position on political ads, noting that the case was “about an ad with misinformation.” In other words, he seemed to suggest: “see, the New York Times did it too.”
But that’s misleading.
Yes, it’s true the Supreme Court did not find the paper at fault for running content with inaccuracies per se, but again only because the ad did not contain “statements [which] are made with actual malice (with knowledge that they are false or in reckless disregard of their truth or falsity).”
So key difference, Zuck. The Times didn’t know there was incorrect information in that ad.
I know that in practice it’s often difficult to discern, but to be clear, Facebook does acknowledge running political ads that are made with actual malice (with knowledge that they are false or in reckless disregard of their truth or falsity), which violates the legacy media legal standard.
Fortunately for Zuckerberg though, his company doesn’t have to hew to New York Times Co. v. Sullivan. Instead Facebook—along with the other platforms, Twitter, YouTube, etc.—is governed by the super loosey-goosey Section 230 of the 1996 Communications Decency Act, which gives Facebook and its ilk much more protection from liability for publishing false information.
What does Section 230 allow Facebook to run? Pretty much anything it wants to, as the law designates the company as essentially an intermediary, a pass-through with little or no responsibility for content.
The logic of Section 230 goes something like this: “Hey man, we’re a platform. It’s a new thing. We’re not in the business of deciding what can or can’t be allowed. That would stifle free speech. If people don’t like some content, or it’s bad, they will voice that and our algorithm will suppress it.”
Except that the opposite often happens. Bad content—salacious drivel, fake news, ads with lies—is widely shared and sometimes goes viral, with dire consequences like shootings, suicide and murder.
Facebook, Twitter and YouTube have also figured out over the years that they can’t put just anything up on their platforms. They don’t sell opioids. Or allow child pornography (YouTube is still trying to get that under control). Or now, as we heard Zuck say, permit ads that incite violence—or suppress the census! (And of course F-bombs, nudity and the like are verboten too.)
But outright, bald-faced, demonstrable lies? That’s OK. Really nothing we can do about that one. Although we do label it, they say.
Imagine if labeling false claims were the standard for other products or services. A retailer slaps a label on some baby food: “The manufacturer says this product expires in 12/19. But it really expired in 12/17. We know that, but we’re selling it anyway. Just wanted to give you a heads up.”
That’s what might happen if Facebook were a supermarket regulated by Section 230. The rule is essentially: do whatever you want.

Except that “do whatever you want” doesn’t have to mean “do nothing at all.”
The late Steve Jobs had strong views about these issues. He saw Apple as a platform over which he held sway. Like Jobs or not, he took responsibility and made choices about what he wanted and didn’t want. In 2010 he famously emailed a customer who complained about the paucity of racy content at Apple: “Folks who want porn can buy and [sic] Android phone.”
Here are some solutions: Zuckerberg needs to understand that his notion of Facebook as an agnostic, infallible platform is an anachronistic excuse for an ethically challenged business model. He needs to take responsibility for the content he monetizes. As for the government, it should strike down Section 230. That would once and for all acknowledge platform companies as media companies, or at least as companies with media businesses that must be treated like any other media business.
Yes, then Facebook would come under the umbrella of New York Times Co. v. Sullivan. Nothing wrong with that, Mark. It’s a legal precedent that has served our country well for 55 years.
‘An information monopoly’
Maybe the real issue here though is simply the size of Facebook. According to a Pew Research Center report this month, “more than half of U.S. adults get news from social media often or sometimes (55%), up from 47% in 2018. And Facebook is far and away the social media site Americans use most commonly for news. About half (52%) of all U.S. adults get news there.”
“This is the trouble that comes with having an information monopoly,” says Dave Karpf, an associate professor of media and public affairs at George Washington University. “Facebook in many ways is the marketplace for a big portion of online speech and that means they can’t just be a company saying hey let the marketplace decide. But that also leaves us asking do we want one company making the decision? The answer is probably not. This is the problem with information monopolies.”
It gets harder to argue that Facebook isn’t simply too big and powerful. If it were broken up and Facebook competed against Instagram and WhatsApp, leaders at these still-giant companies would likely pursue different ad policies (i.e., not knowingly running fake political ads), which would undoubtedly reduce the amount of misinformation and the harm it causes.
Taking responsibility. Striking down a law. Breaking up the company. Yes, these are big moves—for Facebook. But the stakes are even bigger—for our society.
At some point, I believe Facebook’s nonsensical political ad policy will have to change. It’s simply a matter of how and when.
This article was featured in a special Saturday edition of the Morning Brief on October 12, 2019.
Andy Serwer is editor-in-chief of Yahoo Finance. Follow him on Twitter: @serwer.