Facebook Approved Ads With Coronavirus Misinformation

Consumer Reports has no financial relationship with advertisers on this site.

Health experts in the United States have struggled to hold back a tide of false information about the coronavirus pandemic—from racist theories about the origin of the virus to wrongheaded and dangerous home remedies.

A lot of this misinformation circulates on social media platforms, including Facebook, a source of news and information for more than half of all American adults. Conspiracy theories and deliberate disinformation aren’t new to the platform: Around the time of the 2016 elections, fake news articles spread to more than a hundred million Facebook users. But the stakes are even higher now, when bad information can have deadly consequences.

Facebook has been saying for weeks that it’s intent on keeping coronavirus misinformation off its platforms, which include Instagram and WhatsApp. During one recent interview with NPR, Nick Clegg, Facebook’s vice president for global affairs and communication, cited two examples of the kinds of posts the company would not allow: any message telling people to drink bleach, or discrediting urgent calls for social distancing to slow the pandemic.

I’ve been covering Facebook and online misinformation for several years, and I wanted to see how well the company is policing coronavirus-related advertising during the global crisis. So I put the two dangerous claims Clegg brought up, plus other false or dangerous information, into a series of seven paid ads.

Facebook approved them all. The advertisements remained scheduled for publication for more than a week without being flagged by Facebook. Then I pulled them out of the queue; Consumer Reports made certain that none of the false or misleading ads were ever seen by the public.

I set up most of these test ads in the course of an afternoon, starting with a Facebook account under a fake name and a page for a made-up organization I called the “Self Preservation Society.” I loaded in increasingly outrageous ads, probing for the boundaries of what Facebook would approve.

Some of the ads were subtle: One claimed that people under 30 are “safe” and should go to school, work, or parties, but didn’t refer to the coronavirus by name. Others were more blatant. “Coronavirus is a HOAX,” blared one example, while another message encouraged people to ignore social distancing recommendations because they don’t make “any difference AT ALL.” One of the most egregious advertisements told people to “stay healthy with SMALL daily doses” of bleach.

This ad was approved despite violating a Facebook policy against casting doubt on the severity of the outbreak.

After Consumer Reports contacted Facebook, the company disabled the account I used to schedule the ads. Facebook confirmed that all seven ads that I created violated its policies, but did not specify which rules were broken.

“While we’ve removed millions of ads and commerce listings for violating our policies related to COVID-19, we’re always working to improve our enforcement systems to prevent harmful misinformation related to this emergency from spreading on our services,” Facebook spokesperson Devon Kearns said in a statement to CR.

Relying on Automated Ad Screening

Unlike a post from a personal Facebook account, which can go up immediately and gets fact-checked only under special circumstances, paid ads are reviewed by Facebook before they can be published. They are bound by Facebook’s advertising policies—which cover more than two dozen categories, from misinformation to tobacco and vaping products to copyright infringement—as well as its community standards, which also apply to regular posts. Additionally, the company has published specific rules governing coronavirus information. Those include bans on “claims that are designed to discourage treatment or taking appropriate precautions,” and “false cures . . . like drinking bleach.”

Facebook’s primary ad-screening system is automated. Human moderators are used mainly to tag various kinds of content, the company tells CR, helping to train the algorithms that handle decisions about which ads can run. In some cases, however, human moderators do look at specific ads to decide whether they follow the rules and should be published, the company says. Facebook didn't say which ads get reviewed by people.

Facebook often removes problematic ads after they’ve started circulating, and it can disable entire advertising accounts if they violate the company’s policies too often or too blatantly. Had I published my test ads, Facebook probably would have found and removed them eventually. But it’s impossible to know how far the ads would have spread first—or how much damage they would have done.

“For something this blatant—and related to what Facebook is saying is currently its top priority—[post-publication removal] is not acceptable,” says Nathalie Maréchal, a researcher at Ranking Digital Rights, a nonprofit that grades tech companies on factors including their privacy and content moderation practices. “They should be aiming for a 99.99 percent detection before the post ever goes up.”

Facebook CEO Mark Zuckerberg appears to agree. On a recent press call, he said that by the time a user flags a harmful item on Facebook, “a bunch of people have already been exposed to it, whereas if our AI systems can get it up front, that’s obviously the ideal.”

This ad was approved despite violating a Facebook policy against downplaying the importance of social distancing to slow the spread of the pandemic.

One obstacle to Facebook's ad-policy enforcement right now is that the company is facing a staffing crunch. Last month, Facebook responded to the coronavirus by following the same social distancing advice as many other companies did: It sent home droves of workers, including its content moderators, who are employed largely by contracting companies rather than Facebook itself. Zuckerberg said that many of these contractors couldn't continue to do their normal work outside the office, but that the company is switching some full-time employees to content moderation.

A Facebook spokesperson tells CR that a “few thousand” reviewers are now able to work from home—far fewer than the 15,000-person workforce reportedly deployed under normal circumstances.

Additionally, the company is relying even more than usual on automation for screening both ads and users’ posts, according to Zuckerberg. Facebook often says artificial intelligence is the future of content moderation—but computer science experts say the technology isn’t ready yet.

“I’m highly critical of this,” says Jevin West, a misinformation researcher and data scientist who directs the Center for an Informed Public at the University of Washington. “I teach classes in machine learning, and we’re not at the point in machine learning where we can rely on automated means.” In particular, computers can stumble when trying to interpret memes, imagery, and deliberate attempts at deception.

On the other hand, West says, an automated system should easily be able to flag an ad containing words like "coronavirus," "COVID-19," or "pandemic" for human review. Then, the company would just need more staffing. “Facebook and other social media companies should be hiring at unprecedented levels. It doesn’t take rocket science to moderate the bulk of misinformation found on Facebook,” he says—particularly the blatant examples I put into the test ads. “It just takes a human willing to read through it.”
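The kind of keyword filter West describes is simple enough to sketch in a few lines of code. The example below is a minimal illustration in Python: it flags any ad whose text mentions a pandemic-related term so it can be routed to a person. The keyword list and the routing step are illustrative assumptions, not a description of Facebook’s actual systems.

```python
# Minimal sketch of keyword-based flagging for human review.
# The keyword list and the routing step are illustrative assumptions,
# not a description of Facebook's actual ad-review pipeline.

PANDEMIC_KEYWORDS = {"coronavirus", "covid-19", "covid", "pandemic"}

def needs_human_review(ad_text: str) -> bool:
    """Return True if the ad text mentions any pandemic-related keyword."""
    text = ad_text.lower()
    return any(keyword in text for keyword in PANDEMIC_KEYWORDS)

# Example: an ad like the ones described in this article would be flagged.
ad = "Coronavirus is a HOAX. Social distancing makes no difference AT ALL."
if needs_human_review(ad):
    print("Route this ad to a human moderator before it can run.")
```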

The only ad in my experiment that Facebook rejected was flagged because of its image: a stock shot of a respirator-style face mask. This suggests Facebook is using image recognition—one of its strong suits—to flag posts that violate its recently established policy against selling face masks. But when I swapped out that photo for a very similar one, Facebook approved the ad—even though the switcheroo would never have tricked a human.
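For context, near-duplicate images are commonly caught with perceptual hashing, which produces similar fingerprints for visually similar pictures. The sketch below, using the open-source Pillow and imagehash libraries, illustrates that general technique; it is not Facebook’s image-matching system, and the images it compares are generated stand-ins rather than real ad photos.

```python
# Illustration of perceptual hashing for near-duplicate image detection,
# using the open-source Pillow and imagehash libraries. This is a sketch
# of the general technique, not Facebook's actual image-matching system.
from PIL import Image, ImageDraw
import imagehash  # pip install imagehash

# Build a stand-in "photo" and a slightly altered copy of it.
original = Image.new("RGB", (256, 256), "white")
ImageDraw.Draw(original).ellipse((60, 80, 196, 200), fill="lightblue")
similar = original.copy()
ImageDraw.Draw(similar).rectangle((0, 0, 20, 20), fill="gray")  # small edit

# Perceptual hashes of visually similar images differ in only a few bits,
# so a small Hamming distance suggests a near-duplicate.
distance = imagehash.phash(original) - imagehash.phash(similar)
if distance <= 10:
    print(f"Likely near-duplicate (Hamming distance {distance}): flag it.")
else:
    print(f"Images look distinct (Hamming distance {distance}).")
```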

This ad was approved despite violating a Facebook policy against urging people to ignore proven health precautions.

“If your system is so porous and under-resourced that it lets something like these ads through, you’ve got a problem,” says Sarah Roberts, a UCLA professor of information science who studies content moderation.

The company’s automated review systems also appear to have missed several signals that my ads deserved extra scrutiny. The account and Facebook page I created for the experiment were set up just over a week ago, suggesting that the Self Preservation Society, my fake organization, wasn’t an established advertiser. The page I built uses a rendering of the coronavirus as a profile image, a sign that it’s focused on the pandemic. And neither the page nor the account has ever posted anything or filled in any profile details.

“To me, there are a lot of spammy red flags about this poster that should’ve probably triggered more aggressive human review,” says Hannah Bloch-Wehba, a Drexel University law professor who studies algorithmic decision-making.
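Signals like these are the kind of thing a simple rule-based check could score automatically. The sketch below is a hypothetical illustration of that idea; the field names and thresholds are illustrative assumptions, not Facebook’s actual criteria.

```python
# Sketch of rule-based "red flag" scoring for an advertiser, based on the
# signals described above. The field names and thresholds are illustrative
# assumptions, not Facebook's actual criteria.
from dataclasses import dataclass

@dataclass
class AdvertiserProfile:
    account_age_days: int
    has_post_history: bool
    has_profile_details: bool
    profile_image_is_pandemic_related: bool

def red_flag_count(profile: AdvertiserProfile) -> int:
    """Count spam-like signals on an advertiser profile."""
    flags = 0
    if profile.account_age_days < 30:                   # brand-new account
        flags += 1
    if not profile.has_post_history:                    # never posted anything
        flags += 1
    if not profile.has_profile_details:                 # empty profile
        flags += 1
    if profile.profile_image_is_pandemic_related:       # coronavirus imagery
        flags += 1
    return flags

# A page like the fake "Self Preservation Society" described above would
# trip every one of these checks.
page = AdvertiserProfile(account_age_days=8, has_post_history=False,
                         has_profile_details=False,
                         profile_image_is_pandemic_related=True)
if red_flag_count(page) >= 2:
    print("Escalate this advertiser's ads to human review.")
```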

'A Higher Moral and Ethical Burden'

Facebook, like other social media companies, has struggled for years to keep misinformation from spreading on its platforms. Experts have told CR the company seems to be working hard to promote good information on the pandemic. For example, it’s actively pushing authoritative sources to the top of the results if you search for "coronavirus" or "COVID-19."

However, researchers and consumer advocates say that the company should do better when it comes to coronavirus-related advertising—even if that puts a dent in its immense revenues.

“Editorial review and curation would increase the price of ads overall, if they had that kind of pre-screening workforce,” says Joan Donovan, a Harvard lecturer and misinformation researcher. “But the damage caused by not doing so can be deadly. It’s a flaw in the entire design of their advertising system.”

Ads are especially sensitive because they can be interpreted as carrying a certificate of authenticity from the company, some experts say. “When it comes to an ad that someone has paid for, you have every reason to believe it has been vetted,” says Maréchal, the researcher from Ranking Digital Rights. “Why wouldn’t you believe it?”

Experts tell CR that if Facebook can’t adequately screen ads before they’re published, it should slow down its advertising machine during this crisis.

“In an ideal world, just like if you were to place an ad in the pennysaver, someone would review it—and it would be a person who understands the context and understands the consequences of letting information like that be served through a targeted advertising system,” Donovan says.

UCLA's Roberts agrees that every coronavirus-related ad should be seen by a human before it runs.

“There’s already a responsibility to the public around organic content” that normal users post, says Roberts, who is the co-director of the UCLA Center for Critical Internet Inquiry. “But when it comes to lucrative monetized material that would not exist without this system . . . then I do think there is a higher moral and ethical burden.”

Probing Facebook’s Black Box

My small experiment only provides anecdotal evidence of a problem: It’s a snapshot of Facebook’s approval process, taken at a moment of turmoil. But this type of probe is one of the only ways that outsiders can guess at the inner workings of the company’s immensely complicated platforms.

Nearly every expert CR spoke with said the company’s instinct to keep its content-review algorithms under wraps—claiming that they’re trade secrets, or that troublemakers could game them if they were revealed—has made it difficult to hold Facebook accountable from the outside.

“Researchers have been grasping for accountability and transparency in advertising on Facebook for several years now,” says Harvard’s Donovan.

Even marketing experts who spend countless hours setting up Facebook ad campaigns say its system is capricious and hard to understand. “It’s incredibly complex and it’s poorly documented and it’s not well explained,” says Miracle Wanzo of Discovery Marketing, a small digital marketing agency. “It’s almost like making your way through a maze.”

Facebook’s human moderators will likely be back online soon, potentially tightening the window through which someone could slip dangerous misinformation. But whatever changes Facebook makes to its policies or enforcement practices—like, say, sending every coronavirus-related ad to human moderators—outsiders likely won’t know what’s different unless the company decides to announce it. And Facebook rarely reveals such details to the public.

“Because of the closed nature of these systems, you’re forced to adopt the reactive experimental crouch—and that means we can’t get the answers to these questions that are really critical to answer,” says Drexel’s Bloch-Wehba. “When you have platforms that are basically as powerful as governments—and serve as essential a function in governing the flow of critical information about issues of public concern—transparency is critical.”




Consumer Reports is an independent, nonprofit organization that works side by side with consumers to create a fairer, safer, and healthier world. CR does not endorse products or services, and does not accept advertising. Copyright © 2020, Consumer Reports, Inc.
