Laura Edelson, New York University PhD Candidate, joins Yahoo Finance Live to discuss the outlook on Facebook’s effectiveness in taking action against users sharing misinformation.
ALEXIS CHRISTOFOROUS: Welcome back. Yesterday, we saw President Biden order an intelligence report into the origins of COVID-19-- that's after reports found three workers at a Wuhan lab in China had an unexplained illness well over a year ago. Now, today, we hear that Facebook has ended its ban on posts asserting that COVID-19 was man-made or manufactured.
Here to talk about it now is Laura Edelson. She is a PhD candidate at New York University. And, Laura, it's good to have you on the show. What do you make of this policy shift by Facebook?
LAURA EDELSON: Well, I think it's a reasonable reaction to a change in what we think we know. I think for the last year or so, this idea that the COVID-19 virus was man-made or was the result of a lab leak just wasn't something that we had a lot of evidence to back, and it was being tied to a lot of conspiracy theories that were, frankly, dangerous. But when what we know changes, what Facebook's policy says should change too.
KRISTIN MYERS: So, Laura, curious to know how successful you think Facebook is going to be moving forward when it comes to their content moderation. It's something that the company has been incredibly harshly criticized for in the past. They say that they have actual people looking at posts and groups, but they've still let so many things slip through the cracks. So how do you imagine they're going to handle this going forward?
LAURA EDELSON: Well, the policy that they announced yesterday of actually making changes to how content gets promoted to downrank content from Facebook pages that routinely promote misinformation, I think this is a good step forward. In terms of whether it will be effective, it's really up to Facebook. I think Facebook is always dealing with this central tension that people think that misinformation is rampant on Facebook, and that decreases user trust.
At the same time, misinformation is often very engaging, and that's good for Facebook's bottom line. So as they balance these two things, it seems like right now their focus is shifting more toward building trust. But you know, frankly, they've said things like this in the past, and they haven't always lived up to their commitments. So I think we're going to have to see how this gets implemented.
ALEXIS CHRISTOFOROUS: How-- I mean, realistically, though, on this large platform, how are they really going to police it? You know, it's one thing to come out and announce you're going to do this, and that might sound very PC, but do you have faith that Facebook is actually going to be able to go in there and effectively target this misinformation and take it off their platform?
LAURA EDELSON: I think they certainly have the technical capability. The question is whether they have the will. Facebook has an army of tens of thousands of content moderators, and they have some of the best AI in the business. So I think between those two things, if they want to take this seriously and really decrease disinformation and misinformation on the platform, they can. The question is just whether they're willing to take those aggressive steps and spend the money to do that.
KRISTIN MYERS: Will this have an impact on any advertisers? Facebook definitely cracked down on political advertisements and the content from some political candidates, but what about companies and businesses?
LAURA EDELSON: So this policy doesn't apply to advertising. Facebook already does have a policy in place that bans fake content in ads. But one of the things we actually learned from the 2020 election is they didn't do a great job enforcing that policy. So right now, there is quite a bit of fake information in ads that just slips through the cracks. You know, I don't think that this is going to have an immediate impact on advertising except in the sense that this may wind up decreasing user engagement with misinformation on Facebook.
ALEXIS CHRISTOFOROUS: And are there sort of two different sets of rules when it comes to misinformation and what you can and cannot post when it comes to public figures? I'm thinking about presidents, lawmakers, celebrities-- what are the rules, and are they any different for them?
LAURA EDELSON: Yeah, this is something that we've learned a lot about recently-- that there is often a double standard on Facebook for who can say what. Obviously, they have a really well-known exemption for their political ads, where politicians are allowed to lie in Facebook ads in a way that other groups aren't. But additionally, something we learned that actually came out of the recent Facebook oversight board decision is that Facebook, additionally, has a cross-check policy.
When major public figures get fact-checked, those fact checks are automatically reviewed. So people like President Trump and other political figures get a lot more leeway in terms of telling lies in a way that most people don't.
KRISTIN MYERS: Curious to know how some of these policy changes that Facebook is making might influence or impact some of the regulatory battles that they'll be facing. Facebook has been brought in front of Congress a couple of times and blasted by both Democrats and Republicans, for very different reasons, about their content moderation and about the ways that they've been handling those crackdowns on content. Might this at all shape or change any of those conversations going forward?
LAURA EDELSON: Well, I'm sure to a certain extent, that was the intent. The other thing that came out yesterday was Facebook's threat intelligence report about influence operations on Facebook. And I think what that report made clear is that there is a lot of this kind of activity going on, and Facebook really needs to start being seen as taking this threat seriously. Otherwise, frankly, regulation is coming.
ALEXIS CHRISTOFOROUS: All right, we're going to leave it there. Laura Edelson, PhD candidate at NYU, thanks so much for sharing your insights today.