Facebook has tried numerous methods in a bid to fight fake news. Not all of them have been successful, though, and some have even backfired: its decision to flag false stories led to more sharing, as those determined to believe the claims were incensed. The social network has a new strategy, however: rather than draw attention to the links, it's shrinking them. Facebook told TechCrunch that it's reducing the "visual prominence" of known false stories. You may only see a tiny thumbnail and a brief text description for a hoax, while an accurate story will have a large image and bold text. The aim, as you've no doubt guessed, is to boost the chances you'll miss a bogus story while scrolling through your News Feed.
The company is simultaneously improving the odds of identifying those stories. It's using machine learning to speed up its fact checking, scanning new articles for signs of false claims and prioritizing the suspicious ones for human reviewers. The AI technology should not only save time but also increase the likelihood that reviewers catch fake news in the first place, since they shouldn't have to sift through as many false positives.
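Facebook hasn't published details of its system, but the triage it describes can be sketched in a few lines: a model assigns each new article a suspicion score, and the review queue is sorted so the most suspicious items reach human fact checkers first. In this hypothetical sketch, a toy keyword heuristic stands in for the real trained classifier.

```python
# Hypothetical sketch of ML-assisted fact-check triage. The keyword
# heuristic below is a stand-in for a real trained classifier; only the
# score-and-sort queue logic reflects the process described above.

SUSPICIOUS_TERMS = {"miracle", "shocking", "they don't want you to know"}

def suspicion_score(article_text: str) -> float:
    """Toy stand-in for a model: fraction of suspicious terms present."""
    text = article_text.lower()
    hits = sum(term in text for term in SUSPICIOUS_TERMS)
    return hits / len(SUSPICIOUS_TERMS)

def triage(articles: list[str]) -> list[str]:
    """Order articles most-suspicious first for human reviewers."""
    return sorted(articles, key=suspicion_score, reverse=True)

queue = triage([
    "Local council approves new bike lanes",
    "Shocking miracle cure they don't want you to know about",
])
print(queue[0])  # the most suspicious article goes to reviewers first
```

The point of ranking rather than filtering is that humans still make the final call; the model only decides what they look at first.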
Facebook believes its combined efforts (including removing fake accounts and punishing malicious pages) can reduce the spread of fake news by 80 percent. With that said, we wouldn't count on these newer methods being effective. The shrinking will only help if you aren't paying close attention, and it might provoke some readers if they realize Facebook is trying to downplay stories. If nothing else, though, this illustrates the fine line Facebook has to walk: it wants to fight fake news and avoid controversies, but it also doesn't want to completely block content unless it's absolutely necessary.