Algorithms That Curate Feeds & Tech Company Secrecy


Later this month, your Facebook feed will start looking less political. The company says it’s testing a tweak that will surface fewer politics-related posts in users’ feeds, in a bid to keep political content from “taking over” what people see—an adjustment Facebook says users often ask for.

But it’s not clear what the change will look like. In its announcement and in comments made to Consumer Reports, Facebook didn’t share any details about how its systems would assemble the new feeds, or even decide what counts as political content. And if you like your feed just the way it is, well, too bad—you don’t have a say in the matter.

That’s almost always the case when tech companies tweak their algorithmic decision-making systems. These are the systems that determine which products people see when they search for a power drill; what they learn about vaccinations, rolling blackouts, or election results from social media or Google search results; and which of their friends’ posts they see at the top of their feeds—plus the posts they never notice because they are buried so deep.

Facebook announced this change publicly, but companies often tweak the way they recommend or moderate content without saying anything at all. A new study published Wednesday by Ranking Digital Rights, a nonprofit research group that grades companies on their privacy and content policies, found that tech companies typically offer very few details about how these critical systems function. (RDR partnered with Consumer Reports in developing the Digital Standard, a set of principles and criteria for evaluating how well digital products and services respect consumers’ rights.)

That means that why you see what you see on sites such as Amazon or YouTube is largely a mystery.

“What’s clear is we need much more transparency and accountability for the most heavily trafficked tech platforms, especially where algorithms are concerned,” says Marta Tellado, CEO of Consumer Reports.

When CR contacted Amazon, Google, and Facebook, the companies said they do publish some public information about their algorithmic systems. Facebook, for example, pointed to several blog posts that announce changes to the way it constructs your news feed. Amazon said that factors that affect search rankings include how often an item is purchased, its price, and availability. However, consumers can typically learn few details about the changes, and they aren’t offered alternatives.

“You might not even know what content is actually being shaped by these algorithms in the first place,” says Christo Wilson, a Northeastern University professor who studies algorithmic systems. “And even if you do know there’s something in place, it’s almost always heavily obfuscated.”

What If You Could Choose?

When social media platforms like Facebook, Twitter, and YouTube, which is owned by Google, cook up a complex algorithm to curate your feed, they’re making the choices they think will push you to spend more time on their platform, experts say, boosting their ad revenue. You don’t get to choose which posts or videos the platform shows you first, or in what order.

But what if you did have a say? Jack Dorsey, Twitter’s CEO, recently suggested that Twitter users may one day get to pick from a menu of algorithms.

In this imagined marketplace, you might pick an algorithm that heavily features political debate, or one that hides it entirely. Or maybe you’d tell the system to ignore the posts or videos you’ve clicked on in the past when choosing what else to show you now. You’d get to choose what you value, rather than relying on the company’s preferences.

Queuing up a new sorting algorithm could give you a radically different online experience. Take the system that a group of researchers at Finland’s Aalto University recently proposed: To balance out the polarizing nature of today’s social media news feeds, which have been shown to sort people into like-minded bubbles, they created a system that would go out of its way to display a diversity of political views.
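To make the idea concrete, here is a minimal, hypothetical sketch in Python of what a user-selectable ranking rule might look like. The post fields, the engagement score, the “leaning” label, and the simple interleaving rule are all illustrative assumptions for this article; this is not the Aalto researchers’ published method or any platform’s real code.

```python
# Hypothetical sketch: a feed that lets the user choose the ranking rule.
# All fields, labels, and rules below are illustrative assumptions, not any
# platform's real system or the Aalto researchers' published algorithm.
from dataclasses import dataclass
from itertools import zip_longest

@dataclass
class Post:
    text: str
    engagement: float   # predicted clicks/likes (assumed to exist)
    is_political: bool  # assumed output of a content classifier
    leaning: str        # "left", "right", or "none" (assumed label)
    timestamp: float

def rank_engagement(posts):
    """Default: show whatever is predicted to keep you scrolling longest."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def rank_chronological(posts):
    """Newest first, ignoring engagement predictions entirely."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def rank_less_politics(posts):
    """Push political posts to the bottom of the feed."""
    return sorted(posts, key=lambda p: (p.is_political, -p.engagement))

def rank_diverse_views(posts):
    """Crude balance rule: alternate posts from different political leanings."""
    left = [p for p in posts if p.leaning == "left"]
    right = [p for p in posts if p.leaning == "right"]
    other = [p for p in posts if p.leaning == "none"]
    interleaved = [p for pair in zip_longest(left, right) for p in pair if p]
    return interleaved + other

RANKERS = {
    "engagement": rank_engagement,
    "chronological": rank_chronological,
    "less_politics": rank_less_politics,
    "diverse_views": rank_diverse_views,
}

def build_feed(posts, user_choice="engagement"):
    """The user, not the platform, picks which rule orders the feed."""
    return RANKERS[user_choice](posts)
```

The point of the sketch is only that the ordering rule is a swappable piece of the system. Which rule actually runs, and what it optimizes for, is exactly the detail that today’s platforms keep to themselves.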

“We believe that users should have more control on their feed, and there should be more transparency in the ranking and filtering methods employed by social media,” says Cigdem Aslay, one of the researchers behind the algorithm.

Opaque Rules Frustrate Users

The evaluation by Ranking Digital Rights found that powerful American technology companies are cagey with even the most basic details about their algorithmic systems.

“Across the board, companies disclosed almost nothing about how they use algorithms, and even less about how they develop and train these systems,” the researchers wrote in the report, which was published Wednesday.

This is the first year that the nonprofit’s annual rankings, which now evaluate 26 companies in more than a dozen countries, have considered how tech companies build their algorithms and explain how they work. The index rates companies’ public positions on their privacy practices and their commitment to freedom of expression. The American cohort includes Amazon, Apple, Facebook, Google, Microsoft, Twitter, and others.

“We’re asking companies to tell us how their machines work, and they’re not telling us,” says Amy Brouillette, research and editorial manager at RDR. That’s a serious problem, Brouillette and other researchers say, because the algorithms can foster the spread of extremism and misinformation.

YouTube has long been accused of pushing some users toward incendiary, radicalizing videos. One study from Swiss and Brazilian researchers that was published in 2020 found that YouTube users who start out commenting on relatively mild anti-establishment content often end up leaving comments on extreme far-right videos later on, suggesting that the platform recommended more extreme videos over time.

Google says it disagrees with the methodology of the study, which did not examine the recommendations delivered to logged-in users. “It’s hard to draw broad conclusions from research that does not take into account the personalized nature of recommendations,” a spokesperson says.

Facebook’s largely automated content moderation systems can make mistakes that are hard to predict and even harder to understand. For example, the New York Times reported that Facebook didn’t allow several companies to post ads for clothing designed especially for people with disabilities, because an algorithm thought that the ads were selling medical devices, which the platform forbids. Meanwhile, CR found last year that Facebook’s automated system approved several ads with dangerous coronavirus misinformation that went against the company’s own rules.

Social media companies use a combination of human eyes and algorithmic systems to decide which posts to take down for violating their rules. For many users, those decisions are hard to decipher—like when users who discuss racism find that their accounts are flagged for hate speech, as USA Today and others have reported. Often, you can’t even tell if a decision was made by a person or a machine.

“You don’t know if your content is being flagged; when your stuff is removed, you often don’t know why,” says Northeastern’s Wilson. “The rules are opaque and ever-changing.”

Limited Options for Consumers

Companies that won’t discuss their algorithms in detail often say they’re protecting valuable trade secrets, Wilson says, or that revealing too much about their systems would make it easier for someone seeking to do harm to game them.

Critics say those aren’t good excuses. “They don’t have to show us the code,” Brouillette says. “We’re not asking for the special sauce.” Instead, she says, consumers deserve to know: “What are the ingredients that go into it? How much control do users have over them?”

For now, you have very few options for tweaking the algorithms that govern your online life. Facebook and Twitter allow you to sort your feed chronologically, rather than relying on the platform to arrange it for you. But neither product lets you make chronological sorting the default.

Facebook told CR that it gives users plenty of tools to control what they see, including setting individual friends and pages as “favorites” or unfollowing or blocking them. Users can also click “Why am I seeing this post?” on a specific item they’re curious about. (When I tried it on the first post that popped up in my feed, Facebook just said that “other factors” were at play.)

If you don’t like the way one product works, you don’t have many alternatives. There are so few large social media companies that you can’t really find an equivalent to a Facebook or a Twitter if you choose to abandon it.

Brouillette hopes that companies will respond to public pressure to make it easier for consumers to understand how their products work. Some of these opaque algorithms are still relatively new, which might explain why they’re rarely discussed, she says. “Anything that gives users more control over the content they see would be a step forward.”
