- President Donald Trump has steadily ratcheted up accusations that Facebook, Google and Twitter content is politically biased against conservatives.
- The attacks on Silicon Valley are working: A recent Pew Research poll found that 64 percent of Republicans think major technology companies as a whole support the views of liberals over conservatives.
- The technology companies have themselves to blame. As long as they keep consumers in the dark about their algorithms, and how and why the public receives certain information, there will be no way to successfully refute these allegations.
We live in the era of the Twitter presidency. With a few short missives before breakfast, President Donald Trump speaks directly to his base and still manages to drive the news cycle of the mainstream media he circumvented. On top of this innovation, the success of his presidential campaign team is credited in part to a brilliant social media strategy.
Indeed, the #MAGA phenomenon is among the all-time most successful online movements. The Trump Republican Party has turned the tables on the Obama-era dominance of Democratic digital politicking.
And yet, paradoxically, here we are: Through the summer and fall, Donald Trump has steadily ratcheted up the debate about whether U.S. tech firms are intentionally and illegitimately suppressing conservative viewpoints. Republican congressional leaders have conducted hearings to grill Silicon Valley chiefs about secret bias in their algorithms. The attorney general announced he will meet with numerous state attorneys general to discuss allegations that tech companies are suppressing free speech.
A recent Pew Research poll found that 85 percent of Republicans and Republican-leaning independents think it likely that social media sites intentionally censor political viewpoints, and 64 percent of Republicans think major technology companies as a whole support the views of liberals over conservatives.
It may be that some cynical politicians are just looking to work the refs harder in a game they are already winning (drawing attention away from the more unsavory things happening in online political media). But clearly, for many social media users this #stopthebias critique rings true.
Trust in these companies is low
Surely, this is all nonsense. If we've learned anything from Facebook, Google and Twitter since the scandals over fake news began, it is that they are laser-focused on their business interests. And it is plainly bad for business to take sides in partisan media. And yet there is a sliver of doubt that keeps the debate alive and even raging. Why is that?
A steady stream of scandals doesn't help the trustworthiness of these companies. Facebook just announced that hackers compromised the accounts of millions of users (including Mark Zuckerberg's). Google didn't bother sending a representative to congressional hearings on the topic; its CEO did recently meet with lawmakers privately, and its chief privacy executive testified on Capitol Hill, but his answers about China and a censored version of the company's search engine were criticized for reverting to a corporate script. Twitter danced around the question of whether Alex Jones deserved to tweet before banning Jones and his Infowars account in early September in response to public outrage.
There is a history here. Back in May 2016 a whistleblower alleged that Facebook's Trending Topics feature was proactively designed to suppress conservative-minded content. Facebook's response to this incident did not dispel the doubts it created among partisans on the right. Ironically, though, the removal of humans from the curation process (and total reliance on algorithms) may well have increased the amount of partisanship on the average news feed.
Since then, all of the major digital media platforms have announced and implemented numerous product changes intended to block organized disinformation operations, reduce the toxicity of the content they serve up, and increase the quality of the time users spend with their services.
Conspiracy thrives when the public is left in the dark
It may be working. It may not. No one is really sure. And this is why the conspiracy theory of social media bias continues to thrive. Opacity. We don't know what's going on behind the curtain. Despite all of the reasons why this accusation is most likely false, there is no way to say for sure. Why do we get the content that we get through our social media feeds and search queries? What drives the ad-targeting engine that serves up our attention to the highest bidder? How are those decisions made? Who (or what) makes them? What is it about the personal data they collect about us that guides those decisions? None of the answers to these questions are knowable outside the companies. The average consumer is in the dark. And in the dark we can imagine a lot of things that probably aren't true but could be true.
The companies hold all the answers to these questions. They have all the data to show us those answers. But they don't do that. And since we cannot examine a representative data set that demonstrates definitively how these digital media platforms work, we rely on our own anecdotal experiences, or those of people we know. On any given day, if we search for "Trump news" on Google, we might indeed get more stories produced by liberal outlets than conservative ones – at least at the time the president made his comments. And on any given day, the Facebook News Feed might serve up a disproportionate share of posts critical of conservative views. Or there might be a viral YouTube video or a trending topic on Twitter that seems partisan.
Is this the left-leaning employees of Silicon Valley companies twisting media for their own benefit? Not likely. But it's easy to see why people jump to that conclusion. And it's not just conservatives. Plenty of liberals blame Facebook for favoring the Trump campaign over the Clinton campaign and tipping the scales in an extremely close election.
The companies make a clear and obvious counterargument. They are not in the business of making value judgments. It's simply not in their commercial interests to do so. They don't want to be the "arbiters of truth"; they don't want to determine what constitutes nudity or profanity and what does not; and they don't want to determine whether certain novel forms of extreme content deserve to be taken offline or not. No matter what they decide, someone will accuse them of bias. That is why they are desperate to transfer the responsibility (and legal liabilities) of making these decisions to someone else. They want to act upon the policies set forth by a third party, and they don't care who that third party might be — whether government or civil society or industry organization — so long as the public thinks that third party is credible and so long as the regulations they set are favorable, meaning the rules favor the industry's desires to innovate, even if that innovation comes at the expense of some public interest.
In the end this won't work. Because the tech companies do decide. They are both publishers and technology platforms. Every day, they sort political information and deliver it to billions of people. And we do not know the rationale for those choices. Until we do, this controversy is here to stay, because these companies are the new masters of public information. While we've never had a perfect system of news production and distribution (far from it), we have always had a pretty clear understanding of how it came to us, who decided, and why. And now we don't. And the gatekeepers are now media monopolists the likes of which would turn Citizen Kane green with envy.
The answer to the problem of #stopthebias is to pull back the curtain on the digital media marketplace. If we are going to have the most valuable companies in the history of the world decide how all of our news and information is sorted and delivered to us, we are going to need radical transparency. We are going to need a new digital social contract that guarantees our rights in this market.
A blueprint for more clarity and oversight
A new report we authored, published by the Harvard Kennedy School and New America, lays out a blueprint for how this might work. The starting point is to require a lot more clarity about how digital media works — how we are targeted with particular kinds of ads, why we are getting more of one kind of content than another and what makes our digital media menu different from our neighbors'. A lot of this information could be given directly to consumers. Even more could be made available to independent researchers and journalists.
There also should be a form of auditing and oversight for the automated systems that control political information. Just like we insist on independent audits of pharmaceuticals before they hit the market and we conduct spot checks of the health and safety standards of restaurants and hotels, we should be able to check and see whether the masters of the digital universe are playing games or playing fair.
And finally, we need a major increase in consumer protection of our privacy rights in this industry. The valuable asset in this market is personal data, which is used to form audience profiles that are in turn sold to advertisers and marketers. What really determines who sees what on digital media isn't a partisan engineer, but an algorithm that predicts, based on a vast amount of personal data, what kind of content is most likely to keep a particular consumer clicking. If that's not the logic we want deciding what political information we get, there is currently very little we can do about it. Give more control over data to individuals, and it will fundamentally change who holds the power of information.
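To make the logic concrete: the kind of engagement-driven ranking described above can be sketched in a few lines of Python. Everything here is invented for illustration — the function names, the toy "topic overlap" score standing in for a real machine-learned prediction — and is not how any actual platform works.

```python
# Hypothetical sketch of engagement-driven feed ranking.
# A toy "predicted engagement" score stands in for the proprietary
# models real platforms train on personal data; the feed is then
# simply sorted by that score, with no notion of partisan intent.

def predicted_engagement(post, user_profile):
    """Toy score: fraction of the post's topics matching the user's inferred interests."""
    overlap = set(post["topics"]) & set(user_profile["interests"])
    return len(overlap) / max(len(post["topics"]), 1)

def rank_feed(posts, user_profile):
    """Order posts by predicted engagement, highest first."""
    return sorted(posts,
                  key=lambda p: predicted_engagement(p, user_profile),
                  reverse=True)

posts = [
    {"id": 1, "topics": ["politics", "economy"]},
    {"id": 2, "topics": ["sports"]},
    {"id": 3, "topics": ["politics"]},
]
user = {"interests": ["politics"]}
feed = rank_feed(posts, user)
print([p["id"] for p in feed])  # posts matching the user's interests rise to the top
```

The point of the sketch is the article's own: nothing in this kind of pipeline checks whether the result skews left or right — the skew, if any, falls out of the data the user has already generated.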
If Silicon Valley's answer to conspiracy theories of political bias continues to be "trust us," the companies will never see the end of it, and ultimately it will hurt their businesses. In the long run, showing consumers what's happening behind the curtain is not only good for the bottom line, it's good for the country.
—By Dipayan Ghosh, Pozen Fellow at Harvard University's Shorenstein Center on media, politics and public policy; and Ben Scott, director of policy and advocacy, Omidyar Network. They are the co-authors of "Digital Deceit II: A Policy Agenda to Fight Disinformation on the Internet."