Twitter finds its own algorithms amplify ‘political right’ but it doesn’t yet know why

Twitter amplified right-wing political posts more than left-wing ones in six out of seven countries (PA)

Twitter’s algorithm amplifies right-wing news outlets more than others, but the social network is not exactly sure why, according to internal research posted on its website on Thursday.

Since April, the company has examined if, and how, its algorithm that recommends content to users amplifies political content.

In six out of seven countries – all but Germany – tweets posted by accounts from the political right received more algorithmic amplification than those from the political left when studied as a group.

The first part of the study examined millions of tweets posted by elected officials, such as MPs, in seven countries – Canada, France, Germany, Japan, Spain, the UK, and the US – between 1 April and 15 August 2020.

The company used this data to test whether these tweets are amplified more on the algorithmically ordered “timeline” of tweets than on the reverse-chronological feed, and whether results varied within a political party.
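The comparison described above can be illustrated with a minimal sketch: for each group of accounts, contrast the reach their tweets get on the algorithmic timeline with the reach on the reverse-chronological feed. All names and figures below are hypothetical illustrations; Twitter has not published its exact metric or data here.

```python
# Hypothetical sketch of a group-level amplification comparison.
# A ratio above 1.0 would mean the algorithmic timeline gives the
# group's tweets more exposure than the chronological baseline.

def amplification_ratio(algorithmic_impressions: int,
                        chronological_impressions: int) -> float:
    """Relative amplification of a group's tweets (illustrative metric)."""
    return algorithmic_impressions / chronological_impressions

# Illustrative aggregate impression counts for two groups of accounts
groups = {
    "party_a": {"algorithmic": 1_200_000, "chronological": 1_000_000},
    "party_b": {"algorithmic": 1_500_000, "chronological": 1_000_000},
}

for name, counts in groups.items():
    ratio = amplification_ratio(counts["algorithmic"],
                                counts["chronological"])
    print(f"{name}: amplification ratio = {ratio:.2f}")
```

Note that a group-level ratio like this says nothing about why one group's ratio is higher, which matches the study's own caveat that it shows bias in amplification but not its cause.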

Twitter also studied whether its recommendation algorithms amplify political content from news outlets.

To do this, the company also analysed hundreds of millions of tweets containing links to news stories shared by people on Twitter between April and August last year.

The researchers found that right-leaning news outlets receive greater algorithmic amplification on Twitter than left-leaning news outlets. The initial results show only the bias in amplification, not what causes it.

Rumman Chowdhury, the head of Twitter’s machine learning, ethics, transparency and accountability team, called it “the what, not the why” in an interview with tech news website Protocol.

Since 2016, people on Twitter have been able to choose between viewing algorithmically-ordered posts first in the Home timeline, or viewing the most recent tweets in reverse-chronological order.

Twitter found that tweets about political content from elected officials, regardless of party or whether the party is in power, are algorithmically amplified on the Home timeline when compared to political content on the reverse-chronological timeline.

The first setting displays a stream of tweets from accounts that the account holder has chosen to follow, as well as recommendations of other content that Twitter thinks the person might be interested in based on their existing list of people that they follow.

Group effects did not translate to individual effects, Twitter said, since party affiliation or ideology is not a factor that the network’s systems consider when recommending content to users.

Therefore, “two individuals in the same political party would not necessarily see the same amplification”, Twitter said.

Twitter wrote on its blog: “As a result, what an individual sees on their home timeline is a function of how they interact with the algorithmic system, as well as how the system is designed.”

It added that it hopes its findings will “contribute to an evidence-based discussion of the role these algorithms play in shaping political content consumption on the internet.”

Twitter argues that “algorithmic amplification is not problematic by default” as “all algorithms amplify”, but that it would be an issue if there is “preferential treatment as a function of how the algorithm is constructed versus the interactions people have with it.”

The company said it is willing to share the aggregated datasets it used in the study to third-party researchers “upon request”.
