YouTube Finds Machines Really Suck At Moderating Content

Aditya Raghunath
·2 mins read

Alphabet Inc’s (NASDAQ: GOOGL) (NASDAQ: GOOG) video-sharing platform YouTube is shifting back to human moderators for monitoring online content after months of increased reliance on artificial intelligence, according to the Financial Times.

What Happened: It may seem that technological innovation would improve the quality of content moderation on social media platforms. However, algorithm-based, AI-driven moderation has proven significantly more aggressive in removing content than human review.

YouTube Chief Product Officer Neal Mohan remarked that although "YouTube's machines" could detect potentially harmful videos, they were not capable of making judgments as sound as those of human moderators.

The human evaluators “make decisions that tend to be more nuanced, especially in areas like hate speech, or medical misinformation or harassment,” Mohan told FT.

With the onset of the Covid-19 pandemic forcing employees to stay home, YouTube deployed its machine learning algorithms to track harmful content. This led to over 11 million videos being taken down between April and June this year. FT reported that almost 160,000 videos were reinstated after an appeal.

Why It Matters: Alphabet, Facebook Inc (NASDAQ: FB), and Twitter Inc (NYSE: TWTR) warned in March of the errors that AI could make in moderating content, as a majority of their workers stayed home due to the coronavirus pandemic, Reuters reported at the time.

Advertisement revenues are a significant source of income for social media platforms. Having strict content moderation parameters could drive advertisers away and directly impact this revenue source. 

In 2017, several major brands pulled their ad campaigns from YouTube after their ads were shown alongside objectionable content, as reported by the Guardian. At the time, YouTube addressed the issue by changing some of its policies and giving advertisers a greater degree of control through new tools.

Price Action: After a 2.42% drop during regular trading hours, Alphabet's Class A shares slipped further in Friday's extended session to close at $1,451.09. Class C shares fell 2.38% during the trading session and another 0.55% after hours to close at $1,452.

© 2020 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.