Why it’s so hard for Facebook and Google to stop violent content from being streamed

Flowers rest at a roadblock as a police officer stands guard near the Linwood mosque, site of one of the mass shootings at two mosques in Christchurch, New Zealand, Saturday, March 16, 2019. No matter how hard Google, Facebook, and other sites work to scrub their services of video of the Christchurch shootings, they'll never be able to contain it. (AP Photo/Mark Baker)

Facebook (FB), Google’s (GOOG, GOOGL) YouTube, and Twitter (TWTR) are facing backlash after a suspected gunman live-streamed a mass shooting on Facebook.

The three companies are working to scrub any trace of the video from their services after dozens of people were killed at two mosques in Christchurch, New Zealand, on Friday, with some of the killings live-streamed.

But the sheer scale of their networks and the incredible number of users they support make that a difficult task, even for the largest technology firms in the world.

Fighting a losing battle

Facebook said it began trying to stop the video from being spread after New Zealand police alerted the company about the stream.

“We're also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware. We will continue working directly with New Zealand Police as their response and investigation continues,” Facebook New Zealand’s Mia Garlick said in a statement.

But the social network hasn't moved fast enough for some critics. "Facebook should be able to take something like [the live stream] down within a minute, within 2 minutes," said Sinan Aral, the David Austin Professor of Management at MIT and a founding partner at Manifest Investment Partners.

Google, meanwhile, says it has taken down thousands of YouTube videos related to the attacks, and is using its smart-detection technology to automatically flag such videos and keep them from being viewed. According to Google, that technology removes 73% of all flagged videos before they're ever viewed, and it removes the majority of violent extremist videos before they attract 10 views.

But with more than 1 billion people on YouTube and more than 2 billion using Facebook, policing so much content is like trying to hold back a tsunami with a screen door. There’s simply no way for Facebook or Google to stop people from uploading violent or exploitative content.

New Zealand Prime Minister Jacinda Ardern addresses the media on March 16, 2019 in Wellington, New Zealand. (Photo by Mark Tantrum/Getty Images)

Short of enabling some kind of buffered delay that allows the companies to review content before it's available to the general public, there's not much either company can do to prevent the live-streaming of violent acts.
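To make that buffered-delay idea concrete, here is a minimal sketch in Python. The `frame_source` iterable, the `is_violent` classifier, and the 30-second review window are all illustrative assumptions, not any platform's real pipeline: frames are held in a queue until they age past the window, giving an automated check (or a human reviewer) a chance to cut the stream before viewers ever see flagged content.

```python
import time
from collections import deque

REVIEW_DELAY_SECONDS = 30.0  # arbitrary review window for this sketch

def delayed_relay(frame_source, is_violent):
    """Yield frames to viewers only after they age past the review window.

    `frame_source` is any iterable of raw frames; `is_violent` is a
    hypothetical classifier returning True for content that must be blocked.
    """
    buffer = deque()  # (arrival_time, frame) pairs awaiting release
    for frame in frame_source:
        if is_violent(frame):
            return  # cut the stream entirely once flagged content appears
        buffer.append((time.monotonic(), frame))
        # Release every buffered frame that has been held for the full delay.
        while buffer and time.monotonic() - buffer[0][0] >= REVIEW_DELAY_SECONDS:
            yield buffer.popleft()[1]
    # The stream ended without being flagged; flush whatever is still buffered.
    for _, frame in buffer:
        yield frame
```

The obvious trade-off is latency: the longer the review window, the less "live" the stream actually is.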

Then there's the fact that one of the internet's greatest strengths, the ability to share and spread information with ease, can be used to quickly disseminate violent content. Individuals can crop portions of the video or alter it slightly to slip past censors and propagate it further. At 4 p.m. on Friday, I was still able to find a version of the video on YouTube that blurred the victims but showed the shooter's face and the attack taking place.

Facebook says it is working to detect visually similar videos, and it is using audio matching to identify portions of the video in cases where the images have been altered to defeat its visual detection systems.
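As an illustration of how "visually similar" matching can work, the sketch below uses perceptual hashing, a common re-upload detection technique; it is an assumption for illustration, not a description of Facebook's actual system. Cropping or re-encoding a file completely changes its cryptographic hash, but the perceptual hash of a downscaled grayscale frame changes by only a few bits, so near-duplicates land within a small Hamming distance of a known-bad reference hash.

```python
from PIL import Image  # pip install Pillow

def average_hash(image: Image.Image, size: int = 8) -> int:
    """Compute a 64-bit average hash: one bit per pixel of an 8x8 grayscale
    thumbnail, set when that pixel is brighter than the thumbnail's mean."""
    pixels = list(image.convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def matches_banned(frame: Image.Image, banned_hashes: list[int],
                   threshold: int = 10) -> bool:
    """Flag a frame whose hash lands near any known-bad reference hash.
    The 10-bit threshold is an illustrative choice, not a tuned value."""
    h = average_hash(frame)
    return any(hamming_distance(h, banned) <= threshold
               for banned in banned_hashes)
```

Audio matching rests on the same idea, comparing compact fingerprints of the soundtrack rather than the frames, which is how a copy can be caught even when its images have been mirrored, recolored, or cropped.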

Regardless of how hard Facebook, Twitter, Google, and other social media sites work to contain the virality of the mosque video, they’re fighting a losing battle. The moment something is available online, it is all but certain to live there forever.

Someone, somewhere has saved it and will be able to upload it again and again with ease, spreading it to others who download and share and upload it at will — if not on Facebook and YouTube, then somewhere else.

Do tech companies bear responsibility for content?

Even though the shooter streamed the attack on Facebook, and others uploaded it to YouTube, neither Facebook nor Google is likely to face legal consequences for the content being on their services.

“It is unlikely that they will face legal issues,” explained Kendra Albert, clinical instructional fellow at the Cyberlaw Clinic at Harvard Law School. “That’s because of a particular piece of law called Section 230 of the Communications Decency Act.”

According to Albert, Section 230 is what ensures that companies like Facebook, YouTube, Twitter, and others can’t be held responsible for what third-party users post on their services.

As Albert explains it, Section 230 is why companies like Yelp aren't liable when users post fraudulent restaurant reviews, and why news sites aren't liable when commenters post inflammatory remarks. Most large tech firms have their own user guidelines that they work to enforce, removing particularly egregious posts, but even those moderation systems can be gamed.

Email Daniel Howley at dhowley@oath.com; follow him on Twitter at @DanielHowley. Follow Yahoo Finance on Facebook, Twitter, Instagram, and LinkedIn.
