Facebook is testing more tools to combat child exploitation

The company suggests not all such content is shared with malicious intent.


Facebook says it is testing more tools to prevent people from sharing content on the social network that exploits children. One is a pop-up message that it plans to display across Facebook apps to people who use search terms linked to child exploitation. It will detail the consequences of viewing such content and provide information on how to get help from offender diversion organizations.

The other measure is focused on the "non-malicious sharing" of "viral, meme child exploitative content." People who share that material will see a safety alert about the harms it can cause. The alert warns that the content violates Facebook's rules and outlines the legal ramifications of sharing it. Facebook will still remove the content and report it to the National Center for Missing and Exploited Children (NCMEC), and it will delete accounts that promote such material.

Elsewhere, Facebook has updated its child safety policies. According to global head of safety Antigone Davis, Facebook will take down "profiles, Pages, groups and Instagram accounts that are dedicated to sharing otherwise innocent images of children with captions, hashtags or comments containing inappropriate signs of affection or commentary about the children depicted in the image." While images or videos that people share may not break Facebook's rules by themselves, Davis notes that "the accompanying text can help us better determine whether the content is sexualizing children and if the associated profile, Page, group or account should be removed."

In addition, the company has updated its reporting menu in more areas of Facebook and Instagram. Users can select an option called "involves a child" under the Nudity and Sexual Activity section. Facebook says material reported in this way will take priority for its content reviewers. It has also adopted Google’s Content Safety API to detect when posts may contain child exploitation and prioritize them for reviewers.

The company has long used various detection systems to root out content that exploits children and to find potentially inappropriate actions with children or possible cases of child grooming. Facebook says it's on the lookout for networks that violate its rules on child exploitation, in a similar way to how it tackles "coordinated inauthentic behavior and dangerous organizations."

Facebook recently conducted a study to better understand the intent behind sharing child exploitative content. It analyzed all of the content it reported to NCMEC in October and November. It found that over 90 percent of the content was similar to or the same as material that was reported in the past. Versions of six videos accounted for over half of the content Facebook flagged to NCMEC in that timeframe.

The company also looked at 150 accounts it reported to NCMEC for sharing child exploitative content. It believes that more than three quarters of these people didn't share the content with the intent of harming a child. Instead, they seemed to do so for reasons like outrage or making jokes in poor taste. Facebook gave "a child’s genitals being bitten by an animal" as an example. The company says that it's continuing to work on understanding intent.
