Undercover Facebook Moderator Claims Company Failed to Remove Child Abuse

The Associated Press

A reporter went undercover as a Facebook moderator and discovered that the company fails to remove examples of child abuse and violence.

The reporter spent time undercover at CPL Resources, a contractor that Facebook hires for content moderation, and revealed instances of child abuse that were allowed to remain on the platform for years.

“Moderators were given three options when reviewing a queue of material: Ignore, delete, or mark as disturbing. The latter means it remains on Facebook, but places a restriction on who is able to view the content,” reported Business Insider. “The reporter found that shocking examples of child abuse, racism, and violence were allowed to remain on Facebook.”

One example was a 2012 video of a man beating a little boy, which was reported to Facebook but subsequently allowed to stay on the platform.

“In its first two days on Facebook, the video was shared 44,000 times, and it was still up years later when Channel 4 investigated,” Business Insider explained, adding that the video remained online despite Facebook’s insistence that it should have been removed.

Another post which was allowed to remain on the platform was “a meme of a little girl having her head held underwater with the caption ‘when your daughter’s first crush is a little negro boy.'”

This month, a former Facebook censor claimed she had become “desensitized” to graphic content on the platform, which included child porn.

One example which the censor repeatedly saw was a video featuring two children between the ages of nine and twelve, who were “standing facing each other, wearing nothing below the waist, and touching each other.”

Despite the graphic nature of the video, however, the accounts sharing it often went unpunished.

“It would go away and come back, it would appear at multiple times of the day. Each time the user location would be different. One day shared from Pakistan, another day the US. It’s kinda hard to track down the initial source,” the censor explained, adding that she was “disturbed” when Facebook told her not to remove the accounts sharing such material.

“If the user’s account was less than 30 days old we would deactivate the account as a fake account,” she declared. “If the account was older than 30 days we would simply remove the content and leave the account active.”

Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington, or like his page at Facebook.

 
