Facebook Accidentally Exposes Moderators to Suspected Terrorists


A bug in Facebook’s software exposed the identities of moderators to potential terrorists, the Guardian has revealed.

Discovered last year, the flaw affected more than 1,000 workers across 22 departments who had taken down content including sexual material, offensive speech, and, critically, terrorist propaganda. The bug allowed moderators’ personal profiles to appear in activity logs visible to the administrators of Facebook groups if another admin of that group had had their profile removed for breaching the terms of service.

The issue was first noticed by Facebook moderators after they began receiving friend requests from people they were monitoring on the social network. When Facebook itself picked up on the leak in November of last year, it brought together a “task force of data scientists, community operations and security investigators” and warned everyone it believed to be affected. “We care deeply about keeping everyone who works for Facebook safe,” a spokesman said. “As soon as we learned about the issue, we fixed it and began a thorough investigation to learn as much as possible about what happened.” The bug was not fixed until November 16, and it exposed not only moderators who were working during that period but also those who had censored accounts as far back as August 2016.

Forty of those at risk were part of an anti-terrorism unit based at Facebook’s European headquarters in Dublin. Of those, six were judged to be “high priority” victims of the error, with potential terrorists having apparently viewed their accounts. Facebook’s head of global investigations, Craig D’Souza, spoke directly with those six individuals, assuring them there was a “good chance” that nothing would come of it.

“Keep in mind that when the person sees your name on the list, it was in their activity log, which contains a lot of information… there is a good chance that they associate you with another admin of the group or a hacker,” D’Souza told them, according to chat logs. One moderator, who ended up fleeing Ireland, replied that while he understood nothing might come of it, he wasn’t “waiting for a pipe bomb to be mailed to my address until Facebook does something about it.”

The moderator in question, an Iraqi-born Irish citizen, spoke to the Guardian about the incident. Fearing for his life, he quit his job and moved to Eastern Europe for five months. During that time he kept a very low profile, forced to draw on his savings to support himself; he spent his time keeping fit and liaising with his lawyer and Dublin police. “It was getting too dangerous to stay in Dublin,” he said. “The only reason we’re in Ireland was to escape terrorism and threats,” he added, explaining that his father had been kidnapped and his uncle murdered when he was in Iraq. He told the Guardian that the others designated high risk had had their accounts viewed by people linked to ISIS, Hezbollah, and other recognized terrorist groups, such as the Kurdistan Workers’ Party:

When you come from a war zone and you have people like that knowing your family name, you know that people get butchered for that. The punishment from ISIS for working in counter-terrorism is beheading. All they’d need to do is tell someone who is radical here.

In the end, he was forced to move back to Dublin after running out of money, but he still worries for his safety. “I don’t have a job, I have anxiety and I’m on antidepressants,” he said. “I can’t walk anywhere without looking back.”

The moderator was one of many contracted out to Facebook by the global outsourcing company Cpl Recruitment. Both companies offered counselling, with Facebook’s provided through its employee assistance program. However, the moderator felt that staff hired by Cpl were treated like “second-class citizens” compared with standard Facebook employees. He has filed a legal claim against both Facebook and Cpl, seeking compensation for the psychological damage he has suffered as a result of the bug.

One simple rule set out for the moderators arguably allowed the situation to escalate as it did, putting employees at serious risk: those monitoring Facebook were required to log into the moderation system with their personal profiles. “They should have let us use fake profiles,” the moderator said. “They never warned us that something could happen.”

Facebook released a statement to the press, saying that its investigation would determine “exactly which names were possibly viewed and by whom, as well as an assessment of the risk to the affected person.” It went on to say that Facebook had contacted everyone affected in order to “offer support, answer their questions and take meaningful steps to ensure their safety… Our investigation found that only a small fraction of the names were likely viewed, and we never had evidence of any threat to the people impacted or their families as a result of this matter.” The company also confirmed that, as a result of the leak, it would test administrative accounts that are not linked to moderators’ personal profiles.

Jack Hadfield is a student at the University of Warwick and a regular contributor to Breitbart Tech. You can like his page on Facebook and follow him on Twitter @ToryBastard_ or on Gab @JH.
