Facebook has suffered a security breach that exposed personal profiles of moderators to would-be terrorists they were trying to ban from the platform

Facebook has suffered a major security breach, exposing the personal details of over 1,000 content moderators to people known to support terrorist organisations.

A security flaw in the moderation software exposed the personal Facebook profiles of employees across 22 departments who routinely review and remove inappropriate posts, images and videos from the social network, including pornography, hate speech and extremist propaganda.

Of the 1,000 moderators affected, about 40 work for Facebook's counter-terrorism unit, based at the company's European headquarters in Dublin, Ireland. As such, their roles are particularly sensitive.

Many of these moderators routinely have to shut down private Facebook groups featuring inappropriate content, and they are required to log in using their personal profiles to do so.

The software bug, which was active for a month, caused the personal profiles of moderators to show up in the activity logs for the groups they had shut down.

Facebook discovered the security breach in early November 2016 and took two weeks to fix it. However, the flaw had also retroactively exposed activity logs dating back to August 2016, revealing which moderators had banned which groups.

As a result of the bug, Facebook identified that seven members of an Egypt-based group supporting Hamas and Isis had viewed the personal profiles of six of the moderators in the counter-terrorism unit.

Moderators living in fear for their lives

"You come in every morning and just look at beheadings, people getting butchered, stoned, executed," one of the six affected moderators told the Guardian. "They should have let us use fake profiles. They never warned us that something like this could happen."

The moderator is an Iraqi-born Irish citizen who worked for a third-party contractor called CPL Recruitment, which provided hundreds of "community operations analysts" to Facebook. He was chosen for the anti-terrorism unit because of his Arabic language skills.

The pay was €13 ($15) an hour, and moderators were required to sift through deeply upsetting content while quickly deciding whether the people sharing extremist material were condemning or encouraging violent acts.

Although Facebook offered to install home alarm systems for the six employees, provide transport to and from work, and pay for counselling, the moderator feared he would not be safe. He is living in hiding and is now seeking compensation from Facebook and CPL for the psychological harm he suffered as a result of the leak.

The timing of the disclosure is notable: news of the data breach has come to light less than 24 hours after the social network announced that it is using both artificial intelligence and a team of 150 human moderators and counter-terrorism experts to combat extremist content.

"We care deeply about keeping everyone who works for Facebook safe. Last year, we learned that the names of certain people who work for Facebook to enforce our policies could have been viewed by a specific set of Group admins within their admin activity log. As soon as we learned about this issue, we fixed it and began a thorough investigation to learn as much as possible about what happened. This included determining exactly which names were possibly viewed and by whom, as well as an assessment of the risk to the affected person," a Facebook spokesperson told IBTimes UK.

"Our investigation found that only a small fraction of the names were likely viewed, and we never had evidence of any threat to the people impacted or their families as a result of this matter. Even so, we contacted each of them individually to offer support, answer their questions, and take meaningful steps to ensure their safety.

"In addition to communicating with the affected people and the full teams that work in these parts of the company, we have continued to share details with them about a series of technical and process improvements we've made to our internal tools to better detect and prevent these types of issues from occurring."