Facebook's closely guarded guidelines and rules for monitoring controversial content such as violence, pornography, terrorism, racism and self-harm have been revealed in a Guardian investigation on Sunday (21 May). The Guardian reviewed over 100 internal documents, training manuals, spreadsheets and flowcharts exposing the inner workings of the social media giant and how it decides what can be shared on its platform.

The trove of documents, dubbed the Facebook Files, touches on various sensitive topics, such as revenge porn, violent content, hateful speech, self-harm and animal abuse, for which Facebook has drawn severe criticism in recent months.

"Keeping people on Facebook safe is the most important thing we do," Facebook's head of global policy management Monika Bickert said in a statement to Recode. "We work hard to make Facebook as safe as possible and often difficult questions and getting it right is something we take very seriously.

"In addition to investing in more people, we're also building better tools to keep our community safe. We're going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help."

Here are some of the key takeaways from Facebook's leaked guidelines:

Videos of graphic violence may not be automatically deleted

Videos that show graphic violence, such as violent deaths or self-harm, are marked as disturbing but may not be automatically deleted, because they can "help create awareness for self-harm afflictions and mental illness or war crimes and other important issues".

"For videos, we think minors need protection and adults need a choice. We mark as 'disturbing' videos of the violent deaths of humans," Facebook's files read.

Are threats of violence credible?

The popular social media network aims to allow "as much speech as possible" but says it does draw the line at "content that could credibly cause real world harm". Facebook said people often turn to its platform to "express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways".

According to Facebook, "violent language is most often not credible until specificity of language gives us reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design."

As per the guidelines, remarks such as "I hope someone kills you," "Kick a person with red hair" or "To snap a b**ch's neck, make sure to apply all your pressure to the middle of her throat" are not deemed credible threats.

Hundreds of internal documents from Facebook reveal the policies that guide moderators on what content users can and cannot post on the site. (Reuters/Philippe Wojazer)

Threats against President Donald Trump must be deleted

However, language such as "Someone shoot Trump" must be deleted because, as a head of state, President Donald Trump falls into a protected category. Such threats must be removed even if they do not appear credible.

Non-sexual child or animal abuse may not be automatically scrubbed

When addressing content that features child abuse, Facebook's policies state: "We do not action photos of child abuse. We mark as 'disturbing' videos of child abuse. We remove imagery of child abuse if shared with sadism and celebration."

However, Facebook does not immediately delete evidence of non-sexual child abuse "to allow for the child to be identified and rescued", though it does add protections "to shield the audience".

In terms of animal abuse, photos and videos can be shared on the site "for awareness". Photos of animal mutilations and videos of the repeated beating or torture of a living animal may be marked as "disturbing".

People are allowed to livestream attempts to self-harm on the site

Facebook allows users to livestream attempts to self-harm because it does not want to "censor or punish people in distress who are attempting suicide", the files read. However, the video may be taken down "once there's no longer an opportunity to help the person".

"We also need to consider newsworthiness, and there may be particular moments or public events that are part of a broader public conversation that warrant leaving up," Facebook's policies read.

Revenge porn, nudity and sexual activity

Facebook's policies on sexual content are the most complex and confusing, according to The Guardian.

Sharing images of someone without their permission can be deemed "revenge porn" if the imagery was "produced in a private setting", the person in the image is "nude, near nude or sexually active", and the lack of consent is confirmed either by a vengeful context or by independent sources such as media coverage.

In terms of nudity, "handmade" artwork that shows nudity and sexual activity may be allowed, but digitally made art that shows nudity is not allowed on the platform. Footage of abortions is also allowed on the site, as long as it does not include nudity.

Facebook's policies also take "newsworthiness" into account and allow images of adult and child nudity in the context of the Holocaust.