Misogyny and bullying are generally OK; threats against Trump are not.

Just as the Tories, in their bid to form the next UK government, push for greater policing of free content ad networks, a trove of documents revealing the secret guidelines Facebook's moderators use to deal with posts on everything from child abuse to suicide to terrorist propaganda has been leaked online.

The Guardian published the Facebook files on Sunday night. It reported some disturbing findings about what is and isn't allowed on Facebook, after the newspaper was passed more than 100 internal training manuals, including spreadsheets and flowcharts, on how the Mark Zuckerberg-run company deals with hate speech, violence, self-harm, and a whole range of other issues.

So, it's absolutely fine—under Facebook rules—to leave up a violent, deeply misogynistic post that reads: "To snap a bitch's neck, make sure to apply all your pressure to the middle of the throat." Likewise for comments such as "kick a person with red hair," or "let's beat up fat kids." But one that carries a message such as "Someone shoot Trump" is banned from the site, with moderators being advised to remove such a post. (Threatening a president is a criminal act in the US.)

Facebook tells its moderators that the things people say on the site, which has a userbase of close to two billion worldwide, differ from what they would say to someone in person. The guidelines explain:

We aim to allow as much speech as possible but draw the line at content that could credibly cause real world harm. People commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.
We aim to disrupt potential real world harm caused from people inciting or coordinating harm to other people or property by requiring certain details to be present in order to consider the threat credible. In our experience, it's this detail that helps establish that a threat is more likely to occur.
The leaked rules not only land at the height of election season in the UK, but also come after politicians of all stripes have attacked Google, Twitter, and Facebook for failing to effectively police the content posted on their ad-stuffed services.

Facebook has long insisted that it isn't a publisher, preferring instead to be seen as a benign "platform," much like ISPs consider themselves to be dumb pipes with no responsibility for the content accessed or shared online by their subscribers.

Here in Europe, the multibillion-dollar firm founded by Zuckerberg can fall back on current regulation by citing the EU's E-commerce Directive. Article 15 of the law states that providers acting as a "mere conduit," "caching," or "hosting" service aren't obliged "to monitor the information they transmit or store." The directive also makes it clear that there is no "general obligation actively to seek facts or circumstances indicating illegal activity."

But the question is how long this claim will hold water, especially given that Facebook clearly intervenes on some of the content posted on its site. Troubling, too, is the use of the word "news" in one of its flagship features: the news feed.

Facebook's global policy management boss, Monika Bickert, told the Guardian that it's impossible to police a "diverse global community," in turn implying that moderating English-language posts on a country-by-country basis, taking local laws into account, is out of the question. The company recently said, in the face of growing criticism, that it was "hiring" an extra 3,000 content moderators in an effort to crack down on "hate speech and child exploitation." However, it has refused to say whether the jobs will be in-house or outsourced.

"We have a really diverse global community and people are going to have very different ideas about what is OK to share. No matter where you draw the line there are always going to be some grey areas. For instance, the line between satire and humour and inappropriate content is sometimes very grey. It is very difficult to decide whether some things belong on the site or not," Bickert told the newspaper.

"We feel responsible to our community to keep them safe and we feel very accountable. It’s absolutely our responsibility to keep on top of it. It’s a company commitment. We will continue to invest in proactively keeping the site safe, but we also want to empower people to report to us any content that breaches our standards."

Zuckerberg's free content ad network, which continues to have a very strict policy about nudity on the site, is also dodging the publisher tag for a very expensive reason: if it were to edit and curate the posts on its site, the company would suddenly be exposed to libel laws. Arguably, of course, its algorithm already does this.

Last week, the Tory party published its manifesto ahead of the general election on June 8, in which it pledged to "put a responsibility on industry not to direct users—even unintentionally—to hate speech, pornography, or other sources of harm." Among other things, the Conservatives have vowed to bring in fines for sites that fail to remove illegal content in a timely manner.

It appears to be something of a departure from the party's previous noises about the regulation of free content ad networks.

In the last parliament, Tory culture minister Matt Hancock said: "Social media companies are already subject to a variety of different regulations and we have no plans to amend Ofcom's duties to regulate in this area. Government expects online industries to ensure that they have relevant safeguards and processes in place, including access restrictions, for children and young people who use their services."