"We don't want Facebook to be used for any terrorist activity whatsoever," says FB.

Facebook has admitted that "AI can't catch everything," and it remains heavily dependent on human moderators to flush out terrorist posts on the free content ad network.

In a blog post, which comes days after the UK and France signalled a crackdown on big tech firms that fail to act against the sharing of extremist content on their sites, Facebook said it was, for the first time, talking publicly about the methods it employs to try to combat terrorism.

It was keen to highlight how artificial intelligence was being used to attempt to limit the proliferation of such content on a site that has close to two billion users worldwide.

But AI, with its wonky slant on social context, serves only as an add-on to the tireless work carried out by Facebook's 4,500-and-counting moderators, who are tasked with mopping up vile posts on the network.

The company's boss, Mark Zuckerberg, recently confirmed plans to grow that team by 3,000 over the next year. Notably, Facebook didn't reveal whether those moderators would be employees or outsourced contractors.

Facebook rattled off a number of areas where it says AI could help it to squish extremist posts on the network.

Among other things, it uses image matching to determine whether "terrorism" photos or videos have been posted before; the system should then prevent other accounts from uploading them again. It also employs language-understanding software that relies on "text-based signals" to try to determine whether a post expresses support for Daesh or Al Qaeda.
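
Facebook hasn't published the internals of its matcher, but a common building block for this kind of system is a perceptual hash: each image is reduced to a compact fingerprint, and fingerprints of new uploads are compared against those of previously removed content. The sketch below is a minimal illustration of that idea only (it doesn't cover the text-signal half), using a simple difference hash (dHash) and a Hamming-distance threshold; the `banned_hashes` store and its contents are hypothetical, and Facebook's production system will be far more sophisticated.

```python
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Difference hash: shrink to greyscale, compare horizontally adjacent pixels."""
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS
    )
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical store of fingerprints from previously removed images.
banned_hashes = {0x3A5F9C0E11D2B47C}

def is_known_banned(image_path: str, threshold: int = 5) -> bool:
    """Flag an upload whose hash sits within `threshold` bits of a banned hash."""
    h = dhash(image_path)
    return any(hamming(h, b) <= threshold for b in banned_hashes)
```

The appeal of this approach over exact byte matching is that small edits such as re-encoding, resizing, or mild cropping leave the fingerprint mostly intact, so re-uploads of known content still land within the distance threshold.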

Facebook said its system is getting better at "detecting new fake accounts created by repeat offenders." The company added:

Through this work, we’ve been able to dramatically reduce the time period that terrorist recidivist accounts are on Facebook. This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too. We’re constantly identifying new ways that terrorist actors try to circumvent our systems—and we update our tactics accordingly.
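
Facebook didn't say how it links a fresh signup to a previously banned user. The general shape of such systems, though, is to score a new account's signals against those of removed accounts and queue close matches for review. Here is a minimal sketch of that idea, with entirely hypothetical signals, weights, and threshold:

```python
from dataclasses import dataclass, field
from difflib import SequenceMatcher

@dataclass
class Account:
    name: str
    signup_ip: str
    device_ids: set = field(default_factory=set)
    friend_ids: set = field(default_factory=set)

def recidivism_score(new: Account, banned: Account) -> float:
    """Crude similarity score between a new signup and a banned account.

    The signals and weights here are invented for illustration; Facebook
    hasn't disclosed what it actually uses.
    """
    name_sim = SequenceMatcher(None, new.name.lower(), banned.name.lower()).ratio()
    same_ip = 1.0 if new.signup_ip == banned.signup_ip else 0.0
    shared_devices = len(new.device_ids & banned.device_ids)
    shared_friends = len(new.friend_ids & banned.friend_ids)
    return (0.3 * name_sim
            + 0.3 * same_ip
            + 0.2 * min(shared_devices, 1)
            + 0.2 * min(shared_friends / 10, 1.0))

def flag_for_review(new: Account, banned_accounts: list, threshold: float = 0.6):
    """Return banned accounts the new signup resembles above the threshold."""
    return [b for b in banned_accounts if recidivism_score(new, b) >= threshold]
```

The adversarial dynamic Facebook describes maps onto exactly this kind of scoring: as repeat offenders learn which signals trip the detector, those signals lose value and new ones have to be added.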

It also took the opportunity to justify its decision to share WhatsApp and Instagram users' phone numbers, and selected other data, with Facebook, a U-turn that upset plenty of folk.

"Because we don’t want terrorists to have a place anywhere in the family of Facebook apps, we have begun work on systems to enable us to take action against terrorist accounts across all our platforms, including WhatsApp and Instagram," it said. "Given the limited data some of our apps collect as part of their service, the ability to share data across the whole family is indispensable to our efforts to keep all our platforms safe."

Nonetheless, AI merely plays a supporting role to the human moderators whose job it is to deal with endless reports from users flagging up dodgy accounts and hateful content.

On the heated topic of end-to-end encryption, which is baked into WhatsApp, Facebook said it provides "the information we can in response to valid law enforcement requests, consistent with applicable law and our policies." It cannot, however, read the contents of encrypted messages.
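
That limitation is by design. WhatsApp's end-to-end encryption is built on the Signal protocol; the toy sketch below uses PyNaCl's much simpler `Box` construction purely to illustrate the property at issue: messages are encrypted with keys held only on the endpoints, so the server relaying the ciphertext has nothing to decrypt it with.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The server relays `ciphertext` but holds no private key, so it sees only
# opaque bytes. Only Bob (or Alice) can open the message:
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

Hence the careful wording: Facebook can hand over metadata it does hold, but the message contents themselves are recoverable only by the people at either end of the conversation.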