Lawmaker accuses Twitter, Google, and Facebook of "commercial prostitution."

Facebook has claimed that it has tweaked its community standards review system—which allows users to report abusive, offensive, and illegal images and posts—in light of a BBC investigation that highlighted how easily obscene material could be found on the site.

In a clash with MPs, the company's UK policy director Simon Milner told the home affairs committee chair Yvette Cooper that the images reported by the BBC were "rather innocent" but added that comments below the pictures were "horrible."

Facebook's community standards team—made up of thousands of people based in Dublin, Texas, California, and Hyderabad—didn't scrutinise, in detail, reports made via the company's review tool because, Milner said, it was the comments rather than the images that were abhorrent. As a result, the system failed to flag the abusive content.

Milner, who appeared alongside representatives from Google and Twitter during a grilling from MPs on the topic of hate crime, said: "There was no illegal child sexual abuse material that he [the BBC reporter] had found, instead it was rather innocent pictures of children… the child at the swimming pool or on the beach with horrible comments underneath them."

He added that Facebook had been "very clear, 'send us the links'… instead, the journalist sent us a document that contained screenshots… one of those screenshots—not 82—one was an image of child sexual abuse as determined by our expert team and we were clearly under an obligation to let the authorities know about that."

Ars reported last week on the BBC's investigation, which found that Facebook had failed to take down 82 of 100 images after the broadcaster flagged them to the site. But the news organisation somewhat fudged its probe of the free content ad network by subsequently sending screenshots, rather than links, to Facebook—potentially in breach of the UK's current legal guidelines.

However, Facebook has now admitted that there was a problem with its system for reporting abuse. Milner told the committee:

It was also clear that the journalist had identified something where our system was not working, there was something going wrong… it was that the reviewers were not always being able to see the comment alongside the picture, so the picture was perfectly innocent and therefore he was getting a message back saying 'this picture does not breach our terms.' But it was the connection between the picture and the comments and something wasn’t working in our review tool, we have now fixed that problem.

He claimed that Facebook had "addressed and reviewed all of the content we were told about" in relation to the BBC's report. But when challenged by Cooper about whether all of the images had since been removed, he rowed back on that comment, saying: "I can't hand on heart tell you they're no longer there."

However, Cooper said that the images had still been live on Facebook two days earlier. "We're doing a lot already and we can still do more," Milner told the committee later in the hearing.

MP accuses Twitter, Google, and Facebook of "commercial prostitution"

At one point during the heated exchange, Labour MP David Winnick asked Google's Peter Barron, Twitter's Nick Pickles, and Milner if they "have no shame" for failing to take a tougher stance against online hate crime. "I would be ashamed, absolutely ashamed, to earn my money in the way you three do," he said, before accusing the companies of "commercial prostitution."

Cooper had opened the spiky hearing with a heavy rebuke of Google, Twitter, and Facebook's abuse and hate crime reporting mechanisms. She criticised Google for failing to actively search for and then remove examples of YouTube videos that had apparently been uploaded by "proscribed terrorist organisations." When pressed on Google's policy, Barron said: "we react to takedowns."

Google—like Facebook and Twitter—doesn't want to be seen as a publisher of the content that is found on its services, because it would then be exposed to libel laws. It's one of the reasons that the trio of US companies rely so heavily on their users to report offensive material.

Pickles, when quizzed by Cooper on a vile tweet aimed at German chancellor Angela Merkel that was still live despite having been reported via Twitter's tool, admitted that the firm's system "isn't effective enough." He added that the company was deploying new technology that would be "a step change in how we deal with abuse."

"You all have a terrible reputation," Cooper said. "Surely, you should be able to do a better job in order to keep your users safe."

The hearing came on the same day that the German government threatened to bring in multi-million euro fines against the likes of Facebook, Twitter, and YouTube if they fail to adequately muzzle hate speech on their networks.