There is no "magic algorithm" for identifying extremist content, company says.
Twitter said Thursday it has shut down 235,000 accounts linked to violent extremism in the last six months alone. That brings the total number of terminated Twitter accounts associated with terrorism to 360,000 since mid-2015.

San Francisco-based Twitter, which had come under fire for allegedly not doing enough to crack down on extremist speech on its site, said it condemns acts of terrorism and that it is "committed to eliminating the promotion of violence or terrorism on our platform."

The announcement on Twitter's blog comes as lawmakers mull legislation demanding that Internet companies report suspected terrorist activities to the government. It also comes days after Twitter fended off a lawsuit (PDF) accusing the company of providing material support to terrorists and of being a "tool for spreading extremist propaganda." Twitter's successful defense rested, among other things, on the argument that the Communications Decency Act shields the company from legal liability for content posted on its site.

According to Twitter:

Daily suspensions are up over 80 percent since last year, with spikes in suspensions immediately following terrorist attacks. Our response time for suspending reported accounts, the amount of time these accounts are on Twitter, and the number of followers they accumulate have all decreased dramatically. We have also made progress in disrupting the ability of those suspended to immediately return to the platform. We have expanded the teams that review reports around the clock, along with their tools and language capabilities. We also collaborate with other social platforms, sharing information and best practices for identifying terrorist content.
Here's how Twitter says it identifies and eliminates extremist accounts:

As we mentioned in February, and other companies and experts have also noted, there is no one “magic algorithm” for identifying terrorist content on the Internet. But we continue to utilize other forms of technology, like proprietary spam-fighting tools, to supplement reports from our users and help identify repeat account abuse. In fact, over the past six months these tools have helped us to automatically identify more than one third of the accounts we ultimately suspended for promoting terrorism.