Twitter Has Shut Down 125,000 ISIS-Related Accounts Since Last Year

The social network emphasized its work to curb abusive chatter, but acknowledged “there is no ‘magic algorithm’ for identifying terrorist content on the internet.”

Since the middle of last year, Twitter has suspended over 125,000 accounts that the social messaging company believes promoted terrorist activity largely connected to ISIS.

Facing heightened pressure from consumers and U.S. lawmakers, Twitter published a statement on its official blog Friday highlighting its work to curb violent extremism on its network.

“Like most people around the world, we are horrified by the atrocities perpetrated by extremist groups,” the post begins. “We condemn the use of Twitter to promote terrorism and the Twitter Rules make it clear that this type of behavior, or any violent threat, is not permitted on our service.”

For some citizens fearful of looming violence and the global threat that ISIS poses, Twitter’s efforts to combat extremists have not been enough. The company is currently facing a lawsuit in the U.S. District Court in San Francisco, brought by the family of Lloyd Carl Fields Jr., who was killed in a terrorist attack in Jordan.

"For years, Twitter has knowingly permitted the terrorist group ISIS to use its social network as a tool for spreading extremist propaganda, raising funds and attracting new recruits," the lawsuit said. "Without Twitter, the explosive growth of ISIS over the last few years into the most feared terrorist group in the world would not have been possible."

Twitter’s policing of ISIS content on its network comes at a time when much of the discussion that once lived on the open web has migrated to social platforms. As hosts of these new forums for discussion and the dissemination of information, companies like Facebook and Twitter must walk the line between enabling free speech and policing harassment and incitements to violence. It’s an especially difficult challenge, as Twitter notes in its post. “There is no ‘magic algorithm’ for identifying terrorist content on the internet," the company explained. "Global online platforms are forced to make challenging judgement calls based on very limited information and guidance.”

In the aftermath of the terror attacks in Paris and San Bernardino, California, Silicon Valley has also been pushed to do more to help U.S. law enforcement: to limit the reach of ISIS propaganda and to develop creative solutions to counter the terror group’s messaging. In January, the White House assembled technology and social media heavyweights, including Apple CEO Tim Cook and Facebook Chief Operating Officer Sheryl Sandberg, to discuss novel approaches to counterterrorism.

In Congress, concerned lawmakers have pleaded with tech companies to police abusive speech more aggressively. One measure would compel social media companies to notify law enforcement of suspected “terror activity” in posts and videos. Opponents, however, have argued that there is no clear definition of what “terror activity” actually means, placing these companies in the awkward position of policing free speech and serving as government informants. Twitter says it maintains a zero-tolerance policy for the promotion of terrorism and violence.

“As the nature of the terrorist threat has changed, so has our ongoing work in this area,” the company said. Twitter has increased the size of its teams charged with monitoring and investigating extremist posts, the company said, and has installed spam-fighting tools to identify accounts that are similar to those that violate the company’s policies.

A Twitter spokesperson declined to comment beyond the blog post.
