
Twitter takes down 235K extremist accounts

Third parties say that Islamic State traffic on Twitter has plummeted by 45% over the past 2 years.

Twitter has suspended 235,000 accounts that it says were used to promote or threaten terrorism.

That’s in addition to the 125,000 suspensions announced in February, in which the accounts were primarily related to the so-called Islamic State (IS).

Thursday’s announcement brings the total number of suspensions to 360,000 over the past year.

Twitter pointed to third parties who’ve confirmed that the company’s efforts are in fact stymieing extremists: analysts report that Islamic State traffic on the platform has plummeted by 45% over the past 2 years.

Twitter’s efforts to rid itself of extremist content, which has included horrific images of beheadings and other violence, include expanding the teams that review reported content.

It has also given those teams new tools and language capabilities, and it’s collaborating with other social platforms to share information and best practices for identifying terrorist content.

All of these moves have resulted in an 80% increase in daily suspensions since last year, with spikes following terrorist attacks. Twitter’s response time for suspending reported accounts has shrunk “dramatically,” it says, along with the amount of time those accounts stay on Twitter and the number of followers they accumulate.

Twitter says it’s also disrupted the whack-a-mole syndrome, curbing the ability of those suspended to immediately pop back up on the platform.

Twitter says that the increased speed in spotting, and purging, extremist accounts doesn’t come from one “magic algorithm.” Rather, it’s used tools like proprietary spam-fighting software to supplement reports from users and help identify repeat account abuse.

Such tools have helped Twitter to automate the identification of terrorist accounts. In fact, over the past 6 months, such tools have helped it to automatically identify more than one-third of the accounts it’s shut down, Twitter says.

Twitter didn’t comment specifically about news reports from June about how the big internet platforms are using automatic hashing to block content, but such technology would certainly fit the bill when it comes to automated recognition of a particular type of content.

In June, the Counter Extremism Project (CEP) unveiled a software tool that works in a similar fashion to those that automatically tag images of child abuse, urging the big internet companies to adopt it.

Instead of child abuse imagery, the version the group unveiled tags the gruesome, violent content that radical jihadists spread as propaganda or use to recruit followers for attacks.

And instead of just focusing on images, the new, so-called “robust hashing” technology encompasses video and audio, as well.

It comes from Dartmouth College computer scientist Hany Farid, who also worked on Microsoft’s PhotoDNA technology. PhotoDNA has enabled companies like Google and Microsoft, along with ISPs and others, to check large volumes of files for matches without having to keep copies of offending images themselves. What’s more, PhotoDNA does all this without human eyes having to invade users’ privacy by scanning their email accounts for known child abuse images.

The algorithm works to identify extremist content on internet and social media platforms, including images, videos, and audio clips, with the aim of stopping such content from spreading virally.

Whatever content it’s used to identify, the software works in a similar fashion: it looks for “hashes,” unique digital fingerprints that online platforms compute from a media file. If a file has already been identified as extremist, any copy of it can be quickly removed from wherever it’s posted.

Such technology doesn’t stop new extremist media from being posted. The hashes can’t automatically detect that a video contains footage of a beheading, for example.

But once such a video has been identified as extremist, every subsequent copy can be spotted and removed automatically, rather than going through the process of being reported and vetted by humans, a delay that gives the material time to spread virally.
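To make that workflow concrete, here’s a minimal sketch in Python of hash-based blocklisting. It’s illustrative only: PhotoDNA and CEP’s robust-hashing tool use proprietary perceptual hashes that still match after re-encoding, resizing, or cropping, whereas the standard SHA-256 digest below only matches byte-identical copies, and all the function names here are hypothetical.

```python
import hashlib

# Blocklist of "digital fingerprints" for media files that human
# reviewers have already confirmed as extremist content.
known_extremist_hashes: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Compute a fingerprint for a media file.

    Illustrative stand-in: a real system such as PhotoDNA uses a
    perceptual ("robust") hash that still matches after re-encoding,
    resizing, or cropping; SHA-256 only matches byte-identical copies.
    """
    return hashlib.sha256(media_bytes).hexdigest()

def handle_upload(media_bytes: bytes) -> str:
    """Screen an uploaded media file against the blocklist."""
    if fingerprint(media_bytes) in known_extremist_hashes:
        # Known content: removed automatically, no human review needed.
        return "blocked"
    # New content can't be recognized by its hash alone -- it still
    # has to be reported and vetted by human reviewers first.
    return "published"

def flag_after_human_review(media_bytes: bytes) -> None:
    """Once reviewers confirm a file is extremist, record its hash so
    every future copy is removed automatically."""
    known_extremist_hashes.add(fingerprint(media_bytes))
```

Note that only the hashes end up in the blocklist, which is why, as with PhotoDNA, companies can screen uploads without keeping copies of the offending material themselves.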

Again, we don’t know for sure whether Twitter’s using that particular PhotoDNA-like hashing technology. All we know is that it’s using automated tools, like its spam-fighting software, to sniff out abusive content.

Beyond technology, it’s also expanding partnerships with anti-terrorist organizations, including Parle-moi d’Islam (France), Imams Online (UK), Wahid Foundation (Indonesia), The Sawab Center (UAE), and True Islam (US).

Twitter says it’s going to keep investing in technologies and other resources that can help stop the spread of extremism.

It plans to keep us updated regularly via its Transparency Report, beginning in 2017.
