Naked Security

New AI technology used by UK government to fight extremist content

And it won't rule out forcing big companies like Google and Facebook to use it.

The UK Home Office on Monday unveiled a £600,000 artificial intelligence (AI) tool to automatically detect terrorist content.
The Home Office cited tests showing that the new tool can automatically detect 94% of Daesh propaganda with 99.995% accuracy. That accuracy rate means that out of one million randomly selected videos, only 50 would be flagged for human review. The tool can run on any platform and can be integrated into the video upload process to stop most extremist content before it ever reaches the internet.
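Reading “99.995% accuracy” as the rate at which ordinary, non-extremist videos are correctly waved through, the “50 in a million” figure is straightforward arithmetic. Here’s a quick back-of-the-envelope check (our illustration, not the Home Office’s own methodology):

```python
# Back-of-the-envelope check of the Home Office's quoted figures (illustrative only).
total_videos = 1_000_000            # one million randomly selected videos, per the claim
false_positive_rate = 1 - 0.99995   # reading "99.995% accuracy" as the pass rate for ordinary content
detection_rate = 0.94               # claimed share of Daesh propaganda caught (noted for context; not used below)

flagged_in_error = total_videos * false_positive_rate
print(f"Videos needing human review per million: {flagged_in_error:.0f}")  # ~50, matching the claim
```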
The tool was developed by the Home Office and ASI Data Science. It uses advanced machine learning to analyze audio and visuals of a video to determine whether it might be terrorist propaganda.
ASI’s Dr. Marc Warner told BuzzFeed News that the tool’s algorithm works by spotting “subtle patterns” that exist in extremist videos:

We’ve created an artificial intelligence algorithm, which is highfalutin words for a sophisticated computer program to detect extremist content online. It works by spotting subtle patterns in the extremist videos that distinguish them from normal content, from the rest of the internet.

ASI has been reticent about sharing details of how the algorithm works; it wants to get those details right first, Warner told the BBC.
What we do know is that the model was trained on more than 1,000 Daesh videos and that, being platform-agnostic, it can be used to detect terrorist propaganda in real time across a range of video-streaming and download sites.
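ASI hasn’t said how the model works internally, but conceptually a platform-agnostic tool of this kind is a classifier wired into the upload pipeline: extract audio and visual features, score the video, then publish it, queue it for human review, or block it. Here’s a minimal sketch of that flow; the function names, thresholds and structure are our own assumptions, not ASI’s.

```python
# Hypothetical sketch of a classifier sitting in a video upload pipeline.
# None of these names, thresholds or structures come from ASI or the Home Office;
# the real tool's internals have not been published.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.50   # assumed: scores above this go to a human reviewer
BLOCK_THRESHOLD = 0.95    # assumed: scores above this are blocked before publication

@dataclass
class UploadDecision:
    score: float
    action: str  # "publish", "review" or "block"

def score_video(audio_features, visual_features) -> float:
    """Stand-in for the trained model; a real implementation would return a propaganda probability."""
    return 0.0  # dummy value so the sketch runs end to end

def handle_upload(audio_features, visual_features) -> UploadDecision:
    score = score_video(audio_features, visual_features)
    if score >= BLOCK_THRESHOLD:
        return UploadDecision(score, "block")
    if score >= REVIEW_THRESHOLD:
        return UploadDecision(score, "review")
    return UploadDecision(score, "publish")

# Usage: handle_upload(...) would be called on each upload before the video goes live.
```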


We can make some educated guesses about the tell-tale “subtle patterns” Dr. Warner says the algorithm picks out by looking at the bigger platforms that are already working on their own extremism-focused machine-learning projects.
When Facebook announced its own project in June, Monika Bickert, the company’s director of global policy management, and Brian Fishman, its counterterrorism policy manager, gave some concrete examples of what the technology was already doing:

  • Image matching. Just as internet services use hash values to automatically detect known child abuse images without having to read message content, Facebook’s systems automatically look for known terrorism photos or videos in uploads. If Facebook has ever removed a given video, for example, this automatic hash value matching can, and sometimes does, keep content from being reuploaded (there’s a minimal sketch of this idea after the list).
  • Language understanding. Facebook has experimented with analyzing text it’s removed for praising or supporting terrorist organizations. As of June, it was working on an algorithm to detect similar posts based on text cues.
  • Removing terrorist clusters. When Facebook identifies Pages, groups, posts or profiles as supporting terrorism, it uses algorithms to “fan out” to try to identify related material that may also support terrorism. For example, whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.
  • Recidivism. Facebook said in June that it was getting “dramatically” faster at whacking moles.
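The first item on that list is the easiest to illustrate in code. Below is a minimal sketch of hash-based re-upload blocking; note that real systems use perceptual hashes designed to survive re-encoding and cropping, while the plain cryptographic hash used here only catches byte-identical copies.

```python
# Minimal illustration of hash-based re-upload blocking (our sketch, not Facebook's system).
import hashlib

known_removed_hashes = set()  # hashes of content already removed (hypothetical store)

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_removed_content(data: bytes) -> None:
    known_removed_hashes.add(content_hash(data))

def is_known_reupload(data: bytes) -> bool:
    return content_hash(data) in known_removed_hashes

# Once a video has been removed, later identical uploads can be rejected automatically.
register_removed_content(b"...bytes of a removed video...")
print(is_known_reupload(b"...bytes of a removed video..."))  # True
print(is_known_reupload(b"bytes of an unrelated upload"))    # False
```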

Twitter, for its part, has been ferociously attacking extremist content: its most recent Transparency Report, released in March, said that between July 1, 2016 and December 31, 2016, a total of 376,890 accounts were suspended for violations related to the promotion of terrorism. Twitter emphasized at the time that 74% of those suspensions were accounts surfaced by its internal, proprietary spam-fighting tools; government requests to shutter accounts represented less than 2% of all suspensions.
But while the larger internet platforms have resources to put into these projects, smaller platforms are on their own. That makes them the target for the UK-funded AI tool, the Home Office said: it wants to put machine learning technology into the hands of online companies such as Vimeo, Telegra.ph and pCloud to remove terrorist content from their platforms.
The Home Office said that such smaller platforms “are increasingly targeted by Daesh and its supporters,” yet they often lack the resources to develop sophisticated technology to weed out the content.
The technology was announced a day before Home Secretary Amber Rudd headed to Silicon Valley to meet with communication service providers about tackling terrorist content online.
This is only the latest in the UK’s ongoing battle to get technology providers to stop the spread of extremist material. Last year’s terrorist attacks in London added to what was already a years-long war between Silicon Valley and multiple governments over fighting terrorism, including battles over encryption and proposed curbs on hate speech videos on social media.
While she was in Silicon Valley on Tuesday, the home secretary told the BBC that the AI tool proves the government’s demand that the tech giants clamp down on extremist activity is a reasonable one:

The technology is there. There are tools out there that can do exactly what we’re asking for. For smaller companies, this could be ideal.

And as for the bigger companies, if they don’t figure out this problem on their own, Rudd said, the UK government could well force their hands:

We’re not going to rule out taking legislative action if we need to do it.


