
How social media companies are using AI to fight terrorist content

Facebook, Google and other providers are stepping up with techniques ranging from AI detection to human intervention in response to calls from politicians after a rash of terror attacks

Recent terrorist attacks in the UK have added to a years-long war between Silicon Valley and multiple governments over fighting terrorism, including battles over encryption and proposed curbs on hate speech videos on social media.

Now, both Facebook and Google have announced new steps to fight the spread of extremist material. In both cases, the companies will be devoting machine learning and armies of human experts to the battle.

Facebook announced last week that it’s developing artificial intelligence (AI) and employing 150 experts to make the platform “a hostile place for terrorists.”

For its part, Google on Sunday announced that it, too, will devote more machine learning technology and is adding 50 expert NGOs to the 63 organizations that are already part of YouTube’s Trusted Flagger program.

Other social media platforms, such as Twitter and Snapchat, haven’t made new announcements per se, but Twitter pointed out that it is working furiously to battle violent extremism, as evidenced by its most recent Transparency Report, released on March 21. According to that report, between July 1, 2016 and December 31, 2016, a total of 376,890 accounts were suspended for violations related to the promotion of terrorism.

Twitter emphasized that 74% of those suspensions were accounts surfaced by its internal, proprietary spam-fighting tools. Government requests to shutter accounts represented less than 2% of all suspensions.

It’s easy to see why social media platforms might be feeling defensive: politicians have been ramping up calls for them to do more.

In the wake of the Westminster terrorist attack in London, in which five people died and many more were injured, UK home secretary Amber Rudd met with tech giants including Microsoft, Google, Twitter and Facebook to tell them that they’ve got to do more to tackle extremism and terrorism, including that law enforcement must be able to “get into situations like encrypted WhatsApp”. Naked Security joined the chorus of experts pointing out that what Rudd was calling for won’t work.

Then, following the attack on London Bridge, in which eight were killed and 48 were injured, UK prime minister Theresa May called for new measures, including working with allied democratic governments to reach international agreements on regulating cyberspace to prevent the spread of extremism and terrorism planning.

The big social media providers responded to May by insisting that they were already working hard to make their platforms safe. “Terrorist content has no place on Twitter,” said Nick Pickles, the company’s public policy head, while Google said it was “already working with industry colleagues on an international forum to accelerate and strengthen our existing work in this area. We invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution.”

What Facebook’s doing

In Facebook’s announcement last week, Monika Bickert, the company’s director of global policy management, and Brian Fishman, its counterterrorism policy manager, gave a look behind the curtain at what the company’s already doing: something Facebook hasn’t publicly talked about before.

Some concrete examples:

  • Image matching. Just as internet services use hash values to automatically detect known child abuse images without having to read message content, Facebook’s systems automatically look for known terrorism photos and videos in uploads. If Facebook has ever removed a given video, for example, this automatic hash matching can, and sometimes does, stop the same content from being uploaded again. (A minimal sketch of this kind of hash lookup follows this list.)
  • Language understanding. Facebook recently began experimenting with analyzing text it has already removed for praising or supporting terrorist organizations, and it’s working on an algorithm to detect similar posts from those text-based cues. (A toy classifier in this spirit also appears below.)
  • Removing terrorist clusters. When Facebook identifies Pages, groups, posts or profiles as supporting terrorism, it uses algorithms to “fan out” and try to identify related material that may also support terrorism. Signals include whether an account is friends with a high number of accounts that have been disabled for terrorism, and whether an account shares the same attributes as a disabled account. (A sketch of this kind of signal appears below.)
  • Recidivism. Facebook says it’s getting “dramatically” faster at detecting new accounts created by repeat offenders and shutting them down – whacking moles, in other words.
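
To make the image-matching idea concrete, here’s a minimal Python sketch of a hash lookup. It is not Facebook’s implementation: the digest set is a placeholder, and a plain SHA-256 only catches byte-for-byte copies, whereas production systems rely on perceptual fingerprints that survive re-encoding and cropping.

    import hashlib

    # Hypothetical digests of media that moderators have already removed.
    known_removed_hashes = {
        "0" * 64,  # placeholder entry, not a real digest
    }

    def sha256_of_file(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in 1MB chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def should_block_upload(path: str) -> bool:
        """True if the upload matches content that has already been removed."""
        return sha256_of_file(path) in known_removed_hashes

The shared industry database described further down works on the same principle: the platforms exchange content fingerprints rather than the content itself.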
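
The language-understanding work can be caricatured, if not done well, with a toy classifier. The sketch below assumes scikit-learn is installed and uses placeholder strings in place of real posts; it shows the general shape – learn from text that reviewers have already removed, then score new posts and route high-scoring ones to human review – and nothing more.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder data: posts reviewers removed (label 1) versus posts left up (label 0).
    texts = [
        "placeholder for a post reviewers removed",
        "placeholder for another post reviewers removed",
        "placeholder for an ordinary post that stayed up",
        "placeholder for another ordinary post",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # Score a new post; anything above a threshold would go to human review.
    score = model.predict_proba(["placeholder for a newly uploaded post"])[0][1]
    print(f"probability the post resembles removed content: {score:.2f}")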
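
And the “fan out” signal boils down to graph bookkeeping: of an account’s friends, how many have already been disabled for terrorism? In the sketch below, the friend lists, account names and threshold are all invented for illustration.

    friends = {
        "account_a": {"account_b", "account_c", "account_d"},
        "account_b": {"account_a"},
    }
    disabled_for_terrorism = {"account_c", "account_d"}

    def disabled_friend_ratio(account: str) -> float:
        """Fraction of an account's friends already disabled for terrorism."""
        contacts = friends.get(account, set())
        if not contacts:
            return 0.0
        return len(contacts & disabled_for_terrorism) / len(contacts)

    def flag_for_review(account: str, threshold: float = 0.5) -> bool:
        """Queue an account for human review if the ratio looks suspicious."""
        return disabled_friend_ratio(account) >= threshold

    print(flag_for_review("account_a"))  # True: two of three friends are disabled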

But while it’s getting faster at automatically spotting repeat offenders and thereby shutting them down sooner, and while it’s working to enhance other machine-learning technologies to ferret out extremists, it’s an ongoing project, Facebook says:

It is adversarial, and the terrorists are continuously evolving their methods too.

Facebook is also fully aware that technology alone isn’t enough. AI can’t catch everything, it acknowledged:

Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context. A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story… To understand more nuanced cases, we need human expertise.

Facebook has a specialist team of more than 150 people – a team whose members speak a total of nearly 30 languages – that’s focused on countering terrorism. Members include academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers.

Facebook’s also aware that it doesn’t have to go it alone. In December it announced that it would be working with Microsoft, Twitter and YouTube to create a shared industry database of hashes for violent terrorist content. That will enable items banned by one platform to also be removed from the other platforms.

Facebook is also collaborating with governments that keep it updated on terrorist propaganda mechanisms and with counterspeech partner programs.

What Google’s doing

Kent Walker, general counsel at Google, said on Sunday that the company’s “committed to being part of the solution” to tackling online extremist content:

Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all.

There should be no place for terrorist content on our services.

Google’s taking four new steps to fight terrorism online:

  • More technology. Google echoed Facebook’s assertion that this can be challenging: “A video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user.” Still, technology’s doing some heavy lifting: over the past six months, video analysis models have surfaced more than 50% of ultimately removed content.
  • More humans. Trusted Flagger reports – ie, reports coming from independent experts – are accurate more than 90% of the time, Google says. It plans to expand the program, adding 50 expert NGOs to the current 63 organizations, and will support them with operational grants. In addition to terrorism, they’ll also focus on hate speech and self-harm content. Google also plans to expand its work with counter-extremist groups to help identify extremist recruitment and radicalization content.
  • Less tolerance for borderline content. Google’s going to take a tougher stance on content that doesn’t clearly violate its policies, including inflammatory religious or supremacist content. Such content will be harder to find, tucked behind an interstitial warning. Nor will such posts be monetized or recommended; comments and user endorsements will be disallowed. (A sketch of these restrictions follows this list.)
  • Working with Jigsaw on counter-radicalization. YouTube is working with Jigsaw, the company behind “The Redirect Method”, which uses ad targeting to send potential IS recruits to anti-terrorist videos in the hope of changing their minds about joining extremist organizations. Google said that in previous trials of the Jigsaw system, potential recruits clicked through on the ads at an “unusually high rate” and watched more than half a million minutes of video content that “debunks terrorist recruiting messages.”
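
As a rough illustration of what that tougher stance amounts to in product terms, here’s a small Python sketch of the treatment Google describes: the video stays up, but behind a warning, unmonetized, unrecommended, and with comments and endorsements switched off. The field names are invented, not YouTube’s.

    from dataclasses import dataclass

    @dataclass
    class VideoTreatment:
        interstitial_warning: bool = False
        monetized: bool = True
        recommendable: bool = True
        comments_enabled: bool = True
        endorsements_enabled: bool = True

    def apply_borderline_treatment() -> VideoTreatment:
        """The 'harder to find, not monetized, not recommended' treatment."""
        return VideoTreatment(
            interstitial_warning=True,
            monetized=False,
            recommendable=False,
            comments_enabled=False,
            endorsements_enabled=False,
        )

    print(apply_borderline_treatment())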

“Collectively, these changes will make a difference,” Google says. Facebook, for its part, says that we’ve got to get better at “spotting the early signals before it’s too late,” in both online and offline communities.

Twitter, for its part, points to tweets from Professor Peter Neumann of King’s College London, one of the foremost international experts on radicalization and terrorism, posted in response to PM May’s recent statement. In a series of messages, he said that few people radicalize exclusively online, so it’s inappropriate to treat social media as the only place where extremists are born or the only venue in which they spread their propaganda:

Blaming social media platforms is politically convenient but intellectually lazy.


1 Comment

Censorship is a slippery slope, and those being censored typically find a way around things. The only real solution to terrorism is tracking down these ne’er-do-wells and bringing retribution to their doorstep. Frankly, I would think monitoring rather than censoring their social media activity would be beneficial to law enforcement. The fact that May and her ilk are focusing on social media as the “problem” is a diversionary tactic in my view. GOVERNMENTS are the ones who have the moral obligation and ability to surveil, track down, interdict, prosecute, and incarcerate and/or kill terrorists. That’s where government attention should be focused. That is, of course, if they really intend to do something substantive. Fingering Twitter, Facebook, et al, in this regard is smoke and mirrors.
