How social media companies are using AI to fight terrorist content

Recent terrorist attacks in the UK have intensified a years-long standoff between Silicon Valley and multiple governments over how to fight terrorism online, including battles over encryption and proposed curbs on hate-speech videos on social media.

Now, both Facebook and Google have announced new steps to fight the spread of extremist material. In both cases, the companies will be devoting machine learning and armies of human experts to the battle.

Facebook announced last week that it’s developing artificial intelligence (AI) and employing a team of more than 150 experts to make the platform “a hostile place for terrorists.”

For its part, Google on Sunday announced that it, too, will devote more machine-learning technology to the problem, and that it is adding 50 expert NGOs to the 63 organizations already part of YouTube’s Trusted Flagger program.

There aren’t new announcements, per se, from other social media platforms such as Twitter or Snapchat, but Twitter pointed out that it is working furiously to battle violent extremism, as evidenced by its most recent Transparency Report, released on March 21. According to that report, between July 1, 2016 and December 31, 2016, a total of 376,890 accounts were suspended for violations related to the promotion of terrorism.

Twitter emphasized that 74% of those suspensions involved accounts surfaced by its internal, proprietary spam-fighting tools. Government requests to shutter accounts represented less than 2% of all suspensions.

It’s easy to see why social media platforms might be feeling defensive: politicians have been ramping up calls for them to do more.

In the wake of the Westminster terrorist attack in London, in which five people died and many more were injured, UK home secretary Amber Rudd met with tech giants including Microsoft, Google, Twitter and Facebook to tell them that they’ve got to do more to tackle extremism and terrorism, and that law enforcement must be able to “get into situations like encrypted WhatsApp”. Naked Security joined the chorus of experts pointing out that what Rudd was calling for won’t work.

Then, following the attack on London Bridge, in which eight were killed and 48 were injured, UK prime minister Theresa May called for new measures, including working with allied democratic governments to reach international agreements on regulating cyberspace to prevent the spread of extremism and terrorism planning.

The big social media providers responded to May by insisting that they were already working hard to make their platforms safe. “Terrorist content has no place on Twitter,” said Nick Pickles, the company’s public policy head, while Google said it was “already working with industry colleagues on an international forum to accelerate and strengthen our existing work in this area. We invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution.”

What Facebook’s doing

In Facebook’s announcement last week, Monika Bickert, the company’s director of global policy management, and Brian Fishman, its counterterrorism policy manager, gave a look behind the curtain at what the company’s already doing: something Facebook hasn’t publicly talked about before.

Some concrete examples:

But while Facebook is getting faster at automatically spotting repeat offenders, and thereby shutting them down sooner, and while it’s working to enhance other machine-learning technologies to ferret out extremists, the company says this is an ongoing project:

It is adversarial, and the terrorists are continuously evolving their methods too.

Facebook is also fully aware that technology alone isn’t enough. AI can’t catch everything, it acknowledged:

Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context. A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story… To understand more nuanced cases, we need human expertise.

Facebook has a specialist team of more than 150 people – a team whose members speak a total of nearly 30 languages – that’s focused on countering terrorism. Members include academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers.
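
In practice, that division of labor amounts to a triage pipeline: let the models handle the clear-cut cases and hand the ambiguous ones to reviewers. Facebook hasn’t published how its pipeline is built, so the following is only a rough sketch of the general pattern; the score_terror_content function, the thresholds and the bucket names are hypothetical stand-ins.

```python
# Minimal sketch of a human-in-the-loop triage pipeline (illustrative only;
# Facebook has not published how its own system is implemented).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Post:
    post_id: str
    text: str


def triage(posts: List[Post],
           score_terror_content: Callable[[Post], float],
           remove_threshold: float = 0.95,
           review_threshold: float = 0.20) -> Dict[str, List[str]]:
    """Split posts into auto-remove, auto-keep and human-review buckets."""
    decisions: Dict[str, List[str]] = {"remove": [], "keep": [], "human_review": []}
    for post in posts:
        score = score_terror_content(post)  # e.g. an ML classifier's confidence
        if score >= remove_threshold:
            decisions["remove"].append(post.post_id)   # clear-cut violation
        elif score <= review_threshold:
            decisions["keep"].append(post.post_id)     # clearly benign
        else:
            # Nuanced cases (propaganda vs. news reporting) go to people.
            decisions["human_review"].append(post.post_id)
    return decisions
```

The interesting part is the middle band: rather than forcing the model into a binary call, anything it isn’t sure about is deferred to reviewers who can judge context, such as whether that ISIS flag appears in recruiting material or in news coverage.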

Facebook’s also aware that it doesn’t have to go it alone. In December it announced that it would be working with Microsoft, Twitter and YouTube to create a shared industry database of hashes for violent terrorist content. That lets content banned on one platform be identified and removed from the other platforms as well.
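
The database works on fingerprints rather than on the content itself: the company that first bans a video or image contributes a hash of the file, and the other companies can check new uploads against the shared pool. The consortium hasn’t described its hashing scheme publicly, so the sketch below uses a plain SHA-256 digest as a stand-in; a production system would more likely use a perceptual hash that survives re-encoding and cropping, and each platform would still apply its own policies before acting on a match.

```python
# Minimal sketch of a shared hash database for flagged content.
# Assumption: a plain SHA-256 digest stands in for whatever (unpublished)
# fingerprinting scheme the industry consortium actually uses.
import hashlib

# Stand-in for the shared, industry-wide database of hashes.
shared_hashes = set()


def fingerprint(content: bytes) -> str:
    """Return a hex digest used as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()


def flag_as_terrorist_content(content: bytes) -> None:
    """Called by the platform that first bans a piece of content."""
    shared_hashes.add(fingerprint(content))


def is_known_terrorist_content(content: bytes) -> bool:
    """Called by any other platform when the same file is uploaded."""
    return fingerprint(content) in shared_hashes


# One platform flags a video; any other platform can now match it.
banned_video = b"raw bytes of a banned video"
flag_as_terrorist_content(banned_video)
assert is_known_terrorist_content(banned_video)
```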

Facebook is also collaborating with governments, which keep it updated on terrorist propaganda techniques, and with partner programs that promote counterspeech.

What Google’s doing

Kent Walker, general counsel at Google, said on Sunday that the company’s “committed to being part of the solution” to tackling online extremist content:

Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all.

There should be no place for terrorist content on our services.

Google’s taking four new steps to fight terrorism online:

“Collectively, these changes will make a difference,” Google says. Facebook, meanwhile, says that we’ve got to get better at “spotting the early signals before it’s too late,” in both online and offline communities.

Twitter, for its part, points to tweets from Professor Peter Neumann of King’s College London, one of the foremost international experts on radicalization and terrorism, sent in response to PM May’s recent statement. In a series of messages, he said that few people radicalize exclusively online; thus, it’s misguided to treat social media as the only place where extremists are born or the only venue in which they spread their propaganda:

Blaming social media platforms is politically convenient but intellectually lazy.