
Terrorists told to hijack social media accounts to spread propaganda

Monika Bickert, Facebook’s global head of policy management, and Brian Fishman, its head of counterterrorism policy, said in a post on Thursday that the US Department of Justice (DOJ) had recently discovered an alleged IS supporter warning others that it’s gotten tougher to push propaganda on the platform.
As detailed in a criminal complaint, one of the alleged terrorist/sympathizer’s suggestions for fellow propagandists was to take over legitimate social media accounts: to act like wolves pulling on sheepskins so as to escape Facebook’s notice, as it were.
Facebook’s continued work on tackling terrorist propaganda is bearing fruit.
Bickert and Fishman also reported that Facebook has removed 14 million pieces of content deemed likely to come from terrorists, as determined by new machine learning technology; its hashing of images, videos, audio and text to create content fingerprints; and its long-suffering human reviewers (thank you, you poor souls).
They said that most of the content, which is related to the Islamic State (IS), al-Qaeda, and their affiliates, was old material that Facebook dug up by using specialized techniques.
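Facebook hasn’t published how its fingerprinting works, but the basic pattern is easy to sketch. Here’s a minimal Python illustration (all names hypothetical) that uses a plain SHA-256 digest as the fingerprint and checks uploads against a set of previously flagged items:

```python
import hashlib

# Fingerprints of previously removed material would be loaded here;
# the set is empty in this illustration.
known_bad_fingerprints: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Reduce a piece of content to a fixed-size, comparable fingerprint."""
    return hashlib.sha256(content).hexdigest()

def matches_known_terrorist_content(content: bytes) -> bool:
    """True if this upload is a byte-for-byte copy of a flagged item."""
    return fingerprint(content) in known_bad_fingerprints
```

An exact digest like SHA-256 only catches identical copies, of course; real fingerprinting systems lean on perceptual hashes so that re-encoded or lightly edited uploads still match.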
Of course, 14 million pieces of content is scarcely a drop in the ocean for such a content-stuffed platform: way back in 2012, Facebook was reportedly seeing 300 million photo uploads and 2.5 billion shared content items per day, and those numbers have ballooned since then.
Not to rain on Facebook’s parade, by any means: it’s doing important work, and it’s doing it in a landscape where terrorists keep coming up with new ways to game the platform.

How long does violative content stay up, and is that important?

Facebook emphasized that there are two metrics for measuring success in this ongoing battle, and that one of them – the median time content stays on the platform before takedown – gets more attention than it likely deserves, given that old content that’s been around for a long time might not have had much reach at all. From the post:

We often get asked how long terrorist content stays on Facebook before we take action on it. But our analysis indicates that time-to-take-action is a less meaningful measure of harm than metrics that focus more explicitly on exposure content actually receives. This is because a piece of content might get a lot of views within minutes of it being posted, or it could remain largely unseen for days, weeks or even months before it is viewed or shared by another person.
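To make that divergence concrete, consider a toy calculation (the numbers below are invented purely for illustration): a batch of takedowns with a scary-looking median age can still account for almost no views, while nearly all of the exposure comes from the few posts that spread within minutes.

```python
from statistics import median

# Invented, illustrative takedown records: (hours on platform, views before removal)
takedowns = [(0.5, 12000), (1.2, 8500), (48.0, 15), (72.0, 3), (240.0, 0)]

hours_up = [h for h, _ in takedowns]
views = [v for _, v in takedowns]

print(f"median time-to-takedown: {median(hours_up)} hours")  # 48.0 - looks slow
print(f"total views before removal: {sum(views)}")           # 20518

# Nearly all exposure came from the two posts caught within about an hour,
# while the week-old stragglers were barely seen at all.
fast = sum(v for h, v in takedowns if h <= 2)
print(f"views from posts up under 2 hours: {fast}")          # 20500
```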

Just as terrorists are always looking for ways to circumvent social media platforms’ detection, the platforms need to keep improving their technology, training, and processes to counter those efforts, Facebook says. That takes time, and while those technologies and other improvements mature, they may not work all that efficiently.

New machine learning at work

Facebook says a new machine-learning tool produces a score indicating how likely it is that a given post violates its counterterrorism policies, which, in turn, helps its team of reviewers prioritize posts with the highest scores.
Sometimes, when the tool rates a post as highly likely to contain support for terrorism, the post is removed automatically. Humans are still the backbone of the operation, though: specialized reviewers evaluate most posts. A post is removed immediately and automatically only when the tool is so confident about the nature of the content that its “decision” is likely to be more accurate than that of Facebook’s human reviewers.
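Facebook hasn’t described the tool’s internals, but the triage logic it implies can be sketched in a few lines. In this hypothetical Python version, scores come from some upstream classifier, and AUTO_REMOVE_THRESHOLD stands in for whatever confidence level Facebook has determined beats its human reviewers:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    score: float  # classifier's estimated probability of a policy violation

# Hypothetical cutoff: above it, the model has historically been more
# accurate than human reviewers, so removal is automated.
AUTO_REMOVE_THRESHOLD = 0.99

def triage(posts: list[Post]) -> tuple[list[Post], list[Post]]:
    """Split scored posts into automatic removals and a prioritized human queue."""
    auto_removed = [p for p in posts if p.score >= AUTO_REMOVE_THRESHOLD]
    review_queue = sorted(
        (p for p in posts if p.score < AUTO_REMOVE_THRESHOLD),
        key=lambda p: p.score,
        reverse=True,  # highest-risk posts reach reviewers first
    )
    return auto_removed, review_queue
```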
Facebook doesn’t want to show its hand to adversaries, so it isn’t giving away many details on what it’s improved. What it did say was that its machine learning is now working across 19 languages.
Facebook is also sharing some of its new content hashing advances with a consortium of tech partners that includes Microsoft, Twitter, and YouTube.
All of this is leading to an improvement in the removal of terrorist content. But the work never stops, Facebook said, and that includes addressing the threat of terrorism outside of the cyber world:

We should not view this as a problem that can be “solved” and set aside, even in the most optimistic scenarios. We can reduce the presence of terrorism on mainstream social platforms, but eliminating it completely requires addressing the people and organizations that generate this material in the real-world.

How to fend off the hijackers

We write about account hijacking quite a bit. Fortunately, many of the big social media platforms support a way – app-based authentication – to protect our accounts from these attacks, which come in forms such as phishing and SIM swaps.
Using application-based 2FA (such as Sophos Authenticator, which is also included in our free Sophos Mobile Security for Android and iOS) mitigates a lot of the risk of SIM swap attacks because these mobile authentication apps don’t rely on communications tied to phone numbers.
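The reason authenticator apps sidestep SIM swaps is that the one-time codes are computed locally from a shared secret and the current time, per the standard TOTP algorithm (RFC 6238); nothing ever travels over the phone network. Here’s a minimal, self-contained Python sketch of that computation (the secret shown is a demo value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second intervals since the epoch;
    # no phone number or SMS channel is involved at any point.
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Demo secret only; real secrets are provisioned via the QR code
    # a service displays at 2FA enrollment time.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the secret never leaves the device after enrollment, stealing a victim’s phone number gets an attacker nothing.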
Facebook says that besides using hijacked accounts, terrorists have been developing other tactics to get around account shutdown and content takedown:

Others have tried to avoid detection by changing their techniques, abandoning old accounts and creating new ones, developing new code language, and breaking messages into multiple components.
