
Terrorist social media posts should be removed within an hour, says EC

Internet bigwigs say the recommendation could do more harm than good as they try to balance fast removal of illegal content against protecting users' rights.

The European Commission (EC) on Thursday suggested what it called the one-hour rule: as in, that’s the timeframe within which social media companies and European Union (EU) member states should remove terrorist content.
It’s not a new law. Rather, it’s just a recommendation at this point – and it’s just a “general rule,” at that. The one-hour rule is one of a set of operational measures the EC suggested.
Those recommendations come in the wake of the commission having promised, in September, to monitor progress in tackling illegal content online and to assess whether additional measures are needed to ensure such content gets detected and removed quickly. Besides terrorist posts, illegal content includes hate speech, material inciting violence, child sexual abuse material, counterfeit products and copyright infringement.
Voluntary industry measures to deal with terrorist content, hate speech and counterfeit goods have already achieved results, the EC said. But when it comes to “the most urgent issue of terrorist content,” which “presents serious security risks”, the EC said procedures for getting it offline could be stronger.
Procedures for flagging content should be easier to follow and faster; "trusted flaggers", for example, could be fast-tracked. To guard against false flags, content providers should be told about removal decisions and given the chance to contest them.
As far as the one-hour rule goes, the EC says that the brevity of the takedown window is necessary given that “terrorist content is most harmful in the first hours of its appearance online.”
While it’s just a recommendation at this point, it could someday become law.
German lawmakers last year okayed huge fines for social media companies that don't take down "obviously illegal" content in a timely fashion. The German law gives them 24 hours to take down hate speech or other illegal content and threatens fines of up to €50m ($61.6m) if they don't.


The UK has been talking about shortening takedown times for a while. In September, during a summit with the French president and the Italian prime minister, Prime Minister Theresa May urged internet companies to get terrorist content taken down within two hours, down from the 36 hours it takes on average.
Although the one-hour rule is only a recommendation, companies and member states still have obligations: they'll need to submit data on terrorist content within three months and on other illegal content within six months.
As you might imagine, the affected companies and member states aren’t thrilled with the EC’s recommendations.
EdiMA, a European trade association whose members include internet bigwigs such as Google, Twitter, Facebook, Apple, and Microsoft, acknowledged the importance of the issues raised by the EC but said it was “dismayed” by its recommendations.
They could do more harm than good, EdiMA said, describing this as “a missed opportunity for evidence-based policy making”.

Our sector accepts the urgency but needs to balance the responsibility to protect users while upholding fundamental rights – a one-hour turn-around time in such cases could harm the effectiveness of service providers’ take-down systems rather than help.

The trade group also pointed out that it’s already shown leadership through the Global Internet Forum to Counter Terrorism and that collaboration is underway via the Hash Sharing Database.
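The Hash Sharing Database rests on a simple idea: once one member company identifies a piece of terrorist content, it shares a fingerprint of it so that other members can catch re-uploads automatically. Below is a minimal Python sketch of that matching step; the names are hypothetical, and real systems rely on perceptual hashes (such as PhotoDNA) that survive re-encoding, whereas the plain SHA-256 used here only catches byte-identical copies.

import hashlib

def fingerprint(content: bytes) -> str:
    """Hex digest identifying this exact byte sequence."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical stand-in for the shared database: fingerprints of content
# that a participating company has already judged to be terrorist material.
SHARED_HASHES = {fingerprint(b"<previously removed propaganda video>")}

def should_block(upload: bytes) -> bool:
    """Block an upload whose fingerprint matches known, shared content."""
    return fingerprint(upload) in SHARED_HASHES

# A byte-identical re-upload is caught instantly; a re-encoded copy would
# slip past this exact-match scheme, which is why perceptual hashing matters.
print(should_block(b"<previously removed propaganda video>"))  # True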
Here’s what Facebook told TechCrunch:

We share the goal of the European Commission to fight all forms of illegal content. There is no place for hate speech or content that promotes violence or terrorism on Facebook.
As the latest figures show, we have already made good progress removing various forms of illegal content. We continue to work hard to remove hate speech and terrorist content while making sure that Facebook remains a platform for all ideas.


2 Comments

I think that this could be a bad development, because there are no targets on the social media companies to minimise false positives when they take down content. They have been given a one-hour deadline to remove material, but no deadlines to process appeals, and no threat of sanction if they take down too much.
This means that the most rational response for the social media companies is to develop a machine-learning text-processing engine that takes down posts on a hair trigger, and then refers appeals to human moderators, who will take weeks to process each case and be overly careful about what they let through. The net result is that free speech will be limited and bloggers will self-censor.
For example, suppose I write a thoughtful post examining the root causes of militant Islam and the rise of IS, perhaps mentioning the Arab-Israeli conflict or the proxy wars in Syria and Yemen. To a human moderator that would be fine, but to an algorithm it would hit lots of keywords and get banned. If I bothered to appeal the ban, it would take weeks for the post to reappear, by which time the news event the post was written in response to would be old news.
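To make that concrete, a hair-trigger filter might be nothing more than keyword matching. Here's a toy sketch in Python; it's purely hypothetical, not any platform's actual moderation code.

import re

# Toy "hair trigger" keyword filter: it has no sense of context, so a
# reasoned analysis trips it just as readily as actual propaganda would.
TRIGGER_TERMS = {"militant", "IS", "jihad", "propaganda", "attack"}

def flag_for_takedown(post: str) -> bool:
    """Flag any post containing a trigger term, regardless of context."""
    words = set(re.findall(r"\b\w+\b", post))
    return bool(words & TRIGGER_TERMS)

analysis = ("On the root causes of militant Islam and the rise of IS, "
            "from the Arab-Israeli conflict to the proxy wars in Syria and Yemen.")
print(flag_for_takedown(analysis))  # True: a false positive a human would clear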
The net result is that thoughtful, reasoned debate gets suppressed, while short soundbites (which only take a few seconds to post) remain, because those posting them are willing to play whack-a-mole with the banning algorithm.


If the Home Office had the same one-hour target to physically remove terrorists from the UK, tackling terrorism would be more effective.

