Naked Security

Social networks to be fined for hosting terrorist content

Draft EU legislation, due out next month, will likely incorporate a one-hour takedown window for extremist content flagged by law enforcement.

The European Commission is done with waiting for social platforms to voluntarily fix the problem of extremist content spreading via their technologies. On Sunday, the Financial Times reported that the EC’s going to follow through on threats to fine companies like Twitter, Facebook and YouTube for not deleting flagged content post-haste.
The commission is still drawing up the details, but a senior EU official told the FT that the final form of the legislation will likely impose a limit of one hour for platforms to delete material flagged as terrorist content by police and law enforcement bodies.
The EC first floated the one-hour rule in March, but it was just a recommendation at that point: something that the EC let companies implement voluntarily to the best of their abilities.
Or not, as the case may be. Although the one-hour rule was only a recommendation at the time, companies and member states still had requirements they needed to meet, including submitting data on terrorist content within three months and on other illegal content within six months.
Whatever tech companies have done to satisfy those requirements, the EC isn’t happy with it. Julian King, the EU’s commissioner for security, told the Financial Times that Brussels hasn’t “seen enough progress” from the platforms and that it would “take stronger action in order to better protect our citizens”.

We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon.

The March recommendations followed a promise the commission made in September to monitor progress in tackling illegal content online and to assess whether additional measures were needed to ensure that such content is detected and removed quickly. Besides terrorist posts, illegal content includes hate speech, material inciting violence, child sexual abuse material, counterfeit products and copyright infringement.
Voluntary industry measures to deal with terrorist content, hate speech and counterfeit goods have already achieved results, the EC said in March. But when it comes to “the most urgent issue of terrorist content,” which “presents serious security risks”, the EC said procedures for getting it offline could be stronger.
Rules for flagging content should be easier to follow and faster, for example, with fast-tracking for “trusted flaggers”. To avoid false flags, content providers should be told about removal decisions and given the chance to contest them.
As far as the one-hour rule goes, the EC said in March that the brevity of the takedown window is necessary given that “terrorist content is most harmful in the first hours of its appearance online.”
The proposed legislation will have to be approved by the European Parliament and a majority of EU member states before being finalized as law. King told the FT that the new law will help to create legal certainty and would apply to all websites, big or small:

The difference in size and resources means platforms have differing capabilities to act against terrorist content, and their policies for doing so are not always transparent. All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform.

The tech companies have protested the one-hour rule, saying it could do more harm than good. In fact, the FT reports, some parts of the commission believe that self-regulation has been a success on the platforms that terrorists most like to use to spread their messages.
In April, Google pointed to success in artificial intelligence (AI)-enabled automatic content takedown: during its earnings call, Google CEO Sundar Pichai said in prepared remarks that automatic flagging and removal of violent, hateful, extremist, fake-news and other violative videos was producing good results on YouTube.
At the same time, YouTube released details in its first-ever quarterly report on videos removed by both automatic flagging and human intervention.
There were big numbers in that report: between October and December 2017, YouTube removed a total of 8,284,039 videos. Of those, 6.7 million were first flagged for review by machines rather than humans, and 76% of those machine-flagged videos were removed before they received a single view.


Back in March, EdiMA, a European trade association whose members include internet bigwigs such as Google, Twitter, Facebook, Apple and Microsoft, acknowledged the importance of the issues raised by the EC but said it was “dismayed” by the recommendations, describing them as “a missed opportunity for evidence-based policy making”.

Our sector accepts the urgency but needs to balance the responsibility to protect users while upholding fundamental rights – a one-hour turn-around time in such cases could harm the effectiveness of service providers’ take-down systems rather than help.

The trade group also pointed out that the industry has already shown leadership through the Global Internet Forum to Counter Terrorism and that collaboration is underway via the Hash Sharing Database.
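The Hash Sharing Database works by letting participating companies exchange digital fingerprints of files already identified as terrorist content, so that a copy removed from one platform can be recognized on another without a fresh review. Here’s a minimal Python sketch of the idea; the database contents, function names and the use of plain SHA-256 are illustrative assumptions (the real system reportedly also relies on perceptual hashing, which survives re-encoding, whereas an exact cryptographic hash only catches byte-identical re-uploads):

```python
import hashlib

# Hypothetical stand-in for the shared industry database: a set of hex
# digests of files previously identified as terrorist content.
# (Illustrative entry: the SHA-256 digest of b"test".)
SHARED_HASH_DB = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_shared_database(data: bytes) -> bool:
    """Check an upload against the shared hash list before it goes live."""
    return fingerprint(data) in SHARED_HASH_DB

if __name__ == "__main__":
    upload = b"test"  # placeholder for the raw bytes of an uploaded file
    if matches_shared_database(upload):
        print("Upload blocked: matches a shared hash of known content")
    else:
        print("Upload allowed: no match in shared database")
```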
Here’s what Facebook told TechCrunch at the time:

We share the goal of the European Commission to fight all forms of illegal content. There is no place for hate speech or content that promotes violence or terrorism on Facebook.
As the latest figures show, we have already made good progress removing various forms of illegal content. We continue to work hard to remove hate speech and terrorist content while making sure that Facebook remains a platform for all ideas.

One EU official told the FT that the EC’s push for an EU-wide law targeting terrorist content reflected concern that “European governments would take unilateral action.”
German lawmakers last year OKed huge fines on social media companies that don’t take down “obviously illegal” content in a timely fashion. The new German law gave companies 24 hours to take down hate speech or other illegal content and imposed fines of up to €50 million ($61.6 million) for failing to do so.
The German law targets anything from fake news to racist content. But the FT reports that with the one-hour rule, the EU is specifically targeting terrorist content, leaving it up to the platforms to determine which content violates the rules when it comes to areas that are less black and white, including hate speech and fake news.


13 Comments

The problem with globalism is the linking of everyone in the same box. I live in the United States, a sovereign nation, although many have forgotten. I do not appreciate globalist laws being thrust upon my voice, thought or will. Their interpretation of “terrorist content” is very much different from mine, which, according to their rules and the Marxists creating the algorithms, deems my refusal to adhere to groupthink and groupspeak a threat. The movie “Minority Report” should have remained just that.


Pretty sure the law doesn’t force those companies to delete the content, just to make it no longer available in the EU. If they decide to delete it, that’s not because of the law but a choice of the company hosting the platform. A platform that you don’t have to use if you don’t like their “Marxist algorithms”.


“Material inciting violence”? So, anything: anti-terrorist, pro-war, anti-war, suggestions of punishments for criminals, revolutions, anything to do with football (particularly in England), news reports about criminals, advertisements for pesticides, anything to do with politics or religion.
Now that the media is under control, we can watch wide-angle camera shots of dogs and cats while chugging cough syrup (South Park), oblivious to evil while it pours over everyone like lava.
Really gives meaning to “The revolution will not be televised”


“…self-regulation has been a success on the platforms that terrorists most like to use to spread their messages.”
If self-regulation were successful on those platforms, terrorists would not like to use them.


So, pretty much any agent of the police and/or state, merely by putting a tick in a box on an online form, will be able to more or less expunge anything they don’t like from the majority of the internet. That sure looks like respect for free speech, democracy and due judicial process.


These sites have the worst algorithms ever, and while I don’t think terror content should be posted and propagated, I do think humans should be reviewing these posts. A person could make a counter-post showing just how bad terrorism is and get their content taken down because it has “terrorist” in it… sigh.


It’s already happened: Facebook’s algorithm deemed the US Declaration of Independence to be extremism.


Stamping out extremist and terrorist content sounds like a great idea, until you learn that a woman in the UK was arrested at an airport under ‘anti-terror’ laws because she was an anti-fracking activist. Kind of makes you wonder where they’re going with ‘anti-terror’ and ‘privacy’ laws, doesn’t it?


It makes me wonder if she was charged, or simply arrested, and how her anti-fracking activism manifested itself.
Perhaps she really was being persecuted, but anti-fracking activists pass through airports every day without being arrested.

