
Are Google and Facebook set to block extremist content with automatic hashing?

Internet companies are discussing adopting, or have already quietly adopted, the same technology used to spot copyright-infringing content and child abuse imagery.

Since 2008, the National Center for Missing & Exploited Children (NCMEC) has offered to share with ISPs a list of hash values that correspond to known child abuse images.

That list, which was eventually coupled with Microsoft’s own PhotoDNA technology, has enabled companies like Google, Microsoft, ISPs and others to check large volumes of files for matches without those companies themselves having to keep copies of offending images, and without human eyes having to invade users’ privacy by scanning their email accounts for known child abuse images.
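To make the list-matching idea concrete, here's a minimal Python sketch of how a platform might check files against a shared digest list. Note that this is not PhotoDNA itself, which uses a proprietary perceptual hash that survives resizing and re-encoding; a plain cryptographic digest is used here only to illustrate the lookup flow, and the hash values are hypothetical:

```python
# Minimal sketch of hash-list matching, using a plain SHA-256 digest.
# The clearinghouse shares only the digests, so the platform never needs
# a copy of the offending image and no human reviews users' files.
import hashlib

KNOWN_BAD_HASHES = {
    # Digests supplied by a clearinghouse such as NCMEC (hypothetical values)
    "9f2c5a0d41e8b7c6aa13f0de9b8c7a1e2d3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b",
}

def sha256_of_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path: str) -> bool:
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```

An exact digest like this breaks if even one byte of the file changes, which is why production systems rely on perceptual hashing instead (more on that below).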

Earlier this month, the Counter Extremism Project (CEP) unveiled a software tool that works in a similar fashion and urged the big internet companies to adopt it.

Instead of child abuse imagery, the version the group unveiled tags the gruesome, violent content that radical jihadists spread as propaganda or use to recruit followers for attacks.

And instead of just focusing on images, the new, so-called “robust hashing” technology encompasses video and audio, as well.

It comes from Dartmouth College computer scientist Hany Farid, who also worked on the PhotoDNA system.

The algorithm works to identify extremist content on internet and social media platforms, including images, videos, and audio clips, with the aim of stopping the viral spread of content illustrating beheadings and killings.

Now, sources familiar with the process have told Reuters that YouTube and Facebook – two of the world’s largest destinations for watching videos online – have quietly started to adopt the technology to identify and remove extremist content.

While it’s been adopted for use in targeting child abuse imagery, the technology actually got its start in copyright takedown demands.

But whichever content it’s used to identify, the software works in a similar fashion: it looks for “hashes,” unique digital fingerprints that online platforms compute from the content. If a media file has already been identified as extremist, it can be quickly identified and removed wherever it’s posted.
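To see why a “robust” hash keeps matching even after a file has been recompressed or rescaled, here's a toy perceptual-hash sketch in Python using a simple 8x8 average hash and a Hamming-distance threshold. This is an illustrative stand-in, not the CEP's or PhotoDNA's actual algorithm; the threshold value and the Pillow dependency are assumptions:

```python
# Toy perceptual hash (average hash / aHash) with fuzzy matching.
# Requires the Pillow imaging library: pip install Pillow
from PIL import Image

def average_hash(path: str) -> int:
    # Shrink to 8x8 grayscale, then set one bit per pixel:
    # 1 if the pixel is brighter than the mean, else 0.
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two 64-bit hashes.
    return bin(a ^ b).count("1")

def matches_flagged(path: str, flagged: set, threshold: int = 5) -> bool:
    # A small Hamming distance means "visually the same picture",
    # even after compression or scaling. The threshold is an assumption.
    h = average_hash(path)
    return any(hamming(h, f) <= threshold for f in flagged)
```

Because the fingerprint is derived from coarse visual structure rather than raw bytes, a re-encoded copy of a flagged image lands within a few bits of the original hash and still matches.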

This won’t stop new extremist media from being posted. The hashes can’t automatically detect that a video contains footage of a beheading, for example.

But once such a horrific video has been identified as extremist, it can be spotted and removed automatically, rather than going through the slower cycle of being reported, then vetted and identified by humans, during which time it has the chance to spread virally.

Neither YouTube nor Facebook would confirm or deny to Reuters that they’re using hashes to remove known extremist media.

But why would they? Reuters quoted Matthew Prince, chief executive of content distribution company CloudFlare:

There’s no upside in these companies talking about it. Why would they brag about censorship?

As it is, President Obama, along with other US and European leaders, has increasingly voiced concern about online radicalization.

Two weeks ago, the president said that the Orlando mass shooting was “inspired” by violent extremist propaganda, and that “one of the biggest challenges” is to combat ISIL’s propaganda “and the perversions of Islam” generated on the internet.

According to Reuters’ sources, in late April, representatives from YouTube, Twitter, Facebook and CloudFlare held a call to discuss options including the CEP’s content-blocking system.

The sources said that the companies were wary of letting an outside group decide what defined unacceptable content.

Seamus Hughes, deputy director of George Washington University’s Program on Extremism, noted to Reuters that extremist content differs from child abuse imagery in that it exists on a spectrum, and different web companies draw the line in different places.

Besides not wanting to publicize that they’re automatically censoring content, companies already using hashing to block extremist content have other good reasons not to talk about it. According to Reuters’ sources, they don’t want to tip off terrorists, who could then manipulate the system.

Nor do such companies want to be thrust into the position of having repressive regimes demand that the algorithms be used to censor their opponents.

The companies reportedly raised alternatives, including establishing a new industry-controlled nonprofit or expanding an existing one, but all of the options involved hashing technology.

8 Comments

So now, in the very near future (not that it isn’t already happening on a smaller scale), any and all political ideology (including history, religion and science) that does not match the desires of those in power will be blocked, censored and dealt with. These powers, just like all powers, will be abused to the detriment of society.


That’s an extreme way to look at it. There is a time and a place for all things on the internet, and websites that are FULL of young people should not allow this kind of material to poison their minds. I’m all for free speech and a free, uncensored internet, but like all things in this life, they have to be within reason.

Remember the reporters who got shot on live TV, with the footage then posted on Facebook? How many young people had to watch it because Facebook automatically started videos in your news feed? This technology would have prevented that trauma for countless young people around the world.

If you want to see extremist ISIS videos, go to their websites and watch them.


I agree with the idea of wanting to keep innocence, but ignoring the problems of the world will only let them grow. (like an infection of any kind) The only functional way to prevent children from exposure to content they aren’t mature enough to deal with is to limit their access to the data. Both the data and the kids will always be there. At some point they will see the worst, and we hope the kids have been prepared to be able to deal with it. The only real way to get rid of the offensive material is to encourage society to be better, and that requires understanding things we don’t like. Unfortunately, I don’t see atrocities ending while people exist.


Count on it. Just like the No-Fly List: a secret process with secret rules enforced by secret agencies, not accountable to the public, not acknowledged by the enforcing body and with no way out for anyone/anything wrongly accused. Once again, anonymous “gatekeepers” get to decide what’s permissible for the rest of us.


This is a good thing, but it has huge potential for abuse. And, that abuse will never be known about by the people whose content is under attack.


I can fix that in short order and hereby volunteer for the position of Emperor of the Wurld and pledge that decisions will be based solely on how much cas . . . Oh, Wait! That’s how it works now!

