
Google releases free AI tool to stamp out child sexual abuse material

Since 2008, the National Center for Missing & Exploited Children (NCMEC) has made available a list of hash values for known child sexual abuse images. Provided by ISPs, these hash values (which act like digital fingerprints) enable companies to check large volumes of files for matches without having to keep copies of the offending images themselves or pry open people’s private messages.
More recently, in 2015, the Internet Watch Foundation (IWF) announced that it would share hashes of such vile imagery with the online industry in a bid to speed up its identification and removal, working with web giants Google, Facebook, Twitter, Microsoft and Yahoo to remove child sexual abuse material (CSAM) from the web.
It’s been worthy work, but it’s had one problem: you can only get a hash of an image after you’ve identified it. That means human analysts have to review a great deal of content by hand – onerous work for the reviewers, and an approach that doesn’t scale well enough to keep up with the scourge.
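To see why that’s a bottleneck, it helps to picture how hash matching works: a service fingerprints each file and checks it against the list of known hashes, so anything that hasn’t already been identified sails straight through. The snippet below is a minimal sketch of that idea using an ordinary cryptographic hash; real deployments typically use perceptual hashes (such as Microsoft’s PhotoDNA) so that resized or re-encoded copies still match, and the hash value shown here is purely illustrative.

```python
import hashlib

# Hypothetical set of known-CSAM hash values, standing in for a list
# distributed by a body such as NCMEC or the IWF. (Illustrative value only.)
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a file -- its 'digital fingerprint'."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_csam(path: str) -> bool:
    """Match the file against hashes of already-identified images.
    Brand-new material will never match -- which is exactly the scaling problem."""
    return file_fingerprint(path) in KNOWN_HASHES
```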
On Monday, Google announced that it’s releasing a free artificial intelligence (AI) tool to address that problem: technology that can identify, and report, online CSAM at scale, easing the need for human analysts to do all the work of catching new material that hasn’t yet been hashed.
Google Engineering Lead Nikola Todorovic and Product Manager Abhi Chaudhuri said in the announcement that the AI “significantly advances” Google’s existing technologies to “dramatically improve how service providers, NGOs, and other technology companies review violative content at scale.”
Google says that using deep neural networks for image processing will assist the reviewers who sort through images by prioritizing the content most likely to be CSAM for review.
The classifier builds on the historical approach of detecting such content – matching hashes of known CSAM – by also targeting content that hasn’t yet been confirmed as CSAM.
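Google hasn’t published the classifier’s internals, but the triage idea itself is simple: content that doesn’t match a known hash gets a model score, and the review queue is sorted so the images most likely to be CSAM reach a human analyst first. The sketch below illustrates that logic with a generic classifier callable standing in for the deep neural network; it’s an assumption-laden illustration, not Google’s implementation.

```python
import hashlib
from typing import Callable, Iterable, List, Tuple

def triage(
    images: Iterable[Tuple[str, bytes]],       # (name, raw image bytes) pairs
    known_hashes: set,                          # hashes of already-confirmed CSAM
    classifier: Callable[[bytes], float],       # model returning a 0.0-1.0 likelihood
) -> Tuple[List[str], List[str]]:
    """Report exact matches against known hashes immediately, then rank
    everything else by classifier score so reviewers see the riskiest
    content first instead of working in arrival order."""
    flagged, queue = [], []
    for name, data in images:
        if hashlib.sha256(data).hexdigest() in known_hashes:
            flagged.append(name)                    # known material: report straight away
        else:
            queue.append((classifier(data), name))  # new material: score it for review
    queue.sort(key=lambda item: item[0], reverse=True)
    return flagged, [name for _score, name in queue]
```

Whatever model is plugged in as the classifier, the payoff is the ordering: reviewers spend their time at the top of the queue rather than wading through everything in arrival order.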
The faster the identification, the faster children can be rescued, Google said:

Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse.

Google is making the tool available for free to NGOs and its industry partners via its Content Safety API: “a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it.”
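Google’s announcement doesn’t include a public client or reference documentation, so the following is only a guess at how such a service is typically consumed: post an image, get back a score to sort the review queue by. The endpoint URL, request fields, and response shape below are assumptions for illustration, not the documented Content Safety API.

```python
import base64
import requests

# Placeholder endpoint: the real Content Safety API is only available to
# vetted partners, and its actual interface may differ from this sketch.
ENDPOINT = "https://example.googleapis.com/v1/content:classify"

def review_priority(image_path: str, api_key: str) -> float:
    """Submit one image for classification and return an assumed
    'priority' score for the human review queue."""
    with open(image_path, "rb") as f:
        payload = {"image": {"content": base64.b64encode(f.read()).decode("ascii")}}
    resp = requests.post(ENDPOINT, json=payload, params={"key": api_key}, timeout=30)
    resp.raise_for_status()
    return float(resp.json().get("priority", 0.0))  # assumed response field
```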
Susie Hargreaves, CEO of the IWF, said:

We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material. By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.

How much faster? Google says that it’s seen the system help a reviewer find and take action on 700% more CSAM content than they could review and report without the aid of AI.
Google said that those interested in using the Content Safety API should reach out to the company via its API request form.
This won’t be enough to stop the spread of what Google called this “abhorrent” content, but the fight will go on, the company said:

Identifying and fighting the spread of CSAM is an ongoing challenge, and governments, law enforcement, NGOs and industry all have a critically important role in protecting children from this horrific crime.
While technology alone is not a panacea for this societal challenge, this work marks a big step forward in helping more organizations do this challenging work at scale. We will continue to invest in technology and organizations to help fight the perpetrators of CSAM and to keep our platforms and our users safe from this type of abhorrent content. We look forward to working alongside even more partners in the industry to help them do the same.

Fred Langford, deputy CEO of the IWF, told The Verge that the organization – one of the largest dedicated to stopping the spread of CSAM online – first plans to test Google’s new AI tool thoroughly.


As it is, there’s been a lot of hype about AI, he said, noting the “fantastical claims” made about such technologies.
While tools like Google’s are building towards fully automated systems that can identify previously unseen material without human interaction, such a prospect is “a bit like the Holy Grail in our arena,” Langford said.
The human moderators aren’t going away, in other words. At least, not yet. The IWF will keep running its tip lines and employing teams of humans to identify abuse imagery; will keep investigating sites to find where CSAM is shared; and will keep working with law enforcement to shut them down.

