Naked Security

ISIS recruiter caught by Facebook screenshot

An ISIS follower tried to radicalize hundreds of strangers worldwide, until one of his targets captured the messages and gave them to police.

Mohammed Kamal Hussain, a 28-year-old recruiter for Daesh (also known as ISIS, ISIL or Islamic State), used Facebook, WhatsApp and Telegram to send thousands of messages to strangers in an effort to radicalize them. He was given seven years in jail after one of his targets took screenshots of the messages and turned them over to police.
Hussain, a Bangladeshi national who had overstayed his visa and was living in East London, was found guilty on Monday at Kingston Crown Court of two counts of encouraging terrorism and one count of supporting a proscribed organization.
According to London’s Met Police, Hussain came to their attention only when a man who lives outside the UK emailed the Home Office in March 2017, saying he’d received Facebook messages from a stranger inviting him to join Daesh. Instead of ignoring the unprompted pitches to join the terrorist group, he grabbed screenshots of the messages and sent them to police.
Commander Dean Haydon, head of the Met Police Counter Terrorism Command, praised him as a “conscientious individual” who trusted his instincts to report the suspicious messages:

It is in great part thanks to him that police were able to bring Hussain to justice.

Met Police said that investigators trawled thousands of messages sent by Hussain. Among them were Facebook posts encouraging and glorifying terrorism, including a speech from the so-called “leader” of Daesh, Abu Bakr al Baghdadi. Hussain was arrested on 30 June 2017.
According to Commander Dean Haydon, when police searched Hussain’s devices, they found “barbaric” videos of Daesh violence and “warped reasoning” for killing people, including children and Muslims.
Haydon encouraged anyone who “sees something online that they have even the slightest feeling could be terrorist- or extremist-related” to follow the example of the screenshot-grabbing man who helped police track down Hussain. He suggested reporting such content to police via the Home Office’s online reporting form, which is part of its ACT (action counters terrorism) campaign.
Reporting can be done anonymously. Haydon said the site has a team of specially trained officers who look at all reports and decide if action is required.
Earlier this month, the Home Office announced the launch of an artificial intelligence (AI) tool that it said will be able to automatically identify extremist videos like the kind Hussain was disseminating – and even block them before they can be uploaded.
The Home Office cited tests showing the tool can automatically detect 94% of Daesh propaganda with 99.995% accuracy – a rate it says translates to only 50 out of one million randomly selected videos requiring human review. The tool can run on any platform and can be integrated into the video upload process to stop most extremist content before it ever reaches the internet.
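The Home Office's figures can be sanity-checked with some simple arithmetic. The sketch below assumes (as the article's numbers imply, though the Home Office hasn't spelled this out) that "99.995% accuracy" means only 0.005% of non-extremist videos are wrongly flagged, and that "detects 94%" is the recall on actual propaganda:

```python
# Rough sanity check of the quoted Home Office figures.
# Assumption (not stated in the article): "99.995% accuracy" is read as a
# 0.005% false-positive rate on benign videos; "94%" is recall on propaganda.

total_benign = 1_000_000          # one million randomly selected videos
false_positive_rate = 1 - 0.99995 # 0.005% of benign uploads wrongly flagged
recall = 0.94                     # share of actual propaganda caught

flagged_for_review = round(total_benign * false_positive_rate)
print(flagged_for_review)  # matches the "50 out of one million" claim

# For illustration: out of a hypothetical 10,000 real propaganda videos,
# 94% recall means 9,400 would be caught and 600 would slip through.
propaganda = 10_000
print(round(propaganda * recall), round(propaganda * (1 - recall)))
```

Note that the two numbers measure different things: the 50-per-million figure concerns benign content wrongly flagged, while the 94% figure concerns extremist content correctly caught – which is precisely the distinction a commenter raises below the article.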
That £600,000 AI tool was developed by the Home Office and ASI Data Science. It’s primarily designed for smaller platforms such as Vimeo, Telegra.ph and pCloud – platforms that don’t have the resources to build their own AI counterterrorist tools but still need them, given that they’re increasingly targeted by Daesh and its supporters.


As for the bigger platforms, they’re already working on their own machine-learning projects to fight terrorist content online.
The most recent such project runs on Facebook Messenger. The BBC reported on Monday that Facebook has been running, and funding, a pilot project to de-radicalize extremists.
Led by the counter-extremism organization Institute for Strategic Dialogue (ISD), the aim was to mimic extremists’ own recruitment methods, specifically in the realm of direct messaging. ISD staffers scanned several far-right and Islamist pages on Facebook for targets, then manually searched profiles to find instances of violent, dehumanizing and hateful language.
Eleven “intervention providers” – they were either former extremists, survivors of terrorism or trained counsellors – reached out to 569 people. Seventy-six of those people responded and took part in conversations of five or more messages, and researchers claimed that eight showed signs of rethinking their views.


1 Comment

I could easily produce a tool that detected 100% of Daesh propaganda with 100% recall, but only by simply flagging every single message as Daesh propaganda. Obviously, that wouldn’t be very useful.
As far as I know, no one has mentioned the false positive rate of the Home Office AI tool, which means we can’t meaningfully calculate the number of messages that would have to be inspected by hand.
