
Ex-NSA hacker builds AI tool to hunt hate groups’ symbols online

Emily Crose, an ex-hacker for the National Security Agency (NSA), ex-Reddit moderator and current network threat hunter at a cybersecurity startup, wanted to be in Charlottesville, Virginia, in August to join the protest against white supremacists.
Three people died in that protest. One of Crose’s friends was attacked and hurt by a neo-Nazi.
As Motherboard’s Lorenzo Franceschi-Bicchierai tells it, Crose was horrified by the violence of the event. But she was also inspired by her friend’s courage.
Her response has been to create and train an Artificial Intelligence (AI) tool to unmask hate groups online, be they on Twitter, Reddit, or Facebook, by using object recognition to automatically spot the symbols used by white nationalists.
The images her tool automatically seeks out are so-called dog whistles: the Black Sun (also known as the Schwarze Sonne), a design based on an ancient sun wheel created by pagan Germanic and Norse tribes that was later adopted by the Nazi SS and has since been incorporated into neo-Nazi logos, and alt-right doctored Pepe the Frog memes.
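Crose hasn't published the detection pipeline itself, but the "inference graph" she mentions later suggests something in the style of the TensorFlow Object Detection API. As a rough, hypothetical sketch (the model file name and tensor wiring below are assumptions, not her code), symbol spotting with a frozen inference graph looks something like this:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style API for frozen-graph inference
from PIL import Image

tf.disable_eager_execution()

GRAPH_PATH = "nemesis_inference_graph.pb"  # hypothetical model file

# Load the frozen inference graph once at startup.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

def detect_symbols(image_path, score_threshold=0.7):
    """Return (class_id, score, box) for each detected symbol above threshold."""
    image = np.array(Image.open(image_path).convert("RGB"))
    with tf.Session(graph=graph) as sess:
        # Standard tensor names exported by the TF Object Detection API.
        boxes, scores, classes = sess.run(
            ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
            feed_dict={"image_tensor:0": image[np.newaxis, ...]},
        )
    return [
        (int(cls), float(score), box.tolist())
        for box, score, cls in zip(boxes[0], scores[0], classes[0])
        if score >= score_threshold
    ]
```

Each class ID would map, via a label file, to a symbol such as the Black Sun; anything scored above the confidence threshold gets flagged for human review.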
Crose dubbed the AI tool NEMESIS, after the Greek goddess of retribution against those who succumb to arrogance before the gods.

Take that to mean whatever you will, but you have to admit that it sounds pretty cool.

Crose says it’s just a proof of concept at this point, and she has agreed with detractors who say that the technology is “riddled with surveillance and privacy issues.”

She posted a clip to Twitter showing NEMESIS in action, picking out the Black Sun and other white supremacist imagery carried by protesters.

Crose said that from the beginning, the tool was designed to identify symbols, not faces. It would of course be easy to create a convolutional neural network (CNN) for facial recognition that could associate symbolism with faces, she said, but “that’s not my goal.”
She pointed to Google’s use of CNNs to navigate self-driving cars.
Should we trust CNNs with people’s personal privacy? Not if they’re in the wrong hands, she said: just go ask the Electronic Frontier Foundation (EFF) about the issues that arise.


In September, when it addressed the House of Lords Select Committee on Artificial Intelligence, the EFF brought up issues of bias that can arise from the use of AI, be it CNNs or other deep-learning techniques.
Such systems must be auditable, if not transparent, the EFF said.

NEMESIS is clearly generating a lot of controversy – controversy that Crose apparently welcomes, given that it “tells me I’m doing something right.”
But as she told Motherboard, NEMESIS hasn’t evolved into an autonomous, privacy-invading AI, by any means. In fact, it’s still fairly dumb, she said: humans remain in the loop. People have to curate the images of the symbols in its inference graph and make sure they’re being used in a white supremacist context, so that the tool doesn’t inadvertently flag users who post, say, Hindu swastikas.
In other words, NEMESIS still needs to be taught context, Crose said:

It takes thousands and thousands of images to get it to work just right.
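That curation step is essentially supervised training-data collection. As a generic, hypothetical illustration (this is not Crose’s actual setup; the directory names and class labels below are assumptions), fine-tuning a pretrained CNN on a human-curated symbol dataset might look like this in Keras:

```python
import tensorflow as tf
from tensorflow import keras

# Hypothetical curated dataset: human reviewers have sorted crops of
# detected symbols into context classes (e.g. "extremist" vs "benign").
train_ds = keras.utils.image_dataset_from_directory(
    "curated_symbols/train", image_size=(224, 224), batch_size=32)
val_ds = keras.utils.image_dataset_from_directory(
    "curated_symbols/val", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

# With only thousands of images, training a CNN from scratch would
# overfit; reuse an ImageNet-pretrained backbone and train a new head.
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

Reusing a pretrained backbone is the usual trick when you have thousands, rather than millions, of labelled images, and the human-labelled context classes are exactly what teach the model the difference between, say, a neo-Nazi Black Sun post and a Hindu swastika.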


Image courtesy of Emily Crose / Twitter
