Emily Crose, a former National Security Agency (NSA) hacker, former Reddit moderator and current network threat hunter at a cybersecurity startup, wanted to be in Charlottesville, Virginia, in August 2017 to join the protest against white supremacists.
Three people died in that protest. One of Crose’s friends was attacked and hurt by a neo-Nazi.
As Motherboard’s Lorenzo Franceschi-Bicchierai tells it, Crose was horrified by the violence of the event. But she was also inspired by her friend’s courage.
Her response has been to create and train an artificial intelligence (AI) tool to unmask hate groups online, whether on Twitter, Reddit, or Facebook, using object recognition to automatically spot the symbols used by white nationalists.
The images her tool automatically seeks out are so-called dog whistles: for example, the Black Sun (also known as the “Schwarze Sonne,” a symbol based on an ancient sun-wheel design created by pagan Germanic and Norse tribes, later adopted by the Nazi SS, and since incorporated into neo-Nazi logos) or Pepe the Frog memes doctored by the alt-right.
Crose dubbed the AI tool NEMESIS, she says, after the Greek goddess who exacts retribution on those who succumb to hubris before the gods:
Take that to mean whatever you will, but you have to admit that it sounds pretty cool.
Crose says it’s just a proof of concept at this point …
“This is censorship! I won’t stand for this!”
The thing is, Nemesis isn’t plugged into anything yet. Right now, It’s a proof of concept; a concept which obviously is very controversial (which tells me I’m doing something right.)
— 𝓞𝓻𝓲𝓰𝓲𝓷𝓪𝓵 👾 𝓢𝔂𝓷 (@emilymaxima) January 7, 2018
… and has agreed with detractors who say that the technology is “riddled with surveillance and privacy issues.”
“This technology is riddled with surveillance and privacy issues!”
You’re absolutely right.
Actually, this is the most frightening part of #Nemesis if I’m being honest with you.
— 𝓞𝓻𝓲𝓰𝓲𝓷𝓪𝓵 👾 𝓢𝔂𝓷 (@emilymaxima) January 7, 2018
She posted a clip to Twitter showing NEMESIS in action, picking out the Black Sun and other white supremacist images carried by protesters.
Here’s some more Nazi Vision #Nemesis goodness for tonight. pic.twitter.com/54437yNkGc
— 𝓞𝓻𝓲𝓰𝓲𝓷𝓪𝓵 👾 𝓢𝔂𝓷 (@emilymaxima) December 2, 2017
Crose said that from the beginning, the tool has been designed to identify symbols, not to identify faces. It would of course be easy for her to create a convolutional neural network (CNN) for facial recognition that could associate symbolism with faces, she said, but “that’s not my goal.”
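Crose hasn’t published NEMESIS’s code, but the underlying idea is standard computer vision. Below is a minimal, hypothetical sketch of a CNN symbol classifier in PyTorch; the class names, image size and layer sizes are illustrative assumptions, not details of NEMESIS, and a real detector like the one in her clips would also localize symbols within a scene rather than just classify whole images.

```python
# A minimal sketch (NOT Crose's actual code, which hasn't been published) of a
# CNN that classifies images into hypothetical symbol categories.
import torch
import torch.nn as nn

# Hypothetical label set, for illustration only.
CLASSES = ["black_sun", "doctored_pepe", "benign"]

class SymbolClassifier(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        # Two convolution/pooling stages extract visual features
        # (edges and curves first, then symbol-like shapes).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 32x32 -> 16x16
        )
        # A small fully connected head maps the features to class scores.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)

model = SymbolClassifier()
dummy_batch = torch.randn(4, 3, 64, 64)    # four fake 64x64 RGB images
scores = model(dummy_batch)
print(scores.shape)                         # torch.Size([4, 3])
```

Swapping the label set is all it would take to point the same kind of architecture at faces, which is precisely the dual-use concern Crose acknowledges.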
She pointed to how Google uses CNNs to help its automated cars navigate.
Should we trust CNNs with people’s personal privacy? Not if they’re in the wrong hands, she said: just go ask the Electronic Frontier Foundation (EFF) about the issues that arise.
In September, when it addressed the House of Lords Select Committee on Artificial Intelligence, the EFF brought up issues of bias that can arise from the use of AI, whether CNNs or other deep-learning techniques.
Such systems must be auditable, if not transparent, the EFF said, giving these examples:
- AI systems used for government purposes (e.g., to advise judicial decisions, to help decide what public benefits people do or do not receive, and especially any AI systems used for law enforcement purposes).
- AI systems used by companies to decide which individuals to do business with and how much to charge them (e.g., systems that assign credit scores or other financial risk scores or financial profiles to people, systems that advise insurance companies about the risk associated with a potential customer, and systems that adjust pricing on a per-customer basis based on the traits or behavior of that customer).
- AI systems used by companies to analyze potential employees.
- AI systems used by large corporations to decide what information to display to users (e.g., search engines, AI systems used to decide what news articles or other items of interest to show someone online – if they make those decisions based on individual user characteristics – and AI systems used to decide what online ads to show someone).
NEMESIS is clearly generating a lot of controversy – controversy that Crose apparently welcomes, given that it “tells me I’m doing something right.”
But as she told Motherboard, NEMESIS hasn’t evolved into an autonomous, privacy-invading AI, by any means. In fact, it’s kind of dumb at this point, she said, given that humans are still involved: it takes human intervention to curate the pictures of symbols that feed its inference graph and to make sure those symbols are being used in a white supremacist context, rather than, say, inadvertently flagging users who post Hindu swastikas.
In other words, NEMESIS still needs to be taught context, Crose said:
It takes thousands and thousands of images to get it to work just right.
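To give a sense of what those thousands of images feed into mechanically, here’s a hypothetical sketch of the kind of supervised training loop behind a classifier like this, again in PyTorch. Random tensors stand in for the curated dataset, and the model, label set and hyperparameters are illustrative assumptions rather than anything from NEMESIS’s actual pipeline.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Random tensors stand in for thousands of human-curated, context-checked
# images; in practice these would be labeled photos of symbols like the
# Black Sun, plus benign look-alikes that human curators have filtered.
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 3, (256,))   # three hypothetical classes
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A tiny stand-in classifier (see the fuller sketch above).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                   # 64x64 -> 32x32
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                 # real training runs far longer
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()                # compute gradients
        optimizer.step()               # nudge weights toward fewer errors
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The loop itself is routine; the hard, human part is the curation Crose describes, since the labels only mean something if someone has verified the context of each image.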
Image courtesy of Emily Crose / Twitter