Emily Crose, ex-hacker for the National Security Agency (NSA), ex-Reddit moderator and current network threat hunter at a cybersecurity startup, wanted to be in Charlottesville, Virginia, in August to join the protest against white supremacists.
Three people died in connection with that protest. One of Crose’s friends was attacked and injured by a neo-Nazi.
As Motherboard’s Lorenzo Franceschi-Bicchierai tells it, Crose was horrified by the violence of the event. But she was also inspired by her friend’s courage.
Her response has been to create and train an artificial intelligence (AI) tool to unmask hate groups online, whether on Twitter, Reddit, or Facebook, by using object recognition to automatically spot the symbols used by white nationalists.
The images her tool automatically seeks out are so-called dog whistles, be they the Black Sun (also known as the “Schwarze Sonne,” an image based on an ancient sun-wheel artifact created by pagan Germanic and Norse tribes that was later adopted by the Nazi SS and has been incorporated into neo-Nazi logos) or alt-right-doctored Pepe the Frog memes.
Crose dubbed the AI tool NEMESIS. She says it’s named for the Greek goddess of retribution against those who succumb to arrogance before the gods.
Take that to mean whatever you will, but you have to admit that it sounds pretty cool.
Crose says it’s just a proof of concept at this point …
“This is censorship! I won’t stand for this!”
The thing is, Nemesis isn’t plugged into anything yet. Right now, it’s a proof of concept; a concept which obviously is very controversial (which tells me I’m doing something right.)
– Emily Crose (@emilymaxima) January 7, 2018
… and has agreed with detractors who say that the technology is “riddled with surveillance and privacy issues.”
“This technology is riddled with surveillance and privacy issues!”
You’re absolutely right.
Actually, this is the most frightening part of #Nemesis if I’m being honest with you.
– Emily Crose (@emilymaxima) January 7, 2018
She posted a clip to Twitter that shows NEMESIS in action, picking out the Black Sun and other white supremacist symbols carried by protesters.
Here’s some more Nazi Vision #Nemesis goodness for tonight. pic.twitter.com/54437yNkGc
– Emily Crose (@emilymaxima) December 2, 2017
Crose said that from the beginning, the tool has been designed to identify symbols, not to identify faces. It would of course be easy for her to create a convolutional neural network (CNN) for facial recognition that could associate symbolism with faces, she said, but “that’s not my goal.”
She pointed to how Google uses CNNs to navigate its self-driving cars.
Should we trust CNNs with people’s personal privacy? Not if they’re in the wrong hands, she said: just go ask the Electronic Frontier Foundation (EFF) about the issues that arise.
In September, when it addressed the House of Lords Select Committee on Artificial Intelligence, the EFF brought up issues of bias that can arise from the use of AI, whether from CNNs or other deep-learning techniques.
Such systems must be auditable, if not transparent, the EFF said, giving these examples:
- AI systems used for government purposes (e.g., to advise judicial decisions, to help decide what public benefits people do or do not receive, and especially any AI systems used for law enforcement purposes).
- AI systems used by companies to decide which individuals to do business with and how much to charge them (e.g., systems that assign credit scores or other financial risk scores or financial profiles to people, systems that advise insurance companies about the risk associated with a potential customer, and systems that adjust pricing on a per-customer basis based on the traits or behavior of that customer).
- AI systems used by companies to analyze potential employees.
- AI systems used by large corporations to decide what information to display to users (e.g., search engines, AI systems used to decide what news articles or other items of interest to show someone online – if they make those decisions based on individual user characteristics – and AI systems used to decide what online ads to show someone).
NEMESIS is clearly generating a lot of controversy – controversy that Crose apparently welcomes, given that it “tells me I’m doing something right.”
But as she told Motherboard, NEMESIS hasn’t evolved into an autonomous, privacy-invading AI by any means. In fact, it’s kind of dumb at this point, she said, given that there are still humans involved. It still requires human intervention to curate the pictures of the symbols in an inference graph and to confirm they’re being used in a white supremacist context, so that it doesn’t inadvertently flag users who post Hindu swastikas, for example.
In other words, NEMESIS still needs to be taught context, Crose said:
It takes thousands and thousands of images to get it to work just right.
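For a sense of what that training involves, here’s a minimal, hypothetical sketch of the kind of step Crose describes. This is not NEMESIS’s actual code: the directory layout, image size, and model shape are all illustrative assumptions. It uses TensorFlow’s Keras API to train a small CNN on a human-curated folder of labeled symbol images:

```python
# Hypothetical sketch of a small CNN symbol classifier - NOT NEMESIS itself.
# The data layout ("data/symbols" with one subfolder per class), the image
# size, and the architecture are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (128, 128)

# Load the human-curated training images; labels are taken from the
# subfolder names, e.g. data/symbols/black_sun/, data/symbols/benign/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/symbols", image_size=IMG_SIZE, batch_size=32)
num_classes = len(train_ds.class_names)

# A deliberately small CNN: two convolution/pooling stages, then a
# fully connected classifier head with one logit per symbol class.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# The "thousands and thousands of images" go into this step; the model
# is only as good as the curated data it sees.
model.fit(train_ds, epochs=10)
```

The code is the easy part; the curation Crose describes is the bottleneck, because a person has to vet every image in those folders so the classifier learns a symbol in its white supremacist context rather than, say, a religious one.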
Image courtesy of Emily Crose / Twitter
Anonymous
“make sure they’re being used in a white supremacist context”
So Antifa and other far-left organizations get a free pass to spread their violence?
Liam
Why would Satan cast out Satan? Left wingers are a sick cult. I’m not part of any white supremacist group nor am I racist in any way, but I’m also not a blind political cultist. She better have a really good lawyer because that tech is in direct violation of our nation’s first amendment rights. If this continues and people find out about it, she’ll have an army of citizens on both sides that will demand justice.
Anonymous
It should be “Schwarze Sonne”, not “Schwartze Sonne”.
Paul Ducklin
Fixed, thanks.
Mark
“Not if they are in the wrong hands.”
The problem is there ARE NO “right” hands.
John Gochnauer
This is a computer security issue?
Bryan
Anytime an electronic device segregates individuals from a population at large, then a digital privacy aspect is at play. So yes.
Even if not, it’s interesting.
(sorry late reply)