
Facial recognition is real-life ‘Black Mirror’ stuff, Ocasio-Cortez says

"People think they're going to put on a cute filter and have puppy dog ears, and not realize that that data's being collected."

During a House hearing on Wednesday, Rep. Alexandria Ocasio-Cortez said that the spread of surveillance via ubiquitous facial recognition is like something out of the tech dystopia TV show “Black Mirror.”

This is some real-life “Black Mirror” stuff that we’re seeing here.

Call this episode “Surveil Them While They’re Obliviously Playing With Puppy Dog Filters.”
Wednesday’s was the third hearing on the topic for the House Oversight and Reform Committee, which is working on legislation to address concerns about the increasingly pervasive technology. In Wednesday’s hearing, Ocasio-Cortez called out the technology’s hidden dangers – one of which is that people don’t really understand how widespread it is.
At one point, Ocasio-Cortez asked Meredith Whittaker – co-founder and co-director of New York University’s AI Now Institute, who had noted in the hearing that facial recognition is a potential tool of authoritarian regimes – to remind the committee of some of the common ways that companies collect our facial recognition data.
Whittaker responded with a laundry list: she said that companies scrape our biometric data from sites like Flickr, from Wikipedia, and from “massive networked market reach” such as that of Facebook.

Ocasio-Cortez: So if you’ve ever posted a photo of yourself to Facebook, then that could be used in a facial recognition database?
Whittaker: Absolutely – by Facebook and potentially others.
Ocasio-Cortez: Could using a Snapchat or Instagram filter help hone an algorithm for facial recognition?
Whittaker: Absolutely.
Ocasio-Cortez: Can surveillance camera footage that you don’t even know is being taken of you be used for facial recognition?
Whittaker: Yes, and cameras are being designed for that purpose now.

This is a problem, the New York representative suggested:

People think they’re going to put on a cute filter and have puppy dog ears, and not realize that that data’s being collected by a corporation or the state, depending on what country you’re in, in order to … surveil you, potentially for the rest of your life.

Whittaker’s response: Yes. And no, average consumers aren’t aware of how companies are collecting and storing their facial recognition data.
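To make the mechanics concrete: an ordinary photo can be turned into a stored biometric template with a few lines of off-the-shelf code. Here’s a minimal sketch using the open-source face_recognition Python library; the file names and the “database” are hypothetical stand-ins, not anything described in the hearing.

# Illustrative sketch only: how an ordinary photo becomes a biometric
# record. Uses the open-source face_recognition library; the file
# names and the "database" dict are hypothetical.
import face_recognition

# Load a photo someone posted publicly (hypothetical file).
image = face_recognition.load_image_file("vacation_photo.jpg")

# Detect faces and compute a 128-dimensional encoding for each one.
# This vector is the biometric template a database would store.
encodings = face_recognition.face_encodings(image)

known_faces = {}  # hypothetical database: identity -> encoding
if encodings:
    known_faces["user_123"] = encodings[0]

# Later, any other image can be matched against the stored template.
probe = face_recognition.load_image_file("cctv_frame.jpg")
for candidate in face_recognition.face_encodings(probe):
    matches = face_recognition.compare_faces(
        list(known_faces.values()), candidate, tolerance=0.6
    )
    if any(matches):
        print("Probe image matches a stored identity")

Once an encoding like that is stored, any later photo or camera frame of the same person can be matched against it, which is what makes the quiet collection Whittaker describes so consequential.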
It’s “Black and brown Americans” who suffer the most from the ubiquity of this error-prone technology, Ocasio-Cortez said, bringing up a point from a previous hearing in May 2019: that the technology has the highest error rates for non-Caucasians.

Problems in facial recognition technology

At the May 2019 hearing, Joy Buolamwini, founder of the Algorithmic Justice League (AJL) – a nonprofit that works to illuminate the social implications and harms of artificial intelligence (AI) – had testified about how failures of facial analysis technologies have had “real and dire consequences” for people’s lives, including in critical areas such as law enforcement, housing, employment, and access to government services.
Buolamwini founded the AJL after experiencing such failure firsthand, when facial analysis software failed to detect her dark-skinned face until she put on a white mask. Such failures have been attributed to the lack of diversity within the population of engineers who create facial analysis algorithms. In other words, facial recognition achieves its highest accuracy rate when used with white male faces.
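The disparity Buolamwini describes is a measurable quantity: run the same matcher on labeled trials for each demographic group and compare the resulting error rates. A minimal sketch of such a per-group audit, with made-up records standing in for real evaluation data:

# Minimal sketch of a per-group error-rate audit. The records below
# are made-up stand-ins for real evaluation data.
from collections import defaultdict

# Each trial: (demographic_group, algorithm_said_match, truly_a_match)
trials = [
    ("lighter_male", True, True), ("lighter_male", False, False),
    ("darker_female", True, False), ("darker_female", False, True),
    # ... a real audit uses thousands of labeled trials per group
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, predicted, actual in trials:
    counts[group][0] += predicted != actual
    counts[group][1] += 1

for group, (errs, total) in sorted(counts.items()):
    print(f"{group}: {errs / total:.0%} error rate over {total} trials")

An audit like this only surfaces the problem; fixing it requires more representative training data and engineering teams, which is the point Buolamwini has made repeatedly.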
Here’s Buolamwini in the May hearing:

If you have a case where we’re thinking about putting, let’s say, facial recognition technology on police body cams, in a situation where you already have racial bias, that can be used to confirm [such bias].

In Wednesday’s hearing, Ocasio-Cortez said that the worst implications are that a computer algorithm will suggest that a Black person has likely committed a crime when they are, in fact, innocent.


Because facial recognition is being used without our consent or knowledge, she suggested, we may be mistakenly accused of a crime and have no idea that the technology has been used as the basis for the accusation.
That’s right, the AI Now Institute’s Whittaker said, and there’s evidence that the use of facial recognition is often not disclosed. That lack of disclosure is compounded by our “broken criminal justice system,” Ocasio-Cortez said, where people often aren’t allowed to access the evidence used against them.
Case in point: the Willie Lynch case in Florida. A year ago, Lynch, of Jacksonville, asked to see photos of other potential suspects after being arrested for allegedly selling $50 worth of crack to undercover cops. The identification had relied on facial recognition: the cops had taken poor-quality smartphone photos of the seller and sent them to a facial recognition technology expert, who matched them to Lynch.
Although it was his constitutional right to see the evidence against him, a state appellate court decided that Lynch had no legal right to see the other matches returned by the facial recognition software that helped put him behind bars. This, despite the algorithm having expressed only one star of confidence that it had generated the correct match.
From the American Civil Liberties Union’s (ACLU’s) writeup of the case:

Because the officers only identified him based on the results of the face recognition program, looking into how the face recognition algorithm functioned and whether errors or irregularities in the procedure tainted the officers’ ID was critical to his case. But when Lynch asked for the other booking photos the program returned as possible matches, both the government and the court refused. Their refusal violated the Constitution, which requires prosecutors to disclose information favorable to a defendant.
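To illustrate why those other booking photos matter, here is a hypothetical sketch of what a face-recognition search result looks like: a ranked list of candidates with confidence scores, of which investigators used only the first. All identifiers, scores, and the threshold are invented for illustration.

# Hypothetical sketch of a face-recognition search result: a ranked
# list of candidate booking photos with confidence scores. All IDs,
# scores, and the threshold are invented for illustration.
candidates = [
    {"booking_id": "A-1001", "stars": 1},  # top match, used to charge Lynch
    {"booking_id": "B-2044", "stars": 1},
    {"booking_id": "C-3310", "stars": 1},
    {"booking_id": "D-4127", "stars": 1},
]

MIN_STARS_FOR_ID = 4  # hypothetical policy threshold for treating a hit as an ID

top, runners_up = candidates[0], candidates[1:]
print(f"Top match: {top['booking_id']} ({top['stars']} star)")

if top["stars"] < MIN_STARS_FOR_ID:
    print("Low-confidence hit: a lead to investigate, not an identification")

# The defense asked for exactly this list -- the alternatives the
# software considered plausible -- and was refused.
print("Undisclosed alternatives:", [c["booking_id"] for c in runners_up])

A one-star top match, with other candidates scoring comparably, is at best an investigative lead rather than an identification, which is why the undisclosed runners-up were potentially favorable to the defense.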

Ocasio-Cortez, in Wednesday’s hearing:

These technologies are almost automating injustices, both in our criminal justice system but also automating biases that compound on the lack of diversity in Silicon Valley, as well.

C-SPAN has full coverage of the three hours of testimony given in Wednesday’s hearing.

4 Comments

It’s such a relief to see a politician who truly seems to understand and take seriously the ramifications of the misuse of surveillance technology, both in law enforcement and across society in general. That is not a political party statement. Simply put, in this day and age there needs to be adequate representation in all areas of government of people who can truly engage on these issues.
I appreciate the House Oversight and Reform Committee looking into facial recognition bias, and Representative Ocasio-Cortez’s insightful questions.

Even Buolamwini said “failures have been attributed to the lack of diversity within the population of engineers who create facial analysis algorithms”; so look for better “engineers” who have “diversity” in mind; maybe go for Chinese/Korean/Japanese/Indian/… engineers! If your areas/cities have a diversity of nations, for god’s sake, engineers from those diverse nations should be included in creating facial analysis algorithms, right?
The ACLU’s writeup also says “the officers only identified him based on the results of the face recognition program” and “both the government and the court refused”; so what that means is that it’s the humans, not the technology, right? That’s why “their refusal violated the Constitution”; not the technology, nor the AI.
As always, a knife can save or kill; that’s not the knife’s problem.

I see no “relief” in the politician having shown a minuscule knowledge of the Constitution of the United States of America, while denying crimes that occur because she doesn’t want to believe they do. However, she is obviously afraid her face is going to be seen somewhere she wishes it weren’t. This is an immature ‘politician’ (for now) telling the country that global warming is going to make humans extinct in twelve years. I do not trust anything she says or believes. If she were even to agree with me, I would suspect an ulterior motive. Any technology can be and has been misused. A human who cannot be truthful is one to avoid, especially if they have some “power.”

I think whoever wrote “The Terminator” is/was a supergenius, clearly ahead of their time. FR technology is something I’m afraid of, because there is no justification for it except to invade privacy.

