In a year in which facial recognition has made massive strides in invading personal privacy and settling in as a favored tool for government surveillance, Microsoft isn’t just open to government regulation; it’s asking for it.
On Thursday, in a speech at the Brookings Institution, Microsoft President Brad Smith warned about facial recognition technology spreading “in ways that exacerbate societal issues.” Never mind any dents to profits, he said, we need legislation before the situation gets more dystopian than it already is.
We don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.
We must ensure that the year 2024 doesn’t look like a page from the novel 1984.
Smith said that Microsoft, after much pondering, has decided to adopt six principles to manage the risks and potential for abuse that come along with facial recognition: fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance. He said that Microsoft will publish a document this week with suggestions on implementing the principles.
The good, the bad, and the intrusive
It’s not as if facial recognition is being used solely to create worlds of ubiquitous surveillance, in which you’re shamed for jaywalking, publicly humiliated for your financial troubles, or scanned by law enforcement in crowds that are overwhelmingly composed of innocent people.
Smith pointed to uses of facial recognition that, unlike those applications, are not, in fact, leading us all to an Orwellian, Black Mirror-esque dystopia. He pointed to cases of missing children being reunited with their families, for example. One such case is the story of a child with Down’s syndrome who’d wandered away from his father and was reunited with his family after being missing for four years. That happy ending was thanks to Microsoft’s Photo Missing Children (PhotoMC) technology.
Microsoft isn’t the only vendor using facial recognition to do good. Other vendors make tools to find missing children, too: Smith pointed to nearly 3,000 missing children being traced in four days when New Delhi police ran a trial of facial recognition technology in April.
On the other side of the coin, there are those Orwellian aspects of the technology.
For one thing, it’s well-documented that automated facial recognition (AFR) is an inherently racist technology. One reason is that black faces are over-represented in face databases to begin with, at least in the US: according to a study from Georgetown University’s Center on Privacy & Technology, in certain states, black Americans are arrested at up to three times the rate of their representation in the population. A demographic’s over-representation in the database means that whatever error rate a facial recognition technology has will be multiplied for that demographic.
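To make that multiplication concrete, here’s a back-of-the-envelope sketch. Every number in it is hypothetical, chosen only to show how over-representation in a database amplifies even a uniform error rate:

```python
# Hypothetical model: if every search compares a probe face against the
# whole database with the same per-comparison false-match probability,
# a group that is over-represented in the database absorbs a
# proportionally larger share of the false matches.

DB_SIZE = 1_000_000         # hypothetical mugshot database size
FALSE_MATCH_RATE = 0.001    # assumed uniform per-comparison false-match rate

share_of_population = 0.13  # group's share of the general population (hypothetical)
share_of_database = 0.35    # group's share of the database (hypothetical)

expected = DB_SIZE * share_of_database * FALSE_MATCH_RATE
proportional = DB_SIZE * share_of_population * FALSE_MATCH_RATE

print(f"Expected false matches landing on the group: {expected:.0f}")
print(f"If the group were represented proportionally: {proportional:.0f}")
print(f"Amplification factor: {share_of_database / share_of_population:.1f}x")
```

With these made-up figures, the over-represented group draws 350 false matches instead of the 130 its population share would predict, before any algorithmic bias enters the picture.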
Beyond that over-representation, facial recognition algorithms themselves have been found to be less accurate at identifying black faces.
During a scathing US House Oversight Committee hearing on the FBI’s use of the technology in 2017, it emerged that 80% of the people in the FBI’s face database have no arrest record of any kind. Yet the system’s recognition algorithm misidentifies people during criminal searches 15% of the time, with black women misidentified most often.
That’s a lot of people wrongly identified as persons of interest to law enforcement.
In spite of that, law enforcement across the world adores facial recognition. In recent weeks, it’s emerged that the Secret Service plans to test facial recognition around the White House. That’s according to Department of Homeland Security (DHS) documents uncovered by the American Civil Liberties Union (ACLU).
While it’s important to protect the physical security of the president and the White House, the ACLU points out, this is also “opening the door to the mass, suspicionless scrutiny of Americans on public sidewalks,” as the cameras will “include images of individuals passing by on public streets and parks adjacent to the White House Complex.”
The ACLU knows first-hand how prone to error the technology is: it’s tested Amazon Rekognition, the company’s facial recognition technology, which is used by police in Orlando, Florida, and found that it falsely matched 28 members of Congress with mugshots.
To address bias, Smith said that we need legislation that would require companies to provide documentation about what their technology can and can’t do – in plain English that customers and consumers can understand. He also said that new laws should require third-party testing to check for accuracy and unfair bias in facial recognition services and suggested that companies could make an API available for this purpose.
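Smith didn’t spell out what such a testing API would look like, but a third-party bias audit might follow the shape of this sketch. It assumes a hypothetical vendor endpoint that returns a similarity score for a pair of face images; the URL, request and response fields, and threshold are all invented for illustration:

```python
# Hypothetical audit harness: measure false-match rates per demographic
# group on a labeled benchmark of face-image pairs. The endpoint, payload,
# response field, and threshold below are assumptions, not a real API.

from collections import defaultdict
import requests

MATCH_URL = "https://vendor.example/v1/compare"  # hypothetical endpoint
THRESHOLD = 0.8                                  # assumed match threshold

def false_match_rates(benchmark):
    """benchmark: iterable of dicts with keys 'img_a', 'img_b',
    'same_person' (bool) and 'group' (str, a demographic label)."""
    trials = defaultdict(int)
    false_matches = defaultdict(int)
    for pair in benchmark:
        if pair["same_person"]:
            continue  # only different-person pairs can yield false matches
        resp = requests.post(
            MATCH_URL,
            json={"image_a": pair["img_a"], "image_b": pair["img_b"]},
        )
        score = resp.json()["similarity"]  # assumed response field
        trials[pair["group"]] += 1
        if score >= THRESHOLD:
            false_matches[pair["group"]] += 1
    return {g: false_matches[g] / trials[g] for g in trials}
```

Comparing the returned rates across groups, which should be roughly equal in an unbiased system, is the kind of check independent testers could run against any vendor’s service.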
He also said that laws should require humans to weigh in on facial recognition conclusions in “high-stakes scenarios,” including “where decisions may create a risk of bodily or emotional harm to a consumer, where there may be implications on human or fundamental rights, or where a consumer’s personal freedom or privacy may be impinged.”
Other things that new legislation should do, Smith said:
- Ensure that it’s not used for unlawful discrimination. As it is, human rights activists say that China is using its facial-recognition systems to track members of persecuted minorities, including Uighur Muslims, Protestant Christians and Tibetan Buddhists.
- Require that people know when they’re surveilled and that they give consent. Entities that use facial recognition to identify consumers should place a conspicuous notice that “clearly conveys that these services are being used.”
- Limit ongoing government surveillance of specified individuals. To protect democratic freedoms, individuals should only be surveilled in public spaces under court order or in cases of “imminent danger or risk of death or serious physical injury to a person.”
We’re still in the infancy of this new technology, Smith said. Microsoft plans to formally launch its principles and a supporting framework before the end of March 2019, but in the meantime, it admits it doesn’t yet know all the questions, let alone the answers.
Hopefully, this will put us all on the road to getting there, Smith said:
We believe that taking a principled approach will provide valuable experience that will enable us to learn faster. As we do, we’re committed to sharing what we learn, perhaps most especially with our customers through new material and training resources that will enable them to adopt facial recognition in a manner that gives their stakeholders and the public the confidence they deserve.
Comments

Mahhn
“it falsely matched 28 members of Congress with mugshots.” Sounds like it’s working perfectly. But hey, if you’re rich enough you can have your face removed from the DB next time around.
Steve
“The ACLU knows first-hand how prone to error the technology is: it’s tested Amazon Rekognition, the company’s facial recognition technology, which is used by police in Orlando, Florida, and found that it falsely matched 28 members of Congress with mugshots.”
Given the sorts of news stories that seem to consistently appear, how sure are they that those were FALSE matches? Perhaps they really were false matches, but they also had accurate matches for 2-3 times that many others? LOL!
Aron M
“it’s well-documented that automated facial recognition (AFR) is an inherently racist technology” – Indeed, it is well documented that computers are very racist.
JD
“How do we protect ourselves from the government misusing this technology?”
“I know – we’ll ask the government to regulate it!”
Brilliant (in the old Guinness commercials kind of way)