Facial Recognition Creates Risks for Trans Individuals, Others

As more government entities look to adopt facial recognition, concerns have been raised about its potential risks and how the technology might have disproportionate impacts on transgender and nonbinary individuals.

As facial recognition technology becomes more commonly used by government agencies, some experts have raised concerns about the outsized potential for adverse impacts on certain individuals.

Some lawmakers have sought to restrict the use of this type of tech at the local and state levels, with some specifically citing concerns about its misidentification of people of color and transgender and nonbinary individuals.

CALLS FOR A BAN


In 2020, a coalition of organizations — including the Electronic Frontier Foundation (EFF) and the National Center for Transgender Equality (NCTE) — sent a letter asking the Privacy and Civil Liberties Oversight Board to urge the federal government to end its use of facial recognition technology. The letter cites studies that have documented biases in the technology and states that “the rapid and unregulated deployment of facial recognition technology poses a direct threat to the ‘precious liberties that are vital to our way of life.’”

Nathan Sheard, EFF’s associate director of community organizing, said that this type of surveillance technology — even setting aside its disparities in identifying people of color and transgender and nonbinary people — is unacceptable.

“We don’t believe that there is any responsible regulation of the technology, and that it simply needs to be banned for many reasons,” said Sheard.

He stated that government use of surveillance technology undermines First Amendment protections; for example, it could deter people from going to a medical clinic or participating in a protest.

Rodrigo Heng-Lehtinen, deputy executive director of NCTE, took a similar stance, calling for a moratorium until the technology “can be adequately studied,” both in terms of its impact on civil liberties and in minimizing accuracy disparities across populations.

Elaborating on these disparities, he said that facial recognition technology often predicts gender incorrectly because it relies on assumptions about features like facial structure. For transgender people, he said, the technology is frequently wrong.
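
To illustrate the structural problem (a simplified, hypothetical sketch, not any vendor’s actual model): automated gender classifiers typically map measurements of a face to one of exactly two labels, so a nonbinary person cannot be classified correctly by construction, and a transgender person is misclassified whenever the learned correlations between facial structure and gender do not hold for them.

```python
# Hypothetical sketch of a binary gender classifier's decision step.
# Illustrative only -- not any vendor's real model or API.
import numpy as np

LABELS = ["male", "female"]  # the classifier's entire label space

def classify_gender(face_embedding, weights):
    """Map a face embedding to one of exactly two labels.

    Whatever the person's actual identity, the output is forced into
    this binary: a nonbinary person cannot be labeled correctly, and a
    trans person is mislabeled whenever the learned correlations between
    facial structure and gender do not hold for them.
    """
    logits = weights @ face_embedding              # linear score per label
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over two classes
    idx = int(np.argmax(probs))
    return LABELS[idx], float(probs[idx])

# Demo with random stand-ins for a real embedding and trained weights.
rng = np.random.default_rng(0)
label, confidence = classify_gender(rng.normal(size=128), rng.normal(size=(2, 128)))
print(label, round(confidence, 3))  # always one of two labels, however confident
```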

DANGERS IN THE DATA


According to Heng-Lehtinen, differing accuracy rates across populations are not acceptable — especially because the most prominent use of this technology at this time is in law enforcement.

“That makes the stakes really high,” he said. “People of color, in general, are more likely to be targeted by law enforcement. Trans people are also more likely to be targeted by law enforcement.”

He stated that transgender people of color are frequently trapped in the legal system because of stigma and discrimination, and that this technology can exacerbate those harms, particularly for people who face multiple forms of discrimination at once.

A well-known example of this technology’s risks in the justice system is that of a Detroit man wrongfully arrested due to a facial recognition error.

“Because law enforcement has the power to deprive people of their safety and their freedom, and a monopoly on violence in order to execute that power, the potential impact and the potential harms of their use [of facial recognition technology] are much greater than some of the other use cases we might have explored,” Sheard explained.

Heng-Lehtinen emphasized that beyond general privacy concerns, there are concerns about inaccuracy. Because the technology is accurate for some populations but inaccurate for others, he argued, it is in its current state a failed attempt.

One risk posed by this inaccuracy is that the technology can out people by connecting them to identity documents that still list the name or sex they were assigned at birth.

Heng-Lehtinen explained that updating every legal identity document is a complicated — and expensive — process. An NCTE study published in 2015 found that only 11 percent of respondents had successfully updated all of their identity documents with their preferred name and gender.

A person could be at risk of facing discrimination or violence when they are outed, he added.

LEGISLATIVE PROGRESS (AND OBSTACLES)


There are efforts to ban this technology at the national level, explained Sheard, citing federal legislation introduced in 2020. His hope is that there will be action on that front in the coming year.

He also referred to the more than a dozen cities around the country — such as Boston and New York — that have taken action to ban government use of the technology.

Sheard acknowledged that there is a growing awareness of the potential harm of this technology, and credited cities and lawmakers for efforts that protect their constituents from “this particularly pervasive surveillance.”

Heng-Lehtinen believes that because many people are unfamiliar with facial recognition technology’s potential risks, there is less political will to change how it is used. He also cited the pandemic, which has shifted government’s focus.

He said that regulations are a step in the right direction, but that legislation needs to go further to ensure individuals' safety and privacy.

EFF offers resources through its About Face campaign to help guide community efforts to protect individuals from the impacts of this technology. There is information available on related bans, active bills and moratoria. There is also a toolkit available with information and model legislation that government units can use to implement their own bans.

PRIVATE SECTOR’S ROLE


The private sector may not currently be limited by federal legislation, but companies can still take steps to encourage ethical use of their facial recognition products.

While some companies have halted development of facial recognition technology over concerns of bias, others are implementing detailed use guidelines.

One example is Amazon’s Rekognition product. Its developer guide states that a face’s physical appearance allows the service to predict gender within the binary, but notes that the product should not be used to determine a person’s gender identity.
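
That caveat matters in practice because the attribute is easy to consume as if it were identity. A minimal sketch of reading it (assuming the standard boto3 client; the response shape follows AWS’s public DetectFaces documentation, and the bucket and file names are placeholders):

```python
# Sketch: reading Rekognition's binary gender attribute via boto3.
# Bucket and object names below are placeholders.
import boto3

client = boto3.client("rekognition")

response = client.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "example.jpg"}},
    Attributes=["ALL"],  # request the full attribute set, including Gender
)

for face in response["FaceDetails"]:
    gender = face["Gender"]  # e.g. {"Value": "Female", "Confidence": 99.2}
    # Per the developer guide, this predicts physical appearance within a
    # binary -- it should not be read as the person's gender identity.
    print(gender["Value"], gender["Confidence"])
```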

Clarifai, another company with facial recognition products, explained in a blog post that it chose “masculine” and “feminine” as descriptive terms because gender terms are “an aspect of self and not something we felt our AI could appropriately label.”

Clarifai did not respond to Government Technology’s request for an interview.

An industry source, speaking to Government Technology on the condition of anonymity, said that an important part of improving the technology is continually retraining the model: by annotating newly collected data so that labels align with individuals’ preferred categorization, the machine learning system can improve its ability to identify and categorize faces.
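
As a rough illustration of what one such retraining pass could look like (a minimal PyTorch sketch; the embeddings, labels and model here are hypothetical stand-ins, not the source’s actual pipeline):

```python
# Minimal sketch of retraining a face-attribute classifier on newly
# annotated data. All inputs are hypothetical stand-ins, not any
# vendor's actual pipeline.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for newly collected face embeddings whose labels were
# re-annotated to match individuals' preferred categorization.
embeddings = torch.randn(256, 128)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(embeddings, labels), batch_size=32, shuffle=True)

model = nn.Linear(128, 2)  # toy classifier head over face embeddings
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # one short retraining pass
    for batch_x, batch_y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()

# In production this loop would be repeated as corrected annotations
# accumulate, so the model's categorizations track how people identify.
```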

This source also suggested that companies can encourage ethical use of their products by making clear the intended uses and limitations, as well as by working to anticipate how the technology could be used by the public.

Julia Edinger is a staff writer for Government Technology. She has a bachelor's degree in English from the University of Toledo and has since worked in publishing and media. She's currently located in Southern California.