Could National Unrest Derail the Future of Facial Recognition?

As civil liberties groups have lobbied for police reforms nationwide, an increasingly hostile regulatory landscape is emerging for facial recognition technology, calling into question whether there is a path forward for its use by state and local governments.

Even before the nationwide call for police reform, facial recognition was struggling to win public support.

A Pew poll last September showed that only around half of Americans thought police departments could be trusted to use the biometric tool responsibly. Even fewer Americans thought the technology should be used by advertisers or tech companies.

Descriptors like "creepy," "invasive" and "Orwellian" have frequently dogged the technology, and, as a result, every civil liberties organization in the country has put a target on its back. 

Now, as calls to defund, divest or otherwise drastically alter police departments have escalated, a fairly hostile regulatory landscape is emerging for facial recognition. Some municipalities are considering outright bans and a number of potential laws threaten to drastically curtail the industry. 

To a large degree, the police protests have reset the legislative conversation. Previously, facial recognition moratoriums had been introduced in cities across the country, but almost all of these bills foundered, frequently after localized pressure from the tech lobby.

Now, however, these regulations are seeing renewed interest. Much of this momentum has likely been engendered by the now heightened relevance of arguments long made by civil rights groups: that facial recognition inordinately targets marginalized communities and, in some cases, reinforces a "racist" system of policing.  

Within weeks of George Floyd's death in Minneapolis, Boston became the largest city on the East Coast to enact a moratorium on facial recognition, joining a growing coalition of communities that have rejected it as a legitimate policing tool. It's so far unclear whether this trend will catch on in other cities. 

The private sector has handled this maelstrom with a certain amount of flexibility, and numerous companies have made quick concessions to the current public outcry surrounding the technology. 

As protests exploded across the country, a number of the U.S.'s largest vendors halted sales in an apparent bid to stifle controversy. IBM, Microsoft and Amazon announced that they would not sell facial recognition technology to police departments, at least for the time being. This hasn't stopped many of those same companies from being the target of new lawsuits that claim they broke the Illinois biometric privacy law.  

When reached by email, IBM provided GT the following statement: 

"IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."

A spokesperson for Amazon, meanwhile, pointed to writings by Michael Punke, the company's vice president of global public policy. Punke takes a decidedly more flexible view when it comes to facial recognition, and his writings show hope for the product line's future.    

"Our communities are safer and better equipped to help in emergencies when we have the latest technology, including facial recognition technology, in our toolkit," writes Punke, citing the technology's power to assist police in important criminal investigations, like human trafficking cases.

Punke also argues that allegations that Amazon Rekognition routinely misidentifies suspects have all been based on improper usage of the product.

"In each case, we’ve demonstrated that the service was not used properly; and when we’ve re-created their tests using the service correctly, we’ve shown that facial recognition is actually a very valuable tool for improving accuracy and removing bias when compared to manual, human processes," Punke writes.  

In the case of Microsoft, a spokesperson pointed out that it has consistently lobbied for regulations that protect both communities and industry. A policy framework with testing requirements and transparency and accountability components is essential, they said.   

“For the past two years we have been focused on developing and implementing strong principles that govern our use of facial recognition, and we’ve been calling for strong government regulation. We do not sell our facial recognition technology to U.S. police departments today, and until there is a strong national law grounded in human rights, we will not sell this technology to police departments,” the spokesperson said. 

But even more "equitable" facial recognition may not be enough to satisfy certain quarters of the public. Jennifer Lee, technology and liberty project manager for the ACLU Washington state chapter, said that even if facial recognition technology is perfected to weed out potential racial bias, it still represents a basic threat to Americans' civil liberties. 

"Accuracy does not equal equity," Lee said. "Making facial recognition 100 percent accurate does not solve the problems presented by face surveillance technology and its role as a tool that fuels police brutality. ... Everyone should be concerned about a perfectly accurate facial recognition tool. The equity component is huge, but beyond that it just facilitates unprecedented government intrusion." 

Though companies like Microsoft have frequently fought for more industry-friendly regulations like the one recently passed in Washington state, recent weeks have shown that certain lawmakers prefer a more draconian approach. Case in point: the "strong national law" Microsoft has in mind is probably not the bill recently introduced by U.S. senators, which would ban use of the biometric tech by all federal law enforcement agencies.

A middle route between moratoriums and unfettered use is to press pause so that communities can study the technology and the effects it would have on their residents. 

For example, an omnibus police reform bill in Massachusetts, the Reform, Shift and Build Act, has proposed a one-year ban on the use of facial recognition by state agencies. A staffer for Sen. Cynthia Stone Creem, who is co-sponsoring the bill, said that the point of the legislation is to "hit pause" on the technology until it can be further scrutinized.

"It's basically a full-on moratorium on facial recognition technology. That moratorium would last until December 2021," the staffer said, explaining that it would apply to any state agency or bureau, with the exception of the DMV, which uses the technology for routine identity verification purposes and to prevent fraud. 

At the same time, the bill would create a special commission to study the technology. The commission would be made up of numerous officials from public agencies, as well as experts from various backgrounds — including those specializing in civil rights. The group would submit its findings and recommendations to the Legislature no later than July 2021, according to the text of the bill. 

"Then the Legislature would have some time to go through those recommendations and either enact statutory authority for it, or, if the recommendations come out saying 'There's no good use for this,' then they might make a different decision," the staffer said.

Sen. Creem, speaking with Government Technology, said that she had been interested in regulating facial recognition since before the wave of police protests swept the country. Her stance has been that government deployment of biometrics should be halted until a proper investigation into its usage can be conducted. Creem said that she might eventually be open to use of the tools, but that there should be a more robust public process surrounding how the technology is used, which communities are affected by it and whether it is effective.

"I am open to seeing where we go with it and to see what rules and regulations [can be created]. Perhaps you get a warrant, perhaps the technology gets better. I'm open minded to having a discussion about it and seeing where we go," said Creem.  

Lucas Ropek is a former staff writer for Government Technology.