Facial Recognition: Policy Outpaces Tech, Puts Certain Communities at Risk

As facial recognition systems advance and become more widely used by police agencies, the need for sound policy will also grow.

by Eyragon Eidam / October 26, 2016

If Hollywood has taught us anything about technology, it’s that it will all eventually spiral out of control and leave humans in the cold, metallic grip of a ruthless robot overlord. And while we all may end up chained together in said robot’s prison camp at some point — unless the constant cattle-prodding kills us first — we have real-world technology problems to address that, thankfully, aren't glitzy or action-packed enough to grace the silver screen.

Last week, the Georgetown Law Center on Privacy and Technology released a report detailing many of the key issues with rapidly advancing facial recognition technology in law enforcement. Among the chief concerns raised in The Perpetual Line-Up: Unregulated Police Face Recognition in America are a sweeping lack of operating policy across U.S. police departments and the disproportionate effects the tools could have on communities of color.

The widely circulated report finds that facial recognition networks currently include more than 117 million adults, a number that is quickly growing. Additionally, roughly one in two Americans has had their image searched this way. As if the idea of federal, state and local agencies referencing databases at will weren’t enough to sound alarms in civil rights circles, many agencies are looking toward the prospect of live, or real-time, recognition systems.

The Wide-Angle View

As Georgetown researcher and computer scientist Jonathan Frankle explained, the overarching problem with facial recognition technology is the lack of solid policy around its deployment. State and local police agencies are left on their own to navigate the intricacies of the technology and its implications.

And, while he is quick to point out the potential benefits of the tool, he qualifies his comments with the expectation it will be used in accordance with thoughtful policy.

“It seems to me that there is simply no regulation on this technology as of right now, that very few state legislatures have really looked into this or even passed anything that seems to relate to face recognition. And then beyond that, it’s pretty much a Wild West,” Frankle said. “These departments are crafting their own policies, sometimes they are not publishing the policies, not publishing the fact that they even have a system to begin with. And so, it’s pretty much a free-for-all.”

This creates a vacuum, often filled with eyebrow-raising practices and seemingly secretive policies. But, Frankle argues, creating policy isn’t as simple as critics might have you believe.

“… I can imagine how challenging it would be for a police department to have this powerful new tool and kind of have to start from square one and imagine how it should and shouldn’t be used," he said. "It’s a challenge for anybody.”

Though he said there is a need for more comprehensive policy, he explained that the report was not meant to impede the use of facial recognition tools, but rather to jumpstart the conversation policymakers should be having but aren’t.

Racial Implications

One of the more troubling revelations of the Georgetown report is the perceived impact facial recognition would have on the African-American community at large.

According to Frankle, the disproportionate police attention often reported within this community would likely complicate not only the ongoing national conversation around policing, but the job of policing itself as well.

Increased police attention results in more arrests; more arrests means more mug shots; more mug shots means more images to run against facial recognition systems.

“If you are being disproportionately arrested that means you are going to be disproportionately in these databases, and if you are in one of these databases, it means you are disproportionately identifiable with this technology. What I mean by that is you can only identify someone if you have their photo already in your database,” the researcher explained. “That is a major concern.”

But it comes down to more than just having a photo in a database; accuracy is also an issue when dealing with darker-skinned individuals. As Frankle explained, the ability to pick up indicators like scars, freckles and other identifiable markers is essential to accurately identifying the target face. Darker skin means less contrast for the system to lock onto, and ultimately more variable results.

“I would say there are more risks with a face recognition system when dealing with African Americans," he said, "given some of the idiosyncrasies with policing in the U.S. and the way that face recognition happens to work on people with darker skin."

How Accurate Is Accurate?

As the case of African Americans illustrates, accuracy depends on a number of factors. Frankle explained that facial recognition tools also rely on being properly deployed, as well as on the ability to identify key markers.

For example, a facial recognition system designed to scan driver’s license photos against a still image would operate with accuracy in the 90th percentile. But deploying the same system to make matches in a real-world environment, like a train station, would be much less accurate.

“I want to start with the fact that accuracy is not one number,” he said. “The question you should be asking is how accurate is it on a particular task?”

Real-world deployments, Frankle said, are not only less accurate at this point, but also pose a higher risk from a privacy standpoint.

Paving the Policy Road Ahead

Given the scope of facial recognition technology in the United States and the perceived inability to craft policy in step with it, it is easy to point to the agencies that are not handling it well.

Law enforcement agencies in New York and Los Angeles have been more secretive about the technology and policies they have in place. Despite reports and evidence of facial recognition programs, Frankle said the agencies declined to respond and provided no records in answer to his colleague’s Freedom of Information Act (FOIA) request.

On the other end of that spectrum are agencies like the Seattle Police Department and the Michigan State Police, which took action not only to outline policy, but to work with outside partners in that effort.

In Seattle, the department worked with the ACLU of Washington to alleviate privacy concerns, and in Michigan, the state police uses a program of specialized training and dual human verification to authenticate computer-generated results.

As the technology eventually makes its way into public spaces, the Georgetown team argues for strict restrictions on arbitrarily tracking citizens who are going about their daily lives.

The need for solid and comprehensive policy can only increase with the technology. As Frankle sees it, the world is one major breakthrough away from the next iteration of facial recognition technology — an advance that could ultimately mean more efficient and effective live deployments of such tools.

Eyragon Eidam Web Editor

Eyragon Eidam is the Web editor for Government Technology magazine, after previously serving as assistant news editor and covering such topics as legislation, social media and public safety. He can be reached at eeidam@erepublic.com.