Connecticut's Facial Recognition Bill: A Model for States?

Connecticut legislators stepped back from a bill that would have sharply limited facial recognition technology and instead took a reasonable approach, one that should serve as a model for states considering regulation of other emerging technologies.

Earlier this year, the Connecticut General Assembly was considering a bill that would prohibit the use of facial recognition technology for commercial applications unless companies got prior consent from consumers to gather that information — a move that would have severely curtailed the deployment of the technology. Fortunately, state lawmakers listened to reason and revised the bill so that it now simply requires retailers to display signs indicating that their establishments use facial recognition. This type of reasonable approach to regulating new technology should serve as a model for state legislators considering regulation of other emerging technologies.

Facial recognition is a form of automated image recognition that uses computer algorithms to uniquely identify an individual in a database based on a photo. Concerned with the growing accuracy of the technology, some privacy advocates have argued that facial recognition is a threat to privacy and public anonymity and have recommended the government impose restrictions on both public- and private-sector uses of it.
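
To make that matching step concrete, the sketch below shows in Python how such a lookup might work. It assumes a separate face-embedding model (not shown) has already converted each photo into a fixed-length numeric vector, so that identification reduces to a nearest-neighbor search under a distance threshold. The names, vectors and threshold here are purely illustrative, not drawn from any particular product.

```python
import numpy as np

# Hypothetical database of face embeddings: one fixed-length vector per
# enrolled person, produced beforehand by a separate face-embedding model.
DATABASE = {
    "person_a": np.array([0.11, 0.52, 0.37, 0.90]),
    "person_b": np.array([0.80, 0.14, 0.63, 0.05]),
}

MATCH_THRESHOLD = 0.6  # illustrative; real systems tune this empirically


def identify(probe: np.ndarray) -> str | None:
    """Return the closest enrolled identity, or None if nobody is close enough."""
    best_name, best_dist = None, float("inf")
    for name, enrolled in DATABASE.items():
        # Euclidean distance in embedding space: smaller means more similar faces
        dist = np.linalg.norm(probe - enrolled)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < MATCH_THRESHOLD else None


# An embedding extracted from a new photo; it falls close to person_a's entry.
print(identify(np.array([0.12, 0.50, 0.38, 0.88])))  # prints: person_a
```

The growing accuracy that concerns privacy advocates comes from better embedding models, which place photos of the same person ever closer together in this vector space, not from the simple lookup itself.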

However, broad restrictions on using facial recognition could chill innovation and prevent uses that benefit consumers and society alike. Most people are willing to accept limitations on anonymity and privacy in exchange for security and convenience. For example, few people mind that a grocery store uses cameras to prevent shoplifting, since this helps prevent theft and thereby lowers prices. Similarly, facial recognition technology can help drive down prices by making repeat shoplifters easier to spot.

Well-meaning laws can often have unintended effects. For example, some organizations are beginning to use facial recognition to combat human trafficking. A prior-consent requirement would cripple that work, since it would be nearly impossible to obtain consent from the subjects of the millions of photographs that must be analyzed to find victims.

Moreover, the concern about new technology is often inflated. For example, retailers already know who their customers are if the person is using a credit card or loyalty card to complete the transaction. There’s little additional impact on consumer privacy if these same retailers also use facial recognition technology.

Technology is just a tool, and it can be used for both good and ill. The goal of legislation should be to protect people from harms that result from the abuse of the technology, not to stop its use overall. By that metric, the original bill was an overreaction and would have effectively prevented Connecticut businesses from using facial recognition technology in public by requiring them to obtain prior consent from every customer entering their stores. The new bill scraps that approach and instead requires retailers and other businesses to display a sign so that shoppers are aware that facial recognition is being used on the premises.

To be clear, the new bill is still not ideal. While transparency in business practices is generally good, requiring retailers to post warning signs implies that using facial recognition technology is something potentially harmful that warrants consumer notification. Instead, states should adopt technology-neutral policies. If they want to require retailers to post a sign, they should do so for all surveillance video recording, not just recording that uses facial recognition. And if legislators simply want to ban retailers from tracking the movement of customers in their stores, they should prohibit that practice across all technologies, including other forms of biometrics. Either way, the law should not play favorites.

A better approach would have been for lawmakers to identify the specific harms they were actually trying to address, such as harassment or defamation, and pass laws prohibiting those uses. Still, Connecticut lawmakers deserve credit for not letting those peddling fear run the show and for focusing on the issue most salient to consumers.

As my colleague ITIF Research Assistant Alan McQuinn and I have written before, the privacy panic cycle — a term used to describe the increasingly alarmist rhetoric around new technologies — often dominates the politics of emerging technologies and causes lawmakers to overreact to perceived fears. Overcoming this cycle of fear requires policymakers to act thoughtfully, without passion or prejudice. Other states would be wise to follow Connecticut’s path.

Daniel Castro is the vice president of the Information Technology and Innovation Foundation (ITIF) and director of the Center for Data Innovation. Before joining ITIF, he worked at the Government Accountability Office where he audited IT security and management controls.