Are Governments Right to Ban Facial Recognition Technology?

Proposed public-sector bans of facial recognition are often based on misconceptions, and following through on them would harm law enforcement, school safety and technological progress.

Over the past year, a number of organizations have campaigned for policymakers to ban government use of facial recognition technology and for companies like Microsoft, Amazon and Google to stop selling the technology to government. Their efforts have begun to bear fruit. In February, a lawmaker in San Francisco proposed a rule that would ban all city departments from using facial recognition technology, and state legislators in Massachusetts and Washington have since followed suit with their own proposals to ban government use of the technology. However, these proposals are based on inaccurate claims and misguided concerns, and following through on them would weaken the effectiveness and efficiency of law enforcement, make schools less safe, and hold back technological progress at other government agencies.

Much of the opposition to facial recognition rests on the false belief that the systems are inaccurate. But many of the most high-profile critiques of facial recognition are based on shoddy research. For example, the American Civil Liberties Union (ACLU) has repeatedly claimed that Amazon's facial recognition service produced a 5 percent error rate when it compared photos of members of Congress against a mugshot database, but the error rate would have dropped to zero had the ACLU used the recommended confidence threshold of 99 percent.
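To see why the threshold matters, consider a minimal sketch of the kind of comparison at issue. It assumes the AWS boto3 SDK and Amazon Rekognition's CompareFaces API (the service the ACLU tested); the image file names are hypothetical placeholders, and the sketch only illustrates how a similarity threshold filters matches, not either study's actual methodology.

```python
# Illustrative sketch: how a similarity threshold filters facial recognition matches.
# Assumes the AWS boto3 SDK and Amazon Rekognition's CompareFaces API.
# The image file names below are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition")

def match_faces(source_path, target_path, threshold):
    """Return face matches between two photos scoring at or above `threshold`."""
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,  # matches below this score are dropped
        )
    return response["FaceMatches"]

# At Rekognition's default threshold of 80, loose resemblances can surface as
# "matches"; at the 99 percent threshold recommended for law enforcement use,
# only near-certain matches remain.
loose = match_faces("probe_photo.jpg", "mugshot.jpg", threshold=80)
strict = match_faces("probe_photo.jpg", "mugshot.jpg", threshold=99)
print(f"matches at 80: {len(loose)}, matches at 99: {len(strict)}")
```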

Moreover, there is clear evidence that facial recognition technology is becoming increasingly accurate. In 2018, the Department of Commerce's National Institute of Standards and Technology (NIST) tested how accurately facial recognition software from major developers could find a matching photo of an individual in a database of nearly 27 million images. NIST found that only 0.2 percent of searches failed to retrieve the correct match. And while several facial recognition systems perform less accurately for certain demographics, the private sector has been actively working to address this problem, such as by developing more diverse image data sets to train its systems.

Unfortunately, these proposed bans would limit many beneficial applications of facial recognition technology, such as allowing police to more quickly identify potential suspects, witnesses and victims; ensuring that only authorized personnel can access secure government buildings; and helping schools prevent sex offenders, disgruntled employees or other potentially dangerous people from entering their facilities. Simply put, computers can search millions of photographs in a fraction of the time and at a fraction of the expense of a human review. Indeed, the technology has already proven its value on many occasions, including by finding missing children, catching people with false documents at airports, and combating human trafficking. Moreover, some of the proposed bans, such as the bill in Washington, would also prevent government from using related technologies, like tools that blur faces in surveillance footage before public release.

Policymakers can promote the responsible use of facial recognition technology by government without banning it. For example, they can require that law enforcement use only facial recognition systems that meet certain performance benchmarks, and they can provide oversight to ensure accountability for the policies and practices that law enforcement develops for deploying the technology. In many cases, this oversight will be an extension of existing accountability measures. Finally, government funding can support the development of better data sets, more accurate systems and best practices.

No rational person wants to live in a real-life version of the society described in George Orwell's novel 1984, but police adoption of facial recognition in democratic, rule-of-law nations does not equate to such a world. Instead of supporting calls for bans, policymakers should create rules to prevent inappropriate uses of facial recognition technology by government while allowing government to adopt the technology where it is useful.

Daniel Castro is the vice president of the Information Technology and Innovation Foundation (ITIF) and director of the Center for Data Innovation. Before joining ITIF, he worked at the Government Accountability Office where he audited IT security and management controls.