A bid by civil rights activists to limit sales of the controversial surveillance technology did not get far. Activists fear the technology could be used to unfairly target minorities, people of color and women.
A failed bid to halt sales of Rekognition, Amazon’s controversial facial recognition software, is the most recent in a string of high-profile controversies surrounding the new surveillance technology.
As the prevalence of facial recognition has grown, governments and businesses alike have grappled with how to appropriately deploy it, spurring concerns from civil liberties groups and inspiring debates about the proper balance of privacy and innovation.
Most recently, the city of San Francisco became the first in the nation to ban the use of facial recognition by police and other government agencies, though some have noted the ordinance still leaves the door open for “spying” by private companies.
This week, privacy activists — most notably those attached to the American Civil Liberties Union (ACLU) — sought to encourage Amazon shareholders to pass proposals that would have limited the company's ability to sell Rekognition. The proposals, both non-binding, were presented at the annual shareholder meeting in Seattle, but were ultimately voted down.
One of the proposals would have put a moratorium on sales of the technology to government agencies, while the other would have mandated an independent study of the technology's effects on individual civil liberties.
Rekognition is used by myriad organizations and companies, but activists worry specifically about the company's aggressive marketing push to law enforcement agencies. The software has already been adopted by a number of prominent agencies, including the Washington County Sheriff’s Office in Oregon and the Orlando, Fla., Police Department.
“This technology fundamentally alters the balance of power between government and individuals, arming governments with unprecedented power to track, control and harm people,” reads a letter penned by the ACLU, in anticipation of this week's conference. “It would enable police to instantaneously and automatically determine the identities and locations of people going about their daily lives, allowing government agencies to routinely track their own residents.”
According to Amazon, the software utilizes AI and deep learning to extract metadata from visual content, collating and analyzing it to identify and index individuals. The technology not only identifies demographic indicators like age, gender, and race, but can also read human sentiment, such as emotions and mood. It can also analyze and track people in real time, allegedly identifying as many as 100 people in a single frame.
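To make the kind of metadata described above concrete, here is a minimal Python sketch that parses a hand-built sample response shaped like the JSON returned by Rekognition's DetectFaces API when called with full attributes. The field names (`FaceDetails`, `AgeRange`, `Gender`, `Emotions`) follow the real API's response schema; the values are invented for illustration, and no AWS call is made.

```python
# Illustrative only: a hand-built sample in the shape of a Rekognition
# DetectFaces response (Attributes=['ALL']). Values are invented.
sample_response = {
    "FaceDetails": [
        {
            "AgeRange": {"Low": 26, "High": 38},
            "Gender": {"Value": "Female", "Confidence": 99.1},
            "Emotions": [
                {"Type": "CALM", "Confidence": 87.4},
                {"Type": "HAPPY", "Confidence": 9.8},
            ],
        }
    ]
}

def summarize_faces(response):
    """Pull out the demographic and sentiment metadata the article describes."""
    summaries = []
    for face in response["FaceDetails"]:
        # The API reports several candidate emotions with confidences;
        # take the one the model scored highest.
        top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
        summaries.append({
            "age_range": (face["AgeRange"]["Low"], face["AgeRange"]["High"]),
            "gender": face["Gender"]["Value"],
            "dominant_emotion": top_emotion["Type"],
        })
    return summaries

for face in summarize_faces(sample_response):
    print(face)
```

In a real deployment the `sample_response` dict would instead come from a call to the AWS SDK, and each detected face would also carry a bounding box, which is what enables the per-frame counting and tracking described above.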
Activists have expressed myriad concerns about the new technology — chief among them that facial recognition will be disproportionately used to target minorities, people of color and women.
Multiple studies have shown that facial recognition software is consistently less accurate when attempting identifications of people of color and women, spurring fears of a kind of automated bias.
In the wake of the vote, the ACLU denounced the outcome as tone-deaf to privacy concerns.
“The fact that there needed to be a vote on this is an embarrassment for Amazon’s leadership team,” said Shankar Narayan of the ACLU of Washington. “It demonstrates shareholders do not have confidence that company executives are properly understanding or addressing the civil and human rights impacts of its role in facilitating pervasive government surveillance.”