Civil Rights Activists Sue to Stop Clearview AI Data Scraping

The lawsuit, submitted in California Superior Court in Alameda County, is part of a growing effort to restrict the use of facial recognition technology. A handful of Bay Area cities were among the first in the U.S. to limit the use of the tech in 2019.

(Photo: a crowd of people with one standing out. Shutterstock/FranciscoBaliego)
(TNS) — Clearview AI has amassed a database of more than 3 billion photos of individuals by scraping sites like Facebook, Twitter, Google and Venmo. It's bigger than any other known facial-recognition database in the U.S., including the FBI's. The New York company uses algorithms to map the pictures it stockpiles, determining, for example, the distance between an individual's eyes, to construct a "faceprint."

This technology appeals to law enforcement agencies across the country, which can use it in real time to help determine people's identities. It also has caught the attention of civil liberties advocates and activists, who allege in a lawsuit filed Tuesday that the company's automatic scraping of their images and its extraction of their unique biometric information violate privacy and chill protected political speech and activity.
 
The plaintiffs — four individual civil liberties activists and the groups Mijente and NorCal Resist — allege Clearview AI "engages in the widespread collection of California residents' images and biometric information without notice or consent." This is especially consequential, the plaintiffs argue, for proponents of immigration or police reform, whose political speech may be critical of law enforcement and who may be members of communities that have been historically over-policed and targeted by surveillance tactics. The plaintiffs argue Clearview AI enhances law enforcement agencies' efforts to monitor these activists, as well as immigrants, people of color and those perceived as "dissidents," such as Black Lives Matter activists, and can potentially discourage their engagement in protected political speech as a result.
 
The lawsuit, submitted in California Superior Court in Alameda County, is part of a growing effort to restrict the use of facial-recognition technology. Bay Area cities, including San Francisco, Oakland, Berkeley and Alameda, have led that charge, and were among the first in the U.S. to limit the use of facial recognition by local law enforcement in 2019.
 
Yet the push comes at a time when consumer expectations of privacy are low, as many have come to see the use and sale of personal information by companies like Google and Facebook as an inevitability of the digital age. Unlike other uses of personal information, facial recognition poses a unique danger, argues Steven Renderos, the executive director of MediaJustice and one of the individual plaintiffs in the lawsuit. "While I can leave my cellphone at home [and] I can leave my computer at home if I wanted to," he said, "one of the things that I can't really leave at home is my face."
 
Enhancing law enforcement's ability to instantaneously identify and track individuals is potentially chilling, the plaintiffs argue, and could inhibit the members of their groups or Californians broadly from exercising their constitutional right to protest.
 
"It doesn't have to be pictures that I'm posting of myself," said Renderos. "It could be images if I'm at an event, which I do a lot of in my role as an executive director of a nonprofit. ... Those images are also potentially susceptible to being scraped." He also argued the company was "circumventing the will of a lot of people" in the Bay Area cities who voted to ban or limit facial-recognition use.

The plaintiffs are seeking an injunction that would force the company to stop collecting biometric information in California. They are also seeking the permanent deletion of all images and biometric data or personal information in its databases, according to Sejal R. Zota, a legal director at Just Futures Law and one of the attorneys representing the plaintiffs in the suit. The plaintiffs are also being represented by BraunHagey & Borden LLP.
 
"Our plaintiffs and their members care deeply about the ability to control their biometric identifiers and to be able to continue to engage in political speech that is critical of the police and immigration policy free from the threat of clandestine and invasive surveillance," Zota said. "And California has a Constitution and laws that protect these rights."
 
In a statement Tuesday, Floyd Abrams, an attorney for Clearview AI, said the company "complies with all applicable law and its conduct is fully protected by the 1st Amendment." It's not the first lawsuit of its kind — the American Civil Liberties Union is suing Clearview AI in Illinois for violating the state's biometric privacy act. But it is one of the first lawsuits filed on behalf of activists and grassroots organizations "for whom it is vital," Zota said, "to be able to continue to engage in political speech that is critical of the police, critical of immigration policy."
 
Clearview AI faces scrutiny internationally, as well. In January, the European Union said Clearview AI's data processing violates the General Data Protection Regulation. Last month, Canada's privacy commissioner, Daniel Therrien, called the company's services "illegal" and said they amounted to mass surveillance that put all of society "continually in a police lineup." He demanded the company delete the images of all Canadians from its database.
 
Clearview AI has seen widespread adoption of its technology since its founding in 2017. Chief Executive Hoan Ton-That said in August that more than 2,400 law enforcement agencies were using Clearview's services. After the January riot at the U.S. Capitol, the company saw a 26% jump in law enforcement's use of the tech, according to Ton-That.
 
The company continues to sell its tech to police agencies across California as well as to Immigration and Customs Enforcement, according to the lawsuit, in spite of several local bans on the use of facial recognition. The San Francisco ordinance that limits the use of facial recognition specifically cites the technology's proclivity "to endanger civil rights and civil liberties" and "exacerbate racial injustice." Studies have shown that facial-recognition technology falls short in identifying people of color. A 2019 federal study concluded Black and Asian people were about 100 times more likely to be misidentified by facial recognition than white people. There are now at least two known cases of Black people being misidentified by facial-recognition technology, leading to their wrongful arrest.

Ton-That previously told The Times that an independent study showed Clearview AI had no racial biases and that there were no known instances of the technology leading to a wrongful arrest. The ACLU, however, has previously called the study into question, specifically saying it is "highly misleading" and that its claim that the system is unbiased "demonstrates that Clearview simply does not understand the harms of its technology in law enforcement hands."
 
Renderos argues that making facial recognition more accurate doesn't make it less harmful to communities of color or other marginalized groups. "This isn't a tool that exists in a vacuum," he said. "You're placing this tool into institutions that have a demonstrated ability to racially profile communities of color, Black people in particular. ... The most neutral, the most accurate, the most effective tool — what it will just be more effective at doing is helping law enforcement continue to over-police and over-arrest and over-incarcerate Black people, Indigenous people and people of color."
 
©2021 Los Angeles Times. Distributed by Tribune Content Agency, LLC.