
Using Facial Recognition to Find Capitol Rioters Brings Risks

In the aftermath of a riot that included white supremacist factions attempting to overthrow the results of the presidential election, communities of color are warning about the potential danger of the software.

Police officers in riot gear stand guard while supporters of President Donald Trump protest on the steps of the U.S. Capitol building on Capitol Hill in Washington, D.C., on Wednesday, Jan. 6, 2021. (Yuri Gripas/Abaca Press/TNS)
(TNS) — In the days following the Jan. 6 riot at the nation’s Capitol, there was a rush to identify those who had stormed the building’s hallowed halls.

Instagram accounts with names like Homegrown Terrorists popped up, claiming to use AI software and neural networks to trawl publicly available images to identify rioters. Researchers such as the cybersecurity expert John Scott-Railton said they deployed facial recognition software to detect trespassers, including a retired Air Force lieutenant alleged to have been spotted on the Senate floor during the riot. Clearview AI, a leading facial recognition firm, said it saw a 26% jump in usage from law enforcement agencies on Jan. 7.

A low point for American democracy had become a high point for facial recognition technology.

Facial recognition’s promise that it will help law enforcement solve more cases, and solve them quickly, has led to its growing use across the country. Concerns about privacy have not stopped the spread of the technology — law enforcement agencies performed 390,186 database searches to find facial matches for pictures or video of more than 150,000 people between 2011 and 2019, according to a U.S. Government Accountability Office report. Nor has the growing body of evidence showing that the implementation of facial recognition and other surveillance tech has disproportionately harmed communities of color.

Yet in the aftermath of a riot that included white supremacist factions attempting to overthrow the results of the presidential election, it’s communities of color that are warning about the potential danger of this software.

“It’s very tricky,” said Chris Gilliard, a professor at Macomb Community College and a Harvard Kennedy School Shorenstein Center visiting research fellow. “I don’t want it to sound like I don’t want white supremacists or insurrectionists to be held accountable. But I do think, because systemically most of those forces are going to be marshaled against Black and brown folks and immigrants, it’s a very tight rope. We have to be careful.”

Black, brown, poor, trans and immigrant communities are “routinely over-policed,” Steve Renderos, the executive director of Media Justice, said, and that’s no different when it comes to surveillance.

“This is always the response to moments of crises: Let’s expand our policing, let’s expand the reach of surveillance,” Renderos said. “But it hasn’t done much in the way of keeping our communities actually safe from violence.”

BIASES AND FACIAL RECOGNITION

On Jan. 9, 2020, close to a year before the Capitol riots, Detroit police arrested a Black man named Robert Williams on suspicion of theft. During his interrogation, two things became clear: Police had arrested him based on a facial recognition scan of surveillance footage, and the “computer must have gotten it wrong,” as the interrogating officer was quoted saying in a complaint filed by the ACLU.

The charges against Williams were ultimately dropped.

Williams’ is one of two known cases of a wrongful arrest based on facial recognition. It’s hard to pin down how many times facial recognition has resulted in the wrong person being arrested or charged because it’s not always clear when the tool has been used. In Williams’ case, the giveaway was the interrogating officer admitting it.

Gilliard argues that instances like Williams’ may be more common than is publicly known. “I would not believe that this was the first time that it’s happened. It’s just the first time that law enforcement has slipped up,” Gilliard said.

Facial recognition technology works by capturing, indexing and then scanning databases of millions of images of people’s faces — 641 million as of 2019 in the case of the FBI’s facial recognition unit — to identify similarities. Those images can come from government databases, like driver’s license pictures, or, in the case of Clearview AI, files scraped from social media or other websites.
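As a rough illustration of the pipeline described above, the matching step can be sketched in a few lines of Python. This is not any vendor's actual system: real software derives each face's numeric "embedding" from a neural network, while here the embeddings are simulated with random vectors, and the `identify` function, the database names and the 0.9 threshold are all hypothetical.

```python
import numpy as np

# Minimal sketch of the capture -> index -> scan pipeline.
# Real systems compute a face's embedding with a neural network;
# here embeddings are simulated as random 128-dimensional vectors.
rng = np.random.default_rng(0)
DIM = 128

# The indexed database: name -> embedding of one enrolled face photo.
database = {f"person_{i}": rng.normal(size=DIM) for i in range(1_000)}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1]; higher means the faces look more alike."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, threshold: float = 0.9):
    """Scan every enrolled face; return the best match, or None if
    nobody clears the similarity threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, round(best_score, 3)) if best_score >= threshold else None

# A probe photo: a noisy view of person_42's enrolled face.
probe = database["person_42"] + rng.normal(scale=0.1, size=DIM)
print(identify(probe))  # likely ('person_42', ~0.995)
```

At the scale of hundreds of millions of images, the linear scan would in practice be an approximate nearest-neighbor search, but the match-above-threshold logic is the same, and everything downstream, including a wrongful arrest, hinges on where that threshold is set and how reliable the embeddings are.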

Research shows the technology has fallen short in correctly identifying people of color. A federal study released in 2019 reported that Black and Asian people were up to 100 times more likely to be misidentified by facial recognition than white people.

The problem may be in how the software is trained and who trains it. A study published by the AI Now Institute of New York University concluded that artificial intelligence can be shaped by the environment in which it is built. That would include the tech industry, known for its lack of gender and racial diversity. Such systems are being developed almost exclusively in spaces that “tend to be extremely white, affluent, technically oriented, and male,” the study reads. That lack of diversity may extend to the data sets that inform some facial recognition software, as studies have shown some were largely trained using databases made up of images of lighter-skinned males.
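One way to make that concern concrete is a toy simulation, sketched below. It is entirely synthetic and simply assumes the mechanism the studies point to: a model trained on too few examples of a group learns less distinctive features for that group, so different people within it map to similar embeddings, and a single fixed match threshold then produces false matches that concentrate in that group. The `distinctiveness` parameter and the 0.5 threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, TRIALS = 128, 2_000

def false_match_rate(distinctiveness: float) -> float:
    """How often two DIFFERENT people clear the match threshold when
    the model separates identities in a group by `distinctiveness`."""
    shared = rng.normal(size=DIM)  # features the model treats as common to the group
    hits = 0
    for _ in range(TRIALS):
        # Two distinct people: a shared component plus identity-specific ones.
        a = shared + distinctiveness * rng.normal(size=DIM)
        b = shared + distinctiveness * rng.normal(size=DIM)
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        hits += sim >= 0.5  # one fixed threshold applied to everyone
    return hits / TRIALS

# Well-modeled group: identity features dominate, strangers rarely "match."
print(false_match_rate(distinctiveness=2.0))  # close to 0.0
# Poorly modeled group: shared features dominate, strangers often "match."
print(false_match_rate(distinctiveness=0.5))  # close to 1.0
```

The point of the toy is only that the error is structural: nothing in the matching code discriminates, yet the false matches land almost entirely on the group the model represents poorly.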

But proponents of facial recognition argue that when the technology is developed properly — without racial biases — and becomes more sophisticated, it can actually help avoid cases of misidentification.

Clearview AI chief executive Hoan Ton-That said an independent study showed that his company’s software had no racial biases.

“As a person of mixed race, having non-biased technology is important to me,” Ton-That said. “The responsible use of accurate, non-biased facial recognition technology helps reduce the chance of the wrong person being apprehended. To date, we know of no instance where Clearview AI has resulted in a wrongful arrest.”

Jacob Snow, an attorney for the ACLU — which obtained a copy of the study in a public records request in early 2020 — called the study into question, telling BuzzFeed News it was “absurd on many levels.”

More than 600 law enforcement agencies use Clearview AI, according to the New York Times, and that number could grow. Shortly after the attack on the Capitol, an Alabama police department and the Miami police reportedly used the company’s software to identify people who participated in the riot. “We are working hard to keep up with the increasing interest in Clearview AI,” Ton-That said.

Considering the distrust and lack of faith in law enforcement in the Black community, making facial recognition technology better at detecting Black and brown people isn’t necessarily a welcome improvement. “It is not social progress to make black people equally visible to software that will inevitably be further weaponized against us,” doctoral candidate and activist Zoé Samudzi wrote.

RESPONDING WITH SURVEILLANCE

In the days after the Capitol riot, the search for the “bad guys” took over the Internet. Civilian Internet sleuths were joined by academics, researchers and journalists in scouring social media to identify rioters. Some journalists even used facial recognition software to report what was happening inside the Capitol. The FBI put a call out for tips, specifically asking for photos or videos depicting rioting or violence, and many of those scouring the Internet or using facial recognition answered that call.

The instinct to move quickly in response to crises is a familiar one, not just to law enforcement but also to lawmakers. In the immediate aftermath of the riot, the FBI Agents Assn. called on Congress to make domestic terrorism a federal crime. President Biden has asked for an assessment of the domestic terrorism threat and is coordinating with the National Security Council to “enhance and accelerate” efforts to counter domestic extremism, according to NBC News.

But there is worry that the scramble to react will lead to rushed policies and increased use of surveillance tools that may ultimately hurt Black and brown communities.

“The reflex is to catch the bad guys,” Gilliard said. “But normalizing what is a pretty uniquely dangerous technology causes a lot more problems.”

Days after the riot, Rep. Lou Correa (D-Santa Ana) helped reintroduce a bill called the Domestic Terrorism Prevention Act, which Correa said aims to give lawmakers more information on the persistent threat of domestic terrorism by creating three new offices to monitor and prevent it. He also acknowledged the potential dangers of facial recognition but said it’s a matter of balancing those risks against the potential benefits.

“Facial recognition is a sharp double-edged dagger,” Correa said. “If you use it correctly, it protects our liberties and protects our freedoms. If you mishandle it, then our privacy and our liberties that we’re trying to protect could be in jeopardy.”

Aside from facial recognition, activists are concerned about calls for civilians to scan social media as a means to feed tips to law enforcement.

“Untrained individuals sort of sleuthing around on the Internet can end up doing more harm than good even with the best of intentions,” said Evan Greer, the director of digital rights and privacy group Fight for the Future. Greer cited the response to the Boston Marathon bombing on Reddit, when a Find Boston Bombers subreddit wrongly named several individuals as suspects.

“You always have to ask yourself: How could this end up being used on you and your community?” she said.

Historically, attacks on American soil have sparked law enforcement and surveillance policies that research suggests have harmed minority communities. That’s a cause for concern for Muslim, Arab and Black communities following the Capitol riot.

After the Oklahoma City bombing, when anti-government extremists killed 168 people, the federal government quickly enacted the Antiterrorism and Effective Death Penalty Act of 1996, which, the Marshall Project wrote, “has disproportionately impacted Black and brown criminal defendants, as well as immigrants.”

Even hate crime laws have a disproportionate effect on Black communities: Black people made up 24% of those accused of a hate crime in 2019, though they make up only 13% of the U.S. population, according to Department of Justice statistics.

“Whenever they’ve enacted laws that address white violence, the blowback on Black people is far greater,” Margari Hill, the executive director of the Muslim Anti-Racism Collaborative, said at an inauguration panel hosted by Muslim political action committee Emgage.

In response to 9/11, federal and local governments implemented several blanket surveillance programs across the country — most notoriously in New York City — which the ACLU and other rights groups have long argued violated the privacy and civil rights of many Muslim and Arab Americans.

Many civil rights groups representing communities of color aren’t confident in the prospects of law enforcement using the same tools to root out right-wing extremism and, in some cases, white supremacy.

"[Law enforcement] knows that white supremacy is a real threat and the folks who are rising up in vigilante violence are the real threat,” Lau Barrios, a campaign manager at Muslim grass-roots organization MPower Change, said, referring to a Department of Homeland Security report that identified white supremacists as the most persistent and lethal threat facing the country in October 2020.

Instead, law enforcement focuses its resources on movements like Black Lives Matter, Barrios said. “That was what gave them more fear than white supremacist violence even though they’re not in any way comparable.”

These groups also argue that calls for more surveillance ignore reality: The Capitol riot was planned in the open, in public, easy-to-access forums across the Internet, and the Capitol Police were warned ahead of time by the NYPD and the FBI. There is no shortage of surveillance mechanisms already available to law enforcement, they say.

The surveillance apparatus in the U.S. is vast, encompassing hundreds of joint terrorism task forces, hundreds of police departments equipped with drones and even more that have partnered with Amazon’s Ring network, Renderos said.

“To be Black, to be Muslim, to be a woman, to be an immigrant in the United States is to be surveilled,” he said. “How much more surveillance will it take to make us safe? The short answer is, it won’t.”

©2021 Los Angeles Times. Distributed by Tribune Content Agency, LLC.