Could New York City's AI Transparency Bill Be a Model for the Country?

The recently passed legislation is aimed at ensuring government is not "black boxed."

by Elizabeth Zima / January 4, 2018

The New York City Council met early in December to pass a law on algorithmic decision-making transparency that could have real significance for cities and states in the rest of the nation. With the passage of an algorithmic accountability bill, the city gains a task force that will monitor the fairness and validity of algorithms used by municipal agencies.

The public is in the dark about AI (artificial intelligence) and how it is deployed and used, said Bronx City Council representative James Vacca. 

“I strongly believe the public has a right to know when decisions are made using algorithms,” said Vacca during the December City Council Technology Committee meeting.

New York uses algorithms to determine whether a lower bail will be assigned to an indigent defendant, where firehouses are established, how students are placed in public schools, how teacher performance is assessed, where Medicaid fraud is occurring and where crime will happen next. 

For example, he said, “I’ve always felt that the number of police officers in my two police precincts has always been disproportionately low, inadequate.” But according to Vacca, no one in the police department has been able to tell him how they determine the number of police on the streets.

“I don’t know what it is. I don’t know how it works. I don’t know what factors go into it,” Vacca said. “As we advance into the 21st century, we must ensure our government is not ‘black boxed.’ I have proposed this legislation not to prevent city agencies from taking advantage of cutting-edge tools, but to ensure that when they do, they remain accountable to the public.”

The mayor will appoint a task force within three months of the bill taking effect. The panel will include groups and individuals affected by algorithms, technology ethicists, city department heads who use AI, technology companies and legal experts.

To help ensure the panel is equitable, the American Civil Liberties Union (ACLU) is actively lobbying the mayor’s office to help shape the panel’s membership and the conversation the city will have about these algorithms.

Nationally, the ACLU has been active in litigating AI unfairness and lack of transparency, particularly in the areas of criminal justice and social welfare. Now the organization wants a seat at the table and a hand in the selection of the panel members. 

Rashida Richardson, legislative counsel for the New York Civil Liberties Union, a state affiliate of the ACLU, said the organization testified in support of the bill and would like to offer some suggestions for task force members.

“We hope to see a broader group convened to make sure the group is effective and fair,” she explained.

Richardson said the national organization will be meeting soon to chart a legislative agenda on the topic for all affiliates, and the ACLU wants to help shape a national dialog on how AI is implemented.  

“AI is now pervasive throughout government,” she said. “Cash-strapped government uses AI to save money, but [lack of transparency] is creating an imbalance.” 

Making sure the task force does not kowtow to law enforcement and software companies

Introduced in August, the bill originally took more of an enforcement approach, but during hearings on the legislation, law enforcement and representatives of technology companies opposed revealing software code in question. The NYPD said the bill would hurt the tactical advantage the agency currently enjoys.

Similarly, technology companies objected to publishing proprietary software code. Critics have said that merely assigning a task force to determine how the city can create transparency, without tools of enforcement, gives the body a symbolic charter but no big stick to get the work done. These critics are also concerned that the task force will not push back hard enough against law enforcement and proprietary software interests. 

Ellen Goodman, a law professor with a specialty in information policy at Rutgers Law, said she was concerned that the task force would be too deferential to technology companies’ “broad claims of trade secrecy.” Goodman, who recently co-authored, with Robert Brauneis, a paper entitled “Algorithmic Transparency for the Smart City,” found that most cities do not disclose information about their use of AI because they are afraid of repercussions from AI software companies.

“Proprietary interests should not tromp on public access,” said Goodman, who helped prepare expert testimony heard by the NYC Council. “New York City has the power [to push back] and insist on preserving the public interests.” She suggests that software deployed to make decisions for the city about its citizens should be independently audited for bias. “The source code does not need to be disclosed,” she said.

Goodman argues that a city would not need to release the “whole blueprint” to help people understand what the AI looks at and what type of criteria it uses to decide on bail or access to Medicaid funding. One solution could be writing contracts to address transparency concerns from the outset. 

Julia Powles, an academic researcher at the New York University School of Law, who writes on topics at the intersection of law and technology, said she was disappointed to see the ambitious tone of the original bill scaled back.

“New York City is well-placed to take on an initiative like this one,” said Powles, noting that officials “downscaled their ambitions” and instead chose to create a task force. 

“They established a task force because it was the uncontroversial path,” she said, adding that it does not have the power to investigate.

The city also has no central repository documenting how much it spends on AI or how it is applied, which puts the committee at a disadvantage. “There is no readily accessible public information on how much the city spends on algorithmic services, for instance, or how much of New Yorkers’ data it shares with outside contractors,” she wrote in a recent piece for The New Yorker.

In Europe, fines are already being levied when AI transparency in software is not readily available, she explained. “In the past decade, countries like France and Germany have put companies on notice and transparency has become a company issue.”

The fact that his original bill has changed a lot does not seem to concern Vacca. “This is a part of the normal legislative process,” he said. “This is the first time in the country that this issue has been taken up. I knew my bill was ambitious.”

Vacca, who has termed out of the city council after serving 12 years, sees the legislation as a legacy piece. “This issue has long-term implications for the country as a whole,” he said. “What the council does will be watched by the whole nation.”

Editor’s note: Changes were made to clarify Ellen Goodman’s role in council testimony. 

Elizabeth Zima

Elizabeth Zima is a former staff writer for Government Technology. She has written in depth on topics including health care, clinical science, physician relations and hospital communications.

