
Ethics in the Balance: AI’s Implications for Government

As automation becomes an ever more viable tool for government, powering everything from cameras on light poles to algorithms that set prisoners’ bail, can policymakers ensure it is used responsibly and ethically?

There is much disagreement as to whether AI deserves its growing role in criminal justice decisions. (Shutterstock.com)
While the COVID-19 crisis got most folks thinking about face masks and toilet paper, Chris Calabrese was pondering artificial intelligence and its implications for public policy. His aha moment came when he realized Facebook had sent home most of its human overseers and put AI in charge of policing the platform for inappropriate content.

“The result has been systems that don’t work as well. They are taking down groups dedicated to sewing masks, just because they are falsely flagged,” said Calabrese, vice president of policy at the Center for Democracy and Technology. “That’s automation being used by one of the most influential companies in the world, and it’s still not up to snuff. That gives me a sense of how far we have to go.”

Facebook’s stuttering steps into automation reflect broader ethical challenges faced by public tech leaders as AI, biometrics and surveillance technologies increasingly enter the mainstream. CIOs are considering everything from the moral implications of cameras on light posts to the ethical fallout from allowing AI to set prisoners’ bail.

Will there be bias in AI systems? Will facial recognition erode privacy rights? Around the nation, city-sponsored commissions and academic task forces are trying to tackle some of these tough ethical questions. 

‘Complex issues’

New York City was an early entrant into the fray. In 2018, local legislation created the Automated Decision Systems (ADS) Task Force to develop policies and procedures aimed at guiding agencies in their use of AI and related technologies.

“The ADS Task Force tackled the complex issues surrounding automated decision systems … to ensure these systems align with the goal of making New York City a fairer and more equitable place,” said the mayor’s Deputy Press Secretary Laura Feyer.

The resulting 36-page report, issued in November 2019, recommended broadening the public discussion around ADS and urged city leaders to formalize management functions. The report was explicit in laying out emerging ethical concerns.

“We know that certain decisions, whether made by a human or a computer, carry with them a risk of certain implicit or explicit biases,” the authors wrote. They called on the city to investigate these risks in order to use ADS “most effectively and responsibly.”

The city has since followed up on those early recommendations with new institutional safeguards. “After the report was released, the mayor created the position of algorithms management and policy officer to continue the work of the task force,” Feyer said. That officer “will work with all agencies to identify and evaluate ADS through the lens of fairness, equity and transparency.”



At the Center for Democracy and Technology, Chris Calabrese and his colleagues work to understand the potential policy impacts of new technologies.


As New York has pursued its foray into issues of fairness and equality in an AI-driven world, various non-government entities have been pursuing a similar track. Drawing from academia, industry and the public sector, several groups are delving deep into these issues in an effort to guide effective policymaking. 

The Center for Democracy and Technology has generally focused its efforts on understanding policy challenges surrounding the Internet. More recently, the group has taken a deep dive into the ethics of AI. “The impact of that across the board is going to be one of the guiding challenges of the 21st century,” Calabrese said.

The center’s thinkers are especially concerned about the algorithms that drive automated decision-making. “That means making sure facial recognition works as well on a white face as it does on an African-American face,” he said. They’re also exploring the implementation of these technologies. “Looking at facial recognition again, that can be used to track people in public, thanks to the power of AI. There are all sorts of issues that arise from that.”

The center’s work goes beyond theoretical exploration to the practical applications of new technology. There is, for example, an immediate concern around the growing use of commercial AI products to set bail levels and determine who should or should not be released on bond.

Calabrese isn’t just worried about the error rate of these decisions — about 30 percent, according to the center’s findings. He’s also concerned about the nature of those errors. “What’s troubling from our perspective is that it tends to skew differently for different populations,” he said. “It makes it more likely that African Americans will be incarcerated while high-danger white people will be let out.”
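Skew of that kind can be made concrete with a simple audit. The Python sketch below shows one common check, comparing false positive rates (people flagged high-risk who were not in fact rearrested) across groups. The data, field names and groups are hypothetical and purely illustrative; this is not the center’s actual methodology.

```python
# Illustrative only: one way an auditor might compare error rates across
# groups in a risk-assessment tool's output. Data and field names are
# hypothetical; this is not the center's methodology.

def false_positive_rate(records):
    """Share of people flagged high-risk who were not rearrested."""
    flagged = [r for r in records if r["predicted_high_risk"]]
    if not flagged:
        return 0.0
    errors = sum(1 for r in flagged if not r["rearrested"])
    return errors / len(flagged)

# Toy outcomes for two demographic groups.
records = [
    {"group": "A", "predicted_high_risk": True,  "rearrested": False},
    {"group": "A", "predicted_high_risk": True,  "rearrested": True},
    {"group": "B", "predicted_high_risk": True,  "rearrested": True},
    {"group": "B", "predicted_high_risk": False, "rearrested": False},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(subset))  # A: 0.5, B: 0.0
```

A gap like the one in this toy output is exactly the skew Calabrese describes: the tool errs more often against one group than another.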

To address such potential hazards, the center has developed “digital decision tools” for schools and policymakers to help them determine whether algorithms are being used appropriately.

“Policymakers need to be very conservative in how they deploy these technologies,” Calabrese said. “If something is new or experimental, you need to have a lot of skepticism about how it is going to do its job. You can’t just defer to the computer.” 

Wide-ranging impact

A similar effort is underway at Harvard’s Berkman Klein Center for Internet and Society, a 20-year-old interdisciplinary research center. Originally focused on the intersection of law and technology, the center lately has been delving deep into ethical issues. Its Ethics and Governance of AI initiative looks at the challenges that stakeholders face in developing, using and regulating this technology.

“We are looking at not just the technological side of AI, but at the ways in which these emerging technologies impact everything — from the education of young people, to law and policy, to business, and even issues like democratic engagement and information-sharing online,” said Ryan Budish, assistant director for research.

Researchers here are asking tough questions about AI deployments. How is the system trained? How is it validated? What biases were present in the underlying training data? How can policymakers test the system to be sure it’s not just replicating existing biases?
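One of those questions, whether the training data itself carries a bias the model would learn to replicate, can be approximated with a very small check. The sketch below, again with hypothetical field names and data, simply compares positive-label base rates across groups in a training set; a large gap does not prove bias, but it flags the data for deeper review.

```python
# A minimal sketch, with hypothetical data, of asking: do the training
# labels themselves skew by group? A model trained on skewed labels will
# tend to replicate that skew.
from collections import defaultdict

def label_base_rates(rows, group_key="group", label_key="label"):
    """Positive-label rate per group in the training data."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[label_key]
    return {g: positives[g] / totals[g] for g in totals}

training_rows = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 1},
]
print(label_base_rates(training_rows))  # {'A': 0.5, 'B': 1.0}
```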

Budish, too, points to criminal justice as a point of urgent concern. “There are a lot of different organizations out there, some very well intended and some just trying to sell snake oil,” he said. “If you are the IT person in the county court and your administrator wants you to choose a technology to help with this, you might not have the right background to evaluate the different systems. You might not even know the right questions to ask.”

The Berkman Klein Center has looked at a range of state and local issues in which AI could play a part. The technology might be used to detect fraud in welfare benefits, or it could be used to identify which properties should be prioritized for fire-code inspections. Each scenario carries not just practical but moral implications as well.

To assist state and local leaders in their search for answers, the Berkman Klein Center shares best practices. Its researchers have examined a range of real-world AI implementation policies, looking for common ground and highlighting points of divergence. “That helps folks who are trying to think about their own approach to these questions, when they can see what issues other folks are thinking about,” Budish said.

The center also works directly with state attorneys general, helping them to craft policy that responds to the nuanced inner workings of artificial intelligence.

“One of the big challenges with AI is that it can be really technical, and then folks just throw up their hands. It seems like magic,” Budish said. “It’s not magic; it is something that people can understand. We can educate folks and put them in touch with experts who can help them to think critically about the way it impacts the work they are doing.”

On the global front, the Berkman Klein Center collaborates with the Organization for Economic Cooperation and Development, and with the International Telecommunication Union, an agency of the United Nations. Through those efforts, the center has helped to develop AI governing principles that have been adopted by 42 countries.

Budish envisions the center as a potential bridge between civic leaders, who may be new to the conversation around the ethics of AI, and academics who have been exploring these issues for years. “There are a lot of places where those conversations are happening, but those conversations are perhaps not always reaching policymakers,” he said. 

A natural fit

Philadelphia began addressing these issues in 2017 with the launch of GovLabPHL, a multi-agency collaboration using studies of human behavior to shape how the city interacts with residents. That effort led to the creation of a road map in early 2019, and those findings in turn are being leveraged today to help guide Philly’s smart city initiatives.

Smart city development is a natural place to put into action the ethical precepts surrounding AI, said Philadelphia Smart Cities Director Emily Yates. All those cameras on light posts, the smart sensors and other apparatuses of civic improvement — these are the front lines of AI implementation.



Philadelphia Smart Cities Director Emily Yates balances the usefulness of data collected by connected devices with citizen privacy.


"We are drilling down into specific topics. What do you attach to a smart street pole? Is AI going to be a surveillance program? And what happens to the data that we are pulling down? We are still fleshing out all of that,” she said.

Yates’ office is developing a deep governance structure to ensure that key learnings are incorporated as AI usage increases. This includes executive leadership, an advisory committee, an internal working group and various subcommittees drawn from across city government.

This governmental infrastructure is key to the city’s efforts to be responsible in its use of technology.

As the big thinkers address weighty questions around ethical usage, “that information has to move up the ladder and across the ladder so that everyone in government is aware of these issues,” Yates said. “It’s incumbent on the working group and the subcommittees to communicate to the relevant individuals and also to the mayor’s office, in order to put citywide policies in place.”

In Philadelphia that effort includes deep engagement with the city’s data network and security group, whose experts have weighed in on some of the most pressing issues. “There is a tension of how granular and effective you can get with the data, while still respecting the boundaries of privacy and security,” Yates said. 

Practical vision

As cities and states continue to wrestle with the use and potential misuse of new technologies, Yates offers a practical vision. When it comes to the responsible use of data — from automated decisions to biometric-informed surveillance — government will be most successful when it is most transparent.

“If you are going to put up a camera facing a mosque, you need to communicate what you are doing, who has ownership of that data, whether it is public,” she said. “That’s why we have the communications and marketing team as a subgroup working on this. We need to address the community so that they know what we are doing and why we are doing it in a specific way.”

Serious peril looms for cities that fail to address the risks, or that come up with answers but fail to engage the community. Yates points to the big civic projects of the 1960s and ’70s that tried to serve a greater good but ended up sowing distrust.

“The worst case is that you spend a significant amount of money, the community gets concerned and then we have to shut it down,” she said. “We don’t want to step backward and deploy technology in a way that makes people worry we might be using it in a way that could harm them.”

Those with a firm grasp of both the technological underpinnings and community sentiment may be best positioned to help government state its case as it moves to incorporate emerging innovations.

It’s especially helpful to understand the cyclical nature of an AI deployment: the power of this technology lies in the algorithm’s ability to learn over time, and civic leaders can take advantage of that to continually improve their use of the new tools.

“In each stage of that process you need to interrogate your assumptions,” Calabrese said. “Is my data representative of the whole population? Will my assumptions impact everybody fairly? As a government official you can build fairness into the process, and you do that by understanding how this process works.”
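A hedged sketch of the first of those questions, whether the data is representative of the whole population, might look like the following. The groups and shares are hypothetical; a real benchmark would come from a source such as census figures.

```python
# Illustrative sketch: compare each group's share of the training sample
# against its share of the population. All numbers are hypothetical;
# a real benchmark might come from census data.
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """Sample share minus population share, per group.
    Large gaps suggest the data may not represent everyone fairly."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    return {
        group: counts[group] / total - share
        for group, share in population_shares.items()
    }

sample = ["A", "A", "A", "B"]          # who appears in the data
population = {"A": 0.5, "B": 0.5}      # who lives in the city
print(representation_gap(sample, population))  # {'A': 0.25, 'B': -0.25}
```

Run at each stage of the deployment cycle, a check like this turns Calabrese’s questions into routine practice rather than a one-time review.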