Responsible AI: What Does It Take to Turn Principles into Practice?

Are new regulations needed to safeguard AI use, or will best practices recommendations and existing laws be enough? And how can privacy frameworks set the groundwork for responsible AI practices?

Many agree on what responsible, ethical AI looks like — at least at a zoomed-out level. But outlining key goals, like privacy and fairness, is only the first step. The next? Turning ideals into action.

Policymakers need to determine whether existing laws and voluntary guidance are powerful enough tools to enforce good behavior, or if new regulations and authorities are necessary.

And organizations will need to plan for how they can shift their culture and practices to ensure they’re following responsible AI advice. That could be important for compliance purposes or simply for preserving customer trust.

A recent Center for Data Innovation panel explored the legislative debate, while Katharina Koerner, principal researcher for technology at the International Association of Privacy Professionals (IAPP), told GovTech how organizations can use privacy practices to guide their responsible AI strategies.


Public institutions and organizations in Asia, Europe and North America tend to agree that “responsible” AI supports accountability, explainability, fairness, human oversight, privacy, robustness and security, according to IAPP’s recent Privacy and AI Governance report, which surveyed entities in those regions.

Now developers, procurement officials and others may need more specific, fine-grained guidance on which tools and benchmarks can help them achieve these goals.

For example, while AI principles often call for preventing discrimination or bias, it’s not always clear how organizations can verify they’re doing this properly. Organizations often want regulators to identify indicators or tools for checking whether they’ve successfully minimized bias or otherwise met responsible AI goals.

“Companies really say … what is ‘non-bias,’ for example? How is ‘non-bias’ defined, because as humans and systems being built by humans, how is there no subjectivity in the systems?” Koerner said.


The spotlight is on responsible AI.

NIST recently released its Artificial Intelligence Risk Management Framework, a voluntary guide aimed at helping organizations and individuals use and create AI in responsible ways, and the White House also issued a voluntary Blueprint for an AI Bill of Rights in October 2022.

State and local governments are making moves, too. New York City issued an executive order in 2019 calling for a framework and other processes to keep the city’s use of AI fair and responsible. But the city’s comptroller recently found agencies were failing to ensure AI use is “transparent, accurate, and unbiased and avoids disparate impacts.”

Meanwhile, Massachusetts lawmakers are considering AI regulations, including one proposal that would require large IT companies to disclose information on their algorithms and regularly run risk assessments, to prevent discrimination.



In much of the world, responsible AI principles appear to be upheld already by various existing privacy and non-discrimination laws, but it would be helpful for organizations to see the principles mapped to the relevant legislation, Koerner said.

In a February 2023 piece, Center for Data Innovation Director Daniel Castro suggested that few new regulations may be needed.

He called for any forthcoming policies to “avoid slowing AI innovation and adoption” and said that regulations should “address concerns about AI safety, efficacy and bias by regulating outcomes rather than creating specific rules for the technology.”

Such an approach would leave organizations free to decide when to use AI or humans to carry out tasks, so long as either method avoided producing prohibited harms. Castro wrote that current non-discrimination and worker protection laws already address many of the potential ill effects of AI — they just need to be applied to the technology.

Enforcing non-discrimination laws on AI could be tricky, however.

Agencies and officials may need new authorities to do so, said Brookings Institution fellow Alex Engler during the CDI panel. Engler “studies the implications of artificial intelligence and emerging data technologies on society and governance,” per Brookings.

Individuals may get less information about the reasoning behind a decision when an algorithm, not a human, is the one making it, Engler said. That’s especially true if the AI system is proprietary, which can make it difficult to tell whether the decision was made fairly.

“If you move from a human process for something like hiring to an algorithmic process, probably, in most of those cases, you actually lose government protection,” Engler said. “It is harder to enforce anti-discrimination law; it is harder to go through a civil liability process to prove that you were discriminated against; it's possible you might have less insight into a system because you may not know an algorithm was run … . You might not be able to check if the underlying data was correct.”
A Center for Data Innovation Panel, moderated by CDI Senior Policy Analyst Hodan Omaar, discusses AI policy.
Now is a good time to consider what new measures are needed to help apply existing regulations and protections, Engler advised. For example, regulatory agencies could be given authorization to subpoena AI models and data if they strongly suspect a system is breaking the law.

Fully understanding an algorithm’s effects also may require tracking its impacts over time on a larger scale, rather than just trying to decipher individual decisions, Engler said.

“If we really want accountability, we have to find some way for some of these systems to have a more systemic evaluation,” Engler said.

Unfulfilled Federal Measures: Using What’s Already Here?

Guiding the federal government’s use of AI may not require all new policies — several helpful ones exist that have yet to be fully enacted, said Dr. Lynne Parker during the CDI panel. Parker previously directed the National Artificial Intelligence Initiative Office and is currently director of the University of Tennessee, Knoxville’s AI Tennessee Initiative.

One is 2020’s EO 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” It outlines ethical principles for most agencies to follow, calls for the Office of Management and Budget (OMB) to create policy guidance to support following these principles, and tells agencies to inventory the AI they use.

Another is the AI in Government Act of 2020. It tells the OMB to advise agencies on using AI in ways that avoid harming civil rights or liberties, national security or the economy, and to detail best practices for “identifying, assessing and mitigating” discriminatory impact and bias. It also creates an AI Center of Excellence.

Another initiative is progressing: A 2020 task force has been studying how the U.S. can create a National AI Research Resource and delivered its report on the topic in January.

AI Guidance: Binding, Voluntary, Sector-by-Sector?

Voluntary guidance around responsible AI use gives organizations flexibility and avoids locking entities into practices that could become outdated as the technology quickly evolves.

On the other hand, voluntary guidance cannot compel disinterested organizations to change their AI use.

“If you were hoping that law enforcement was going to implement some rules on itself, and its own use of AI tools — for instance, facial recognition or other surveillance tools — we haven't seen that yet,” Engler said.

If federal policymakers decide to regulate, there may not be a one-size-fits-all approach.

It’d be difficult to create one piece of legislation that tightly defines what AI is and accounts for all the potential risks associated with the various ways it might be used, Engler said. He advocated avoiding one overarching federal law in favor of policies addressing specific sectors and specific use cases.

“There is very little that you can say that's true about the AI in a safety component in a plane and the AI that's setting the interest rate for a mortgage,” Engler said.

Paul Lekas, senior vice president for global public policy at the Software and Information Industry Association (SIIA), said during the CDI panel that more needs to be learned about “general-purpose” and “generative” AI before any formal regulations are passed on them, but that the federal government can help by providing education and guidance on best practices.

In some cases, communities may be the best ones to decide what appropriate AI use means to them, Parker suggested. She said universities could make their own choices on whether it is acceptable for their students to use tools like ChatGPT to conduct research or write journal articles.

But for universities to make those decisions, they first need to understand exactly what the tool can and cannot do, and the associated risks.

“That then comes back to a need for the collective technical community to help provide some training and education for everyone,” Parker said.


Privacy is an essential aspect of responsible AI, and policymakers and organizations alike should pay attention.

Lekas noted that state privacy laws often address AI and said getting a federal privacy law could set a foundation for later AI policies. That topic is actively debated: a House subcommittee meets today, March 1, to discuss a potential national data privacy standard.

Whether required to or not, organizations may want to develop their own internal strategies for putting responsible AI ideals into action. As they look to do so, organizations can use their existing privacy work and practices as a launchpad — sparing them from having to reinvent the wheel, per the IAPP report. Privacy impact assessments can be expanded to include AI-related items or can serve as models for responsible AI impact assessments, for example.

This approach is gaining attention: Per IAPP’s report, “more than 50 percent of organizations building new AI governance approaches are building responsible AI governance on top of existing, mature privacy programs.”

And emerging privacy-enhancing technologies (PETs) allow AI to analyze data while better preserving its privacy. Federated learning, for example, is a method for training AI on data that remains siloed in different devices, without having to share and pool the information into a central database.
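The idea behind federated learning can be illustrated with a small toy sketch. The example below is not any particular production framework; the data, client count, learning rate and round count are all arbitrary illustrative choices. Three simulated clients each fit a linear model on data that stays local, and a central server averages only the resulting model weights:

```python
import numpy as np

# Toy federated averaging sketch (illustrative only): each "client"
# trains on data that never leaves its silo; only model weights are
# shared with the central server, which averages them each round.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # hypothetical "ground truth" coefficients

# Three clients, each holding its own private dataset locally.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, steps=20):
    """Run gradient-descent steps on one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated rounds: broadcast global weights, train locally, average.
w_global = np.zeros(2)
for _ in range(5):
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)  # only weights cross the wire

print(w_global)  # converges toward the true coefficients
```

The raw `X` and `y` arrays never leave each client; the server sees only averaged weight vectors, which is the privacy property the article describes (real deployments typically add further protections, such as secure aggregation or differential privacy).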
IAPP's Katharina Koerner moderates a recent virtual conversation on "Leveraging Privacy Governance for the Responsible Use of AI."


Organizations looking to create a strategy to govern their own use of AI should bring together various perspectives — including from their legal, security and privacy teams — to discuss which principles are relevant to their operations and to document a planned approach, Koerner said.

Starting small is good: Organizations can look at one business case at a time, to consider how it relates to the agreed-upon principles for good AI use.

It’ll also help to decide on common terms and resources, to keep everyone on the same page. For example, “privacy” can mean one thing in the context of math and engineering, and another in a legal context.

“The NIST AI Risk Management Framework … is a great starting point for discussing how to identify, assess and mitigate the risks associated with the use of AI technologies, and what to include in an internal framework for decision-making and oversight of AI projects,” Koerner added.

Early steps include inventorying all the AI systems the organization uses or develops, appointing people in each relevant business unit who’ll take point on ethical AI matters, as well as training internal staff and promoting an overall culture of responsible AI use.

Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.