Europe’s AI Act Adopted: What U.S. Governments Can Learn

A majority of European Parliament members voted Wednesday to adopt the EU AI Act, establishing a model for artificial intelligence regulation that governments worldwide can look to.

The European Parliament voted Wednesday to adopt the European Union (EU) Artificial Intelligence (AI) Act, creating a comprehensive law with lessons for governments worldwide.

The regulation, which negotiators agreed on in December, was endorsed by members of the European Parliament (MEPs) with 523 votes in favor, 46 against and 49 abstentions.

“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” MEP Brando Benifei said Tuesday during the plenary debate.

At the federal level in the U.S., officials have already taken several actions in this space. The Biden administration published the voluntary Blueprint for an AI Bill of Rights in October 2022, but some experts argued this was only the first of multiple necessary steps to effectively regulate AI. Last October, the administration went further, releasing an executive order that set new standards for AI security and privacy. In Congress, several pieces of legislation addressing AI have been introduced, including the Algorithmic Accountability Act and the AI Research, Innovation and Accountability Act. And at the state and local level, regulations are rapidly evolving.

But the passage of the EU AI Act raises the question: What can U.S. government leaders learn from it?

New America, a nonprofit, nonpartisan think tank, held a panel discussion in February, “EU AI Act: Lessons for U.S. Policymakers,” during which several experts discussed the EU’s approach to AI governance.

Open Technology Institute (OTI) Policy Director Prem M. Trivedi said during the panel that the EU AI Act is expected to have a global impact.

“So, policymakers in the United States need to examine not just the act’s texts and requirements, but are also going to be thinking through how American and other national efforts around the world can further the goal of broad global regulatory harmonization amongst like-minded nations,” Trivedi said.


Gabriele Mazzini, team leader for the AI Act at the European Commission and a lead author of the act's proposal, shared his expertise during the panel.

Mazzini noted that the EU AI Act does not seek to regulate AI technology itself; instead, it regulates concrete use cases through a risk-based approach.

More specifically, certain AI systems are prohibited under the EU AI Act due to what the act describes as "unacceptable risk," including systems that perform social scoring and those that compile facial recognition databases.

Laura Lazaro Cabrera, counsel and director of the Equity and Data Programme at the Center for Democracy and Technology in the EU, said during the panel that one positive aspect of the EU AI Act worth underlining as other jurisdictions consider similar legislation is that it places human rights at its center.

She also called the legislation's outright prohibition of certain practices a positive. However, she argued that some of these prohibitions come with exceptions.

She further explained that, while Parliament had pushed for a total ban on remote biometric identification, the EU AI Act contains exceptions to it.

“So, it’s great that we have these prohibitions in place, but some of them have a few caveats that could potentially undermine them in the future,” she said, stating that it is yet to be seen how far the prohibitions will extend.


Mazzini noted that the goal was always to ensure that the work done in the EU was not carried out in isolation. As he put it, "Considering the global nature of the technology, we intentionally wanted to make sure that, at least on certain key points and elements — like, for instance, the definition of AI — we will be aligned internationally."

David Morar, OTI senior policy analyst, said during the panel that defining international standards is an area in which U.S. and EU leaders can collaborate.

Morar said the EU AI Act will become the floor for U.S. regulations: while the U.S. may make changes, companies will likely already be working to meet the act's requirements, so regulatory conversations should start there. The primary benefit of the legislation, according to Morar, is that the U.S. and EU will now be speaking the same language on AI policy issues.

He recommended the U.S. move forward by first addressing privacy.

“In terms of lessons … the first thing that the U.S. can and should do is pass comprehensive federal privacy legislation,” Morar said.

Kai Zenner, head of office and digital policy adviser for MEP Axel Voss, said during the panel that there is currently a lot of international collaboration when it comes to risk assessment, and he cited cooperation as a positive force. Still, Zenner said, more time is needed to see how the legislation will work in practice, how different interpretations of the law may occur, and whether further regulatory action is needed.

“With every new law, we will have a transition period where we really need to find each other first,” Zenner said.