Congressional Framework Would Address Extreme AI Risks

A bipartisan group of U.S. senators has introduced a congressional framework, in a letter to the leaders of the Senate artificial intelligence working group, that would establish federal oversight of extreme AI risks.

A newly unveiled congressional framework aims to address extreme risks related to the development and use of artificial intelligence.

A bipartisan group of four U.S. senators, including Utah Republican Mitt Romney and Rhode Island Democrat Jack Reed, released the “Framework to Mitigate AI-Enabled Extreme Risks” in an April 16 letter to the leaders of the Senate AI working group. While AI offers many benefits, the senators said in the letter, any comprehensive AI risk framework must also include measures to address “catastrophic risks” — which the letter defined to include risks related to biological, chemical, cyber and nuclear weapons.

“In a worst-case scenario, these [AI] models could one day be leveraged by terrorists or adversarial nation state regimes to cause widespread harm or threaten U.S. national security,” the lawmakers said in the letter.

This follows several related moves at the federal level — most notably, President Joe Biden’s Executive Order signed Oct. 30. In addition, federal agencies are collaborating on AI, the Department of Homeland Security has released an AI road map, and state and local governments are charting their own regulatory paths to manage risk.

The proposed framework would establish federal oversight of frontier AI hardware, development and deployment to address extreme risks. In the framework, frontier models are the most advanced AI models that would be developed in the future — specifically, models trained on significant amounts of computing power as defined by the Executive Order and capable of completing, or intended to complete, tasks related to bioengineering, chemical engineering, cybersecurity or nuclear development.

To implement necessary safeguards, the framework suggests the need for oversight from a federal agency or body and offers four potential options: a new agency, a new interagency coordinating body, the Department of Commerce or the Department of Energy.

The framework recommends this oversight entity be composed of subject matter experts, including skilled AI scientists and engineers. The entity could report challenges and newfound risks to Congress as needed.

Regarding frontier models, the framework suggests the entity provide oversight in three areas: hardware, development and deployment.

First, it states that entities that sell or rent large amounts of computing hardware for AI development should report large acquisitions or usage of those resources to the oversight entity.

Second, under the framework, developers would be required to notify the oversight entity when developing a frontier model, before training it, and to report to the entity on the steps they have taken to mitigate risk.

Third, developers would need to get an evaluation and a license from the entity to ensure risk has been adequately addressed prior to release of frontier models.

“It is my hope that our proposal will serve as a starting point for discussion on what actions Congress should take on AI — without hampering American innovation,” Romney said in the announcement.

Stakeholders and members of the public have until May 17 to respond to the framework.