NIST Releases Voluntary AI Risk Management Framework

The U.S. Department of Commerce’s National Institute of Standards and Technology’s newly released framework provides organizations a pathway to use artificial intelligence technology in a way that reduces risk.

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has released a document to better guide organizations’ use of artificial intelligence and manage the associated risks.

As the technology becomes increasingly mainstream, there is much work to be done to regulate it. Even an AI-driven chatbot has acknowledged the benefits of thoughtful regulations.

The new guidance, the Artificial Intelligence Risk Management Framework (AI RMF 1.0), was created at the direction of Congress and is designed to adapt as the AI landscape evolves.

This guidance follows other efforts in this space, including the creation of the National Artificial Intelligence Advisory Committee, to which NIST provides administrative support, and NIST's published proposal for reducing the risk of bias in AI use.

AI RMF 1.0 is divided into two parts. The first part helps organizations outline the risks of AI systems and the characteristics of trustworthy AI systems, while the second helps address risks in practice.

The resource offers four specific functions for organizations to focus on: govern, map, measure and manage.

AI RMF 1.0 was developed over 18 months by NIST in collaboration with private- and public-sector organizations. While drafting the framework, NIST received about 400 sets of formal comments on draft versions from more than 240 organizations.

To further help organizations navigate and use the framework, NIST has released a companion AI RMF Playbook.

A collection of organizations, including the U.S. Chamber of Commerce, the Bipartisan Policy Center, Microsoft and Google, have already stated their intentions to use or promote the framework.

“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” said Deputy Commerce Secretary Don Graves in the announcement.

NIST plans to revise the framework periodically and welcomes comments toward that end. Comments received by the end of February 2023 will be considered for an updated version planned for spring 2023.

NIST will also be developing a Trustworthy and Responsible AI Resource Center to further help organizations implement this framework.