National AI Safety Institute Consortium Takes Shape

In accordance with President Joe Biden’s 2023 executive order on artificial intelligence, the federal government is moving forward with key actions — namely, the creation of an AI safety consortium.

The White House, Washington, D.C. (Shutterstock)
Yesterday, the Biden-Harris administration announced the official launch of a consortium dedicated to AI safety.

An executive order from President Joe Biden required the National Institute of Standards and Technology (NIST) to establish the U.S. AI Safety Institute (USAISI), which would create rigorous standards to test models and ensure their safety for public use. Part of the institute's work is a new consortium, open to participants representing any organization interested in AI safety. In November 2023, NIST put out the call for consortium participants.

The announcement this week marks the official creation of the AI Safety Institute Consortium (AISIC).

The full inaugural cohort includes more than 200 stakeholders representing organizations across sectors. From the private sector, companies like Apple, Meta and Microsoft are members. From the education sector, Carnegie Mellon University, the Stanford Institute for Human-Centered AI and Ohio State University are members. And from the public sector, entities like the state of Kansas Office of Information Technology Services and the state of California Department of Technology are members.

“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem,” said Secretary of Commerce Gina Raimondo in the announcement. “That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

The consortium is made up of people closely involved in understanding and building ways for AI to transform society. And according to the Department of Commerce announcement, the consortium is the largest collection of test and evaluation teams established to date.

“Thanks to President Biden's landmark executive order, the AI Safety Consortium provides a critical forum for all of us to work together to seize the promise and manage the risks posed by AI,” said Bruce Reed, White House deputy chief of staff, in the announcement.

Earlier this week, Raimondo also announced key members of the USAISI leadership team. Specifically, Elizabeth Kelly was named to lead the institute as its inaugural director, and Elham Tabassi was tapped to serve as the chief technology officer.

Kelly will provide executive leadership, management and oversight, and will coordinate with other AI policy initiatives across government. Tabassi will lead the institute's key technical programs and will help shape efforts to conduct research and evaluations of AI models and to develop guidance.