Who Serves on the New National AI Advisory Committee?

The newly appointed 27 members will advise the federal government on AI topics like competitiveness, equity and use by law enforcement. EqualAI’s Miriam Vogel will chair the group; Google’s James Manyika is vice chair.

A newly announced 27-member committee will help advise the president and National AI Initiative Office on artificial intelligence and its impacts.

The National Artificial Intelligence Advisory Committee (NAIAC) will “provide recommendations on topics including the current state of U.S. AI competitiveness, the state of science around AI, and AI workforce issues,” according to a U.S. Department of Commerce press release.

The board is intended to help the department balance goals around using AI for economic and national security advantages while also striving for equitable impacts and minimizing risks, Deputy Secretary of Commerce Don Graves said in a statement in the release.

“The diverse leaders of our inaugural National Artificial Intelligence Advisory Committee represent the best and brightest of their respective fields and will be instrumental in helping the department strike this balance,” Graves said. “Their anticipated recommendations to the president and the National AI Initiative Office will serve as building blocks for U.S. AI policy for decades to come, and I am immensely grateful for their voluntary service.”

One thorny area will be addressing law enforcement use of AI. NAIAC will establish a subcommittee to focus on this topic and address considerations like how suited the technology is for use by security and law enforcement as well as issues of data security and bias. The subcommittee will also advise on what legal standards are needed to “ensure that AI use is consistent with privacy rights, civil rights and civil liberties, and disability rights,” per the release.

The NAIAC is intended to help bring expertise and a broad array of viewpoints to such issues. The National Institute of Standards and Technology (NIST), which will give administrative support to the board, issued a public call for NAIAC member nominations last fall. NIST said the Department of Commerce wanted members from a variety of perspectives, sectors and geographic locations.

The board’s first meeting is slated for May 4 and will be viewable online.

Who’s on the Board?

NAIAC includes university professors, members of nonprofits, members of large technology firms and others. Many bring experience examining the ethical considerations around AI, workforce implications and other areas.

Miriam Vogel will chair the board. She is the president and CEO of nonprofit EqualAI, which says it aims to reduce “unconscious bias in the development and use of artificial intelligence.”

Vice chair goes to a voice from the private sector: James Manyika, Google’s newly appointed, inaugural senior vice president of technology and society.

Academia

  • Ayanna Howard, dean of Ohio State University’s College of Engineering, who also brings a background in robotics.
  • Daniel E. Ho, professor of law and of political science at Stanford University, associate director of the Stanford Institute for Human-Centered Artificial Intelligence, and director of the Regulation, Evaluation and Governance Lab.
  • David Danks, professor of data science and philosophy at University of California, San Diego, who has published research about ethical AI.
  • Ramayya Krishnan, professor of management science and information systems at Carnegie Mellon University and dean of its Heinz College of Information Systems and Public Policy.
  • Jon Kleinberg, professor of computer science at Cornell University.
  • Frederick L. Oswald, professor of psychology at Rice University.
  • Frank Pasquale, professor of law at Brooklyn Law School, who counts artificial intelligence and law among his focuses.

Nonprofits and Foundations

  • Susan Gonzales, CEO of AIandYou, a nonprofit that says it is aimed at giving marginalized communities information about new technologies and their impacts, so they can “take action” on issues like AI, the metaverse and NFTs.
  • Trooper Sanders, CEO of nonprofit Benefits Data Trust, which says it uses “data, technology, policy change, and direct service” to help residents claim social safety net benefits. He also explored how AI could increase social and economic equity in a previous role at the Rockefeller Foundation.
  • Ylli Bajraktari, CEO of the Special Competitive Studies Project, a nonprofit that makes recommendations about preserving U.S. competitiveness as emerging technologies like AI cause economic, national security and societal changes.
  • Janet Haven, executive director of Data & Society Research Institute, a nonprofit research organization that explores areas like AI, online disinformation and technologies’ impacts on health and labor.
  • Zoë Baird, president of the Markle Foundation, a private foundation interested in how information technology can address societal issues like health and employment.

Membership Groups

  • Amanda Ballantyne, director of the AFL-CIO’s Technology Institute, which launched last year to act as a think tank and center for collaboration on “leverag[ing] the power of technology and innovation for the labor movement.”
  • Victoria Espinel, president and CEO of BSA: The Software Alliance, an advocacy group representing global software industry members.

Private Tech Companies

  • Sayan Chakraborty, executive vice president of product and technology at HR and financial management software-as-a-service provider Workday.
  • Paula Goldman, chief ethical and humane use officer at customer relationship management cloud software provider Salesforce.
  • Ashley Llorens, vice president and managing director of Microsoft Research’s outreach team.
  • Haniyeh Mahmoudian, global AI ethicist at AI cloud platform provider DataRobot, where she focuses on areas like bias, privacy and ethics in AI.
  • Christina Montgomery, vice president and chief privacy officer at IBM, as well as chair of its AI Ethics Board.
  • Liz O’Sullivan, CEO of Parity, which offers a platform intended to assist with AI regulation compliance, model risk assessments and impact assessments.
  • Jack Clark, co-founder of Anthropic, a company researching and aiming to build “reliable, interpretable and steerable AI systems.”
  • Navrina Singh, CEO and founder of Credo AI, which offers solutions for guiding AI development and deployment.
  • Swami Sivasubramanian, vice president of database, analytics and machine learning at Amazon Web Services.
  • Keith Strier, vice president of worldwide AI initiatives at computing platform firm NVIDIA.
  • Reggie Townsend, director of data ethics practice at business analytics solutions provider SAS.