Artificial Intelligence Executive Order: Industry Reactions

Last Monday, President Biden issued an executive order on safe, secure and trustworthy artificial intelligence. Here’s what’s included, as well as the tech and cybersecurity industries’ responses.

President Joe Biden (Shutterstock/archna nautiyal)
On Oct. 30, 2023, the White House released a long-awaited executive order on artificial intelligence, which covers a wide variety of topics. Here I'll briefly cover the EO and spend more time on the industry responses, which have been numerous.

The EO itself can be found at the Whitehouse.gov briefing room: White House tackles artificial intelligence with new executive order. Here’s an opening excerpt:

“With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems:
  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. …
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. …
  • Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. …  
  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. …
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. …
  • Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff.”

The EO goes on to discuss areas like privacy, advancing equity and civil rights, standing up for consumers, patients and students, supporting workers, advancing American leadership around the world, and ensuring responsible and effective government use of AI.

A memo from AI.gov covers federal agency responsibilities, drilling down on how agencies will be on the hook for naming chief AI officers, adopting AI risk management practices and more.

MEDIA COVERAGE


The MIT Technology Review offers a detailed analysis of the EO in this piece: Three things to know about the White House’s executive order on AI. Here's an excerpt:

“Experts say its emphasis on content labeling, watermarking and transparency represents important steps forward. …

“Here are the three most important things you need to know about the executive order and the impact it could have. 

“What are the new rules around labeling AI-generated content? The White House’s executive order requires the Department of Commerce to develop guidance for labeling AI-generated content. AI companies will use this guidance to develop labeling and watermarking tools that the White House hopes federal agencies will adopt.

“Will this executive order have teeth? Is it enforceable? While Biden’s executive order goes beyond previous US government attempts to regulate AI, it places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced.

“What has the reaction to the order been so far? Major tech companies have largely welcomed the executive order.

“Brad Smith, the vice chair and president of Microsoft, hailed it as 'another critical step forward in the governance of AI technology.' Google’s president of global affairs, Kent Walker, said the company looks 'forward to engaging constructively with government agencies to maximize AI’s potential—including by making government services better, faster, and more secure.'”

EY offers this excellent piece on key takeaways from the Biden administration’s executive order on AI:

“The Executive Order is guided by eight principles and priorities:
  1. AI must be safe and secure by requiring robust, reliable, repeatable and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, mechanisms to test, understand, and mitigate risks from these systems before they are put to use.
  2. The US should promote responsible innovation, competition and collaboration via investments in education, training, R&D and capacity while addressing intellectual property rights questions and stopping unlawful collusion and monopoly over key assets and technologies.
  3. The responsible development and use of AI require a commitment to supporting American workers through education and job training and understanding the impact of AI on the labor force and workers’ rights.
  4. AI policies must be consistent with the advancement of equity and civil rights.
  5. The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
  6. Americans’ privacy and civil liberties must be protected by ensuring that the collection, use and retention of data is lawful, secure and promotes privacy.
  7. It is important to manage the risks from the federal government’s own use of AI and increase its internal capacity to regulate, govern and support responsible use of AI to deliver better results for Americans.
  8. The federal government should lead the way to global societal, economic and technological progress including by engaging with international partners to develop a framework to manage AI risks, unlock AI’s potential for good and promote a common approach to shared challenges.”

ABC News also covered the order. And according to Axios, testing requirements are the most significant and stringent provision of the executive order:
  • “Developers of new 'dual-use foundation models' that could pose risks to 'national security, national economic security, or national public health and safety' will need to provide updates to the federal government before and after deployment — including testing that is 'robust, reliable, repeatable and standardized.'
  • The National Institute of Standards and Technology will develop standards for red-team testing of these models by August 2024, while the Defense Production Act will be used to compel AI developers to share the results.
  • The testing rules will apply to AI models whose training used 'a quantity of computing power greater than 10 to the power of 26 integer or floating-point operations.' Experts say that will exclude nearly all AI services that are currently available.”
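To put that compute threshold in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes the widely used rule of thumb that training a dense transformer costs roughly 6 × N × D floating-point operations (N parameters, D training tokens); the model sizes below are hypothetical illustrations, not figures from the EO, Axios or any vendor.

```python
# Back-of-the-envelope check against the EO's 10^26-FLOP reporting threshold.
# Assumes the common approximation that training a dense transformer costs
# roughly 6 * N * D floating-point operations (N = parameters, D = training
# tokens). The runs below are hypothetical examples, not official figures.

THRESHOLD_FLOPS = 1e26  # reporting threshold named in the executive order

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Hypothetical training runs: (parameters, training tokens)
runs = {
    "small model (7B params, 2T tokens)": (7e9, 2e12),
    "large model (175B params, 3.5T tokens)": (175e9, 3.5e12),
    "frontier-scale run (1T params, 20T tokens)": (1e12, 20e12),
}

for name, (n, d) in runs.items():
    flops = training_flops(n, d)
    status = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{name}: ~{flops:.2e} FLOPs -> {status} the 10^26 threshold")
```

Under this rough estimate, only training runs well beyond today’s publicly documented models cross the 10^26 mark, which is consistent with the experts’ point that nearly all currently available AI services fall below the reporting threshold.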

BLETCHLEY PARK SUMMIT ON AI


Time magazine said this about the United Kingdom’s Bletchley Park Summit on the Future of AI:

“On Wednesday and Thursday, delegates from 27 governments around the world, as well as the heads of top artificial intelligence companies, gathered for the world’s first AI Safety Summit at this former stately home near London, now a museum. Among the attendees: representatives of the U.S. and Chinese governments, Elon Musk, and OpenAI CEO Sam Altman.

“The high-profile event, hosted by the Rishi Sunak-led U.K. government, caps a year of intense escalation in global discussions about AI safety, following the launch of ChatGPT nearly a year ago. The chatbot displayed for the first time—to many users at least—the powerful general capabilities of the latest generation of AI systems. Its viral appeal breathed life into a formerly-niche school of thought that AI could, sooner or later, pose an existential risk to humanity, and prompted policymakers around the world to weigh whether, and how, to regulate the technology. Those discussions have been taking place amid warnings not only that today’s AI tools already present manifold dangers—especially to marginalized communities—but also that the next generation of systems could be 10 or 100 times more powerful, not to mention more dangerous.”

Reporting on the summit, The Daily Mail (UK) wrote, "Elon Musk warns AI poses 'one of the biggest threats to humanity' at Bletchley Park summit... but Meta's Nick Clegg says the dangers are 'overstated.'"

The billionaire tech entrepreneur's fears were echoed by delegates from around the world at the UK's AI Safety Summit at Bletchley Park in Buckinghamshire. Musk said that government must be a referee and rule-making body, establishing a “framework for insight” and developing fair rules that everyone should play by. Later, he said that AI will eventually create a situation where “no job is needed.”

Speaking in a conversation with U.K. Prime Minister Rishi Sunak, Musk added that AI has the potential to become the “most disruptive force in history.”

FINAL THOUGHTS


There are certainly disagreements over how much government regulation is needed for AI, and how new regulations will be enforced.

But one thing is clear: the new AI executive order just signed by President Biden will serve as the near-term road map for most AI-related research, testing and development in the U.S.

Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.