
Tech Leaders, Congress Meet: How Will We Regulate AI?

Tech leaders gathered in Washington, D.C., this past week for public and private meetings with Congress on the future of AI in the U.S. What happened, and what’s next?

Rarely have I seen such a large number of global technology leaders come together for a private meeting with senators in Washington, D.C.

No, these were not antitrust hearings with testimonies under oath. Nor was this a meeting for a congressional investigation into data breaches or privacy violations from foreign adversaries.

Rather, some of the smartest leaders in the world were gathered to examine questions surrounding the future of artificial intelligence.

The New York Times headline on Sept. 13, 2023, pronounced “Tech Leaders Gather for an A.I. Week in Washington.” The subhead continued, “Elon Musk, Mark Zuckerberg and Sam Altman are among the tech moguls meeting with lawmakers to discuss how to regulate the fast-growing technology.”

According to CNBC, the top tech executives in attendance Wednesday included:
  • OpenAI CEO Sam Altman
  • Former Microsoft CEO Bill Gates
  • Nvidia CEO Jensen Huang
  • Palantir CEO Alex Karp
  • IBM CEO Arvind Krishna
  • Tesla and SpaceX CEO Elon Musk
  • Microsoft CEO Satya Nadella
  • Alphabet and Google CEO Sundar Pichai
  • Former Google CEO Eric Schmidt
  • Meta CEO Mark Zuckerberg

Here's an excerpt from that story: “The panel, attended by more than 60 senators, according to Schumer, took place behind closed doors. Schumer said the closed forum allowed for an open discussion among the attendees, without the normal time and format restrictions of a public hearing. But Schumer said some future forums would be open to public view.

“Google’s Pichai outlined four areas where Congress could play an important role in AI development, according to his prepared remarks. First by crafting policies that support innovation, including through research and development investment or immigration laws that incentivize talented workers to come to the U.S. Second, 'by driving greater use of AI in government,' third by applying AI to big problems like detecting cancer, and finally by 'advancing a workforce transition agenda that benefits everyone.'”

This CNN article described the meeting, but also highlighted how other U.S. senators were not happy that the meeting was happening at all: “A bipartisan pair of U.S. senators sharply criticized the meeting, saying the process is unlikely to produce results and does not do enough to address the societal risks of AI.

“Connecticut Democratic Sen. Richard Blumenthal and Missouri Republican Sen. Josh Hawley each spoke to reporters on the sidelines of the meeting. The two lawmakers recently introduced a legislative framework for artificial intelligence that they said represents a concrete effort to regulate AI — in contrast to what was happening steps away behind closed doors.

“'This forum is not designed to produce legislation,' Blumenthal said. 'Our subcommittee will produce legislation.'

“Blumenthal added that the proposed framework — which calls for setting up a new independent AI oversight body, as well as a licensing regime for AI development and the ability for people to sue companies over AI-driven harms — could lead to a draft bill by the end of the year.

“'We need to do what has been done for airline safety, car safety, drug safety, medical device safety,' Blumenthal said. 'AI safety is no different — in fact, potentially even more dangerous.'”

National Public Radio (NPR) wrote this story about the meetings: “The who's who of the tech world meet with senators to debate plan to regulate AI.” Here's an excerpt:

“New Jersey Democratic Sen. Cory Booker called the discussion a 'thoughtful conversation.'

“'At the end of the day, everybody on the panel believes that government has a regulatory role,' Booker told reporters after leaving the room for a lunch break. 'And that's going to be the challenge, stepping up to the right regulatory role that can help protect us from the real issues that threaten our country and humanity.'

“The group of 22 tech experts met for two closed-door sessions held in a private Senate building meeting room.”


One headline on Sept. 14 read: “Elon Musk calls for AI 'referee' as tech moguls gather for regulation forum at US Capitol.” Here’s an excerpt:

“Tesla CEO Elon Musk called on Wednesday for a U.S. 'referee' for artificial intelligence after he, Meta Platforms CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai met with lawmakers behind closed doors at Capitol Hill for a forum on regulating AI. …

“'It's important for us to have a referee,' Musk told reporters, comparing it to sports. The billionaire, who also owns the social media platform X, added that a regulator would 'ensure that companies take actions that are safe and in the interest of the general public.'

“Musk said the meeting was a 'service to humanity' and said it 'may go down in history as very important to the future of civilization.'”

Another piece added a different twist on AI regulation: “What the U.S. Can Learn From China About Regulating AI.” Here's an excerpt:

“Over the past two years, China has enacted some of the world’s earliest and most sophisticated regulations targeting AI. On the surface, these regulations are often anathema to what U.S. leaders hope to achieve. For instance, China’s recent generative AI regulation mandates that companies uphold 'core socialist values,' whereas Schumer has called for legislation requiring that U.S. AI systems 'align with our democratic values.'

“Yet those headline ideological differences blind us to an uncomfortable reality: The United States can actually learn a lot from China’s approach to governing AI. Of course, Washington shouldn’t require that AI systems 'adhere to the correct political direction,' as one Chinese regulation mandates. But if we can look beyond the ideological content of the rules, we can learn from the underlying structure of the regulations and the process by which China has rolled them out. If taken seriously, those structure- and process-oriented lessons could be invaluable as U.S. leaders navigate a morass of AI issues over the coming years. …

“By contrast, the Chinese government has taken a targeted and iterative approach to AI governance. Instead of immediately going for one all-encompassing law that covers all of AI, China has picked out specific applications that it was concerned about and developed a series of regulations to tackle those concerns. That has allowed it to steadily build up new policy tools and regulatory know-how with each new regulation. And when China’s initial regulations proved insufficient for a fast-moving technology like AI, it quickly iterated on them.”


According to Reuters, U.S. Senate Majority Leader Chuck Schumer on Wednesday said that while regulations on artificial intelligence were certainly needed, they should not be made "too fast":

“'If you go too fast, you can ruin things,' Schumer told reporters after organizing a closed-door AI forum bringing together U.S. lawmakers and tech CEOs. The European Union went 'too fast,' he added.”

Meanwhile, Elon Musk fully expects AI regulation.

This piece offered a closer look inside the meeting, though without attributing many specifics:

“Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, Zuckerberg brought up the question of closed vs. 'open source' AI models and IBM CEO Arvind Krishna expressed opposition to the licensing approach favored by other companies, according to a person in attendance.

“There appeared to be broad support for some kind of independent assessments of AI systems, according to this person, who spoke on condition of anonymity due to the rules of the closed-door forum.

“'It was a very civilized discussion among some of the smartest people in the world,' Musk said after leaving the meeting. He said there is clearly some strong consensus, noting that nearly everyone raised their hands after Schumer asked if they believed some regulation is needed.”


A new Geneva Association report provides insight into the evolving AI regulatory landscape for insurers. The report analyzes the varying approaches to AI regulation and explores their impact on the insurance industry. Taking stock of these developments, the report provides key considerations for regulators and policymakers that encourage innovation while ensuring adequate protection for customers. In particular, it finds that existing, technology-neutral insurance regulatory frameworks can be leveraged to manage AI-related risks specific to insurance, whereas cross-sectoral regulation could hinder innovation.

“Jad Ariss, Managing Director of The Geneva Association, said: 'An AI-enabled approach to doing business allows insurers to offer more personalized products, and improved efficiency and costs may make insurance more affordable and attractive. Regulatory frameworks need to evolve in tandem, however, to ensure ethical, accountable and equitable use of AI technologies, without hindering insurers’ ability to innovate. The fast-moving nature of AI developments makes this challenging but a balanced approach to data governance with a focus on customer outcomes will help promote innovation in a fair manner.'

“Dennis Noordhoek, Director Public Policy & Regulation at The Geneva Association and author of the report, said: 'Though certain risks, such as compromised data privacy and potential discrimination, may be heightened by the growing use of AI in insurance, these risks are not new. Accordingly, they are already captured by existing regulatory frameworks, which can be built on and tailored to the use of AI in insurance. Also, as highlighted in the report, coherent approaches across jurisdictions would go a long way to helping insurers navigate the challenges and opportunities related to AI more effectively.'”


Indiana Sen. Todd Young appeared on CNBC for an interview on Thursday, Sept. 14, on the topic of AI regulation and the results of the meeting. He predicted that there will be bipartisan agreement on this topic and that new legislation (new regulation) is needed on AI. Young also said that this work should:
  • Proceed use case by use case
  • Be developed only after the risks are understood (toys versus toxins)
  • Expect some disagreements along the way
  • Ensure the U.S. leads the world, using our values.

Young was asked why the meeting was not held in public. He replied that:
  • It was important to have unguarded conversations
  • Potential labor disruptions were discussed
  • There are national security implications
  • Closed doors allowed open and honest statements
  • A balance is needed between open source and closed source information.

My recommendation is to watch this topic closely, because I am confident that much more will be leaked out about this meeting, and potential follow-up meetings, over the months and years ahead.

Finally, the Washington Times reported that the Biden administration is preparing further executive action on AI, with an executive order expected in the weeks ahead.

Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.