Tech Leaders Release AI Safety Principles With Federal Backing

The guidelines, announced by leading venture capitalists with the backing of U.S. Secretary of Commerce Gina Raimondo, lay out how software developers should use the tech responsibly, in concert with moneyed backers.

(TNS) — Trying to get their arms around the risks and challenges posed by artificial intelligence, technology leaders have proposed yet another responsible AI framework, this time aimed at startups and investors.

The new guidelines, announced in San Francisco on Tuesday by leading venture capitalists with the backing of U.S. Secretary of Commerce Gina Raimondo, seek to lay out how software developers should use the technology responsibly, in concert with their moneyed backers.

During a brief press availability at a South of Market event space, followed by a closed-door roundtable with venture capitalists and startup leaders, Raimondo portrayed the pledge as building on the commitments made by larger AI companies during White House events in the spring.

The Biden administration secured voluntary commitments from leading AI developers including Amazon, Google and OpenAI promising they would develop the technology responsibly.

"We can't just have (those commitments) from the biggest AI companies," said Raimondo, a former venture capitalist herself, during the gathering. Commitments to safely developing AI have to be "throughout the ecosystem" of tech companies, she added.

The guidelines come from Responsible Innovation Labs, a nonprofit group of tech investors and executives headed by Gaurab Bansal, who previously worked as a consultant and White House staffer, according to his LinkedIn profile.

The nonprofit said 35 companies and investor groups had signed the pledge, including Mayfield, General Catalyst, Felicis, Bain Capital, IVP, and Lux Capital. They are committing to five broad-strokes principles organized around the idea that "it is critical that startups incorporate responsible AI practices into product development from the outset."

The nonprofit said it had developed the voluntary commitments with feedback from the Department of Commerce, as well as AI experts in the private sector, academia and civil society.

The principles aim to "secure organizational buy-in on responsible AI," require transparency from companies about their use of AI, and convince them to plan ahead for the risks and benefits of using the technology. The requirements also call for product safety testing, as well as for companies to "make regular and ongoing improvements."

Not everyone in Silicon Valley welcomed the latest framework with open arms. Marc Andreessen, founding partner of the venture capital firm Andreessen Horowitz and a pioneer in the creation of web browsers, reposted an announcement about the framework on X (formerly Twitter), writing "Absolutely not."

Andreessen had previously released his own lengthy "Techno-Optimist Manifesto," along with other statements such as one titled "Why AI Will Save the World."

As with other AI principles released recently, the set announced Tuesday is laid out in general terms and represents a voluntary commitment with no clear enforcement mechanism. Experts and lawmakers have avoided making specific pronouncements about where AI may not belong, such as in election advertising.

"AI is the defining technology of our generation," Raimondo said in a statement before the event. "Voluntary commitments like the protocol announced today demonstrate important leadership from the private sector."

The protocols come after U.S. Secretary of Labor Julie Su told the Chronicle earlier this month that organized labor could be a key bulwark against any labor market disruptions driven by AI technology. The technology has the potential, among other applications, to displace call center agents with chatbots, while the battle between the Teamsters union and self-driving vehicle companies has already reached the California Legislature.

President Joe Biden also released a broad executive order last month regulating artificial intelligence and its developers. The administration requires creators of the most powerful AI tools to submit their technology for safety testing, and sets out plans for government use of the technology, as well as how it can be used in workplaces, schools and a range of other settings.

That order was received positively in some quarters of the tech industry, with tech lobbying group TechNet saying it would "strengthen America's AI leadership."

California Gov. Gavin Newsom also released an executive order of his own in September, focusing among other things on how the emerging technology might be used by various state agencies to improve the services they deliver.

The Biden order also required departments to appoint AI czars, something Su told the Chronicle the Labor Department is working on.

It was not clear if Raimondo, who is attending this week's APEC summit in San Francisco, had yet appointed an AI point person for her department. She did not take questions during her brief remarks to the press Tuesday.

Governments the world over are concerned about powerful tools like OpenAI's GPT-4, which can potentially disseminate misinformation or serve more sinister applications, such as providing instructions for building weapons.

© 2023 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.