Lawmakers Reintroduce Bill to Regulate Use of AI Systems

Last week, several members of Congress reintroduced the Algorithmic Accountability Act, a bill that would regulate generative AI and other automated decision systems to protect the public from potential harm.

The legislation's sponsors are Sen. Ron Wyden, D-Ore., Sen. Cory Booker, D-N.J., and Rep. Yvette Clarke, D-N.Y.

The Algorithmic Accountability Act of 2023 applies to new generative AI systems as well as other AI and automated decision systems. Specifically, it aims to create new protections for people affected by AI used in decision-making for housing, credit, education and other high-impact uses.

There has been debate about how Congress can effectively regulate AI, with some experts arguing the best path is to pass specific legislation such as this act. Other experts have argued that a national commission or even a single federal regulatory agency could better address the rapidly evolving field. In any case, experts are coming together across sectors to determine an effective path forward.

The bill was first introduced in April 2019. Its primary purpose was to require the Federal Trade Commission (FTC) to issue rules under which companies would test AI-powered systems for accuracy, fairness, bias, discrimination, privacy and security.

The bill requires that companies conduct impact assessments for such factors when using AI to make critical decisions; it also requires reporting certain impact-assessment documentation to the FTC.
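
The bill's text does not prescribe a particular testing method, but as a rough illustration, a disparity check of the kind such an impact assessment might include could compare a system's error rates across demographic groups. The Python sketch below uses entirely hypothetical data and group labels:

```python
from collections import defaultdict

# Hypothetical audit records: (group, model_prediction, actual_outcome).
# In a real assessment these would come from the company's own systems.
records = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

# Tally errors per group: group -> [error_count, total]
errors = defaultdict(lambda: [0, 0])
for group, predicted, actual in records:
    errors[group][0] += int(predicted != actual)
    errors[group][1] += 1

# Compare error rates across groups; a large gap between groups is the
# kind of disparity that would flag a system for closer review.
rates = {g: e / n for g, (e, n) in errors.items()}
for group, rate in sorted(rates.items()):
    print(f"{group}: error rate {rate:.0%}")
print(f"disparity gap: {max(rates.values()) - min(rates.values()):.0%}")
```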

In addition, the bill would create a public repository at the FTC where consumers and advocates can review how companies have automated critical decisions. It would also require the FTC to publish an anonymized, aggregated report on trends each year, and it would add 75 staff members to the commission to help enforce the law.

This legislation has been endorsed by numerous civil society organizations and experts, including Access Now, the Anti-Defamation League, the Center for Democracy and Technology and New America’s Open Technology Institute.

“We know of too many real-world examples of AI systems that have flawed or biased algorithms: automated processes used in hospitals that understate the health needs of Black patients; recruiting and hiring tools that discriminate against women and minority candidates; facial recognition systems with higher error rates among people with darker skin; and more,” said Booker in an announcement, arguing that this bill would help to create a safer future in the face of evolving AI systems.

As Clarke noted in the announcement, algorithms have so far remained effectively exempt from U.S. anti-discrimination laws, an exemption she argues cannot be allowed to continue.

Earlier this year, the federal government took action to advance responsible AI use. At the state level, Virginia and Pennsylvania have already issued guidance on AI use; at the local level, cities like San Jose, Calif., are doing the same.

Earlier this month, during a virtual event, U.S. Rep. Ted Lieu sorted harmful AI systems into three categories: those that “can destroy the world,” those that can potentially kill one or more people, and those that may not directly threaten lives but can still cause widespread harm. This legislation targets the third category, aiming to mitigate the harm that biased systems can cause.