The Leadership Conference on Civil and Human Rights is a coalition of more than 240 national organizations working to protect the civil rights of U.S. residents. The Leadership Conference Education Fund, its education and research arm, was founded in 1969 to help build public will for public policy centered on civil rights.
In an uncertain era of AI regulation, frameworks can offer a road map for mitigating risk during AI implementation. The National Institute of Standards and Technology, for example, has released an AI risk management framework that is informing decision-making in states including Colorado.
The new Innovation Framework aims to give entities guidance on how to invest in, create and use AI systems in ways that protect civil rights.
“American-made AI will succeed when our rights lead the way,” Maya Wiley, Leadership Conference president and CEO, said in a statement. Securing AI leadership status for the U.S. is one of the current presidential administration’s priorities.
The framework lays out four foundational values to support a business strategy: centering civil and human rights in the design process, treating AI as a tool rather than a solution, making human impact and oversight integral to AI, and ensuring AI is environmentally sustainable.
Ten life cycle pillars outlined in the framework aim to put those values into practice. For example, AI design should start with identifying appropriate use cases, and historically marginalized populations should be centered in that process. Responsible AI development should rely on representative data and include protections for sensitive data. AI tools should be assessed for bias and potential discriminatory impacts, and even after deployment there should be consistent monitoring and mechanisms for accountability.
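The framework does not prescribe specific tooling, but a bias assessment of the kind it calls for often starts with something simple: comparing how often an AI system selects people from different demographic groups. The sketch below is a minimal, hypothetical illustration of that idea using the common "four-fifths" disparate impact ratio; the column names and the 0.8 threshold are illustrative assumptions, not part of the Innovation Framework itself.

```python
# Minimal illustration of one kind of bias check: comparing selection rates
# across demographic groups and flagging any group whose rate falls below
# 80% of the highest-rate group (the "four-fifths" rule of thumb).
# The "group"/"selected" fields and 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def disparate_impact_ratios(records, threshold=0.8):
    """Return each group's selection-rate ratio versus the highest-rate group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected count, total count]
    for r in records:
        counts[r["group"]][0] += 1 if r["selected"] else 0
        counts[r["group"]][1] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items() if total}
    top = max(rates.values())
    return {g: (rate / top, rate / top < threshold) for g, rate in rates.items()}

# Example: two groups with noticeably different selection rates.
sample = (
    [{"group": "A", "selected": True}] * 60 + [{"group": "A", "selected": False}] * 40 +
    [{"group": "B", "selected": True}] * 35 + [{"group": "B", "selected": False}] * 65
)
for group, (ratio, flagged) in disparate_impact_ratios(sample).items():
    print(f"{group}: ratio={ratio:.2f} flagged={flagged}")
```

A check like this is only a starting point; the framework's pillars pair such measurements with ongoing monitoring and accountability mechanisms after deployment.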
“Private industry doesn’t have to wait on Congress or the White House to catch up; they can start implementing this Innovation Framework immediately,” Koustubh “K.J.” Bagchi, Center vice president, said in a statement.
The framework aims to be a resource for companies, civil society and others advocating for responsible private-sector AI development and deployment. It can be used by AI investors, developers and deployers, including C-suite leaders, product teams and engineers.
The framework was developed by gathering input from stakeholders and holding feedback sessions with the civil rights community, the Center's Advisory Council and companies.