NIST Proposal Aims to Reduce Bias in Artificial Intelligence

The National Institute of Standards and Technology recently released a proposal aimed at reducing the risk of bias in the use of artificial intelligence. The agency is seeking comments from the tech community.

The National Institute of Standards and Technology (NIST) recently announced the publication of “A Proposal for Identifying and Managing Bias in Artificial Intelligence.”

The proposal outlines a possible approach for reducing the risk of bias in the use of artificial intelligence (AI) technology, and the agency is seeking public comments through Aug. 5 to strengthen that effort.

Studies have shown that AI can be biased against people of color, and while legislative efforts are in progress to tackle the issue from a policy standpoint, much of the problem hinges on how the technology functions at its most basic level.

“We want to bring together the community of AI developers, of course, but we also want to involve psychologists, sociologists, legal experts and people from marginalized communities,” said Elham Tabassi, chief of staff in NIST’s Information Technology Laboratory and a member of the National AI Research Resource Task Force, in the announcement.

The proposal seeks to help industries that use AI technology develop a risk-based framework. It notes that while reducing risk in these products is “critical,” the practice remains “insufficiently defined.”

The announcement details some of the discriminatory outcomes that can result from AI systems, such as wrongful arrests or the unfair rejection of qualified job applicants.

NIST has identified several characteristics needed in AI systems in order to create public trust: accuracy, explainability and interpretability, privacy, reliability, robustness, safety and security. These characteristics must also be paired with a reduction of harmful bias.

NIST’s proposed approach involves three stages for reducing that bias: predesign, design and development, and deployment.

The first stage, predesign, is where AI products and their parameters are defined and a product’s central purpose is determined. In this phase, thinking ahead to possible problems is critical.

The next stage is design and development, where the engineering and modeling take place. In this stage, software designers must pay close attention to context and how predictions may affect different populations.
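The proposal does not prescribe specific tooling, but a minimal sketch of that kind of attention might look like the following Python, which compares a model’s positive-prediction rates across demographic groups (a demographic parity check). The column names and sample data here are illustrative assumptions, not part of NIST’s document.

    # Illustrative sketch (not from the NIST proposal): compare the rate of
    # positive predictions across demographic groups. Column names "group"
    # and "prediction" are assumptions for this example.
    import pandas as pd

    def positive_rate_by_group(df: pd.DataFrame) -> pd.Series:
        """Share of positive predictions (1) within each demographic group."""
        return df.groupby("group")["prediction"].mean()

    def demographic_parity_gap(df: pd.DataFrame) -> float:
        """Largest difference in positive-prediction rates between groups."""
        rates = positive_rate_by_group(df)
        return float(rates.max() - rates.min())

    # Hypothetical hiring model that flags 60% of group A but 20% of group B.
    df = pd.DataFrame({
        "group":      ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
        "prediction": [ 1,   1,   1,   0,   0,   1,   0,   0,   0,   0 ],
    })
    print(positive_rate_by_group(df))  # A: 0.6, B: 0.2
    print(demographic_parity_gap(df))  # 0.4 -- a gap worth investigating

A large gap does not by itself prove unfair treatment, but surfacing it during development is exactly the kind of context-checking the design stage calls for.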

Finally, in the deployment stage, it is important that products continue to be monitored. In some cases, AI systems are released to the public with very little oversight of what follows.
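Again as an illustration rather than anything specified in the proposal, ongoing monitoring could be as simple as recomputing per-group outcome rates on each batch of live predictions and flagging widening gaps. The threshold and record format below are assumptions.

    # Illustrative sketch of post-deployment monitoring (an assumption, not
    # NIST's method): recompute per-group outcome rates on each batch of
    # live predictions and alert when the gap between groups widens.
    from collections import defaultdict

    GAP_THRESHOLD = 0.2  # illustrative alert threshold

    def check_batch(records):
        """records: iterable of (group, prediction) pairs from live traffic."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for group, prediction in records:
            totals[group] += 1
            positives[group] += prediction
        rates = {g: positives[g] / totals[g] for g in totals}
        gap = max(rates.values()) - min(rates.values())
        if gap > GAP_THRESHOLD:
            print(f"ALERT: outcome gap {gap:.2f} across groups {rates}")
        return rates

    check_batch([("A", 1), ("A", 1), ("B", 0), ("B", 0), ("B", 1)])
    # A: 1.00, B: 0.33 -> gap of 0.67 triggers the alert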

The proposal concludes that while bias is neither new nor unique to AI, identifying and reducing it can help enable responsible use of the technology. According to one of the report’s authors, NIST’s Reva Schwartz, bias exists throughout the AI life cycle.

“Determining methods for identifying and managing it is a vital next step,” Schwartz said.

NIST is welcoming public feedback on the approach outlined in the proposal from people both within and outside the tech industry. Comments can be submitted by downloading and completing a template and sending it via email to ai-bias@list.nist.gov.