U.S. House Mulls Ethical AI Frameworks for Financial Sector

The U.S. House Committee on Financial Services’ Task Force on Artificial Intelligence is considering how to prevent AI from perpetuating old forms of discrimination or introducing new ones.

New York University’s Arthur L. Carter Journalism Institute associate professor Meredith Broussard testifying during the U.S. House Committee on Financial Services hearing.
(Image courtesy U.S. House Committee on Financial Services)
The U.S. House Committee on Financial Services held a hearing Wednesday to examine the risks of harmful artificial intelligence applications in the financial services space and to consider what regulatory frameworks could keep AI from perpetuating existing discriminatory trends or introducing new ones. The technology is already actively at play in the industry, and questions remain over how government can best ensure its safe, ethical use.

The House Financial Services’ Task Force on Artificial Intelligence called the hearing, which came on the heels of news that the White House Office of Science and Technology Policy (OSTP) will be crafting an AI bill of rights. OSTP leadership announced their plans in an Oct. 8 opinion piece in Wired. Federal financial regulators also turned their focus onto financial-sector AI earlier this year, seeking comment from March to June 2021 about topics such as risks and potential need for new regulation or guidance.

Wednesday’s task force hearing convened witnesses who reflected perspectives from academia, financial services, the software industry, a policy think tank and a nonprofit aimed at combating unconscious bias in AI.

They generally called for scrutinizing and documenting practices at all stages of an AI model’s life cycle — from conceiving of the problem the system is intended to solve, to selecting data sets and developing the model, to continually checking its performance for as long as it remains in use.

Miriam Vogel, president and CEO of the nonprofit EqualAI, and Meredith Broussard, associate professor at New York University’s Arthur L. Carter Journalism Institute, also underscored the need to diversify the groups creating and regulating AI so that the viewpoints represented better reflect the overall population. A wider variety of perspectives makes it more likely that one person can fill in another’s blind spots, and vice versa.

“Silicon Valley and its developers tend to be very pale, male and Yale,” Broussard said. “When we have a small and homogeneous group of people creating AI, that AI then gets the collective blind spots of the community of people who are creating the algorithms.”

The quest for better practices also raises questions over where, exactly, ethical AI enforcers should set the bar for fairness — including whether algorithmic decision-making should be held to the same standard as human decision-making, or to a higher one, said task force chair Rep. Bill Foster, D-Illinois.

“As we start defining frameworks for developing and performance testing AI, it seems possible that we're starting to place requirements on any AI that are more strict than we would ever place on human decision-makers,” Foster said. “For example, most of our witnesses today have advocated for defining minimum diversity standards for the training data sets for AI. But we'd never considered requiring that a human bank officer would have a minimum number of friends of different races of protected classes, even though it might arguably result in more fair decision-making.”

Many countries’ policies on AI in the financial sector are in the “adolescent stage,” according to Jeffery Yong, principal adviser at the Bank for International Settlements’ Financial Stability Institute. He co-authored a study of 19 financial-sector AI policies from national, regional and international authorities and found a similar set of core values emerging in many of them: accountability, ethics, fairness, reliability and transparency.

Identifying ethical values and better practices, however, is only part of the equation; getting companies to follow them is another.

Private firms often compete to bring products to market faster than their rivals and so currently have little motivation to spend time evaluating AI-powered products for potentially harmful impacts, said Meg King, director of the Science and Technology Innovation Program at the Wilson Center think tank. Consumers are showing growing demand for fairness, but companies generally list only vague ethical principles without demonstrating clear action, she said.

AI ALREADY AT PLAY

AI is in active use in the financial sector. Financial institutions like Bank of America harness AI for virtual assistant chatbots, while fintech company Ocrolus uses machine learning to parse loan applications and automate some approvals and rejections. The government has adopted such tools as well, with the U.S. Small Business Administration launching an AI-powered platform in August to evaluate Paycheck Protection Program (PPP) loan forgiveness requests and speed up processing.

Results of the technology have been mixed.

Foster said recent reports find that AI-powered fintech apps approved more minority PPP loan applicants than human bankers did. He did not specify which study he was referring to, but an Oct. 11, 2021, New York University working paper, which has not yet been peer reviewed, found “evidence that when small banks automate their lending processes, and thus reduce human involvement in the loan origination process, their rate of PPP lending to Black-owned businesses increases, with larger effects in places with more racial animus.”

The Independent Community Bankers of America, a trade group for small banks, meanwhile criticized the NYU study for “guessing” at applicants’ races (a step the authors took because lenders did not always gather this data), per the New York Times. The study’s authors wrote that they used details like business locations and owners’ names to inform assumptions about applicants’ race. A bank’s loan officers would likely be making assumptions based on the same details, meaning that this method could reflect how beliefs about applicants’ races — accurate or not — might influence approval decisions, they said.

While Foster highlighted a potential positive of automation, Broussard recalled a more negative 2019 incident involving the Apple Card, offered jointly by Apple and Goldman Sachs. The credit card came under fire when two male customers alleged that the product’s algorithms offered them significantly higher credit limits than their wives, with whom their finances were intertwined, suggesting a gender bias in the tool. One of the customers was Apple co-founder Steve Wozniak, who said he was offered a credit limit 10 times higher than his wife’s, despite sharing all bank and credit card accounts, as well as assets, with her, according to Reuters.

VETTING AI

Aaron Cooper, vice president for global policy at the industry advocacy group BSA, also known as The Software Alliance, said companies need to examine AI for potential unexpected consequences and discriminatory effects at various points throughout the design and deployment process. That includes clearly laying out what the AI is expected to do, what data it will consider, what historical biases may be present in the data sets and how the model was tested.

For example, loan approval algorithms trained on historical data sets are likely to repeat long-running discriminatory mortgage lending practices, Broussard noted. Synthetic data could potentially be added to the data sets to help counteract those biases, King suggested.
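
To make those two ideas concrete, here is a minimal, hypothetical sketch in Python: it measures approval rates by group in a toy set of historical loan records, then naively oversamples approved records from the under-approved group as stand-in “synthetic” data. The record fields, group labels and jittering approach are illustrative assumptions, not anything described at the hearing, and real bias-mitigation work would rely on vetted methods and domain review.

```python
import random
from collections import defaultdict

# Toy historical lending records: each has a (protected) group label and an outcome.
# Field names and values are hypothetical illustrations, not a real data schema.
records = [
    {"group": "A", "income": 72000, "approved": True},
    {"group": "A", "income": 58000, "approved": True},
    {"group": "A", "income": 41000, "approved": False},
    {"group": "B", "income": 69000, "approved": False},
    {"group": "B", "income": 55000, "approved": True},
    {"group": "B", "income": 43000, "approved": False},
]

def approval_rates(rows):
    """Approval rate per group -- a simple check for disparity in the historical data."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in rows:
        totals[r["group"]] += 1
        approvals[r["group"]] += r["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

def augment_with_synthetic(rows, target_group, n_synthetic):
    """Naively oversample approved applicants from the under-approved group
    by jittering existing approved records (a crude stand-in for synthetic data)."""
    approved_pool = [r for r in rows if r["group"] == target_group and r["approved"]]
    synthetic = []
    for _ in range(n_synthetic):
        base = random.choice(approved_pool)
        synthetic.append({
            "group": target_group,
            "income": int(base["income"] * random.uniform(0.9, 1.1)),  # small jitter
            "approved": True,
        })
    return rows + synthetic

print("Before:", approval_rates(records))
balanced = augment_with_synthetic(records, target_group="B", n_synthetic=2)
print("After: ", approval_rates(balanced))
```

Even a crude check like this makes a disparity in the historical data visible before a model is trained on it.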

King said that AI must also be continually vetted, because these models evolve over time as they process more data, and Cooper advocated for companies to always provide channels for the people affected by an AI system to voice their concerns.
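
One simple form such continual vetting could take is a rolling fairness check on a deployed model’s decisions. The sketch below is an assumption-laden illustration rather than a description of any system discussed at the hearing: the metric (an approval-rate gap between groups), the window size and the alert threshold are all hypothetical choices.

```python
from collections import deque

class FairnessMonitor:
    """Tracks the approval-rate gap between groups over a rolling window of
    recent decisions and flags drift past a chosen threshold."""

    def __init__(self, window_size=500, max_gap=0.10):
        self.decisions = deque(maxlen=window_size)  # recent (group, approved) pairs
        self.max_gap = max_gap

    def record(self, group, approved):
        self.decisions.append((group, approved))

    def approval_gap(self):
        counts, approvals = {}, {}
        for group, approved in self.decisions:
            counts[group] = counts.get(group, 0) + 1
            approvals[group] = approvals.get(group, 0) + int(approved)
        rates = [approvals[g] / counts[g] for g in counts]
        return max(rates) - min(rates) if len(rates) >= 2 else 0.0

    def check(self):
        gap = self.approval_gap()
        if gap > self.max_gap:
            # In practice this would notify a model-risk or compliance team.
            print(f"ALERT: approval-rate gap {gap:.2f} exceeds {self.max_gap:.2f}")
        return gap

# Example: feed in decisions as the deployed model makes them.
monitor = FairnessMonitor(window_size=100, max_gap=0.10)
for group, approved in [("A", True), ("A", True), ("B", False), ("B", True), ("B", False)]:
    monitor.record(group, approved)
monitor.check()
```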

Requiring firms to document their thinking and the steps they took both forces them to consider potential risks carefully and makes it easier to go back and trace the source of a problem, should the algorithms be found to cause damaging effects, Cooper said.

Developers and adopters of AI can try to anticipate how algorithms might replicate old forms of discrimination, but not all harms are easy to predict.

“AI breaks often in unpredictable ways at unpredictable times,” King said.

That can make it important to anticipate how AI will behave once deployed, and King recommended developers test their offerings in sandboxes that mimic real-world situations. She also advocated that any body overseeing financial-sector AI include members from other sectors, too, because increasing the range of perspectives can raise the chance of catching unexpected outcomes.

Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.