AI has a discrimination problem. In the banking sector, the consequences can be severe


  • When it comes to banking and financial services, the problem of artificial intelligence amplifying existing human biases can be serious.
  • Deloitte notes that AI systems are ultimately only as good as the data they receive: incomplete or unrepresentative datasets could limit the AI’s objectivity, while biases in the development teams that train such systems could perpetuate that cycle of prejudices.
  • Lending is a prime example of where the risk of an AI system being biased against marginalized communities can rear its head, according to former Twitter executive Rumman Chowdhury.

Artificial intelligence algorithms are increasingly used in financial services, but carry serious risks of discrimination.


AMSTERDAM: Artificial intelligence has a racial bias problem.

From biometric identification systems that disproportionately misidentify Black and minority faces, to speech recognition software that fails to distinguish voices with distinct regional accents, AI has a serious problem when it comes to discrimination.

And the problem of amplifying existing biases can be even more serious when it comes to banking and financial services.

Deloitte notes that AI systems are ultimately only as good as the data they receive: incomplete or unrepresentative datasets could limit the AI’s objectivity, while biases in the development teams that train such systems could perpetuate that cycle of prejudices.

Nabil Manji, head of cryptocurrency and Web3 at FIS’s Worldpay, said a key thing to understand about AI products is that the strength of the technology depends heavily on the source material used to train it.

“The thing about how good an AI product is, there are two variables,” Manji told CNBC in an interview. “One is the data it has access to, and the second is how good the large language model is. That’s why on the data side, you see companies like Reddit and others, they’ve come out publicly and said we won’t allow companies to scrape our data, you’ll have to pay us for it.”

As for financial services, Manji said many of the back-end data systems are fragmented across different languages and formats.

“None of this is consolidated or harmonized,” he added. “This will cause AI-powered products to be much less effective in financial services than they might be in other verticals or other companies where they have uniformity and more modern systems or access to data.”

Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data hidden away in traditional banking’s messy systems.

However, he added that banks, as heavily regulated and slow-moving institutions, are unlikely to move as fast as their nimbler tech counterparts in adopting new AI tools.

“There’s Microsoft and Google, which over the last decade or two have been seen as driving innovation. [Banks] can’t keep up with that speed. And then you think about financial services. Banks aren’t known for being fast,” Manji said.

Rumman Chowdhury, Twitter’s former head of machine learning ethics, transparency and accountability, said lending is a prime example of how an AI system’s bias against marginalized communities can rear its head.

“Algorithmic discrimination is actually very tangible in lending,” Chowdhury said on a panel at Money20/20 in Amsterdam. “Chicago had a history of literally denying [loans] to primarily Black neighborhoods.”

In the 1930s, Chicago was notorious for the discriminatory practice of “redlining,” in which the creditworthiness of properties was strongly determined by the racial demographics of a given neighborhood.

“There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all the districts that were primarily African American, and they wouldn’t give them loans,” she added.

“Fast forward a few decades later, and you’re developing algorithms to determine the riskiness of different districts and individuals. And while you might not include the data point of someone’s race, it’s implicitly collected.”
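Chowdhury’s point about implicit collection is mechanical, not just rhetorical: when a feature such as neighborhood is strongly correlated with race, a model trained without any race column can still reproduce the same pattern of denials. The toy sketch below (entirely synthetic data, illustrative feature names, scikit-learn assumed) shows the effect.

```python
# A toy illustration (synthetic data, hypothetical feature names) of the proxy
# effect Chowdhury describes: race is never given to the model, but a correlated
# feature like neighborhood carries the same signal into its predictions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: `group` stands in for a protected attribute.
group = rng.integers(0, 2, n)

# A redlining-shaped history: neighborhood strongly tracks group membership...
neighborhood = np.where(rng.random(n) < 0.9, group, 1 - group)

# ...and historical loan denials tracked neighborhood, not true repayment risk.
denied = (rng.random(n) < np.where(neighborhood == 1, 0.6, 0.2)).astype(int)

# Train on income and neighborhood only; race is "excluded" from the features.
income = rng.normal(50, 10, n)
X = np.column_stack([income, neighborhood])
model = LogisticRegression().fit(X, denied)

# The model's predicted denial rate still splits sharply along group lines.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted denial rate = {pred[group == g].mean():.2f}")
```

In this synthetic setup, the model ends up denying one group at several times the rate of the other, despite never seeing the protected attribute.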

Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization that aims to empower Black women in the industry, told CNBC that when AI systems are used for loan approval decisions, there is a risk of replicating the biases present in the historical data used to train the algorithms.

“This can result in automatic loan denials for individuals from marginalized communities, reinforcing racial or gender disparities,” Bush added.

“It is imperative that banks recognize that implementing AI as a solution can inadvertently perpetuate discrimination,” she said.

Frost Li, a developer who has worked in artificial intelligence and machine learning for more than a decade, told CNBC that the “customization” dimension of AI integration can also be problematic.

“What’s interesting about AI is how we select ‘core features’ for training,” said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. “Sometimes, we select features that are unrelated to the outcomes we want to predict.”

When AI is applied to banking, Li said, it becomes harder to identify the “culprit” behind a biased outcome, because everything is convoluted together in the calculation.

“A good example is how many fintech startups are specifically designed for foreigners, because a Tokyo University graduate won’t be able to get any credit card even if he works at Google; yet a person can easily get one from a community college credit union, because bankers know the local schools better,” Li added.

Generative AI is not typically used to create credit scores or to assess consumer risk.

“That’s not what the tool was created for,” said Niklas Guske, chief operating officer of Taktile, a startup that helps fintechs automate decision-making.

Instead, Guske said, its most powerful applications are in preprocessing unstructured data, such as classifying transactions from text files.

“These signals can then be plugged into a more traditional underwriting model,” Guske said. “Thus, Generative AI will improve the quality of the underlying data for such decisions rather than replacing common scoring processes.”
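A minimal sketch of the pattern Guske describes, with a keyword lookup standing in for the generative model and entirely illustrative categories, weights, and function names: free-text transactions are classified upstream, and only the resulting categorical signals reach a conventional score.

```python
# Sketch of generative AI as a preprocessing step for underwriting.
# The keyword classifier is a stand-in for a generative model;
# `score_applicant` and its weights are illustrative, not a real scorecard.

from collections import Counter

# Hypothetical category keywords; a generative model would replace this step.
CATEGORIES = {
    "gambling": ["casino", "bet", "poker"],
    "payroll": ["salary", "payroll", "wages"],
    "rent": ["rent", "landlord"],
}

def classify(description: str) -> str:
    """Map a free-text transaction description to a category."""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in text for word in keywords):
            return category
    return "other"

def score_applicant(transactions: list[str]) -> float:
    """Feed the classified signals into a simple, traditional-style score."""
    counts = Counter(classify(t) for t in transactions)
    score = 600.0  # illustrative base score
    score += 20 * counts["payroll"]   # regular income is a positive signal
    score -= 30 * counts["gambling"]  # discretionary risk is a negative one
    return score

print(score_applicant(["ACME Corp payroll deposit", "Rent to landlord", "Lucky Casino"]))
```

The design point is the one Guske makes: the generative step only improves the inputs, while the decision itself stays in a conventional, auditable model.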

Discrimination claims can also be difficult to prove. Apple and Goldman Sachs, for example, were accused of giving women lower limits for the Apple Card. But those claims were dismissed by the New York Department of Financial Services after the regulator found no evidence of discrimination based on sex.

The problem, according to Kim Smouter, director of the anti-racist group European Network against Racism, is that it can be difficult to prove whether discrimination based on artificial intelligence has actually taken place.

“One of the difficulties in the mass deployment of AI,” he said, “is the opaqueness in how these decisions are made and what remedial mechanisms exist if a racialized individual even becomes aware that there is discrimination.”

“Individuals have little understanding of how AI systems work, and their individual case may, in fact, be the tip of a system-wide iceberg. As a result, it is also difficult to detect specific cases where things go wrong,” he added.

Smouter cited the example of the Dutch child care benefits scandal, in which thousands of claimants were wrongly accused of fraud. The Dutch government was forced to resign after a 2020 report found that victims had been “treated with an institutional bias.”

This, Smouter said, “demonstrates how quickly such malfunctions can spread and how difficult it is to prove them and obtain redress once they are discovered and significant, often irreversible damage has occurred in the meantime.”

Chowdhury says a global regulatory body, such as the United Nations, is needed to address some of the risks surrounding AI.

While AI has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the technology’s moral and ethical soundness. Among the main concerns expressed by industry insiders are disinformation; racial and gender biases embedded in AI algorithms; and “hallucinations” generated by tools similar to ChatGPT.

“I’m a little concerned that, due to generative AI, we are entering this post-truth world where nothing we see online is trustworthy, neither the text, nor the video, nor the audio, but then how do we get our information? And how do we make sure that the information has a high amount of integrity?” said Chowdhury.

Significant AI regulation is on its way, but given how long regulatory proposals like the European Union’s AI Act take to come into force, some worry that it won’t arrive fast enough.

“We call for more transparency and accountability of algorithms and how they operate, and a layman’s declaration that allows people who are not AI experts to judge for themselves, proof of tests and publication of results, independent complaints process, audits and periodic reporting, racialized community involvement as technology is being designed and considered for implementation,” Smouter said.

The AI Act, the first regulatory framework of its kind, incorporates a fundamental rights approach and concepts such as redress, according to Smouter, who added that the regulation is expected to come into force in about two years.

“It would be great if this period could be shortened to ensure transparency and accountability are at the heart of innovation,” he said.


