Algorithmic gatekeepers: The hidden bias in AI payments

by George Iddenden


As AI systems decide who gets loans and who is flagged as a fraud risk, ethical questions loom large. Are algorithms the new gatekeepers of financial fairness—or hidden enforcers of bias?

Imagine waking up to find your bank account frozen because an AI algorithm flagged you as a fraud risk. Who do you call to fix it, especially when the decision was made by a machine? As AI takes on a greater role in payment decisions, its convenience comes with ethical dilemmas that financial institutions can no longer ignore. From personalising customer interactions to preventing fraud, AI is rewriting the rules of finance—but at what cost?

Who is accountable when an AI system makes a flawed decision? How can businesses ensure their algorithms aren’t reinforcing bias? And what happens when AI-powered fraud detection systems wrongly flag legitimate transactions, locking customers out of their own money? 

FScom Senior Manager Anna Sweeney tells Payments Review about how much AI is impacting payments: “AI is increasingly pivotal in the payments industry, especially for fraud detection and prevention. Firms are leveraging innovative AI techniques to assess factors such as behavioural biometrics, device intelligence, IP data, digital footprints, and network analysis to assign fraud risk scores to applicants before they officially apply.” 

Sweeney believes these advanced systems can greatly enhance fraud detection accuracy, but they also introduce a significant risk of amplifying or perpetuating biases, potentially disadvantaging entire demographics of users. 
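
To make the idea concrete, here is a minimal sketch, in Python, of how such signals might be combined into a single pre-application risk score. The signal names, scales, and weights are illustrative assumptions, not any firm's actual model:

```python
from dataclasses import dataclass

@dataclass
class ApplicantSignals:
    """Illustrative signals of the kind Sweeney describes; names and 0-1 scales are assumptions."""
    typing_speed_deviation: float   # behavioural biometrics: 0 (typical) to 1 (highly unusual)
    device_reputation: float        # device intelligence: 0 (trusted) to 1 (unknown or emulated)
    ip_risk: float                  # IP data: 0 (consistent residential) to 1 (anonymising proxy)
    digital_footprint: float        # 0 (long-established email/phone) to 1 (newly created)
    network_links_to_fraud: float   # network analysis: 0 (no links) to 1 (directly linked)

# Hand-picked weights for illustration only; a production system would learn these from data.
WEIGHTS = {
    "typing_speed_deviation": 0.15,
    "device_reputation": 0.25,
    "ip_risk": 0.20,
    "digital_footprint": 0.15,
    "network_links_to_fraud": 0.25,
}

def fraud_risk_score(signals: ApplicantSignals) -> float:
    """Weighted combination of the signals into a 0-1 risk score before the application is assessed."""
    return sum(weight * getattr(signals, name) for name, weight in WEIGHTS.items())

applicant = ApplicantSignals(0.2, 0.7, 0.4, 0.9, 0.1)
print(f"Pre-application fraud risk score: {fraud_risk_score(applicant):.2f}")
```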

The concerns around AI and ethics in this space aren’t hypothetical. AI-driven payment systems are already influencing financial inclusion, risk assessments, and data privacy, often with real-world consequences. As AI continues to revolutionise payments, businesses face a dilemma: innovate fast or play it safe. The decisions made today could shape financial security for millions tomorrow.

With regulation struggling to keep pace, firms are left to navigate a murky ethical landscape with little clear guidance. For some, compliance is just a box to tick. But the real challenge—and the real risk—lies in embedding ethics into AI systems. Because ethical failures in AI aren’t just technical errors; they’re decisions that impact real lives. When customers and regulators start asking difficult questions, it might already be too late. 

Who’s accountable when AI gets it wrong?

With AI deciding who gets a loan or who is flagged as a fraud risk, the stakes are higher than ever. These decisions aren’t just about data—they’re about people’s lives, access to money, and financial security. But what happens when these systems get it wrong? Bias, privacy breaches, and security flaws are no longer hypothetical. They’re already reshaping financial access, sometimes in dangerously unfair ways. 

NatWest Head of AI, Data Science & Innovation, Graham Smith, explains that, as with any emerging technology, firms must exercise caution and be mindful of privacy, ethics, and risks: “We must get it right and make sure our use of AI is carefully considered. Not only to properly and safely serve our customers but to protect the reputation of our business.” 

FIS Head of European Growth, International & Corporate Banking Kevin Flood believes “trust” and “safety” are indispensable when discussing AI. He tells Payments Review: “With AI’s capability to make autonomous decisions, we must consider our comfort level in relinquishing control and allowing it to make critical decisions for us or about us that impact us. 

“The ethics of AI in payments revolves around fairness, transparency, security, and inclusivity. AI-driven payment systems must ensure unbiased decision-making, avoiding discrimination in credit approvals, fraud detection, and risk assessments. Transparency is crucial so users understand how their data is used and processed. 

“Security is another primary concern, and AI systems must protect sensitive financial information from fraud and cyber threats. Additionally, AI should be rooted in financial inclusion, ensuring automated systems do not exclude individuals based on socioeconomic status or digital access. Ethical AI in payments requires continuous oversight, regulatory compliance, and a commitment to responsible innovation.” 

Ethical design isn’t just about avoiding mistakes; it’s about preventing biases from becoming built-in barriers. And in AI systems, these biases often begin at the source. 

Bias by design: Why AI isn’t always fair 

AI systems learn from historical data, which means they can also learn historical biases. If the training data reflects discrimination in lending or risk assessments, the AI models will absorb these patterns and repeat them. For example, if past data shows lower loan approvals in certain postcodes, the AI might learn to deny loans in those areas even if applicants are financially qualified. Sweeney explains: “One common source of bias in AI systems is biased training data. If AI models are trained on data that reflects historical biases— whether related to race, gender, socioeconomic status, or other demographic factors—they can inadvertently reinforce those biases.” 

AI systems can amplify biases, leading to unfair credit assessments, discriminatory fraud detection, or disproportionately flagging certain demographics as high risk. For example, studies have shown that AI-driven lending models sometimes deny loans to applicants from marginalised backgrounds, not because of their financial behaviour but because historical data skews the algorithm’s understanding of risk. If left unchecked, these biases don’t just affect individuals but can also undermine trust in the financial system. It’s up to industry leaders to act now, ensuring fairness is built into AI from the ground up. 
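
The postcode example can be made concrete with a small, entirely synthetic simulation: if historical approvals were suppressed in one postcode area, a model trained on that data will score an equally qualified applicant from that area lower. All figures below are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up historical lending data: income in GBP thousands (a genuine creditworthiness
# signal) and a postcode-area flag. Historic approvals in area 1 were suppressed for
# 70% of otherwise-qualified applicants, regardless of income.
n = 2000
income = rng.normal(30, 8, n)
postcode = rng.integers(0, 2, n)
approved = (income > 28) & ~((postcode == 1) & (rng.random(n) < 0.7))

model = LogisticRegression(max_iter=1000).fit(np.column_stack([income, postcode]), approved)

# Two equally well-qualified applicants: same income, different postcode area.
applicants = np.array([[45, 0], [45, 1]])
print(model.predict_proba(applicants)[:, 1])  # approval probability falls for area 1
```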

A 2019 Capgemini study found that 42% of employees had encountered ethical issues with AI in their organisations, yet many firms still treat these failures as statistical errors rather than real-life consequences that affect customers. And when things go wrong, there is often no appeal process, because who do you argue with when an algorithm makes the decision?

What price? The data dilemma behind AI’s power

AI in payments thrives on vast amounts of data. The more data it has, the more accurate it becomes. But that is also where the danger lies. Customers often have no idea how their data is being used, who has access to it, or how long it is stored.

Regulations like the EU GDPR have been designed to clamp down on unethical data practices, but enforcement lags behind AI’s rapid evolution. The risk of data exploitation grows as AI gets better at identifying spending patterns, locations, and financial behaviour. Businesses must attempt to leverage AI-driven insights without overstepping ethical boundaries.

Meanwhile, AI systems are prime targets for cybercriminals. When fraud detection and identity verification tools rely on AI, hackers use AI-powered attacks to manipulate them. If an AI model incorrectly identifies fraud, criminals can find ways to exploit that flaw, with real customers getting caught in the crossfire.

Security or vulnerability? The AI paradox in payments

AI is revolutionising fraud detection, identifying suspicious transactions faster and more accurately than any human could. However, it is also creating new security threats and, in some cases, giving cybercriminals the upper hand. While fraudsters weaponise AI against financial institutions, launching adaptive attacks that evade detection, the very tools designed to protect payment systems are becoming their biggest vulnerability.

Then there is the issue of explainability, or the lack thereof. Many AI-powered payment systems operate as ‘black boxes,’ making decisions that even their developers struggle to understand fully. This lack of transparency makes it difficult to spot flaws, biases, or vulnerabilities, increasing the risk of AI being manipulated or making unchecked errors that impact real customers.

Sweeney believes human oversight remains essential, particularly when AI is handling critical financial decisions. She adds: “While AI can process data at an unprecedented scale, it is important to have human intervention for high-stakes decisions, particularly in fraud detection. Human-in-the-loop systems ensure that when AI systems flag high-risk cases, a human expert can review the decision to ensure fairness and avoid any algorithmic discrimination.” Without stronger governance and human accountability, AI could quickly become more of a liability than an asset.
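
A minimal sketch of the human-in-the-loop pattern Sweeney describes, with illustrative thresholds: the system acts automatically only at the extremes and routes everything in between to a human reviewer.

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve automatically"
    REVIEW = "queue for human review"
    BLOCK = "block and notify customer"

# Illustrative thresholds; in practice these would be calibrated and regularly revisited.
AUTO_APPROVE_BELOW = 0.2
AUTO_BLOCK_ABOVE = 0.95

def route_transaction(risk_score: float) -> Decision:
    """High-stakes, ambiguous cases go to a human expert rather than being decided by the model alone."""
    if risk_score < AUTO_APPROVE_BELOW:
        return Decision.APPROVE
    if risk_score > AUTO_BLOCK_ABOVE:
        return Decision.BLOCK
    return Decision.REVIEW

for score in (0.05, 0.6, 0.97):
    print(f"risk={score:.2f} -> {route_transaction(score).value}")
```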

When AI fails: Real-world consequences for customers

When AI in payments fails ethically, the result is more than inconvenience: blocked access to financial services, reinforced discrimination, and customers exposed to fraud. As AI takes on more responsibility in payments, businesses must confront these risks head-on. The next step is understanding how to build responsible AI systems that prioritise fairness, security, and accountability from the ground up.

“AI has undoubtedly become an integral part of how payments firms operate today, and it is no longer a concept for the future—it is an essential part of business strategy,” Sweeney explains. She warns that firms that fail to embrace AI risk falling behind, but equally, that those who fail to exercise care in deploying it could face serious consequences. “The tension between driving innovation and ensuring ethical and secure data handling is real, especially since AI relies heavily on large volumes of customer data,” she says.

Building ethical AI: A blueprint for responsible payments

The path to responsible AI in payments isn’t just about avoiding regulatory penalties; it’s about building trust in a world where algorithms decide who gets access to money. As the ethical risks become clearer, so do the opportunities to build fair and transparent systems. Firms that confront these challenges head-on can turn ethical responsibility into a competitive advantage, setting themselves apart as industry leaders. The question now is whether the financial industry will lead responsibly or wait for the first scandal to force change. The answer will shape not only the future of payments but also the trust customers place in the financial system itself.

Making AI explainable

This ‘black box’ effect is one of the biggest challenges with AI in payments: decisions are made, but no one can fully explain how. That becomes a significant problem when an AI model determines something as critical as whether a transaction is fraudulent or a customer qualifies for a loan. If businesses do not understand how their AI models reach conclusions, how can they be sure those decisions are fair?

Explainability must be a priority: firms should invest in AI models that can be audited and understood by data scientists, compliance teams, and regulators. Clear documentation, real-time monitoring, and user-friendly explanations of AI decisions can help ensure accountability. Some businesses are already implementing AI tools that give customers the reasons behind automated decisions, an approach that should become the industry norm.
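
As a rough illustration of giving customers reasons for automated decisions, the per-feature contributions of a simple linear scoring model can be translated into plain-language reason codes. The features, coefficients, and wording below are assumptions, not any provider's actual logic:

```python
# Per-feature contributions for a simple linear (logistic-style) risk score,
# turned into customer-facing reason codes. All values are illustrative.
COEFFICIENTS = {
    "credit_utilisation_pct": 0.03,    # higher utilisation raises the risk score
    "recent_credit_searches": 0.20,    # more recent searches raise the risk score
    "account_age_years": -0.10,        # an older account lowers the risk score
}
REASON_TEXT = {
    "credit_utilisation_pct": "how much of your available credit you are using",
    "recent_credit_searches": "the number of recent credit searches on your file",
    "account_age_years": "how long your account has been open",
}

def top_reasons(applicant: dict, n: int = 2) -> list[str]:
    """Rank features by how strongly they pushed the score towards decline and explain the top ones."""
    contributions = {f: COEFFICIENTS[f] * applicant[f] for f in COEFFICIENTS}
    worst = sorted(contributions, key=contributions.get, reverse=True)[:n]
    return [f"This decision was most affected by {REASON_TEXT[f]}." for f in worst]

applicant = {"credit_utilisation_pct": 85, "recent_credit_searches": 4, "account_age_years": 1}
for reason in top_reasons(applicant):
    print(reason)
```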

Sweeney believes the most significant risk to the security of payment systems is the loss of consumer trust. While AI is a potent tool for strengthening payment security, it must be deployed responsibly. “Over-reliance on AI without proper safeguards or transparency, as well as the potential for bias, could erode consumer confidence. If even one firm mishandles AI or faces a breach, it could cascade, shaking trust in the entire financial services sector,” she explains.

Smith points towards NatWest’s AI and Data Ethics Code of Conduct to demonstrate how customer and colleague safety is a priority for the firm. “We hope that other firms will follow suit in adopting clear principles that can guide the industry’s use of AI for the benefit, and not the detriment, of our customers,” he adds.

Fixing bias before it breaks trust

Eliminating bias in AI should start with the data itself, not with tweaking algorithms after they have gone live. Firms should ensure the data used to train AI models is diverse, representative, and regularly tested for bias, with sources reflecting the full spectrum of customer behaviours and demographics. Models should also be audited regularly to detect and correct bias before they interact with real customers, and a degree of human oversight should always be available. Some companies are now introducing AI “ethics boards” or dedicated fairness teams to oversee AI deployments, a step that could soon become standard practice across the payments industry.
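
One way to make those regular audits concrete is a demographic-parity check on the model's own decisions: compare approval rates across groups and escalate when the gap exceeds an agreed tolerance. The groups, decisions, and threshold below are purely illustrative:

```python
def demographic_parity_gap(outcomes_by_group: dict[str, list[bool]]) -> float:
    """Difference between the highest and lowest approval rate across groups; closer to 0 is fairer."""
    rates = {group: sum(outcomes) / len(outcomes) for group, outcomes in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Made-up model decisions for two demographic groups, purely for illustration.
decisions = {
    "group_a": [True, True, True, False, True],
    "group_b": [True, False, False, False, True],
}

gap = demographic_parity_gap(decisions)
ALERT_THRESHOLD = 0.2   # illustrative tolerance; the real threshold is a policy decision
if gap > ALERT_THRESHOLD:
    print(f"Approval-rate gap of {gap:.0%} exceeds tolerance - escalate to the fairness/ethics board.")
```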

Caught in the crossfire: AI ethics and global regulations

Regulations around AI in payments are evolving quickly, with frameworks like the EU AI Act and GDPR setting new ethical and compliance standards. While Europe focuses on transparency and risk management, other regions take a more varied approach, creating challenges for firms operating across borders. Balancing compliance with innovation remains the key hurdle. However, firms that embed ethical AI principles early can turn compliance into a competitive advantage.

The ethics advantage: Turning responsibility into a competitive edge

AI is transforming payments, but without ethical safeguards, it risks reinforcing biases, compromising privacy, and exposing firms to security threats. Businesses that fail to address these challenges will face regulatory scrutiny, reputational damage, and a loss of customer trust. The solution lies in transparency, fairness, and strong governance. Payment firms can build innovative and responsible systems by making AI explainable, tackling bias at the source, and aligning with evolving regulations.

