AI in payments: The battle against fraud's evolving threat

2 May 2025

by Payments Intelligence


What’s the article about?

How artificial intelligence is both enabling and combating payment fraud, based on insights from a recent industry webinar.

Why is it important?

With fraud accounting for a significant portion of UK crime, understanding AI’s role is critical for developing effective, future-ready defences.

What next?

Payments leaders should focus on improving data quality, fostering cross-sector collaboration, and responsibly integrating AI into fraud prevention strategies.

Introduction

Artificial intelligence is reshaping the payments landscape—empowering both fraudsters and those fighting them. In a recent webinar hosted by the Payments Association’s financial crime working group, leading experts from Monzo, PwC, Featurespace, Thredd, and the City of London Police explored the evolving role of AI in fraud prevention and response.

The panellists shared insights into how criminals are weaponising AI while institutions race to deploy countermeasures that balance security with customer experience.


The evolving fraud landscape

The webinar opened with a stark statistic: fraud now accounts for approximately 40% of all UK crime, and criminals are growing increasingly sophisticated in their approach. The panellists described a troubling evolution: traditional scams supercharged by artificial intelligence.

“Real-time deepfakes have moved beyond theoretical concerns,” explained David Sutton, chief innovation officer at Featurespace. He cited a confirmed case from February 2024 where a Hong Kong multinational lost $25 million after an employee was deceived by a deepfake call purporting to be from their CFO. 

Beyond high-profile cases, criminals are using AI more subtly to scale up operations. Large language models are being deployed to automate the initial targeting of victims, allowing scammers to cast wider nets than ever before. While AI struggles to maintain believability in longer conversations because of its context window, it excels at making initial contact messages more grammatically correct, psychologically tailored, and convincing. 

“Fraudsters share information better than legitimate businesses do,” noted Gareth Dothie from the City of London Police’s domestic corruption unit. “They’ve created industrial-scale operations, complete with professional call centres that provide better ‘customer service’ than many legitimate businesses.” 

Social media manipulation has emerged as a particular concern, with criminals using AI bots to artificially boost credibility through likes, followers and comments before launching scams. This manufactured social proof makes fraudulent schemes significantly more persuasive to potential victims. 

Defensive applications

While criminals exploit AI’s capabilities, financial institutions are deploying increasingly sophisticated systems to detect and prevent fraud. Featurespace demonstrated how its deep neural network models have achieved a 76% decrease in decline rates while maintaining fraud detection levels.

Alex West, who leads PwC’s Banking and Payments Fraud Practice, identified three key areas where AI offers significant defensive advantages: “First, improved detection accuracy through machine learning models. Second, disrupting criminality through technologies like the AI chatbot ‘Daisy’ from O2, which wastes scammers’ time and gathers intelligence. And third, operational efficiency that allows resources to be redirected from routine tasks to complex investigations.” 

Shared models are proving valuable, especially for smaller organisations with limited transaction data. By pooling information across institutions, these models can identify patterns invisible to any single company, while privacy-enhancing technologies (PETs) protect the underlying data.
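As a loose illustration of the pooling idea (not a description of any panellist's system), institutions might share salted hashes of identifiers such as device IDs rather than raw values, letting a consortium match fraud signals across members without exposing the data itself. Real PETs, such as secure multi-party computation, go considerably further than this minimal sketch; the function and salt names here are hypothetical.

```python
import hashlib

def pseudonymise(identifier: str, shared_salt: str) -> str:
    """Hash an identifier with a consortium-agreed salt so pooled
    systems can link records without seeing the raw value."""
    return hashlib.sha256((shared_salt + identifier).encode()).hexdigest()

# Two institutions hash the same device ID with the agreed salt;
# the pooled system can link them without the raw identifier.
bank_a = pseudonymise("device-1234", "consortium-salt")
bank_b = pseudonymise("device-1234", "consortium-salt")
assert bank_a == bank_b
```

Note that simple salted hashing is vulnerable to dictionary attacks on low-entropy identifiers, which is precisely why production systems layer stronger PETs on top.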

Implementation best practices

Deploying AI effectively requires careful planning and realistic expectations. Aisling Twomey, who runs business banking financial crime at Monzo, stressed that organisations must focus on data quality before attempting AI solutions. 

“Without good data, you’re building on sand,” Twomey explained. “We need to make sure that data is accessible, usable, and not hidden in different systems that don’t speak to each other.” 

The panellists recommended cross-functional teams combining data engineers, risk specialists and operations staff to ensure AI solutions address real-world challenges. Documentation proved another crucial factor, allowing organisations to learn from both successes and failures as they iterate. 

For implementation, experts recommended a gradual approach using “shadow testing” (running new systems alongside existing ones) followed by “canary testing”, where 5% of transactions are processed through the new system before full deployment.
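The shadow-then-canary rollout described above can be sketched in code. This is a minimal illustration of the pattern, not any panellist's implementation; the function name, model interfaces, and 5% rate are assumptions for the example.

```python
import random

def shadow_and_canary(transaction, legacy_model, new_model, canary_rate=0.05):
    """Route a transaction during a gradual rollout.

    Shadow testing: both models score every transaction, and both
    scores are logged for comparison, but only the legacy decision
    normally takes effect.
    Canary testing: a small slice of traffic (e.g. 5%) is actually
    decided by the new model before full deployment.
    """
    legacy_score = legacy_model(transaction)
    shadow_score = new_model(transaction)
    log = {"legacy": legacy_score, "shadow": shadow_score}

    # The canary slice is served by the new model; everything else
    # still gets the legacy decision.
    if random.random() < canary_rate:
        return shadow_score, log
    return legacy_score, log
```

Comparing the logged shadow scores against live outcomes is what gives teams the confidence to widen the canary slice, which matches the panellists' advice to document both successes and failures as they iterate.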

The human element

Despite technological advances, human expertise remains irreplaceable. Dothie cautioned against viewing AI as a replacement for skilled analysts: “This is about transitioning skills towards a future model, not eliminating roles.” 

The most effective approach combines AI efficiency with human judgment. As Sutton explained, “You can use AI to handle the easy cases, freeing your human analysts to focus on complex situations that require accountability and judgment.” 

Financial crime experts must develop a basic understanding of AI systems, while data scientists need greater familiarity with fraud patterns. This “meeting of minds” creates teams capable of both developing and effectively utilising advanced technologies. 

“I’m not that old, but I remember saying even five years ago, ‘I’m not a technologist; I’m a fraud specialist’,” reflected West. “I don’t think it’s acceptable to say that anymore. We all have to be technologists because it’s all about how we integrate technology into human processes.” 

Regulatory considerations

As AI deployment accelerates, regulators are taking notice. The panel described a “cautious optimism” among regulatory bodies, who recognise benefits while harbouring concerns about explainability and fairness. 

“There’s a very real danger of over-reliance or putting too much trust in a black box whose processes the organisation can’t explain when asked,” Dothie warned. “And you could very well get asked by a regulator.” 

The level of scrutiny varies by application. AI used for operational efficiency faces fewer regulatory concerns than systems that make consequential decisions about customers. The panellists emphasised that organisations must maintain the ability to explain AI-based decisions, particularly those affecting customer outcomes. 

“This is not new ground if we’re talking about AI in the broad sense,” noted West. “Model-based approaches for fraud detection have been around for a long time. It’s generative AI that feels newer and perhaps less well integrated into payments infrastructure.” 

Takeaways

As artificial intelligence continues to shape the landscape of payment fraud prevention, several key themes emerged from the discussion. Tools designed to detect deepfakes and synthetic media are developing rapidly, but they require constant refinement to keep pace with evolving threats. Meanwhile, some institutions are exploring customer-facing AI solutions that support individuals in identifying and avoiding scams at the source.

A recurring point among panellists was the strategic value of data collaboration. With appropriate privacy and governance frameworks in place, there is growing potential for cross-sector data sharing—among financial institutions, telecoms providers, and social media platforms—to enhance fraud detection and risk mitigation at scale.

Ultimately, effective fraud prevention will rely on a balanced approach: one that integrates advanced technological capabilities with informed human oversight, fosters cross-industry collaboration, and prioritises flexibility in adapting to an ever-changing threat environment.

