Navigating the rise of AI-enabled fraud

by George Iddenden, The Payments Association

What is this article about?

The growing threat of AI-enabled fraud in the payments sector and how firms can combat it with advanced technologies.

Why is it important?

It highlights the urgent need for payments firms to address AI-driven fraud to protect financial security, maintain customer trust, and comply with regulations.

What’s next?

Firms must adopt cutting-edge AI tools, strengthen partnerships, and invest in employee training to stay ahead of evolving fraud tactics.

The rise of AI-enabled fraud in the payments sector has become an increasingly significant concern as artificial intelligence and machine learning technologies evolve. Fraudsters are leveraging these advanced tools to enhance the sophistication and scale of their attacks, making it harder for traditional security measures to keep pace.

According to research from Edgar, Dunn & Company, 85% of senior payment professionals identified fraud detection and prevention as the primary use case for AI in the payments industry.

From synthetic identities and deepfakes to automated phishing and transaction manipulation, AI is being used to bypass security protocols, targeting financial institutions and consumers alike. As payment systems become more digital and interconnected, the risk of AI-driven fraud grows, making it urgent for payments firms to adopt cutting-edge solutions to protect themselves and their customers from these emerging threats. The scale of the problem is stark: industry projections suggest payment card fraud alone will increase by over $10 billion between 2022 and 2028.

Husnain Bajwa, VP of product, risk solutions at SEON, believes the industry is at something of a crossroads on fraud, comparing it to the evolution seen in autonomous vehicles. “It’s a complex dance of sensors, decisions and actions that exists on a spectrum from fully human-controlled to fully automated. Just as self-driving cars use multiple types of sensors (cameras, lidar, radar) to build a comprehensive view of their environment, modern fraud prevention systems need to synthesise diverse signals – device intelligence, behavioural patterns, transaction data and digital footprints – to understand the full context of any given interaction.”

This is necessary because fraudsters are using AI to diversify the methods by which they execute cyber threats. Traditional fraud prevention methods, which depend on fixed rules and human intervention, are no longer sufficient to detect and mitigate these complex and evolving tactics.

Modern natural language processing (NLP) enables fraudsters to produce flawless content across multiple languages, including regional dialects and colloquialisms. Bajwa explains: “This capability powers convincing phishing campaigns, social engineering attacks and fraudulent customer service interactions that can deceive even vigilant targets in global markets.”

Not only this, but real-time deepfake generation has reached new heights in quality and speed, according to Bajwa. “Fraudsters can now create convincing video and audio impersonations of executives, customers and authority figures, posing a threat to biometric authentication systems and enabling sophisticated social engineering attacks,” he says.

AI-powered fraud systems can launch thousands of personalised, contextually relevant attacks simultaneously. This high-volume, high-velocity approach overwhelms traditional human-based fraud detection methods. These AI models continuously learn and adjust their strategies based on success rates, creating an ever-evolving threat landscape. This adaptability quickly renders static fraud detection models obsolete as the systems rapidly pivot to more effective approaches. The growing threat of the technology has prompted 55% of online merchants to make improving the accuracy of their fraud-focused AI/ML models a top priority in e-commerce fraud management.

The growing threat of AI-related fraud puts financial institutions at risk of significant financial losses and jeopardises consumer trust and compliance with regulations. As payment systems become more digitised and interconnected, the attack surface expands, and the stakes for payments firms to invest in robust, AI-driven fraud detection and prevention systems have never been higher.

Understanding AI-enabled fraud

As fraudsters invest in more sophisticated technology, including deepfakes, synthetic identities and automated phishing, the need for payments firms to adopt better AI-driven fraud detection systems grows. Fraudsters also leverage machine learning, NLP and deep learning to conduct sophisticated scams and bypass traditional security measures.

Machine learning algorithms enable fraudsters to analyse vast amounts of data, identify patterns, and predict vulnerabilities in payment systems. This allows them to automate attacks and continuously adapt their tactics in real time, staying one step ahead of static security protocols. NLP is often used in AI-driven phishing attacks, where fraudsters craft highly convincing, personalised messages that mimic legitimate communication, making it difficult for recipients to distinguish between genuine and fraudulent messages. Additionally, deep learning models are used to generate deepfakes—hyper-realistic, AI-generated videos or audio recordings—that fraudsters use to impersonate individuals, such as executives or customers, in order to authorise fraudulent transactions or manipulate employees into divulging sensitive information.

These AI technologies give fraudsters a significant advantage over traditional security systems, especially those of legacy banks, which often cannot detect these evolving, more complex tactics.

To combat the rising threat, the financial sector is predicted to grow its AI spend from $35 billion in 2023 to $97 billion by 2027. The fintech sector in particular is a driving force behind this trend, with research from Mordor Intelligence suggesting the market for AI in fintech will exceed $50 billion by 2029, growing at a CAGR of 2.91%.

The projected rise in AI spending within the financial sector further indicates a significant shift towards AI-driven innovation in payments. As fraud becomes more sophisticated, payments firms will invest in AI-powered fraud detection tools to stay ahead of evolving threats. AI will also enhance customer experiences through personalised services, improve operational efficiency by automating processes, and enable the development of tailored financial products. Additionally, AI will support compliance by automating regulatory checks and reporting. As a result, firms will most likely need to increasingly rely on AI to improve their security, efficiency, and customer engagement.

Since the advent of large language models (LLMs), generative AI has also contributed significantly to rising fraud rates. Generative models can produce high-quality synthetic faces at scale, including conditional images, such as people wearing medical masks, and even lower-quality vector images. The self-learning nature of these neural networks allows them to generate outputs for specific scenarios without relying on external data.

However, facial biometrics has historically suffered from bias, which generative AI aims to address through more inclusive and specialised training approaches. The ability of generative AI to create highly realistic synthetic documents, including passports and IDs, is a growing concern, as these can be used for fraudulent activities. Detecting these deepfakes requires sophisticated AI-based tools that analyse multiple aspects of a document, from depth and lighting to background and data integrity.

VE3’s managing director Manish Garg explains how AI enhances security by offering advanced authentication methods, such as biometric verification. “These tools add an extra layer of protection, minimising the risk of unauthorised access to payment systems. Furthermore, AI-powered encryption algorithms safeguard sensitive data during transmission and storage, ensuring compliance with privacy standards,” he says.

The risks for payments firms

The threat of AI-powered fraud is evolving rapidly, with deepfake-related identity fraud cases skyrocketing between 2022 and 2023 in many countries; in the US, for example, such fraud attempts rose by 3,000% year-on-year.

Beyond financial losses, fraud can erode customer trust, which is crucial in the payments industry. PwC’s Global Economic Crime and Fraud Survey 2022 found that 60% of businesses reported that fraud incidents led to reputational damage, which can take years to repair.

This, according to Bajwa, is a critical risk. “Even one single breach or successful fraud attack can undermine consumer confidence, particularly for fintech and payment firms operating in high-stakes industries. Furthermore, failing to mitigate fraud exposes companies to greater regulatory and compliance risks, especially under stringent AML and KYC frameworks,” he explains. These risks underscore the importance of robust, AI-powered fraud prevention strategies to safeguard both financial stability and organisational reputation.

As customers increasingly seek secure and reliable payment services, any breach in security can lead to a loss of consumer confidence, ultimately affecting a firm’s long-term viability and growth. With the growing sophistication of AI-driven attacks, the impact on both finances and reputation can be severe, making it imperative for payments firms to adopt advanced AI tools to detect and prevent fraud effectively.

Garg believes that while AI solutions offer great potential in combating AI-powered fraud, they also bring challenges, particularly around data privacy. “Payments companies face a fundamental challenge: innovating responsibly without breaching data privacy laws. AI thrives on vast datasets, yet privacy regulations often limit the scope of data usage. Balancing these opposing forces requires investments in privacy-preserving technologies, such as differential privacy and federated learning, which enable AI to function without compromising user confidentiality.”
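
To make the privacy-preserving idea concrete, here is a minimal, illustrative sketch of the Laplace mechanism behind differential privacy: noise calibrated to a query’s sensitivity and a privacy budget epsilon is added to an aggregate statistic before it leaves the firm, so no individual record can be inferred from the output. The function and parameters below are hypothetical and not drawn from any vendor’s toolkit.

```python
import numpy as np

def dp_count(values, threshold, epsilon=0.5):
    """Differentially private count of transactions above a threshold.

    A count query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: share how many transactions exceeded £10,000 without
# exposing any individual customer's records.
amounts = [120.0, 9800.0, 15200.0, 430.0, 11050.0]
print(dp_count(amounts, threshold=10_000))  # true count is 2; output is noisy
```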

Garg also acknowledges the difficulty of keeping pace with changing regulations on AI and data privacy. “Missteps can lead to legal consequences and the erosion of consumer trust—a critical currency in the payments ecosystem. By aligning innovation with compliance, companies can create AI-powered solutions that respect privacy while driving progress, ensuring both competitive advantage and adherence,” he adds.

Beyond regulatory pressure and potential reputational damage, AI-powered fraud also places a significant strain on internal security teams, consuming staff hours, diverting resources and slowing response times. According to a 2024 survey, 40% of online merchants reported gaps in fraud tool capabilities as a top challenge, while 39% cited a lack of internal resources.

As fraudsters increasingly use AI-driven tools to execute sophisticated attacks, security teams are challenged to keep up with the speed and complexity of these threats. Traditional fraud detection systems often require manual intervention and rely on static rules, which can’t adapt quickly enough to the evolving tactics used by AI-driven fraudsters. This results in longer response times and potentially delayed identification of fraudulent activities.

Moreover, the sheer volume of data generated by AI-enabled attacks demands more resources to monitor, analyse, and respond effectively. Security teams are forced to deploy advanced AI technologies themselves to counter these threats, but this requires significant investment in both talent and infrastructure. Without the right tools and training, the security team’s ability to respond promptly to emerging threats is further hindered, potentially leaving the firm vulnerable to significant losses.

To counter these threats, security teams are increasingly adopting AI technologies themselves. This year, 42% of online merchants reported currently using generative AI for fraud management, with another 24% likely to add it in the next 12 months.

However, this adoption comes with its own challenges, which include the issue of distrust of AI outputs among developers (66.1%) and lack of proper training (29.6%), highlighting the need for significant investment in both talent and infrastructure.

Key strategies to combat AI-powered fraud

AI-powered fraud detection tools are essential for identifying fraud patterns, anomalies, and threats faster and more accurately than traditional systems. These tools leverage machine learning algorithms to continuously learn and adapt, enabling them to spot evolving fraud tactics that may otherwise go undetected by static rule-based systems.

By analysing large datasets, AI can quickly recognise unusual behaviours and detect fraud in real time, significantly reducing the risk of fraudulent transactions. In addition to adopting advanced fraud detection tools, payments firms must also implement robust authentication methods such as multi-factor authentication (MFA), biometric verification (e.g., fingerprints, facial recognition), and behavioural biometrics (e.g., keystroke dynamics or mouse movements). These technologies add an extra layer of security by ensuring that only authorised individuals can access accounts or approve transactions, making it more difficult for fraudsters to impersonate legitimate users.
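
As a minimal sketch of the anomaly-detection approach described above (assuming scikit-learn and illustrative, synthetic transaction features, not any specific vendor’s system), the example below trains an unsupervised isolation forest on historical transactions and flags outliers among new ones:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy historical features: [amount, hour of day, transactions in last 24h].
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=500),  # typical purchase amounts
    rng.integers(8, 22, size=500),                 # daytime activity
    rng.poisson(lam=3, size=500),                  # modest transaction velocity
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new transactions: -1 means anomalous, 1 means normal.
new_txns = np.array([
    [45.0, 14, 2],    # ordinary afternoon purchase
    [9500.0, 3, 40],  # large amount, 3am, very high velocity
])
print(model.predict(new_txns))  # typically [ 1 -1 ]
```

In production the feature set, contamination rate and decision thresholds would be tuned against the firm’s own labelled fraud history rather than fixed up front.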

Cosegic Senior Associate Rabih Zeitouny explains how, while AI has been a cornerstone in fraud detection for years, it is now able to transform how banks and payment firms operate. “For example, AI is being used to transcribe and analyse audio communications, significantly expanding the scope of monitoring and enabling firms to review a much larger portion of recorded interactions than was previously possible. It has also expanded the scope of compliance by covering previously challenging areas to monitor.”

“AI can normalise and translate distinctive dialects and regional nuances, enabling firms to analyse audio transcriptions from regions or populations that were, historically, hard to assess.”

Real-time monitoring systems that use dynamic risk scoring are vital in detecting fraudulent activities as soon as they occur. These systems continuously assess the risk associated with each transaction, taking into account factors such as transaction history, user behaviour, and device data. With continuous monitoring, firms can instantly flag suspicious activity and take immediate action to prevent financial losses. However, for these systems to function effectively, firms must ensure their employees are trained to spot potential fraud.
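
In simplified form, a dynamic risk-scoring pipeline of the kind described above might weigh several such signals per transaction and map the combined score to an action. The signals, weights and thresholds below are illustrative assumptions rather than an industry standard:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    avg_amount_30d: float  # customer's 30-day average spend
    new_device: bool       # first time this device has been seen
    foreign_ip: bool       # IP geolocation differs from home country
    txns_last_hour: int    # velocity

def risk_score(t: Transaction) -> float:
    """Combine weighted signals into a 0-1 risk score (illustrative weights)."""
    score = 0.0
    if t.avg_amount_30d > 0 and t.amount > 5 * t.avg_amount_30d:
        score += 0.35                                # amount far above the norm
    score += 0.25 if t.new_device else 0.0
    score += 0.20 if t.foreign_ip else 0.0
    score += min(t.txns_last_hour, 10) / 10 * 0.20   # capped velocity signal
    return min(score, 1.0)

def decide(t: Transaction) -> str:
    s = risk_score(t)
    if s >= 0.7:
        return "block and alert fraud team"
    if s >= 0.4:
        return "step-up authentication (e.g. MFA challenge)"
    return "approve"

# A large, high-velocity payment from a new device abroad scores ~0.96.
print(decide(Transaction(12_000, 150, True, True, 8)))  # block and alert
```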

Zeitouny adds: “Recent advances, including generative AI and real-time analytics, make it possible to process larger datasets faster and detect complex fraud schemes, like multi-layered money laundering operations that span multiple jurisdictions, that were previously harder to catch. These enhanced systems also improve accuracy by reducing false positives, allowing the compliance team to focus on genuine threats while adapting to evolving fraud tactics.”

As AI-powered fraud techniques become more advanced, it is crucial for payments firms to train their employees in fraud detection and prevention strategies. Regular training ensures that staff can recognise the signs of fraud, understand how AI can assist in identifying threats, and respond effectively when a fraud attempt is detected. Firms recognise the importance of upskilling in AI and automation, with 81% of HR departments considering onboarding programs for reskilling employees as essential or very important.

Leveraging the tech to stay ahead

To stay ahead of the increasingly sophisticated methods fraudsters employ, payments firms must leverage cutting-edge technologies that enable proactive fraud prevention. AI and machine learning are at the forefront of this effort, as they allow firms to implement predictive models that can identify potential threats before they materialise.

By analysing historical transaction data and using algorithms that continuously adapt to emerging fraud tactics, AI can automatically detect anomalies and flag suspicious activities in real time, significantly reducing the time it takes to respond to potential fraud. In addition to AI, blockchain and distributed ledger technology (DLT) offer promising solutions for enhancing security in payment systems.

Blockchain’s transparency and decentralisation make it particularly useful for preventing fraud, as every transaction is recorded on a tamper-proof ledger visible to all participants. This creates a secure and transparent environment in which fraudsters find it much more difficult to manipulate or falsify transaction records. As the technology matures, it is likely to become an integral part of fraud prevention strategies within the payments industry.
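
The tamper-evidence property comes from each record committing to the hash of its predecessor, so altering any historical entry breaks every later link. The toy hash chain below, a deliberate simplification with no consensus, signatures or distribution, illustrates the mechanism:

```python
import hashlib
import json

def block_hash(prev_hash: str, payload: dict) -> str:
    """Hash a record together with the hash of the previous record."""
    record = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

# Build a small chain of payment records.
payments = [{"from": "A", "to": "B", "amount": 100},
            {"from": "B", "to": "C", "amount": 40}]
chain = []
prev = "0" * 64  # genesis hash
for p in payments:
    prev = block_hash(prev, p)
    chain.append({"payload": p, "hash": prev})

def verify(chain) -> bool:
    """Recompute every link; any edit to history breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block_hash(prev, block["payload"]) != block["hash"]:
            return False  # link broken: tampering detected
        prev = block["hash"]
    return True

print(verify(chain))                     # True
chain[0]["payload"]["amount"] = 100_000  # fraudster edits history...
print(verify(chain))                     # False: later hashes no longer match
```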

Blockchain technology, while not as widely adopted as AI, could still be considered a promising tool for enhancing security in payment systems. In 2021, 42% of financial services industry professionals saw blockchain as a tool for secure information exchange. However, its perceived importance in payments has since declined, with only 11% of payment professionals considering it essential in 2024.

Garg also points to the seamless user experience AI offers. “These advancements underscore AI’s indispensable role in the payments industry, where safeguarding sensitive information is paramount. By integrating AI intelligently, payments firms can deliver secure, efficient services while staying ahead of evolving cyber threats,” he says.

For legacy institutions, the benefits of collaborating with fintech startups or third-party solution providers must not be overlooked. These partnerships provide access to innovative tools and expertise that may not be readily available in-house. By working with external experts, payments firms can stay at the cutting edge of fraud detection and prevention, ensuring they are equipped to tackle new and evolving fraud threats effectively.

Takeaways

As AI-powered fraud continues to evolve, payments firms must take proactive steps to protect themselves and their customers. The increasing sophistication of AI-driven fraud presents significant financial, reputational, and operational risks. Fraudsters are leveraging cutting-edge technologies like deepfakes, synthetic identities, and machine learning to bypass traditional security measures, making it harder for firms to detect and prevent fraudulent activities.

This challenge is compounded by the sheer volume of data generated by these attacks, placing further strain on security teams and resources. The payments sector is investing heavily in AI technologies to combat these rising threats.

“We’re at an inflection point similar to where autonomous vehicles were a few years ago. Traditional rules-based fraud prevention is like having a human driver following a rigid set of if-then instructions: ‘If you see a red light, then stop.’ This worked when the ‘roads’ of digital commerce were simpler and less crowded. But today’s fraud landscape is more like navigating rush hour in a major city during a storm – the conditions are complex, dynamic and require split-second adaptability,” Bajwa adds.

According to Bajwa, within a few short years purely rules-based approaches will be as outdated as a traffic system that can only handle stop signs and traffic lights. He adds: “The future belongs to hybrid systems that combine human insight with AI’s ability to process vast amounts of data and adapt to new threats in real-time. Organisations that don’t make this transition risk bringing a bicycle to what has become a Formula 1 race.”

He explains: “This isn’t just about automation – it’s about augmentation and adaptation. The most successful approaches will create a seamless partnership between human expertise and AI capabilities, much like the best autonomous vehicle systems still benefit from human oversight while handling the split-second decisions that humans simply can’t make fast enough.”

Ultimately, the future of fraud prevention in the payments industry will rely on a combination of cutting-edge technology, strategic partnerships, and continuous investment in employee training. By embracing AI and other emerging technologies, payments firms can protect their systems, maintain customer trust, and stay competitive in a rapidly changing landscape. As the rise of AI-enabled fraud continues, the need for robust, adaptive, and forward-thinking security solutions has never been more critical.
