The rise of generative AI in payment security: A double-edged sword for data privacy

by George Iddenden, Reporter, The Payments Association


What is this article about?

The dual impact of generative AI on payment security, highlighting its potential to enhance fraud detection while posing significant data privacy risks.

Why is it important?

It underscores the need for payment firms to balance AI innovation with robust privacy and regulatory compliance to protect sensitive consumer data.

What’s next?

Firms must adopt transparent AI practices, enhance regulatory frameworks, and continuously train models to navigate the evolving landscape of AI-driven threats.

The strides that Artificial Intelligence (AI) has made across various industries have been well documented, and the financial services sector is at the forefront of this transformation. AI technologies are being integrated into core financial operations, particularly in the realm of payment services, where they promise to enhance security, streamline transactions, and improve customer experiences. From fraud detection to customer support, AI-driven solutions are revolutionising how payments are processed and safeguarded.

Among the most influential AI advancements is generative AI, particularly large language models (LLMs) such as ChatGPT and Google’s Gemini, which have become increasingly essential in strengthening payment security. LLMs are sophisticated algorithms trained on vast amounts of text data, enabling them to generate human-like text, interpret complex queries, and process large volumes of transactional data. These capabilities make LLMs ideal for real-time fraud detection, transaction monitoring, and identity verification, helping payment providers stay one step ahead of fraudsters and ensuring a more secure financial ecosystem.

However, as payment services rely more heavily on these AI technologies, they face a growing challenge: how to harness the power of LLMs without compromising data privacy. While AI generally offers significant improvements to payment security, its deployment also raises concerns about protecting sensitive customer information. Data leakage, model biases, and a lack of transparency in AI decision-making are just a few of the potential privacy risks that must be considered.

This intersection of data privacy and AI is a critical conversation in the payments industry due to the highly sensitive and personal nature of the data involved, FScom Senior Manager Anna Sweeney explains. “Payment data is inherently vulnerable because its compromise can have significant financial and personal consequences for consumers.”

Data privacy remains a significant concern, with 62% of respondents worried about LLMs being trained on publicly available user data without explicit consent, according to Microsoft’s Global Online Safety Survey.

Sweeney adds: “If the data isn’t properly protected or AI models aren’t transparent, there’s a real risk that sensitive information could be misused or exposed. This creates a delicate balance between leveraging AI’s capabilities for efficiency and ensuring strong data privacy and security measures are in place to protect consumers.”

The promises of generative AI in payment security

Generative AI is proving to be a game-changer in the fight against payment fraud. By leveraging machine learning algorithms, AI can analyse vast amounts of transaction data to identify patterns that human analysts might miss. These algorithms are capable of detecting subtle anomalies in real-time, such as unusual transaction sizes, changes in spending behaviour, or irregular geographical locations, which are often indicative of fraud.
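
To make the idea concrete, the sketch below shows one common way such anomaly scoring can be implemented: an unsupervised model trained on a handful of transaction features (amount, time of day, distance from the customer’s usual location). The feature names, figures, and thresholds are purely illustrative assumptions, not a description of any particular provider’s system.

```python
# Minimal sketch of real-time transaction anomaly scoring, assuming
# scikit-learn is available; the features and sample data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical historical transactions: [amount_gbp, hour_of_day, km_from_home]
history = np.array([
    [12.50, 9, 1.2],
    [45.00, 13, 0.8],
    [23.10, 18, 3.5],
    [60.00, 20, 2.0],
    [15.75, 11, 0.5],
] * 40)  # repeated rows stand in for a real transaction history

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score an incoming transaction: a large amount, at 3am, far from home.
incoming = np.array([[950.00, 3, 4200.0]])
score = model.decision_function(incoming)[0]   # lower = more anomalous
flagged = model.predict(incoming)[0] == -1     # -1 means the model treats it as an outlier

print(f"anomaly score={score:.3f}, flagged={flagged}")
```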

Adoption is already widespread: 42% of online merchants are currently using generative AI for e-commerce fraud management, with a further 24% likely to implement it within the next 12 months.

Sweeney says: “Payment systems must facilitate near-instant payments, which leaves little time for traditional analysis or verification. This is where AI shines, as it can rapidly analyse vast amounts of data to detect fraud and ensure efficiency. However, the same speed that AI brings also introduces risks.”

Quickly spotting discrepancies helps financial institutions block suspicious transactions, minimising losses. As fraudsters grow more sophisticated, generative AI can also simulate fraud scenarios, preparing payment platforms for new threats: it generates ‘fake’ data that mirrors potential fraud, training systems to recognise and counteract these evolving attacks.

Enhanced fraud detection extends to biometric authentication, a key element of payment security. Biometrics provide a secure, user-friendly alternative to PINs and passwords, but their integration into payments requires ongoing security improvements.

AI improves biometric recognition by simulating diverse facial expressions, lighting, and aging. Likewise, AI-generated speech samples enhance voice recognition, distinguishing genuine users from impersonators. This adaptability strengthens biometric authentication, adding protection for consumers and businesses.
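
A minimal sketch of this kind of augmentation is shown below, assuming the Pillow imaging library is available; the ‘face’ here is a random placeholder array rather than real biometric data, and the lighting and pose ranges are illustrative.

```python
# Illustrative augmentation of face images (lighting and small pose changes)
# before training a biometric model. The image is a random placeholder.
import numpy as np
from PIL import Image, ImageEnhance

rng = np.random.default_rng(1)
face = Image.fromarray(rng.integers(0, 256, size=(112, 112, 3), dtype=np.uint8))

augmented = []
for _ in range(8):
    img = face.rotate(angle=float(rng.uniform(-15, 15)))                      # small pose change
    img = ImageEnhance.Brightness(img).enhance(float(rng.uniform(0.6, 1.4)))  # lighting change
    augmented.append(np.asarray(img))

print(len(augmented), augmented[0].shape)  # 8 augmented images of shape (112, 112, 3)
```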

Generative AI also addresses training challenges for AI models. Access to high-quality, diverse training data has been a hurdle due to privacy concerns over real transaction data. Because using sensitive information poses a privacy risk, generative AI can instead craft synthetic data that replicates real-world patterns without personal details, enabling fraud detection models to be trained and user interactions to be simulated securely. This approach not only enhances security but also ensures that privacy concerns are properly managed.

Moreover, synthetic data can address issues of data bias, as generative AI can create more balanced datasets that reflect a broader range of scenarios. This results in more accurate predictions and better protection against a wide variety of payment security threats, allowing firms to build more robust systems while safeguarding consumer privacy.
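
As a rough illustration of how such synthetic, class-balanced training data might be produced, the sketch below samples artificial ‘legitimate’ and ‘fraudulent’ transactions from assumed statistical distributions; none of the figures correspond to real customers or real fraud patterns.

```python
# Minimal sketch of a class-balanced synthetic transaction generator; the
# distributions and feature names are illustrative assumptions, not real data.
import numpy as np

rng = np.random.default_rng(seed=42)

def synth_transactions(n: int, fraud: bool) -> np.ndarray:
    """Sample [amount_gbp, hour_of_day, km_from_home] rows with no link to any real customer."""
    if fraud:
        amount = rng.lognormal(mean=6.0, sigma=1.0, size=n)   # larger, more dispersed amounts
        hour = rng.integers(0, 6, size=n)                     # skewed towards night-time
        distance = rng.exponential(scale=1500.0, size=n)      # far from the usual location
    else:
        amount = rng.lognormal(mean=3.0, sigma=0.6, size=n)
        hour = rng.integers(7, 23, size=n)
        distance = rng.exponential(scale=5.0, size=n)
    return np.column_stack([amount, hour, distance])

# Equal numbers of each class, so a downstream fraud model sees a balanced dataset.
X = np.vstack([synth_transactions(5_000, fraud=False), synth_transactions(5_000, fraud=True)])
y = np.array([0] * 5_000 + [1] * 5_000)
print(X.shape, y.mean())  # (10000, 3), 0.5
```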

Data privacy risks introduced by generative AI

While there are plenty of reasons to use generative AI as a solution for efficiency in payments, it is essential to examine the risks the technology introduces, particularly concerning data privacy. Generative AI’s potential to create increasingly sophisticated threats is one of the most pressing concerns, compounded by data suggesting that generative AI potentially benefits cyber attackers (55.9%) more than defenders (8.9%).

These AI-generated attacks are difficult to detect and even harder to guard against as they become more realistic and harder to distinguish from legitimate communications. Fraudsters can employ generative AI to simulate entire conversations, impersonate trusted contacts, or create fake documents that pass through verification systems undetected.

According to IDVerse Senior Vice President and FC360 speaker Russ Cohn, “Another critical consideration is the potential for algorithmic bias. AI systems are trained on vast amounts of data, and if this data contains biases, the AI’s decision-making may perpetuate or even amplify those biases.”

In 2019, 42% of employees across industries reported experiencing ethical issues resulting from AI use, including unfair or biased outcomes. Interestingly, North American companies are less concerned about AI fairness risks (20%) compared to Europe and Latin America (over 30%), indicating varying levels of awareness or prioritisation across regions.

Cohn believes regulation will impose stricter requirements for organisations to assess and mitigate the potential for algorithmic bias in AI-powered payment systems. “This could involve regular audits of AI systems, rigorous testing procedures, and ongoing monitoring of their performance to identify and address discriminatory patterns.”

As a side note, he adds: “The informed consent and bias transparency changes enumerated above dovetail neatly into the proposal for synthetic generated data, which serve as strong preventative and mitigatory tools of compliance.”

Another risk lies in the use of AI to generate synthetic data for training security systems. While this synthetic data reduces the need for using real consumer data, which is a win for privacy, it also opens up a new avenue for unintentional privacy breaches.

If not properly monitored, AI could generate synthetic data that mimics real user behaviours or characteristics too closely, inadvertently revealing private information about individuals or groups. These unintended leaks could potentially expose sensitive user details, such as transaction habits or personal preferences, despite efforts to maintain privacy.
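
One way firms might guard against this kind of leakage, sketched below under illustrative assumptions, is to check whether any synthetic record sits suspiciously close to a real one before the data is released for training.

```python
# Illustrative privacy check on synthetic data: flag synthetic rows that are
# near-duplicates of real records. Thresholds, features, and data are assumed.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
real = rng.normal(size=(1_000, 3))                                    # real (standardised) transaction features
synthetic = np.vstack([rng.normal(size=(995, 3)), real[:5] + 1e-4])   # five near-copies slipped in

tree = cKDTree(real)
nearest_dist, _ = tree.query(synthetic, k=1)      # distance from each synthetic row to its closest real row

TOO_CLOSE = 0.01  # assumed distance threshold in standardised feature space
leaky_rows = np.where(nearest_dist < TOO_CLOSE)[0]
print(f"{len(leaky_rows)} synthetic rows are near-duplicates of real records")
```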

Cohn is more optimistic about the use of synthetic data: “Privacy-first AI that includes synthetic data training represents a decisive shift in identity verification for the payments industry, protecting users from fraud while eliminating risks associated with data misuse and improper consent management.”

Following on from Sweeney’s point, the speed at which generative AI operates also complicates the challenge of data protection. In the rush to stay ahead of fraudulent activity, payment platforms may inadvertently overlook the long-term implications of using AI-driven systems. Without adequate oversight, AI’s ability to swiftly process and generate new data could lead to vulnerabilities in security measures. If AI systems are not transparent or auditable, it becomes difficult for regulators to assess whether they are operating in a manner that protects consumers’ privacy.

Sweeney believes that ensuring individuals’ privacy should be prioritised above all else. “Payment firms can take proactive steps like anonymising or pseudonymising personal data before using it in AI systems. By doing so, even if the data is exposed or accessed by unauthorised parties, it cannot be traced back to an individual without further information. This adds an important layer of security and peace of mind for customers. It’s all about minimising risks while fostering transparency and confidence in the system,” she explains.
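
A minimal sketch of the pseudonymisation step Sweeney describes might look like the following, using a keyed hash so the identifier cannot be traced back to an individual without the secret key; the key handling shown is illustrative and would normally sit in a dedicated key-management service.

```python
# Minimal sketch of pseudonymising a customer identifier before it reaches an
# AI pipeline. The key and record are hypothetical; real deployments would
# fetch the key from a key-management service, not hard-code it.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-from-your-kms"  # hypothetical key

def pseudonymise(customer_id: str) -> str:
    """Return a stable keyed hash: consistent across records, but meaningless without the key."""
    return hmac.new(SECRET_KEY, customer_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "cust-000123", "amount_gbp": 42.50, "merchant": "coffee-shop"}
safe_record = {**record, "customer_id": pseudonymise(record["customer_id"])}
print(safe_record)
```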

The challenge of regulating generative AI in payment security

As generative AI continues to shape the landscape of payment security, its rapid evolution presents significant challenges for existing data privacy regulations. Laws such as the General Data Protection Regulation (GDPR) and Payment Services Directive 2 (PSD2) were designed to safeguard user data and ensure secure transactions. However, these regulations were not originally crafted with generative AI in mind.

Cohn adds: “We cannot ignore that the increasing use of AI in payments will carry continued concerns about the security and privacy of personal data. The risk of data breaches and unauthorised access will not disappear, or lessen.

“We expect that regulations will impose even stricter cybersecurity standards for organisations using AI in payments, including robust data protection measures, regular security assessments, and incident response plans.”

However, compliance with regulations like GDPR has been identified as a significant challenge for firms using AI in financial services and payment security. In 2024, 61% of companies cited compliance as the biggest hurdle in AI implementation.

GDPR offers strong protections for data privacy, but its application to AI-driven systems, particularly those that create synthetic data or simulate transactions, remains somewhat unclear. Sweeney explains: “GDPR has data minimisation principles that require companies to only collect and use data that’s strictly necessary, but AI models often need vast amounts of information to function effectively. This could put businesses in a tough spot when trying to balance the need for detailed data with respecting individuals’ privacy rights.

“Additionally, there’s always the concern that AI systems could unintentionally expose personal information or make decisions that aren’t easily understood by consumers, which can erode trust. So, while AI offers huge potential in payments, it’s crucial to ensure that it’s being used responsibly and in line with privacy laws.”

For non-compliant firms, GDPR violations have resulted in substantial fines, highlighting the regulatory risks. As of September 2024, non-compliance with general data processing principles had led to more than €2.4 billion in fines.

Similarly, PSD2, which focuses on enhancing security for electronic payments and fostering competition in the payments sector, does not adequately address the nuances of AI technology. While PSD2 mandates strong customer authentication and secure payment processes, its framework does not fully account for the risks AI poses, such as the generation of synthetic identities or the potential for AI-driven breaches. As AI technologies evolve, regulators will need to adapt these frameworks to address the emerging risks posed by AI in payment security.

One of the primary regulatory challenges in the use of generative AI for payment security is ensuring AI transparency and accountability. AI systems, particularly those designed to detect fraud or create new security protocols, often operate as “black boxes,” making it difficult for regulators, businesses, or consumers to understand how decisions are made. This lack of transparency can undermine trust in AI-driven systems. For example, if an AI system incorrectly flags a legitimate payment as fraudulent, consumers and businesses may have difficulty understanding why the decision was made, potentially leading to customer dissatisfaction or lost revenue.

Additionally, AI models that generate new security protocols or simulate breaches must be transparent to ensure they comply with existing regulations and do not inadvertently introduce vulnerabilities. Without clear accountability mechanisms, assessing whether an AI system’s actions align with privacy regulations or ethical standards becomes nearly impossible. Ethical considerations also play a pivotal role in regulating AI in payment systems. Privacy concerns are at the forefront, as AI can manipulate sensitive user data or create synthetic data that closely resembles real consumer behaviours.

The potential for misuse of AI in creating deepfakes or facilitating fraud adds another layer of ethical complexity. Moreover, there are growing concerns about bias in AI algorithms. If not properly designed, AI systems may inadvertently reinforce existing biases, leading to discriminatory practices in payment processing, such as profiling certain users as high-risk based on flawed or incomplete data. This could result in unfair treatment of certain customer segments, undermining trust in payment systems and the broader financial sector.

VE3 Managing Director Manish Garg believes payments companies face a fundamental challenge: how to innovate responsibly without breaching data privacy laws. “AI thrives on vast datasets, yet privacy regulations often limit the scope of data usage. Balancing these opposing forces requires investments in privacy-preserving technologies, such as differential privacy and federated learning, which enable AI to function without compromising user confidentiality,” he says.
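
As an illustration of one of the techniques Garg mentions, the sketch below adds calibrated Laplace noise to an aggregate statistic, the basic mechanism behind differential privacy; the privacy budget (epsilon) and data are assumptions for the example only.

```python
# Minimal sketch of differential privacy: a noisy count over transaction data.
# Epsilon and the synthetic amounts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(values: np.ndarray, threshold: float, epsilon: float) -> float:
    """Differentially private count of transactions above a threshold.

    A count query has sensitivity 1 (any one person changes the count by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = float((values > threshold).sum())
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

amounts = rng.lognormal(mean=3.0, sigma=0.8, size=10_000)  # synthetic transaction amounts
print(dp_count(amounts, threshold=100.0, epsilon=0.5))
```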

Mitigating the risks: Best practices for firms

One of the most important steps to mitigating the risks is ensuring AI transparency and explainability. AI systems, particularly those used in fraud detection or biometric authentication, must be transparent in their decision-making processes. Firms should implement explainability protocols that allow stakeholders to understand how AI models reach their conclusions, especially in sensitive areas such as transaction approvals or the identification of fraudulent behaviour.
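
A very simple version of such an explainability protocol, sketched below with a hypothetical linear fraud model, reports how much each input feature contributed to a flagged transaction’s score; real systems typically use more sophisticated attribution methods, but the principle is the same.

```python
# Illustrative per-feature explanation of a fraud score from a linear model.
# The feature names, weights, and bias are hypothetical.
import numpy as np

FEATURES = ["amount_gbp", "hour_of_day", "km_from_home"]
WEIGHTS = np.array([0.004, -0.05, 0.001])   # assumed learned coefficients
BIAS = -2.0

def explain(x: np.ndarray) -> dict:
    """Return the fraud probability and each feature's contribution to the logit."""
    contributions = WEIGHTS * x
    logit = BIAS + contributions.sum()
    prob = 1.0 / (1.0 + np.exp(-logit))
    return {
        "fraud_probability": round(float(prob), 3),
        "contributions": dict(zip(FEATURES, contributions.round(3).tolist())),
    }

print(explain(np.array([950.0, 3.0, 4200.0])))
```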

This transparency not only fosters trust with consumers but also ensures compliance with regulations like the GDPR, which emphasises the need for clear data handling and decision-making processes. Regular audits of AI systems should be conducted to ensure they operate in line with data protection laws and ethical guidelines. By making AI decision-making processes more visible and understandable, firms can better manage risks and avoid potential breaches of consumer trust.

Sweeney also points out the role of Consumer Duty. “First, firms must recognise that AI transparency will be a fundamental part of their Consumer Duty for those firms that use AI in decisions that impact their customers. This means ensuring that AI systems are designed and implemented in a way that prioritises consumer protection, avoiding any unintended harm, such as biased or discriminatory decisions. For example, AI models used for fraud detection should be regularly audited to ensure they’re functioning fairly, without bias, and in alignment with the principle of treating customers fairly.”

She continues: “Taking this proactive approach not only helps build trust with consumers but also shows a real commitment to safeguarding personal data and meeting regulatory expectations, such as those set by the GDPR. Regular audits and transparency in AI decision-making can go a long way in mitigating risks while making sure that consumers feel safe and respected in all interactions. Ultimately, by placing a strong focus on fairness and privacy, payment firms can foster a sense of confidence and security among their customers.”

Finally, ongoing training and adaptation of AI models are essential to staying ahead of the evolving landscape of payment security threats. AI systems must be continuously trained with new data to recognise emerging fraud patterns, adapt to changes in consumer behaviour, and respond to new attack methods.

Payment firms should invest in training their AI models on diverse, up-to-date datasets to ensure they remain effective against increasingly sophisticated threats, adapting the models’ capabilities as fraud tactics evolve and new technologies emerge. Firms should also establish robust mechanisms for feedback and improvement, allowing them to respond quickly to weaknesses or vulnerabilities identified by AI simulations or detected during audits.

The goal is not only to mitigate current risks but also to future-proof security systems by ensuring that they can effectively adapt to new challenges. By prioritising continuous training and keeping security measures flexible, payment firms can maintain strong protection over time, even as AI-driven threats become more advanced.

Takeaways

The rise of generative AI in payment security undoubtedly offers vast opportunities to enhance fraud detection, streamline transactions, and improve overall system efficiency. With the power to process massive amounts of data quickly, AI is helping to prevent fraud, bolster biometric authentication, and create more robust payment platforms.

However, as we’ve seen, this technology comes with its own complex challenges, particularly regarding data privacy and security. The speed and sophistication that make AI such a powerful tool also create vulnerabilities that, if left unchecked, could compromise sensitive customer information.

Navigating the intersection of AI innovation and data privacy will require careful balance. Regulatory frameworks like GDPR and PSD2, designed to protect consumers, must evolve to address the emerging risks of generative AI, such as data leakage, synthetic data misuse, and algorithmic bias. Payment firms must proactively ensure transparency and accountability in their AI systems, fostering trust and compliance while safeguarding user privacy.

The key to mitigating these risks lies in adopting best practices, including AI explainability, robust encryption, and ongoing model training. By prioritising these practices, payment firms can harness AI’s benefits without compromising their customers’ privacy and security. Ultimately, those who invest in responsible AI practices will not only enhance the security of their payment systems but also build long-term consumer confidence, positioning themselves for success in an increasingly AI-driven world.
