Rage against the machine? ChatGPT comes to payments

Stripe and Klarna are among the first in payments to implement the use of ChatGPT in their products and services. 

What is this article about? ChatGPT’s current and future uses in the payments sector and regulatory implications.

Why is this important? Generative AI has the potential to radically alter parts of the payments sector across customer service, credit decisioning, fraud prevention and beyond.

What’s next? The rapid month-by-month development of AI will force regulators to evaluate how to make it safe and fair.

The introduction of ChatGPT has arguably influenced artificial intelligence (AI) more profoundly than anything else in recent years. It has been met with cautious excitement and vocal scepticism across all sectors.

Among the possibilities for the payments sector are optimising customer journeys by combining authentic chat capabilities and payment functionality, enabling real-time fraud detection and approval, and even conducting credit scoring.

One early and public adopter was San Francisco-based Stripe, which announced in March it would implement GPT-4 into its products and services.

While the company has previously used other AI technologies to help manage fraud and increase conversion rates, it will leverage GPT for documentation purposes, allowing software developers to type a question and receive a concise answer rather than searching through developer documentation.
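The documentation use case described above typically follows a retrieval pattern: find the most relevant snippet for the developer's question, then pass the question plus snippet to the model so its answer is grounded in the docs. The sketch below illustrates only the retrieval step with a toy corpus and a word-overlap score; the function names and example documents are invented for illustration and are not Stripe's actual system.

```python
import re

def tokens(text: str) -> set:
    """Lower-case word set with punctuation stripped (toy tokeniser)."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, docs: list) -> dict:
    """Pick the doc whose text shares the most words with the question."""
    return max(docs, key=lambda d: len(tokens(question) & tokens(d["text"])))

# Invented two-entry corpus standing in for real developer documentation.
DOCS = [
    {"title": "Refunds", "text": "Use the refunds endpoint to return funds to a customer."},
    {"title": "Webhooks", "text": "Webhooks notify your server when an event occurs."},
]

best = retrieve("How do I refund a customer?", DOCS)
# `best` would then be sent to the LLM as context alongside the question,
# so the generated answer stays grounded in the documentation.
```

A production system would replace the word-overlap score with semantic embeddings, but the shape of the pipeline is the same.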

Launched in November 2022, ChatGPT is an AI chatbot built by OpenAI. It became a rapid success, counting over 100 million users just two months after launching and attracting a further investment from Microsoft of over US$10 billion in OpenAI.

The basic free version of ChatGPT provides access to the GPT-3.5 model, while paying subscribers to ChatGPT Plus have limited access to the more-advanced GPT-4 model, which is “more reliable, creative, and able to handle much more nuanced instructions”.

Swedish payment giant Klarna has enabled ChatGPT for customers to request product recommendations, as well as receive links to shop-recommended products through its search and compare tool.

Klarna CEO and co-founder Sebastian Siemiatkowski says the tool is “easy to use” and “genuinely solves a ton of problems”.

Could ChatGPT increase APP fraud?

ChatGPT is not without risks: it lacks encryption and access logs, leaving it vulnerable to security threats. ChatGPT will also be the first to tell you it “may produce inaccurate information about people, places, or facts”.

Leveraging the platform for information gathering has also become a cause for concern – in a study conducted in March by a Czech academic, ChatGPT was prompted to aggregate all the different IT systems a particular bank uses.

The report concluded that this could be used to form the first step of a cyberattack “when the attacker is gathering information about the target to find where and how to attack the most effectively”.

Although ChatGPT has been widely touted for its use in detecting fraud, by identifying unusual behaviour, anomalies and patterns in financial transactions, Natalie Lewis, partner at law firm Travers Smith and head of its cross-disciplinary fintech, market infrastructure and payments group, says it goes both ways.

“Generative AI is a hugely exciting, fast-developing area which could bring many advantages to the payments sector. For example, it could identify fraudulent behaviour quicker than the technology we have today, but we should also ask – could it be used for phishing and other scam attempts?

“It is so sophisticated, there is an argument it might exacerbate the problem by giving fraudsters themselves better technology. We will always need to balance risks against rewards.”

Lewis gives a specific example of how this could play out: generative AI used to impersonate someone’s voice over the phone, potentially worsening the rise in authorised push payment (APP) fraud – a growing problem in the UK.

“APP fraud flies under the radar for a period, as the target is led to believe by the AI that they are paying a legitimate entity such as HMRC, and pushes it themselves with no need for any hacking of the target’s own bank,” she explains.
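The detection side of the trade-off Lewis describes – flagging unusual behaviour and anomalies in transactions – has a classic pre-LLM baseline: flag payments that sit far outside an account's usual range. The sketch below shows a minimal standard-deviation check; the amounts and the threshold are invented for illustration, and real systems use far richer features.

```python
import statistics

def flag_anomalies(amounts, threshold=2.0):
    """Return amounts more than `threshold` standard deviations from the mean."""
    mu = statistics.mean(amounts)
    sigma = statistics.pstdev(amounts)
    if sigma == 0:
        return []  # no variation in spending, nothing to flag
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# A week of typical small payments, then one very large outgoing payment.
history = [12.5, 9.9, 14.2, 11.0, 13.3, 10.8, 12.1, 950.0]
flagged = flag_anomalies(history)  # the 950.0 payment stands out
```

The point of the article's "both ways" argument is that the same pattern-recognition capability that powers checks like this can also be turned to crafting more convincing scams.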

LLMs such as ChatGPT have also been touted for tackling more impactful areas of payments businesses, such as credit decisioning. Using its algorithms, the platform could potentially identify patterns, risk factors and spending behaviours to make predictions about future creditworthiness.
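The credit-decisioning idea above – combining risk factors and spending behaviours into a prediction of creditworthiness – can be sketched as a weighted score squashed into a 0-to-1 probability. The features, weights and bias below are invented for illustration; a production model would be learned from data and, as the lawyers quoted in this article stress, kept under human oversight.

```python
import math

# Invented behavioural features and hand-picked weights (illustrative only).
WEIGHTS = {"on_time_payment_rate": 3.0, "credit_utilisation": -2.0, "recent_defaults": -1.5}
BIAS = 0.5

def credit_score(features: dict) -> float:
    """Weighted sum of features, squashed to (0, 1) with a logistic function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

steady = credit_score({"on_time_payment_rate": 0.98, "credit_utilisation": 0.2, "recent_defaults": 0})
risky = credit_score({"on_time_payment_rate": 0.50, "credit_utilisation": 0.9, "recent_defaults": 2})
# `steady` scores well above `risky`: reliable payment history raises the
# score, while high utilisation and recent defaults pull it down.
```

Even this toy version makes the regulatory concern concrete: an automatic threshold on such a score is an automated decision with a significant effect on an individual.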

The need for regulatory clarity

While LLMs could make the process more accurate and give companies a better idea of their risk, it is not that straightforward, says Karishma Brahmbhatt, a data and technology lawyer at Allen & Overy.

“They would first need to address the data protection angle, including the fact that there is a whole regime around making automated decisions that could have a legal or similarly significant effect on individuals – if you automatically deny someone credit, it may have this type of effect on them,” she says.

Brahmbhatt notes the EU’s draft AI Act – on which the European Parliament adopted its negotiating position on 14 June and which will now undergo the trilogue process – as an example of regulators looking to keep up with technology that is moving at the speed of light.

“The draft is peppered with references to responsible behaviour, accountability and transparency, and there are a number of compliance hoops to jump through before you even get to the stage where it can be used for credit decisioning,” she explains.

“We are not at a point where the technology can be let loose for credit decisioning without a human first overseeing and sense-checking the outcomes.”

Banks have also reportedly been active in restricting the use of ChatGPT within their walls. Deutsche Bank disabled access to the app, with Citigroup and Goldman Sachs taking similar measures.

Speaking on the condition of anonymity, one individual at a London-based payments company says that while some employees have used it informally within the workplace, the organisation is keen to wait for regulatory clarity on all use cases before implementing it officially in any outward-facing capacity.

It is not just the EU that is looking at how best to oversee AI’s development, which has spread disquiet among many. Last month, the Center for AI Safety released a statement signed by hundreds of academics and executives calling for its designation as a “societal risk”.

UK prime minister Rishi Sunak is actively looking to update the government’s rules on AI after being warned its March white paper was outdated just two months after publication, and recently touted the UK as the potential global centre for artificial intelligence regulation.

In the background other regulators are actively looking into AI already; the Competition and Markets Authority recently launched a review of AI models, while the Digital Regulators Cooperation Forum is exploring algorithmic auditing.

Meanwhile in the US, a senator recently introduced a bill to create a Federal Digital Platform Commission to oversee AI regulation, while the Federal Trade Commission – currently the only body active in the regulatory space – has announced it would apply pre-existing rules to AI companies.

While the technology continues to move at breakneck speeds, it remains to be seen how well regulators can capture the moment.
