Stripe and Klarna are among the first payments companies to build ChatGPT into their products and services.
What is this article about? ChatGPT’s current and future uses in the payments sector and regulatory implications.
Why is this important? Generative AI has the potential to radically alter parts of the payments sector across customer service, credit decisioning, fraud prevention and beyond.
What’s next? The rapid month-by-month development of AI will force regulators to evaluate how to make it safe and fair.
The introduction of ChatGPT has arguably had a more profound influence on artificial intelligence (AI) than anything else in recent years. It has been met with cautious excitement and vocal scepticism across all sectors.
Among the possibilities for the payments sector are optimising customer journeys by combining authentic chat capabilities with payment functionality, enabling real-time fraud detection and approval, and even conducting credit scoring.
One early and public adopter was San Francisco-based Stripe, which announced in March it would integrate GPT-4 into its products and services.
While the company has previously used other AI technologies to help manage fraud and increase conversion rates, it will use GPT-4 for documentation, allowing software developers to type a question and receive a concise answer rather than searching through developer documentation.
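To make the pattern concrete – as a minimal sketch only, not Stripe’s actual integration – a documentation assistant of this kind could be wired up through OpenAI’s chat completions API. The model name and documentation excerpt below are placeholders:

```python
# Minimal sketch, not Stripe's implementation. Requires the openai package
# and an OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_docs(question: str, docs_excerpt: str) -> str:
    """Answer a developer question concisely, grounded in a docs excerpt."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder
        messages=[
            {"role": "system",
             "content": "Answer concisely, using only the documentation provided."},
            {"role": "user",
             "content": f"Documentation:\n{docs_excerpt}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_docs("How do I create a test payment?", "<relevant docs excerpt>"))
```

Grounding the answer in a supplied excerpt, rather than relying on the model’s memory of the documentation, is one way to reduce the risk of the “inaccurate information” caveat discussed later in this piece.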
Launched in November 2022, ChatGPT is an AI chatbot built by OpenAI. It became a rapid success, counting over 100 million users just two months after launch and prompting Microsoft to make a further investment of over US$10 billion in OpenAI.
The basic free version of ChatGPT provides access to the GPT-3.5 model, while paying subscribers to ChatGPT Plus have limited access to the more-advanced GPT-4 model, which is “more reliable, creative, and able to handle much more nuanced instructions”.
Swedish payments giant Klarna has integrated ChatGPT so customers can request product recommendations, as well as receive links to shop the recommended products through its search and compare tool.
Klarna CEO and co-founder Sebastian Siemiatkowski says the tool is “easy to use” and “genuinely solves a ton of problems”.
Could ChatGPT increase APP fraud?
ChatGPT does not come without risks: it lacks any sort of encryption or access logs, leaving it vulnerable to security threats. ChatGPT will also be the first to tell you it “may produce inaccurate information about people, places, or facts”.
Leveraging the platform for information gathering has also become a cause for concern – in a March study by Czech academics, researchers were able to get ChatGPT to aggregate all the different IT systems a particular bank uses.
The report concluded that this could be used to form the first step of a cyberattack “when the attacker is gathering information about the target to find where and how to attack the most effectively”.
Although ChatGPT has been widely touted for detecting fraud by identifying unusual behaviour, anomalies and patterns in financial transactions, Natalie Lewis, partner at law firm Travers Smith and head of its cross-disciplinary fintech, market infrastructure and payments group, says it cuts both ways.
“Generative AI is a hugely exciting, fast-developing area which could bring many advantages to the payments sector. For example, it could identify fraudulent behaviour quicker than the technology we have today, but we should also ask – could it be used for phishing and other scam attempts?
“It is so sophisticated, there is an argument it might exacerbate the problem by giving fraudsters themselves better technology. We will always need to balance risks against rewards.”
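As a purely illustrative sketch of the detection side – not a production fraud system, and with placeholder model and data – one could ask an LLM to compare a new transaction against a customer’s recent history:

```python
# Illustrative sketch only - real fraud engines use dedicated models and
# engineered features, not ad-hoc chat prompts. Model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_transaction(history: list[dict], candidate: dict) -> str:
    """Ask the model whether `candidate` looks anomalous against `history`."""
    prompt = (
        "Recent transactions for this customer:\n"
        f"{json.dumps(history, indent=2)}\n\n"
        f"New transaction: {json.dumps(candidate)}\n"
        "Reply ANOMALOUS or NORMAL, with a one-line reason."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

history = [
    {"amount": 12.50, "merchant": "coffee shop", "country": "GB"},
    {"amount": 34.00, "merchant": "supermarket", "country": "GB"},
]
print(flag_transaction(history, {"amount": 4800, "merchant": "wire transfer", "country": "RO"}))
```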
Lewis gives the specific example of generative AI being used to impersonate someone’s voice over the phone, potentially worsening the rise in authorised push payment (APP) fraud – a growing problem in the UK.
“APP fraud flies under the radar for a period, as the target is led to believe by the AI that they are paying a legitimate entity such as HMRC, and pushes it themselves with no need for any hacking of the target’s own bank,” she explains.
Large language models (LLMs) such as ChatGPT have also been touted for tackling higher-stakes areas of payments businesses, such as credit decisioning. Such models could potentially identify patterns, risk factors and spending behaviours to make predictions about future creditworthiness.
The need for regulatory clarity
While LLMs could make the process more accurate and give companies a better idea of their risk, it is not that straightforward, says Karishma Brahmbhatt, a data and technology lawyer at Allen & Overy.
“They would first need to address the data protection angle, including the fact that there is a whole regime around making automated decisions that could have a legal or similarly significant effect on individuals – if you automatically deny someone credit, it may have this type of effect on them,” she says.
Brahmbhatt notes the EU’s draft AI Act – on which the European Parliament adopted its negotiating position on 14 June and which will now undergo the trilogue process – as an example of regulators looking to keep up with technology that is moving at the speed of light.
“The draft is peppered with references to responsible behaviour, accountability and transparency, and there are a number of compliance hoops to jump through before you even get to the stage where it can be used for credit decisioning,” she explains.
“We are not at a point where the technology can be let loose for credit decisioning without a human first overseeing and sense-checking the outcomes.”
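A minimal sketch of that human-in-the-loop pattern might look like the following, where the model’s score is only ever advisory and any adverse outcome is routed to a reviewer; all names and thresholds here are hypothetical:

```python
# Hypothetical sketch of a human-in-the-loop credit decision flow - model
# output is advisory, and no adverse decision is fully automated.
from dataclasses import dataclass, field

@dataclass
class ModelAssessment:
    applicant_id: str
    score: float               # model-estimated default risk, 0.0 to 1.0
    risk_factors: list[str] = field(default_factory=list)

def route_application(assessment: ModelAssessment) -> str:
    """Route a model assessment; adverse outcomes always go to a human."""
    if assessment.score < 0.2:  # hypothetical threshold
        return "approve - low risk (human spot-checks a sample of approvals)"
    # Anything adverse or borderline is queued for human review, so no
    # decision with a legal or similarly significant effect is automated.
    return "human review: " + ", ".join(assessment.risk_factors)

print(route_application(
    ModelAssessment("app-123", score=0.67,
                    risk_factors=["thin credit file", "volatile income"])))
```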
Banks have also reportedly been active in restricting the use of ChatGPT within their walls. Deutsche Bank disabled access to the app, with Citigroup and Goldman Sachs taking similar measures.
Speaking on condition of anonymity, one individual at a London-based payments company says that while some employees have used it informally in the workplace, the organisation was keen to wait for regulatory clarity on all use cases before implementing it officially in any outward-facing capacity.
It is not just the EU that is looking at how best to oversee AI, whose rapid development has spread disquiet among many. Last month, the Center for AI Safety released a statement signed by hundreds of academics and executives calling for its designation as a “societal risk”.
UK prime minister Rishi Sunak is actively looking to update the government’s rules on AI after being warned its March white paper was outdated just two months after publication, and recently touted the UK as the potential global centre for artificial intelligence regulation.
In the background, other regulators are already looking into AI: the Competition and Markets Authority recently launched a review of AI models, while the Digital Regulation Cooperation Forum is exploring algorithmic auditing.
Meanwhile in the US, a senator recently introduced a bill to create a Federal Digital Platform Commission to oversee AI regulation, while the Federal Trade Commission – currently the only body active in the regulatory space – has announced it will apply pre-existing rules to AI companies.
While the technology continues to move at breakneck speed, it remains to be seen how well regulators can capture the moment.