
AI-driven fraud is evolving fast—banks must adopt adaptive AI models to detect and prevent scams in real-time.
In the first half of 2024, criminals stole £571 million in the UK through unauthorised and authorised fraud—a stark reminder that existing fraud prevention methods are failing to keep pace. The scale and sophistication of fraud in the UK and around the world are rapidly rising, but what isn’t rapidly rising is banks’ ability to stop criminals in their tracks. Despite the £710 million in unauthorised transactions thwarted during this same period of 2024, the amount of fraud that was able to leave victims’ bank accounts is shocking.
Regulations like PSD2 and strong customer authentication forced fraudsters to adapt their tactics. This led to a significant shift towards social engineering, exploiting customers’ trust in their devices, directly resulting in the surge of authorised push payment (APP) fraud. The ease with which criminals can now target victims on their trusted devices, combined with the rise of sophisticated AI tools, has made these attacks significantly more difficult to detect.
Whether the scammer befriends the individual through a lengthy romance scam, perhaps asking them to send money to support a sick family member, or masquerades under a fake profile on a social media app selling tickets, criminals are looking to impersonate ‘someone trusted’ to secure a direct bank transfer. Artificial intelligence (AI) has been a game changer for criminals here, turning easy-to-spot scams into deep, complex, layered social engineering attacks.
AI is in the pockets of criminals
Criminals are using AI to create incredibly realistic and convincing fake profiles online. The technology is leveraged to create photorealistic identities, communicate in any language, and develop personalised messages that can be used for manipulation. This sophisticated use of AI renders traditional methods of detecting fake profiles largely ineffective, making relying solely on end users to identify and prevent these attacks unrealistic.
Deepfake technology can make calls appear authentic, presenting criminals as the individuals they are impersonating. Five years ago, if you were speaking to a friend who had met someone online but never spoken to them on the phone or met them in person, concerns would be raised. Today, scammers talk to their victims on the phone and via video calls, nurturing deep, intimate relationships over months or even years.
The AI technology they use can enable them to bypass identity verification processes. They may employ “injection stream attacks,” which involve inserting malicious data or code to deceive systems into accepting fraudulent inputs. Or they might use straightforward methods, like a phone app, to alter their appearance and resemble someone else.
The technology also gives criminals more time to scale operations and scam hundreds, even thousands, of individuals simultaneously. These AI-generated interactions make it even easier to build emotional connections online, often leading victims to trust and eventually send money to scammers.
Banks are now mandated to reimburse victims up to £85,000, and this creates a win–win for fraudsters. Not only is committing these crimes relatively easy, but criminals also know victims are likely to recoup their losses, leaving financial institutions to bear the brunt of the financial burden.
Adapting new models to tackle fraud

To date, banks have typically used static fraud prevention models, which are trained once and used for an extended period without being updated. This approach is limited: static models cannot keep up with ever-changing fraud patterns, a weakness criminals know and rely on.
Banks need to evolve and find new ways to fight fraudsters. With Daily Adaptive AI-driven Models, financial institutions can monitor spending behaviours and identify suspicious activity, keeping one step ahead of the scammers. By analysing hundreds of thousands of data points—from the location of the receiving bank account to the erratic nature of the payment made in an app—daily adaptive models can identify and prevent fraudulent transactions in less than a second.
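The core idea of a daily adaptive model, relearning what "normal" looks like from each day's transactions rather than being trained once, can be illustrated with a toy sketch. The class name, the per-account statistics, and the z-score threshold below are all illustrative assumptions for the sake of example, not the internals of any real banking system:

```python
import math
from collections import defaultdict

class DailyAdaptiveScorer:
    """Toy illustration of an adaptive fraud scorer.

    Per-account running statistics are refreshed with each observed
    transaction, so the model's notion of 'normal' drifts with real
    behaviour instead of staying frozen like a static model.
    """

    def __init__(self, z_threshold=3.0):
        self.z_threshold = z_threshold
        # Per-account running [count, mean, M2] (Welford's online algorithm).
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])

    def update(self, account, amount):
        """Fold one observed transaction into the account's spending profile."""
        s = self.stats[account]
        s[0] += 1
        delta = amount - s[1]
        s[1] += delta / s[0]
        s[2] += delta * (amount - s[1])

    def score(self, account, amount):
        """Return a z-score: how far this payment sits from the account's norm."""
        n, mean, m2 = self.stats[account]
        if n < 2:
            return 0.0  # not enough history to judge yet
        std = math.sqrt(m2 / (n - 1))
        if std == 0:
            return 0.0
        return abs(amount - mean) / std

    def is_suspicious(self, account, amount):
        return self.score(account, amount) > self.z_threshold

scorer = DailyAdaptiveScorer()
for amount in [20, 25, 22, 30, 18, 24, 27, 21]:  # a week of routine spending
    scorer.update("acct-1", amount)

print(scorer.is_suspicious("acct-1", 25))    # typical payment -> False
print(scorer.is_suspicious("acct-1", 5000))  # out-of-pattern transfer -> True
```

Production systems weigh hundreds of thousands of signals (device, location, payee history, in-app behaviour) rather than a single amount, but the design choice is the same: the profile is updated continuously, so yesterday's fraud pattern informs today's decisions.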
The rapid evolution of fraud tactics demands a paradigm shift in fraud prevention. Static models are no longer sufficient. To combat AI-driven fraud effectively, financial institutions must adopt AI-powered solutions that continuously learn and adapt, keeping them one step ahead of the ever-changing landscape of financial crime.