
Large volumes of data and machine learning are key if financial institutions want to tackle increasingly sophisticated payment scams.
What is this article about? Financial institutions can leverage AI to detect and fight payment fraud more effectively, even as the same technology is increasingly being used by scammers.
Why is this important? The recent rise in payment scams, particularly those powered by AI, means firms must turn to machine learning to better protect customers.
What’s next? Large volumes of data are required to train and tailor AI to fight the varying types of payment fraud effectively, and firms can review how to deploy the technology in their own organisations.
With the rapid rise in payment scams, the more traditional methods of prevention that are labour intensive and onerous have been unable to handle the sheer speed and scale of payment fraud attacks.
Savvy fraudsters are more easily able to circumvent older, rule-based prevention tools that lack real-time adaptability. According to data by Sift, fintech companies that have learned to leverage artificial intelligence (AI) and machine learning (ML) have found significantly reduced incidences of fraud.
Sift provided the example of crypto exchange company Uphold, which managed to slash its credit card/ACH fraud rate to 0.01% with the use of AI. Likewise, digital wallet organisation Curve reduced its chargeback rate by 80% by leveraging ML fraud prevention.
Significant advancements in AI, including predictive models that can forecast potential risks, can help payment firms streamline their know-your-customer (KYC) and anti-money laundering (AML) safeguards.
In particular, machine learning models can be tailored to flag common behaviours or activities that signal a specific kind of fraud, such as unauthorised transactions, phishing scams and identity theft.
“Machine learning models are only as good as the data that goes into training them, and the more quality data the technology has access to, the ‘smarter’ it gets, which is essential in keeping up with today’s malicious actors,” says Jane Lee, trust and safety architect at Sift.
Lee explains that, for unauthorised transactions, ML models can be trained to recognise suspicious activity for an online merchant, flagging potentially fraudulent transactions for further investigation before the fraud occurs.
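As a rough illustration of the idea, such a model builds a behavioural baseline from a customer's past activity and flags transactions that deviate sharply from it. The sketch below uses a simple statistical outlier test as a stand-in for a trained model; the function name and threshold are hypothetical, not any vendor's actual implementation:

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    customer's past spending. A real ML model would learn a far
    richer behavioural baseline; a z-score stands in for it here."""
    if len(history) < 2:
        return True  # too little data: route to manual review
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > threshold
```

A £5,000 charge on an account that usually spends around £30 would be flagged for investigation, while a typical purchase passes through untouched.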
In the case of phishing attacks, the most effective way to implement ML is to train the models to identify and flag certain keywords, especially when certain trigger words are grouped together, creating a “text cluster,” which is often an indicator of a scam.
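The text-cluster idea can be sketched in a few lines. The trigger-word list below is purely illustrative; a production model would learn which words, and which combinations, signal a scam from labelled messages:

```python
# Hypothetical trigger words for illustration only; a real model
# would learn these from labelled scam messages.
TRIGGER_WORDS = {"verify", "suspended", "urgent", "password", "click"}

def looks_like_phishing(message, cluster_size=3):
    """Flag a message when several trigger words appear together,
    forming the kind of 'text cluster' that often indicates a scam."""
    words = {w.strip(".,:;!?") for w in message.lower().split()}
    return len(words & TRIGGER_WORDS) >= cluster_size
```

A single trigger word in isolation is common in legitimate mail, which is why the check only fires when a cluster of them co-occurs.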
“For identity theft, ML can also be used as a preventative tool by learning to recognise activities that suggest an inauthentic user is posing as someone they are not,” says Lee.
Similarly, AI can be tailored to help fight fake accounts, but as with any machine learning tool a significant amount of data is required.
“In the case of fake accounts, you would need the highest amount of data points you can look at, for example you can examine somebody’s digital footprint such as their email address, phone number, IP address, and score them to assess whether the person is legitimate,” says Tamas Kadar, CEO at Seon.
“We provide a score based on someone’s digital footprint and on how it compares to previous attempts,” adds Kadar. “When there are confirmed cases of fake accounts then you know, these transactions will have a number of data points which will rank up closely to another attempt of new account openings.”
While failing to detect scams can be costly, Roy Waligora, partner and UK head of investigations at KPMG, says that an excess of false positives can be almost as harmful.
“We’re using AI to identify red flags and patterns. Where AI comes into it is around handling the volume and essentially training the machine to detect red flags, because one of the overarching themes in analytics is avoiding false positives.
“Having too many false positives is almost as expensive as having the fraud, because it just takes you down different parts in terms of fixing and needing to look at things that don’t need being looked at,” adds Waligora.
He explains that, to minimise false positives, AI can form neural networks that are effectively layers of queries, each building on top of the one below to better review whether a transaction meets red-flag criteria.
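The layered-queries idea can be made concrete with a minimal two-layer network. The weights below are hand-set for illustration (a trained network would learn them from data), and the input signals are hypothetical; the point is only that the second layer combines the first layer's views, which helps suppress false positives a single rule would raise:

```python
import math

def layer(inputs, weights, bias):
    """One dense unit with a sigmoid activation: its output
    builds on every signal from the layer below."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

def red_flag_probability(amount_zscore, new_device, foreign_ip):
    """Illustrative two-layer 'query stack' over three risk signals.
    Weights are hand-set for the example, not learned."""
    # First layer: two different views over the raw signals.
    h1 = layer([amount_zscore, new_device, foreign_ip], [1.5, 1.0, 1.0], -2.0)
    h2 = layer([amount_zscore, new_device, foreign_ip], [0.5, 2.0, 2.0], -3.0)
    # Second layer: combine both views into one red-flag score,
    # so no single noisy signal can trip the alarm on its own.
    return layer([h1, h2], [3.0, 3.0], -3.0)
```

With these weights, a transaction that is anomalous on several signals at once scores high, while one that trips only a single weak signal stays below the review threshold.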
AI has helped financial institutions prevent and fight payment fraud; however, there are still gaps in the fight against scams that it has yet to address.
“Even sophisticated AI-powered fraud prevention tools have struggled to effectively detect and prevent the increase in social engineering scams that impact the fintech space,” says Lee at Sift.
“This is, in part, due to the fact that these types of scams coax legitimate users into unwittingly authorising fund transfers to scammers, evading the traditional signals a fraud prevention system would detect.”
Therefore, it is crucial to also educate customers on fraud prevention. Payment firms can carry out education initiatives that offer guidance and information on the latest tactics, helping people avoid falling victim to scams.