Account takeover is probably most often thought of as a bad actor using someone’s genuine yet stolen credentials to access their online account and steal their funds.
However, developments in technology and the increasing sophistication of the cybercrime industry and the organized crime gangs operating within it give rise to new avenues and methods of attack.
These approaches to impersonating or manipulating legitimate users can not only outwit legacy authentication techniques; some of them can even outsmart a victim’s own friends and family. Here are four complex account takeover techniques you need to be aware of right now.
Deepfakes
If there were ever a sign that we’re living in an age where technologies previously confined to the realms of science fiction are becoming reality, it is deepfakes. But in a world where Spotify can analyze a user’s emotions to offer them calming music if they feel stressed, or a robot can cook your burger for you, is it really so strange that a machine can now replicate someone’s voice or even reanimate a photo of a deceased relative? These are precisely the sorts of things companies like MyHeritage and VocaliD are doing.
Deepfakes are videos or images created using AI-powered software that portray people doing and saying things they never actually did or said. MyHeritage uses this technology to let people animate old family photos and “experience your family history like never before!”.
VocaliD’s artificial intelligence-powered technology can clone voices to an alarming degree of accuracy. After listening to someone speak for as little as 10 minutes, the software can pick up not just their accent but also their timbre, pitch, pace, flow of speech, and where and how they breathe while talking.
VocaliD was set up as an extension of its founder’s clinical work, creating artificial voices for people who have lost their voice due to surgery or other patients who are otherwise unable to speak without assistance. When technology is used for such causes, there can be no arguments about its benefits. However, the potential dual use of such technology is a significant cause for concern.
If someone looks like you, sounds like you, and has access to your personal information, what’s to stop them from coming after your account and succeeding?
Deepfakes can be so convincing that Russians used deepfake filters on video calls to trick senior European parliamentarians into thinking they were speaking to someone else. It is not a giant leap, then, for bad actors to exploit this technology for their own gain.
Deepfakes can be used to back up a synthetic identity – a type of false ID often used by bad actors that blends false and genuine information to increase its chance of bypassing financial services’ security – or to compromise the call center, for example by persuading call center agents that they are someone they’re not.
SIM swap scams
Have you ever lost your phone or had it stolen? Once you get hold of a new phone, the first thing you’ll want to do is move your old number over to it, and the process for doing so is pretty simple. In a particularly pernicious type of scam, bad actors can abuse this very process to commit SIM swap fraud and access almost anyone’s account.
They can use either confidence tricks or stolen information to deceive mobile providers into switching someone’s genuine number onto another SIM card in their possession.
They can then put this SIM card into their own phone to intercept bank verification codes. Once they’ve gained access, bad actors can reap the rewards before the account holder even knows anything is wrong, and can even reset all the other account information and lock the genuine owner out of their own account.
Bad actors only need basic information to perpetrate this type of attack, including someone’s name, date of birth, and address. Data breaches and lost information, phishing scams, and information sold on the dark web are all ways in which bad actors can uncover this information. Still, bad actors can often perform simple online searches to gather what they need to answer the security questions a call center agent asks before registering the new SIM.
As above, bad actors could even go as far as cloning the legitimate user’s voice and using that to strengthen the illusion that they are the genuine account owner.
This type of fraud has mushroomed in recent years. According to Action Fraud, SIM swap fraud has increased substantially since 2015 and has resulted in losses of more than £10m to UK consumers alone.
SMS OTP fraud
At first glance, sending a one-time passcode (OTP) to a user to confirm they are who they say they are seems like a good way of increasing authentication security. However, knowing how easy it is for bad actors to pull off a SIM swap scam, it’s suddenly clear that it might not add much security after all. It wouldn’t take much work for a bad actor to switch someone’s number onto their own device and intercept the OTP that way.
Adding to this threat is malware capable of intercepting OTPs and forwarding them to attackers. In the same way a legitimate app on your phone can read an incoming text message and auto-fill the OTP into the requesting app, this malware reads the incoming OTP and silently sends a copy to the bad actors.
An even more insidious threat comes from bad actors capable of compromising a mobile provider’s servers and intercepting all text-based OTPs at the source. So instead of creating a more secure authentication process, adding a mobile number for two-factor authentication can create a back door for bad actors to exploit.
In fact, as long ago as 2015, SMS-based authentication was listed in the Strong Authentication Requirements for internet payments, as issued by the European Banking Authority (EBA), as a method “to be avoided”. While bad actors continue to leverage advanced technologies to commit their crimes, the security world knows what some institutions are struggling to admit: it’s time for organizations using SMS OTPs to move on.
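For organizations ready to move on, the most common replacement is an app-based, time-based OTP (TOTP, standardized in RFC 6238), where the code is derived on the device from a shared secret and never crosses the mobile network at all. As a minimal sketch of the standard algorithm (not any particular vendor’s implementation):

```python
# Minimal TOTP sketch per RFC 6238 / RFC 4226, using only the standard library.
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Derive a time-based one-time passcode from a shared secret."""
    counter = int((now if now is not None else time.time()) // timestep)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HOTP inner HMAC
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is computed locally from the secret and the current time, a SIM swap or a compromised SMS gateway gives an attacker nothing to intercept; the remaining risk shifts to protecting the shared secret itself.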
Session hijacking via RATs
Remote Access Trojans (RATs) are authentic-looking applications containing malware that can be accidentally downloaded onto a device. Once installed, they give attackers administrative control over the targeted device. RATs sneakily piggyback on legitimate-looking files; for example, the Vizom malware spreads through spam-based phishing campaigns disguised as popular video conferencing software, a category of tool that became crucial during the pandemic.
Bad actors can also use RATs to perform remote overlay attacks, targeting online banking sessions after users have legitimately logged in to their accounts. This form of malware is often known as a RAT-in-the-Browser (RitB), a third-generation Trojan attack that works alongside a RAT to hijack a session: the installed RAT alerts the cybercriminal the moment the customer logs on.
The attacker then overlays their own windows on top of the target app. When victims input information such as login credentials or bank card numbers, they are not dealing with their banking app at all; they are handing their private information straight to the bad actors, giving them the means to take over the account and steal its funds.
Originality is key
Financial services need to leverage authentication technologies based on input that genuinely cannot be replicated – especially with the rise of deep fakes and other highly advanced account takeover methods. As bad actors use increasingly sophisticated technology that can learn and adapt to bypass security systems, financial institutions need to fight fire with fire.
By implementing artificial intelligence and deep learning, they can get to know each and every customer through their online behavior and answer the question “are you really you?”. In other words, these companies need to know their customers by analyzing their behavioral biometrics.
A fraud prevention solution founded in behavioral biometrics can analyze thousands of parameters, including how a user types or moves the mouse, and combine this information with device and network assessments to create a BionicID, which works like a digital fingerprint.
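Production systems model thousands of such signals with deep learning; as a purely illustrative sketch of one of them (keystroke rhythm), with a hypothetical profile and tolerance rather than any real product’s logic:

```python
# Illustrative behavioral-biometrics sketch: compare a session's typing rhythm
# to a stored profile. The profile mean and tolerance are hypothetical values,
# not taken from any real fraud-prevention product.
from statistics import mean


def inter_key_intervals(press_times_ms):
    """Gaps (in ms) between consecutive key presses in one session."""
    return [b - a for a, b in zip(press_times_ms, press_times_ms[1:])]


def matches_profile(press_times_ms, profile_mean_ms, tolerance_ms=40.0):
    """Naive check: does this session's average key gap fit the user's profile?"""
    sample_mean = mean(inter_key_intervals(press_times_ms))
    return abs(sample_mean - profile_mean_ms) <= tolerance_ms


# Suppose the stored profile says this user averages ~120 ms between keys.
genuine_session = [0, 115, 240, 355, 480]   # human-paced typing
scripted_session = [0, 40, 85, 130, 170]    # suspiciously fast, machine-like input
```

A real deployment would combine hundreds of such features (dwell time, mouse curvature, device posture) and learn per-user distributions rather than a single mean, but the principle is the same: the attacker may hold the right credentials, yet behave like the wrong person.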
No two BionicIDs are the same and, what’s more, they are impossible to replicate.
By profiling users at a granular level and using deep learning mechanisms to ensure the solution gets smarter and more accurate with each login, financial institutions can protect their customers – even from people who seem to look and sound exactly like them.