Behavioural analytics: leveraging the weakest link

By Luke Reynolds, Chief Product Officer, Featurespace.


The world of fraud is evolving quickly, as are the ways in which fraudsters access the compromised personal and financial information of unsuspecting customers. However, as quickly as the world of fraud changes, one truth remains: humans are the weakest link in stopping fraud.

Abusing our good nature

One type of fraud we’ve seen grow in frequency over the last couple of years is ‘social engineering’, where criminals take advantage of people’s trusting nature, particularly the elderly and vulnerable. Social engineering can take a number of different forms and involves a fraudster impersonating a bank or organisation to persuade customers to divulge financial information and ultimately transfer money directly into a criminal’s account. It has proven to be a very effective form of attack, conning innocent victims out of their cash and savings.

Criminals are becoming increasingly sophisticated and dynamic in the ways in which they target customers and exploit human weakness – social engineering attacks alone can take a number of different forms, including:

  • Phishing: sending an email in the hope that the intended victim will click on a malicious link – phishing is the most widely known form of social engineering attack
  • Vishing: a fraudster calls a customer claiming to be from a bank or trusted organisation, often quoting enough personal information to make the call appear genuine, and claims to be checking recent account activity or to be ‘warning’ the victim of a (false) imminent threat or data breach
  • Smishing: scam texts are sent to a target mobile phone, masquerading as a trusted organisation such as a bank or a customer’s insurance provider

Social engineering attacks can be difficult to spot and stop. On the surface, the account activity largely appears genuine, and this is where the challenge lies for banks and customers. So, how can banks, customers and the latest technology solutions prevent human weakness from being exploited for criminal gain?

Crack in consumer confidence

A recent report by Financial Fraud Action (FFA) UK found that many consumers believe they are too clever to be scammed: 80% of people said they could confidently identify a fraudulent approach. The reality is that only 9% of the 63,000 people who completed the ‘Too Smart to be Scammed?’ quiz received a perfect score. This combination of self-confidence and an inability to spot a scam creates the perfect environment for fraudsters to thrive.

By tapping into what the FFA UK refers to as ‘patterns of trust’ – techniques used by fraudsters to overcome a customer’s scepticism – criminals convince customers to share financial and personal information. Simple tactics such as using apologetic and understanding language, creating a false sense of urgency or simply sounding knowledgeable and nice are often enough to convert a sceptic.

How do we fight this? The key to preventing fraud on the customer side is education – the more consumers understand about financial fraud and the underhand techniques used by fraudsters to gain access to sensitive information, the better prepared they will be to distinguish a genuine call from a fraudulent one.

However, an education programme of that scale takes time – with that in mind, what can be done to mitigate risk and prevent more people from falling victim to opportunistic fraudsters picking up the phone or sending a text?

Understanding human behaviour

While fraudsters use human nature as a means to exploit consumer weaknesses, banks are starting to use human behavioural characteristics against the criminals by investing in smart solutions in the battle against financial fraud. One such fraud prevention solution uses a unique real-time approach called Adaptive Behavioural Analytics to understand behavioural patterns in the same way that we naturally observe behaviour as human beings.

Humans intuitively profile each other over time. We assess risks in real time and continually adapt our sense of the normal patterns of behaviour in people around us, allowing us to recognise anomalous behaviour in others. It’s how we can sense if someone is acting strangely, dangerously or out of character, even if we know very little else about that person. Now machines are capable of doing the same.

An assessment of UK credit card fraud in 2017 from UK Finance showed that around £3.50 of every £10 stolen from credit cards in the UK is missed by existing fraud prevention technologies. The incumbent rules-based approach used by many organisations is becoming increasingly ineffective as a sole fraud prevention system, because it relies heavily on identifying generalised ‘bad behaviour’. Fraudsters learn the rules and how to get around them, and the fraud goes unchecked.
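To see why static rules are so easy to sidestep, consider a deliberately simplified, hypothetical sketch of a rules-based check; the threshold and country list below are invented for illustration and are not any vendor’s actual rules. Because the same fixed limits apply to every customer, a fraudster who learns them can simply keep each transaction just underneath them.

```python
# Hypothetical, simplified rules-based check (illustrative only).
# The limits are global: every customer is judged against the same rules,
# so a fraudster who learns them can keep each transaction just below them.

SUSPICIOUS_AMOUNT = 5_000            # fixed threshold in pounds (invented)
HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder country codes (invented)

def rules_based_check(amount: float, country: str) -> bool:
    """Return True if the transaction breaks a generalised 'bad behaviour' rule."""
    return amount > SUSPICIOUS_AMOUNT or country in HIGH_RISK_COUNTRIES
```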

However, sophisticated fraud systems are changing the game by combining rules-based approaches with advanced machine learning techniques. A system using this approach builds an individual statistical profile of each customer from a variety of data sources, to model and understand individual behaviour patterns. These data sources can include thousands of real-time data points, including transaction types, volumes and values, online account monitoring, and payment transfer activity.
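As a rough sketch of the idea, and not a description of Featurespace’s actual implementation, an individual statistical profile could keep running statistics for each customer and be updated as new events arrive. The class and field names below are invented for illustration.

```python
from collections import defaultdict

class CustomerProfile:
    """Toy running profile of one customer's 'normal' behaviour (illustrative only).

    A production system would model far more real-time data points
    (transaction types, volumes and values, online activity, payment transfers).
    """

    def __init__(self) -> None:
        self.count = 0
        self.mean_amount = 0.0
        self._m2 = 0.0                        # running sum of squares (Welford's method)
        self.known_payees = set()             # accounts this customer has paid before
        self.channel_counts = defaultdict(int)

    def update(self, amount: float, payee: str, channel: str) -> None:
        """Fold one genuine event into the profile."""
        self.count += 1
        delta = amount - self.mean_amount
        self.mean_amount += delta / self.count
        self._m2 += delta * (amount - self.mean_amount)
        self.known_payees.add(payee)
        self.channel_counts[channel] += 1

    @property
    def std_amount(self) -> float:
        """Standard deviation of payment amounts seen so far (0 until there is enough data)."""
        return (self._m2 / (self.count - 1)) ** 0.5 if self.count > 1 else 0.0
```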

By understanding each individual consumer’s ‘good’ behavioural patterns, an adaptive behavioural analytics system evaluates and predicts the risk of what happens next. From that, it can tell when the consumer in question is acting out of character – for example, transferring a large sum of money into a previously unknown account, or something more subtle, such as taking a path through their online banking that they never typically take, which could indicate they are acting under instruction from a fraudster impersonating the bank.
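Continuing the hypothetical sketch above, evaluating the risk of ‘what happens next’ might amount to scoring a new payment against the customer’s own profile rather than against a global rule. The scoring logic is invented for illustration:

```python
def risk_score(profile: CustomerProfile, amount: float, payee: str) -> float:
    """Toy risk score: how far does this payment sit from this customer's norm?"""
    score = 0.0
    if profile.std_amount > 0:
        # Standard deviations above the customer's usual payment amount.
        z = (amount - profile.mean_amount) / profile.std_amount
        score += max(z, 0.0)
    if payee not in profile.known_payees:
        score += 1.0                     # first-ever payment to this account
    return score
```

The same £2,000 transfer would then score very differently for a customer who routinely moves similar sums and for one who has never sent more than £50 to anyone.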

Capitalising on machine learning

One of the additional benefits of using machine learning with adaptive behavioural analytics is that it ‘self-learns’ in real time. This means the fraud system is constantly analysing and adapting to new events and behaviour, which is how it catches new and unknown types of fraud. You may get a pay rise, move house or win the lottery, and your spending behaviour would change as a result. The machine uses this information to make a more informed decision.
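To show what ‘self-learning’ can mean in practice, the hypothetical sketch can be closed by folding each event judged genuine back into the profile after it has been scored, so the model’s sense of ‘normal’ drifts with the customer. The threshold and helper below are, again, invented for illustration.

```python
def process_event(profile: CustomerProfile, amount: float, payee: str,
                  channel: str, alert_threshold: float = 3.0) -> bool:
    """Score an event against the profile, then learn from it if it looks genuine.

    Because every genuine event updates the profile, a pay rise or a house move
    gradually shifts what counts as 'normal' for this customer.
    """
    suspicious = risk_score(profile, amount, payee) > alert_threshold
    if not suspicious:
        profile.update(amount, payee, channel)   # the self-learning step
    return suspicious

# Example: two routine card payments build the profile, then a large transfer
# to an unknown account is flagged as out of character.
profile = CustomerProfile()
for amount, payee, channel in [(45.0, "grocer", "card"), (60.0, "grocer", "card")]:
    process_event(profile, amount, payee, channel)
print(process_event(profile, 4_800.0, "unknown-account", "transfer"))  # True
```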

Focusing on the good, as opposed to categorising the bad, works. According to Capital One UK, fraud detection increases by 35% and cards are wrongly blocked 47% less often when using adaptive behavioural analytics.

We, as humans, will never stop being the most vulnerable point in the fraud chain, and there will always be criminals ready to try to take advantage of our good nature. That’s why we need to turn to machines to learn what ‘good’ looks like, to spot the bad transactions as they happen and cut out the weakest link in the fraud prevention chain – us.
