The AI that learns our habits and knows when people cheat
Detecting online crime typically means knowing what to look for. Artificial intelligence that spots hidden patterns can do it better – and even step in when behaviour gets out of hand.

For people who play the video game Counter Strike online, it's hard enough watching your back at the best of times. In the fast-paced first-person shooter, there are always players with quicker reflexes or a sharper eye.

But at the height of its popularity a few years ago, people started to come up against other players with skills that were too good to be true. Games like Counter Strike and Half Life – another shooter that was very popular online – had a problem with players who used software cheats that steadied their aim or let them see through walls.

So in 2006, when the stakes were raised by an online competition with cash prizes, an unusual pair of referees were called in. David Excell and Bill Fitzgerald were mathematicians who had just spun out an artificial intelligence company called Featurespace from their lab at the University of Cambridge. Their software was very good at one thing: spotting weird behaviour.

By looking out for unusual behaviour, Featurespace's AI was able to catch players who used bots to cheat in online games like Counter Strike (Credit: Getty Images)

Featurespace had developed a machine-learning system that detects unexpected changes in real-time data. From those anomalies, it takes an educated guess at the probable cause, which often turns out to be people doing things they shouldn't.

Spotting players who were cheating in video games was the AI’s first test. "Our technology let the games company be sure that people were playing against people and not robots," says Excell. But Featurespace’s AI is now casting its watchful eye over many more of our activities. It has become a silent sentinel at the heart of the online banking, e-commerce and insurance industries. It is changing the way fraud and malware are detected online – and even helping compulsive gamblers.

Automatically detecting anomalies in real-time data is not new: it is how spam filters weed out junk emails and antivirus software catches malicious code. But detecting such things typically requires the system to know what it is looking for. Antivirus software, for example, needs to be kept up to date with the fingerprints, or signatures, of malware known to be on the loose.

Our tech checked that people were playing against people and not robots – David Excell, Featurespace

But that approach does not help you spot previously unseen types of activity. So Excell and Fitzgerald set out to build a system that could detect any behaviour that seemed to break from the norm, and learn as it went.
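In outline, such a signature-free detector can be sketched as a model that keeps running statistics of the behaviour it has seen and flags sharp departures from that baseline. This is a minimal illustration only, not Featurespace's actual algorithm; the Welford-style running update and the three-standard-deviation threshold are assumptions.

```python
# Minimal sketch of signature-free anomaly detection: rather than matching
# known fingerprints, learn a running baseline and flag sharp departures.

class OnlineAnomalyDetector:
    def __init__(self, threshold=3.0):
        self.n = 0          # events observed so far
        self.mean = 0.0     # running mean of the monitored metric
        self.m2 = 0.0       # running sum of squared deviations
        self.threshold = threshold  # standard deviations that count as anomalous

    def observe(self, value):
        """Return True if `value` breaks from the learned norm, then learn from it."""
        anomalous = False
        if self.n >= 10:  # wait for a small baseline before judging
            std = (self.m2 / self.n) ** 0.5
            if std > 0 and abs(value - self.mean) / std > self.threshold:
                anomalous = True
        # Welford's online update: the model keeps learning as it goes
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return anomalous

detector = OnlineAnomalyDetector()
readings = [10, 11, 9, 10, 12, 10, 11, 9, 10, 11, 10, 45]  # final value is a sharp break
flags = [detector.observe(r) for r in readings]
print(flags[-1])  # the spike is flagged
```

Because the baseline is updated with every event, the detector adapts as normal behaviour drifts, which is the property the article attributes to a self-learning system.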

Their AI – called the adaptive real-time individual change identifier, or Aric – is based on the work of 18th Century cleric and mathematician Thomas Bayes. Bayes developed a way to think about probability in which the likelihood of something happening is calculated based on what has been observed to happen before. Bayesian probability was used by Alan Turing and the codebreakers at Bletchley Park in World War Two to work out where Nazi U-boats might be found, based on their past activity.
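The core of Bayesian updating can be shown in a few lines: a prior belief is revised each time new evidence arrives, with the likelihood of that evidence under each hypothesis doing the work. The scenario and probabilities below are invented for illustration and are not Featurespace's model.

```python
# Bayes' rule in miniature: update the probability of a hypothesis
# (e.g. "this player is cheating") as new evidence arrives.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) = P(E | H) * P(H) / P(E)."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

p_cheat = 0.01       # prior: assume 1% of players cheat (invented figure)
p_hit_cheat = 0.9    # a cheater lands a difficult shot 90% of the time
p_hit_honest = 0.1   # an honest player, 10% of the time

# Each observed hit shifts the belief further towards "cheating"
for _ in range(3):
    p_cheat = bayes_update(p_cheat, p_hit_cheat, p_hit_honest)
print(round(p_cheat, 3))
```

Three improbable shots in a row move the belief from a 1% prior to a strong suspicion, which is how accumulating evidence, rather than any single event, drives the verdict.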

And it can be used to detect when a Counter Strike player is probably cheating. By monitoring frame-by-frame data from the game, Aric flagged unusual hikes in the accuracy of some players' shooting. It was clear they had downloaded sharp-shooting bots to play the game for them, says Excell. Aric also noticed that some players were unusually quick to attack their opponents, suggesting that they were using a known cheat that made walls in the game transparent.

Slot machines that monitor people's behaviour can detect when they are acting obsessively and may be in need of help (Credit: Getty Images)


Next up, Featurespace used its tech to reduce the number of drones that the British military were losing in the air. By detecting anomalies in flight-control data, Aric found previously unknown errors that were causing the drones to crash.

Fitzgerald died in 2014, but the tech he helped develop is now changing how fraud is detected. Featurespace’s first big commercial application was with the UK-based online gambling firm Betfair, where Aric is used to detect reckless spending on bets – a sign that somebody might be betting with someone else's money. If Aric raises an alert, Betfair can look into the situation immediately – a transaction can be stopped in mid-flow if necessary.

Aric has also started looking out for gamblers themselves. Streaks of high-stakes bets can be a sign that people are behaving compulsively. As well as online betting, the system can monitor activity on slot machines for warning signs. "If you can predict which players look as if they may be becoming addicted you can actually try to intervene before the damage occurs," says Featurespace’s CEO Martina King. Aric is now being used by a number of betting firms.

If you can predict which players might become addicted you can try to intervene – Martina King, Featurespace

But it is banks and payment system providers that are Aric's biggest users. By watching every stage of a transaction as it happens, from every click on every dropdown menu to the way a person usually navigates through a website, the system makes possible some unexpectedly powerful crime-fighting tools.

For instance, the system can tell if somebody is using stolen banking details to log in. A red flag will be raised if the way the person uses the website does not match the patterns associated with the owner of the stolen information.

Similarly, if someone is unusually hesitant in the way they use a website it could be a sign that they are entering their bank details under stress or duress of some kind. This might happen if they are the victim of a so-called “vishing” attack, where fraudsters call pretending to be a bank employee and ask people to transfer funds out of their account on some fake pretext. Again, that hesitant interaction would raise an alert with the bank, which could investigate.
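One way such hesitation could be measured, as a rough sketch only, is to compare how long a session lingers at each step of a payment form against the account owner's usual timings. The form steps, the stored profile and the factor-of-two threshold here are all hypothetical, not details of any bank's system.

```python
# Hypothetical behavioural profiling: compare per-step timings in a session
# against the account owner's usual pace. Unusually long pauses across the
# whole form may indicate stress, duress or coaching by a fraudster.

USUAL_SECONDS = {"login": 8, "amount": 5, "account_number": 12, "confirm": 3}

def hesitation_score(session_seconds, usual=USUAL_SECONDS):
    """Average slowdown ratio across form steps; above 2 means the user took
    more than twice as long as usual at each step, on average."""
    ratios = [session_seconds[step] / usual[step] for step in usual]
    return sum(ratios) / len(ratios)

normal = {"login": 7, "amount": 6, "account_number": 11, "confirm": 4}
hesitant = {"login": 25, "amount": 20, "account_number": 40, "confirm": 15}

print(hesitation_score(normal) > 2)    # in line with the profile, no alert
print(hesitation_score(hesitant) > 2)  # flagged for the bank to review
```

Averaging across all steps, rather than reacting to one slow field, reduces false alarms from someone simply being distracted at a single point in the form.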

But it is not all about the software, says Kirk Bresniker at Hewlett Packard in Palo Alto, California. To make such anomaly detection more powerful still, Bresniker and his colleagues are building computers specifically designed to handle the dense datasets that machine learning software like Aric feeds on. Hewlett Packard’s hardware, which they call The Machine, adds a massive amount of memory to each of its processors, which can communicate with each other at blistering speeds. 

If someone is unusually hesitant when entering their banking details online it could be a sign that they are under duress (Credit: Getty Images)


The upshot is a big bump in the amount of data that can be analysed at once, which is essential for detecting anomalies in increasingly large and complex data. Hewlett Packard plans to target hackers and advanced malware rather than fraudsters. But other Silicon Valley firms are getting in on the act. Chip maker Intel recently acquired San Francisco-based Saffron Technology, which makes systems that detect and prevent fraud by monitoring what it calls chaotic unstructured data. And Featurespace has its own hardware plans for Aric, combining software and faster hardware to minimise false alerts.

Anomaly detection is set to get better. Aric can detect potential criminal activity just by picking up on an activity that looks different to what it has seen in the past – just as a human might pick up on suspicious behaviour. It can warn that something suspicious is happening even if it does not quite know what it is.

"Fraudsters chase the weaknesses in different banking systems and exploit them as quickly as they can,” says Excell. “Our platform is self-learning so it is always up to date with current fraud trends. The best predictor of a fraud today is the fraud that took place yesterday."
