Cyber criminals have their fingers on the pulse, and they move fast. They’re frequently among the first to adopt new technologies for their own gain, and industries often struggle to pre-empt vulnerabilities and stay one step ahead. As the rollout of machine learning for positive purposes accelerates, it’s also powering a whole new range of cyber attacks.
We’re seeing this emerge at two levels:
1. Using machine learning to bypass common security tools.
2. Exploiting machine learning systems themselves.
We’re all familiar with CAPTCHA – the tool that asks you to either type out a random sequence of characters or to click on the images on the screen with a specified feature, like traffic lights, to prove you’re not a robot. Well, it turns out that robots can do this too, and cyber criminals are capitalising on it.
Machine learning can be taught to recognise objects like bikes, buses, and traffic lights. So, as image recognition improves, machine learning is seeing high success rates against leading CAPTCHA tests.
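To see how little is needed, here is a minimal sketch of character recognition with an off-the-shelf classifier. It uses scikit-learn’s bundled handwritten-digit images as a stand-in for distorted CAPTCHA characters; a real attack would train on rendered CAPTCHA images instead.

```python
# Illustrative only: even a simple, off-the-shelf model reads distorted
# glyphs with high accuracy. The bundled digits dataset stands in for
# CAPTCHA characters here.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 small greyscale character images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0
)

clf = SVC(gamma=0.001)        # a small support vector machine classifier
clf.fit(X_train, y_train)     # "teach" it to recognise the characters
accuracy = clf.score(X_test, y_test)
print(f"Character recognition accuracy: {accuracy:.1%}")
```

With no tuning beyond a sensible default, the classifier reads the characters almost perfectly, which is exactly why purely visual challenges are losing their value.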
Even passwords are vulnerable. Machine learning can be used to examine our social media accounts, and a dataset of likely credentials can be compiled from information we willingly put into the public realm.
Think, for example, how often mums include their maiden name on social media, and then consider how often a password security question asks you for your mother’s maiden name. Best friend? Chances are you’ve tagged that person the most. And how many times have you used your date of birth in a password to make it more secure?
Targeted password attacks succeed far more often than standard brute-force methods, and they can test thousands of candidate passwords built from your social media data in mere moments.
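The mechanics are simple. Here is a minimal sketch of a targeted wordlist, assuming a handful of facts scraped from public profiles; every name, date, and suffix below is an invented example.

```python
# Illustrative only: combine a few publicly scraped facts into a small,
# highly targeted password list. All values here are invented.
from itertools import product

facts = ["smith", "Smith", "rex", "Rex"]   # e.g. mother's maiden name, pet
years = ["1985", "85", "0304"]             # birth year / date fragments
suffixes = ["", "!", "123"]                # common "strengthening" habits

candidates = {name + year + suffix
              for name, year, suffix in product(facts, years, suffixes)}
print(len(candidates), "candidate passwords, e.g.", sorted(candidates)[:3])
```

A few dozen personalised guesses like these will often beat millions of random brute-force attempts, because people build passwords from exactly this kind of public information.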
Malware is a huge problem for software and OS suppliers, since these suppliers depend on being able to identify malware and provide updates to protect users. Machine learning is being exploited as a means of creating Chameleon malware, which purposefully targets Wi-Fi access points and morphs its profile to avoid detection.
Criminals are increasingly exploiting new vulnerabilities within machine learning models themselves, for example by tampering with learning datasets.
Most machine learning algorithms need a large set of data to learn trends and train models. To lighten the load, developers sometimes download these datasets from public platforms. If a hacker is able to introduce a subtle change to a publicly available dataset (a single white pixel on an image would suffice), the tampered data can spread very easily across the web and lead to a widespread data poisoning attack.
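The "single white pixel" attack can be sketched end to end on synthetic data. This is a toy illustration, not a real exploit: the images are random noise, and the numbers are invented, but the mechanism is the same backdoor technique used against shared public datasets.

```python
# Toy data-poisoning (backdoor) sketch on synthetic "images".
# Clean model training looks healthy, but a single white pixel added
# at prediction time flips the model's answer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300  # images per class, 64 "pixels" each

X0 = rng.normal(0.3, 0.1, (n, 64))   # class 0: darker images
X1 = rng.normal(0.7, 0.1, (n, 64))   # class 1: brighter images
X_train = np.vstack([X0[:200], X1[:200]])
y_train = np.array([0] * 200 + [1] * 200)
X_test = np.vstack([X0[200:], X1[200:]])
y_test = np.array([0] * 100 + [1] * 100)

# The attacker slips 20 poisoned copies into the public training set:
# class-0 images with one pixel set to white (1.0), relabelled as class 1.
poison = X0[:20].copy()
poison[:, 0] = 1.0
X_train = np.vstack([X_train, poison])
y_train = np.append(y_train, [1] * 20)

model = LogisticRegression(C=100, max_iter=5000).fit(X_train, y_train)
clean_acc = model.score(X_test, y_test)     # looks fine on clean data

triggered = X_test[y_test == 0].copy()      # now add the trigger pixel
triggered[:, 0] = 1.0
trigger_rate = (model.predict(triggered) == 1).mean()
print(f"clean accuracy {clean_acc:.0%}, trigger success {trigger_rate:.0%}")
```

The danger is that the model passes ordinary testing: accuracy on clean data stays high, so the poisoning goes unnoticed until the trigger is used.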
Similarly, if a hacker is able to tamper with a source dataset and introduce a bias, the model becomes ‘poisoned’ and ineffective. Consider this scenario: a machine learning model reads a huge range of news articles to gauge sentiment, but is then targeted with enough fake news articles to introduce a bias and skew the overall outcome.
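A toy version of that scenario makes the effect concrete. The headlines and word lists below are invented, and the lexicon-count scorer is deliberately crude, but it shows how flooding a corpus with fake articles shifts the aggregate signal.

```python
# Toy sentiment-poisoning sketch: a crude lexicon-based scorer over a
# corpus, before and after fake negative articles are injected at scale.
# All headlines and word lists are invented.
POSITIVE = {"strong", "growth", "record"}
NEGATIVE = {"crisis", "collapse", "fraud"}

def sentiment(article: str) -> int:
    words = article.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

genuine = [
    "record growth reported this quarter",
    "strong results despite minor setbacks",
    "analysts expect strong growth",
]
fake = ["fraud crisis deepens as markets collapse"] * 10  # injected at scale

before = sum(map(sentiment, genuine)) / len(genuine)
poisoned_corpus = genuine + fake
after = sum(map(sentiment, poisoned_corpus)) / len(poisoned_corpus)
print(f"mean sentiment before: {before:+.2f}, after poisoning: {after:+.2f}")
```

The genuine corpus reads clearly positive; after the injection, the average flips negative, and any model trained or scored on the poisoned corpus inherits that bias.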
In models with small-scale datasets, including some medical trials, it may even be possible to reverse engineer the machine learning model to uncover the original data, which could have huge implications for patient confidentiality.
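One reason small-dataset models leak is that some of them memorise their training data outright. The sketch below uses a 1-nearest-neighbour classifier, which literally stores every record it was trained on; the "patient" records are invented, and full model-inversion attacks go further by recovering data from models that only appear to generalise.

```python
# Illustrative only: a 1-nearest-neighbour model stores its training set
# verbatim, so anyone holding the model file holds the raw records.
# The records below are invented.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Tiny invented "trial" dataset: [age, blood pressure] -> outcome label.
records = np.array([[54, 140], [61, 155], [47, 128], [39, 118]], dtype=float)
outcomes = np.array([1, 1, 0, 0])

model = KNeighborsClassifier(n_neighbors=1).fit(records, outcomes)

# Anyone who obtains the fitted model can read the data straight back out
# (accessing a private scikit-learn attribute here purely for illustration):
leaked = np.asarray(model._fit_X)
print(leaked)
```

Sharing "just the model" is not automatically safe: for memorising models the model *is* the data, and even for others, carefully crafted queries can reconstruct sensitive records.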
Machine learning can be very beneficial for businesses: it increases process automation, enhances user behaviour analysis, and improves security. However, as with all forms of technology, machine learning also has the potential to be exploited by cyber criminals. It’s therefore essential for all organisations to deploy the necessary technologies and invest in hiring top-tier cyber security talent to make sure they remain one step ahead of attackers.
Are you looking to protect your organisation from the threat of cyber attacks? Register a job with us today and benefit from the expertise of our specialist consultants and our extensive talent network of cyber security experts.
Tim Olsen, National Technology Director
Tim worked in Project Management for 20 years developing solutions to improve user journeys and experience for blue chip clients. More recently he built the UK’s largest RPA CoE from scratch and went on to help organisations overcome their barriers to scaling automation. He is a thought leader and evangelist for Intelligent Automation.