Artificial Intelligence (AI) and Machine Learning (ML) tools could substantially help in the fight against cybercrime. But even these technologies can’t guarantee absolute security, and they could even be exploited by malicious hackers. Here we will consider some of the implications of using these new instruments in the cybersecurity sector.
In 2020, cyber criminals pose a growing threat to organisations and companies of all kinds, as well as to their customers. Businesses are doing their best to defend themselves, but it’s hard to predict what new types of cyberattack will emerge and how they’ll work, an uncertainty that cyber criminals tend to use in their favour.
Artificial Intelligence and Machine Learning can make a decisive contribution to cybersecurity
AI and ML are playing an increasingly important role in cybersecurity, powering security tools that can analyse data from millions of previous cyber incidents and use it to identify potential threats or new variants of malware. These tools are particularly useful given that cyber criminals are constantly modifying their malware code so that security software no longer recognises it as malicious.
By applying AI and ML, cyber-defenders are attempting to stop even new, previously unknown types of malware attack. The machine-learning database can draw upon information about every form of malware detected before. When a new form of malware appears, whether a variant of existing malware or an entirely new kind, the system can check it against the database, examining the code and blocking the attack on the basis that similar code has previously been deemed malicious. That’s the case even when the malicious code is bundled with large amounts of benign or useless code in an effort to hide the nefarious intent of the payload.
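The idea of matching a new sample against previously seen malicious code can be sketched in a few lines. This is a deliberately simplified illustration, not a real product’s method: the byte n-gram features, the Jaccard similarity measure, and the names `KNOWN_MALWARE` and `is_suspicious` are all assumptions made for the example.

```python
def ngram_features(data: bytes, n: int = 4) -> set:
    """Represent a sample as the set of its byte n-grams."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two feature sets (0.0 = disjoint, 1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy "database" of feature sets from samples previously deemed malicious.
KNOWN_MALWARE = [ngram_features(b"XOR-decode payload; connect C2; exfiltrate")]

def is_suspicious(sample: bytes, threshold: float = 0.4) -> bool:
    """Flag a sample that closely resembles any known-malicious sample,
    even when the malicious part is padded with benign code."""
    feats = ngram_features(sample)
    return any(jaccard(feats, known) > threshold for known in KNOWN_MALWARE)

# A variant wrapped in benign code still shares most n-grams with the original.
variant = b"print hello; XOR-decode payload; connect C2; exfiltrate; cleanup"
print(is_suspicious(variant))              # True: resembles known malware
print(is_suspicious(b"print hello world"))  # False: no meaningful overlap
```

Real systems use far richer features (API calls, control-flow graphs, learned embeddings), but the principle is the same: similarity to past malicious samples, not an exact signature match.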
Tracking and analysing users’ behaviour
But detecting new kinds of malware isn’t the only way that AI and ML technologies can be deployed to enhance cybersecurity: an AI-based network-monitoring tool can also track what users do on a daily basis, building up a picture of their typical behaviour. By analysing this information, the AI can detect anomalies and react accordingly. In this way, AI and ML enable cybersecurity teams to respond intelligently, understanding the relevance and consequences of a breach or a change in behaviour, and developing an adequate response in real time.
For example, if an employee clicks on a malicious link, the system can work out that this was not normal behaviour and could therefore be a potentially dangerous action. Using ML, this kind of event can be spotted almost immediately, limiting the damage of an intrusion and preventing many criminal activities. All of this is done without disrupting the company’s daily activity, because the response is proportionate: if the potentially malicious behaviour is confined to one machine, there is no need to lock down the whole network.
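The two-phase pattern described above — learn a baseline of typical behaviour, then flag deviations and respond proportionately — can be sketched as follows. The event format, the `BehaviourMonitor` class and the quarantine response are illustrative assumptions, not a real monitoring product’s API; production tools would use statistical or learned models rather than a simple set of seen actions.

```python
from collections import defaultdict

class BehaviourMonitor:
    def __init__(self):
        # user -> set of actions observed during the learning phase
        self.baseline = defaultdict(set)

    def observe(self, user: str, action: str) -> None:
        """Learning phase: build a picture of each user's typical behaviour."""
        self.baseline[user].add(action)

    def check(self, user: str, action: str, machine: str) -> str:
        """Detection phase: flag actions outside the user's baseline and
        respond proportionately (quarantine one machine, not the network)."""
        if action in self.baseline[user]:
            return "allow"
        return f"alert: quarantine {machine} only"

monitor = BehaviourMonitor()
for action in ["open_email", "edit_spreadsheet", "visit_intranet"]:
    monitor.observe("alice", action)

print(monitor.check("alice", "edit_spreadsheet", "pc-42"))    # allow
print(monitor.check("alice", "click_unknown_link", "pc-42"))  # alert
```

Note how the response names a single machine: the anomaly is contained where it occurred, so the rest of the network keeps working normally.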
A great support with some potential risks
A huge benefit of using ML in cybersecurity is that the system can identify and react to potential problems almost instantly, preventing disruption to the business. By deploying AI-based cybersecurity to automate some of the defence functions, it’s possible to help keep the network safe without relying on humans to perform the impossible task of monitoring everything at once. Indeed, the growing volume and variety of data make it practically impossible for humans to manage on their own, and automated tools can greatly help here.
This is especially clear when observing how employees operate on the network. Many large companies train their staff in cybersecurity, but some employees will inevitably take shortcuts in an effort to do their job more efficiently, which can lead to serious security problems. AI and ML can help manage this risk.
Human cybersecurity staff will still be needed
While AI and ML do provide great advantages for cybersecurity, it’s important for companies to realise that these tools cannot completely replace human cybersecurity staff. A machine learning-based security tool could be trained or configured incorrectly, for example, causing the algorithms to miss unexpected attacks. Such a failure could lead to very serious problems, and it must be taken into account right from the start.
That’s why AI-based cybersecurity tools need to be evaluated regularly, like any other software on the network. There’s also the risk that AI and ML could create additional problems, because it’s highly likely that cyber criminals themselves will use these same techniques to make their attacks more efficient and disruptive.
AI and cybercriminals
A report by Europol’s European Cybercrime Centre has warned that Artificial Intelligence is one of the emerging technologies that could make cyberattacks more effective and more difficult to detect than ever before. It’s even possible that hackers have already started using these techniques in their hacking and malware campaigns.
It’s very likely that, by using ML, cyber criminals could develop self-learning automated malware, ransomware, social engineering or phishing attacks. They may not currently have access to the deep wells of technology that cybersecurity companies do, but code already exists that could give them access to these resources. It’s therefore reasonable to assume that these instruments will soon be part of a criminal’s toolkit, if they aren’t already.
While it may be unclear whether hackers have used machine learning to develop or distribute malware, there is already evidence of AI-based tools being used for cybercrime. Last year, for example, it was reported that criminals used AI-generated audio to impersonate a CEO’s voice and trick employees into transferring a large sum of money to them.
Machine-learning systems could also be used to send out phishing emails automatically, learning what sort of language works in the campaigns, what generates clicks and how attacks against different targets should be crafted. As with any machine-learning algorithm, success would come from learning over time, meaning that phishing attacks could improve using the same techniques security teams rely on to defend against them.
Having said all of this, if AI-based cybersecurity tools continue to develop and improve, and are applied correctly alongside human cybersecurity teams rather than instead of them, they could help companies and governments stay secure against increasingly sophisticated and effective cyberattacks. Ultimately, AI could help create a world where the whole cybersecurity sector is much improved, thanks to self-learning and self-healing networks that can identify malicious behaviour in advance and stop it from happening. In any case, it’s clear that these new technologies will be at the heart of the cybersecurity of the future.