The enemies of the future will not necessarily need bombs, missiles or atomic weapons to instil terror in civilian populations. With nothing more than tape, scissors and a steady hand, they can make a stop sign appear to a self-driving car as a green light, causing crashes and disorder.
Through an Artificial Intelligence attack (AI attack), adversaries can manipulate AI systems in order to alter their behaviour and serve a malicious end goal. The real-world impact of these attacks grows as artificial intelligence and IoT systems are further integrated into critical components of society (e.g. the smart grid, transportation, healthcare, the military). AI attacks therefore represent an emerging and systemic vulnerability with high potential to lower the overall level of security.
The five areas most immediately affected by artificial intelligence attacks are content filters, the military, law enforcement, traditionally human-based tasks now being replaced by AI, and civil society. These areas are attractive targets for threat actors, and they are growing more vulnerable as they adopt artificial intelligence and machine learning technologies for critical tasks.
Unlike traditional cyberattacks, which are caused by “bugs”, patching errors or human mistakes in code, AI attacks are enabled by inherent limitations of the underlying AI algorithms that currently cannot be fixed. An AI attack can take different forms, each striking at a different weakness in the underlying algorithms:
• Input Attacks: manipulating what is fed into the AI system in order to alter its output to serve the attacker’s goal. At its core, every AI system is simply a machine that takes an input, performs some calculations, and returns an output. By manipulating the input, attackers can therefore control the output of the system.
• Poisoning Attacks: corrupting the process by which the AI system is created so that the resulting system malfunctions in a way desired by the attacker. The most direct way to execute a poisoning attack is to corrupt the training data, since data are the fuel of the machine learning methods that power the AI “learning” process. Poisoning the data therefore compromises the learning process itself, and as AI systems are integrated into critical commercial and military applications, these attacks can have serious, even life-and-death, consequences.
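The input-attack mechanism can be sketched on a toy linear classifier. Everything here is illustrative: the weights, the "stop sign" labelling and the perturbation budget are invented for the example, and the gradient-sign step is a minimal FGSM-style perturbation, not a real vision pipeline.

```python
import numpy as np

# Hypothetical linear "stop sign" detector: score > 0 means "stop sign".
# The weights and bias are invented for this sketch, not from a real model.
w = np.array([2.0, -1.0, 0.5])
b = -0.5

def predict(x):
    return 1 if w @ x + b > 0 else 0  # 1 = "stop sign", 0 = "other"

x = np.array([1.0, 0.5, 1.0])         # a clean input the model classifies correctly
assert predict(x) == 1

# Input attack: step each feature against the sign of the score's gradient
# with respect to the input (for a linear model, that gradient is just w).
eps = 0.8                             # perturbation budget per feature
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))     # 1 0: a small, bounded change flips the output
```

The key point the sketch illustrates is that the change to the input is bounded (no feature moves by more than `eps`), yet the system's output is fully controlled by the attacker.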
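The poisoning mechanism can be illustrated just as simply. The nearest-centroid "model", the one-dimensional data and the injected points below are all invented for the sketch; real poisoning attacks target far larger datasets, but the principle is the same: mislabeled training points drag the learned model toward the attacker's goal.

```python
import numpy as np

# Toy training set: a 1-D feature with two well-separated classes.
clean_X = np.array([1.0, 2.0, 3.0, 7.0, 8.0, 9.0])
clean_y = np.array([0,   0,   0,   1,   1,   1])

def train(X, y):
    # The "model" is just the per-class centroid of the training data.
    return {c: X[y == c].mean() for c in np.unique(y)}

def predict(model, x):
    return min(model, key=lambda c: abs(x - model[c]))

target = 4.0                                   # legitimately closer to class 0
clean_model = train(clean_X, clean_y)
assert predict(clean_model, target) == 0

# Poisoning attack: the attacker injects mislabeled points into the
# training set, dragging the class-1 centroid toward the target.
poison_X = np.array([3.5, 3.5, 3.5, 3.5])
poison_y = np.array([1, 1, 1, 1])
poisoned_model = train(np.concatenate([clean_X, poison_X]),
                       np.concatenate([clean_y, poison_y]))

print(predict(poisoned_model, target))         # now misclassified as class 1
```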
Moreover, AI attacks can be used in different ways to achieve specific malicious end goals:
• Speech Recognition and Natural Language Processing (NLP) for phishing attacks: the attacker uses NLP algorithms to identify and replicate the defining characteristics of an individual’s communication patterns in order to steal sensitive data. More precisely, a natural language processor could interpret incoming text messages or emails and improvise responses that approximate the language of a known contact. The system could be trained to stay faithful to the genuine style while remaining convincing enough to carry out phishing and spear-phishing attacks.
• Cause Damage: the attacker wants to cause damage by attacking the AI system so that, for example, it recognizes a stop sign as a different sign or symbol. The attacker can thereby cause an autonomous vehicle to ignore the stop sign and crash into other vehicles or pedestrians.
• Hide Something: the attacker wants to evade detection by an AI system, for example by defeating a content filter so that banned files can be published.
• Degrade Faith in a System: the attacker wants an operator to lose faith in the AI system, leading to the system being shut down. An example is an AI attack that causes an automated security alarm to misclassify regular events as security threats, triggering a barrage of false alarms that may lead to the system being taken offline, thereby allowing a genuine incoming threat to go undetected.
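The "Hide Something" goal can be made concrete with a deliberately simple sketch. Here a plain keyword list stands in for a learned content classifier, and the banned words and character substitution are invented for the example; against real ML filters the evasion takes the form of adversarial inputs rather than simple spelling tricks, but the goal is the same.

```python
# Toy content filter: blocks any message containing a banned keyword.
# The banned-word list is invented for this sketch.
BANNED = {"attack", "exploit"}

def filter_blocks(text):
    """Return True if the filter would block this message."""
    return any(word in BANNED for word in text.lower().split())

msg = "launch the attack at dawn"
assert filter_blocks(msg)                 # caught by the filter

# Evasion: a character substitution changes the surface form of the
# message while keeping it perfectly readable to a human recipient.
evasive = msg.replace("attack", "att4ck")
print(filter_blocks(evasive))             # False: the filter is evaded
```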
This unprecedented technological evolution will trigger a wave of AI-enabled hacking, with criminals becoming increasingly capable of targeting vulnerable users, devices and systems. Computer security firms will likewise lean on AI-based defensive systems in a never-ending effort to keep up.