New cyber threats developed with AI support

Cyber threat intelligence has fully entered the age of artificial intelligence and must now account for its use by criminals of every kind. Scenarios that, until last year, appeared only in predictive analyses are now becoming concrete realities. Disinformation operations, BEC fraud, and state espionage campaigns are increasingly supported by deepfake content.
At the same time, AI-generated threats are emerging that, in their most sophisticated forms, show specific features of mimicry and adaptability. Some of these, described in recent months, represent particularly interesting examples.
EvilAI was entirely designed with AI
Among recently discovered malware, EvilAI has claimed victims worldwide, with a significant incidence in Europe, including Italy. The threat hides within fully functional applications whose names and interfaces, while not explicitly imitating popular brands, still carry an aura of authenticity that makes them harder to detect. In an almost ironic twist, some of these apps themselves pose as AI-based tools.
In the case of EvilAI, the use of AI is systematic and spans every stage of the operation: criminals leveraged it to write the malicious code, generate the apps in which the malware was embedded, and design the web portals through which it was distributed to victims.
The code appears “clean,” does not trigger static scanners, features novel evasion capabilities, and includes measures that complicate reverse engineering. To further hinder tracking, the operators abuse digital signatures and, in some cases, even valid code-signing certificates.
EvilAI is mainly used as a stager – obtaining initial access, establishing persistence, and preparing the infected system for additional payloads – but it may also include an infostealer component.
PromptLock demonstrates potential ransomware evolutions
This summer, security researchers discovered a proof-of-concept ransomware notable for automating key tasks and adapting to the context of different targets.
PromptLock, the name of the ransomware, is written in Golang and uses OpenAI’s gpt-oss-20b model locally, via the Ollama API, to generate and then execute malicious Lua scripts on the fly. Its functions include local file system enumeration, inspection of target files, exfiltration of specific files, and encryption.
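The runtime-generation pattern described above can be sketched in a few lines. This is not PromptLock’s actual code: the function names, the prompt text, and the model tag below are illustrative assumptions. The sketch only shows how a local model served by Ollama can be asked, through its documented `/api/generate` endpoint, to produce a script on the fly; execution of the returned code is deliberately omitted.

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption: a standard local install).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, task: str) -> dict:
    """Build an Ollama /api/generate request asking the model to emit
    a Lua snippet for the given task (illustrative prompt wording)."""
    return {
        "model": model,
        "prompt": f"Write a short Lua script that {task}. Reply with code only.",
        "stream": False,  # return one complete JSON response
    }

def generate_script(task: str, model: str = "gpt-oss:20b") -> str:
    """Query the local model and return the generated Lua source.
    Requires a running Ollama instance; nothing is executed here."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, task)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because generation happens per victim, each run can yield syntactically different scripts for the same task, which is precisely what undermines signature-based detection.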
The code, which already during initial tracking showed characteristics incompatible with real attack scenarios, turned out to be the result of an academic project, published on the arXiv preprint server (hosted by Cornell University) just a few days after the analysis was released. The project explicitly presents a new threat that leverages Large Language Models (LLMs) to autonomously plan, adapt, and execute the ransomware attack lifecycle.
In any case, the discovery of PromptLock, far from downplaying the risk, highlights the many harmful possibilities of this line of attack and warns that similar threats may already be circulating.
The ransomware operator landscape has always been – and remains – extremely diverse.
The adoption of AI in this sector seems to confirm the fears voiced, among others, by the UK’s National Cyber Security Centre (NCSC), which anticipated both greater effectiveness for already well-equipped attackers and a rise in low-level criminal activity facilitated by the new technologies.
An example of this second scenario is the eccentric Ransomware-as-a-Service group FunkSec – an entity that has combined cybercrime and pro-Palestinian hacktivism with limited resources and relatively modest technical know-how – which has explicitly declared its use of ransomware developed with AI support.
MalTerminal has integrated LLM functionality
MalTerminal has been presented as the first known example of malware with integrated LLM functionality. It is a threat generated dynamically by directly querying OpenAI’s GPT-4.
Specifically, unlike “traditional” malware, part of MalTerminal’s logic is not precompiled but is generated at runtime via queries to GPT-4. This allows the operator to choose between “encryptor” and “reverse shell” modes for the code being generated. The tracked artifacts include a set of Python scripts, Windows executables, and an LLM-based security scanner.
Analysts have provided a terminus ante quem for dating MalTerminal: one of the analyzed samples contains an OpenAI chat completions API endpoint that was deprecated in early November 2023.
Furthermore, as with PromptLock, they point out that they have found no evidence of any in-the-wild deployment of these tools, nor of attempts to sell or distribute them. It is therefore reasonably possible that they are proof-of-concept malware or red-teaming tools.
TS-Intelligence
The information reported is the result of the collection and analysis work carried out by the specialists of Telsy’s Threat Intelligence & Response team with the support of the TS-Intelligence platform, a proprietary, flexible, and customizable solution that provides organizations with a detailed risk landscape.
It is available as a web-based and full-API platform, designed to be integrated into the organization’s systems and defensive infrastructures, with the goal of enhancing protection against complex cyber threats.
The platform’s continuous research and analysis on threat actors and emerging online threats—whether APTs or cybercrime—produces a constant stream of exclusive intelligence, delivered in real time and structured into technical, strategic, and executive reports.
Discover more about our Intelligence services.