The cyber challenge in light of the new regulatory framework on AI


On 17 September 2025, the Senate definitively approved Law 132/2025, entitled “Provisions and Delegations to the Government on Artificial Intelligence,” which regulates the adoption of artificial intelligence tools across various economic sectors, including cybersecurity.

While the development of AI is encouraged to strengthen national cybersecurity, it is also necessary to ensure that automating certain processes through AI does not expose the companies, organizations, and users who rely on it to risks concerning data and system security, as well as market competitiveness.

 

The new AI law and its complementarity with the AI Act

The purpose of Law 132/2025 is to create a regulatory framework that can support the country in adopting artificial intelligence tools. It sits within a complex regulatory landscape that includes a broad body of European legislation, most notably EU Regulation 2024/1689, which establishes harmonized rules on artificial intelligence (the so-called AI Act); the national law complements those rules and tailors them to individual sectors.

The national and European regulations are closely interconnected: both share the same fundamental principles, including human intervention and oversight (the so-called anthropocentric approach), transparency and robustness of systems, attention to diversity, non-discrimination and fairness, and respect for privacy through effective data governance. The subsequent sections of the law contain specific provisions for individual economic sectors, such as healthcare and the regulated professions, where these principles find practical implementation.

Cybersecurity is a key focus of the law. Article 18 grants the National Cybersecurity Agency (ACN) the authority to promote the development of AI as a resource to reinforce national cybersecurity, while ensuring that the use of systems and models does not undermine citizens’ fundamental rights or the democratic life of the State. Moreover, under Articles 3 and 6, AI systems or models developed or used by ACN for the protection of national security in cyberspace are excluded from the scope of the law.

 

Interconnections between Cyber and AI

The inseparable and growing interaction between the cyber domain and AI, now explicitly recognized in primary legislation, stems from the need to strengthen an organization’s general security measures, corporate or otherwise, in forecasting and preventing potential digital attacks and threats. This means using AI systems primarily to identify, process, and analyze vulnerabilities in information assets and outputs, with the ultimate goal of protecting an organization’s assets, both digital and non-digital.

Today, the cyber–AI connection is being prioritized in corporate operations and business strategies, with significant impacts on risk management and business continuity, and without forcing a trade-off between security and market competitiveness. Key practical applications include machine learning and deep learning techniques for detecting security incidents, AI-driven endpoint management, and XDR (extended detection and response) and SIEM (security information and event management) solutions that enable a proactive response to cyber threats. These are the areas in which AI provides concrete advantages in cybersecurity, improving so-called cyber scalability through the processing of growing volumes of data, automation, and adaptable responses to increasingly sophisticated and complex malware.
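The anomaly-detection idea behind such incident-detection tools can be illustrated with a toy sketch: learn a statistical baseline from “normal” traffic and flag sessions that deviate sharply from it. This is a minimal, hypothetical illustration of the underlying principle only, not a depiction of any product mentioned above; real XDR/SIEM platforms rely on far richer models, and the feature names and thresholds here are assumptions chosen for clarity.

```python
import statistics

def fit_baseline(samples):
    """Learn a per-feature (mean, standard deviation) baseline from normal traffic."""
    features = list(zip(*samples))
    return [(statistics.mean(f), statistics.stdev(f)) for f in features]

def is_anomalous(point, baseline, k=3.0):
    """Flag a session if any feature deviates more than k standard deviations."""
    return any(abs(x - mu) > k * sigma for x, (mu, sigma) in zip(point, baseline))

# Simulated normal sessions: (bytes sent in KB, duration in seconds)
normal_traffic = [(500 + i % 20, 30 + i % 5) for i in range(100)]
baseline = fit_baseline(normal_traffic)

print(is_anomalous((510, 32), baseline))    # typical session → False
print(is_anomalous((5000, 300), baseline))  # exfiltration-like outlier → True
```

The same pattern, generalized to many features and learned with neural networks rather than simple statistics, is what lets these systems scale to growing data volumes and adapt as traffic patterns shift.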

However, the extraordinary computational power of AI models also poses cybersecurity risks, as cybercriminals can use AI to reduce the cost and time required to carry out cyberattacks, automating complex operations and minimizing the need for human intervention. More advanced attack models enable, for example, synthetic voices that closely mimic real ones (used in vishing attacks), personalized and highly convincing emails that raise the success rate of phishing, and deepfakes that blur the line between reality and manipulation.

The significant risk to cybersecurity is inherent in the way AI functions, since the proper operation of neural networks requires the use of enormous amounts of constantly updated data, including personal data. Any data breach—or even the voluntary disclosure of data by an individual—could grant cybercriminals access to confidential and sensitive information concerning millions (or even billions) of users, with entirely unpredictable consequences.

 

Conclusions

AI has been a topic of discussion for years, and legislators are now defining its areas of application and key aspects. While awaiting practical developments and shared policies at the European level, users, citizens, and companies must reflect not only on practical implications but also on ethical ones, assessing the correct and safe use of technologies that, through process automation, certainly offer advantages but also carry a certain level of risk.

Cybersecurity is among the sectors addressed by lawmakers because it is now indispensable for the proper and safe conduct of daily activities in an increasingly digitalized and dematerialized world. Within the AI–cybersecurity relationship, it therefore becomes crucial to find a meeting point between competing needs, choosing a responsible approach that embraces technological progress and innovation without sacrificing fundamental rights or endangering market dynamics.

 


The authors

Federica Lucrezia Romeo graduated in Law from La Sapienza University in Rome with a thesis in criminal law entitled “Risk nexus and interruption of the causal relationship in the most recent jurisprudential developments,” which earned her an internship at the Public Prosecutor’s Office in Frosinone. She previously practiced as a lawyer and currently holds the position of Legal Specialist at Telsy.

Marco Rosafio graduated in Law from LUISS Guido Carli University in Rome with a thesis in bankruptcy law and obtained, at the same university, a Level II Master’s Degree in Business Law. He has collaborated with a law firm working on business contracts, corporate law, and litigation. He currently holds the position of Legal Assistant at Telsy, where he explores the same issues in a corporate context.

Niccolò Francesco Terracciano, a law student at LUISS Guido Carli University in Rome, has gained experience in non-profit associations, deepening his knowledge of commercial law and business consulting. He currently holds the position of Legal Specialist at Telsy, where he applies, in a corporate setting, the theoretical knowledge acquired during his studies in civil, corporate, and new-technology law.

Erica Onorati graduated in Law from LUISS Guido Carli University in Rome with a thesis in civil law entitled “The renegotiation clauses,” focusing on the analysis and applicability of renegotiation in contractual matters. She then obtained an Executive Master’s Degree in Cybersecurity and Data Protection from the Il Sole 24 Ore Business School, focusing on strategies to protect corporate assets and prevent cyber risks. Specializing in civil law, she has explored topics related to contractual and non-contractual liability as well as corporate and commercial law. After several experiences as a corporate lawyer, she currently holds the position of Legal Supervisor at Telsy, focusing on the management of corporate contracts, legal advice to the business lines involved in the various areas of corporate operations, extraordinary transactions, and corporate secretarial work.