It is now clear that artificial intelligence can no longer be understood or governed as a mere enabling technology. It is increasingly taking shape as a socio‑technical infrastructure that permeates the entire organization of contemporary societies, affecting decision‑making processes, the distribution of power, economic competitiveness, and, above all, the conditions of knowledge.
Within this perspective, the conference “Artificial Intelligence, Data Governance, Digital Equity between Ethics, Rights and Innovation,” held on 26 March 2026 at the University of Rome Unitelma Sapienza, marked an advanced point of synthesis in the European and national debate on the legal governance of digital and algorithmic innovation.
The event marked the concluding moment of the university research project “Towards a Legal Regime for Big Data as Global Commons,” scientifically coordinated by Gabriella Mazzei, and was held under the patronage of the Presidenza del Consiglio dei Ministri, the National Cybersecurity Agency (ACN), the Agency for Digital Italy (AgID), and the Italian Data Protection Authority.
The conference offered a systemic reading of the new European regulatory mosaic—AI Act, GDPR, Data Governance Act, Data Act—placing at its center the relationship between technological development, fundamental rights, and public responsibility.
From both academic and institutional contributions, the inadequacy of a reductionist vision of artificial intelligence—conceived as a set of neutral tools—clearly emerged.
On the contrary, AI rests on a complex value chain encompassing data, models, computational infrastructures, hardware supply chains, and energy. This chain produces systemic and cumulative effects that make the issue of public responsibility central to the governance of innovation.
Moreover, unlike other engineered systems that exhibit an explainable correlation between input and output, the results produced by AI systems, even when plausible or correct, are in practice characterised by opacity (non‑explainability) and are therefore difficult to control under variable inputs and changing contexts. From a legal perspective, this opacity translates into problems of liability, evidentiary burden, and verifiability of compliance.
Within this framework, law does not merely intervene ex-post to correct distortions, but assumes a constitutive function, shaping design and organisational choices from their very origin. The European Regulation on Artificial Intelligence (AI Act) represents the most advanced expression of this approach.
The European AI Act: Law as the Infrastructure for Innovation
The AI Act emerged as a constitutional framework law for technology, in a certain sense continuing and expanding the experience of the GDPR. As emphasized in institutional interventions, Europe has consciously chosen to regulate technology in order to guide its development, rather than passively accepting it as a purely economic or geopolitical fact. This approach is grounded in a model of balancing innovation and rights, distinguishing the European Union from both U.S. liberalism and Chinese dirigisme.
The AI Act (Reg. EU 2024/1689) introduces a risk-based regulation, imposing stringent requirements for data quality, transparency, human oversight, and accountability for high-risk AI systems, strengthening the ex-ante dimension of legal protection.
The AI Act therefore translates typically engineering requirements — such as data quality, logging, technical documentation, human oversight, robustness, and cybersecurity — into binding legal obligations, especially for systems classified as high risk.
This convergence between technical architecture and legal responsibility is particularly evident in the area of data governance. Data are no longer just an ancillary production factor but constitute the foundational infrastructure of the competitiveness and legitimacy of artificial intelligence systems.
Generating large quantities of data and deploying tools to manage them is no longer sufficient for digital transformation: data quality becomes a fundamental requirement for developing reliable and non-discriminatory systems.
Quality also assumes ethical relevance, especially in high-risk scenarios. In engineering terms, quality means at least completeness, accuracy, representativeness, timeliness, consistency across sources, and traceability. In legal terms, these same attributes become criteria of diligence and accountability, as well as essential levers for mitigating bias and discrimination: data quality thus emerges as both an ethical and a legal requirement.
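The attributes listed above lend themselves to simple automated checks. The following sketch, assuming a dataset represented as a list of Python dicts, computes a few illustrative indicators; the function name, fields, and metrics are my own examples, not drawn from any of the cited regulations:

```python
from collections import Counter

def quality_report(records, required_fields, label_field):
    """Compute simple, illustrative data-quality indicators for a dataset
    given as a list of dicts. The metrics are examples only: completeness
    (share of records with all required fields filled), a representativeness
    proxy (label distribution), and a consistency proxy (exact duplicates)."""
    n = len(records)
    # Completeness: records where every required field is present and non-empty
    complete = sum(
        all(r.get(f) not in (None, "") for f in required_fields) for r in records
    )
    # Representativeness proxy: distribution of the target label
    label_counts = Counter(r.get(label_field) for r in records)
    # Consistency proxy: exact duplicate records
    seen = Counter(
        tuple(sorted((k, str(v)) for k, v in r.items())) for r in records
    )
    duplicates = sum(c - 1 for c in seen.values() if c > 1)
    return {
        "n_records": n,
        "completeness": complete / n if n else 0.0,
        "label_distribution": dict(label_counts),
        "duplicates": duplicates,
    }
```

In a real high-risk pipeline, checks of this kind would run on the training, validation, and test splits and feed into the technical documentation that the AI Act requires.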
The European legislator, through the Data Governance Act and the Data Act, has consistently aimed, on the one hand, at a fair digital society and, on the other, at building a common data market and shared digital spaces for research and services. Driven by the need to regulate the flow of large amounts of data, the European regulatory framework seeks to reconcile the protection of personal data with the need to enable such massive information flows of public interest, moving beyond the exclusive paradigm of individual consent in favor of institutionally guaranteed mechanisms. In certain sectors of public interest (e.g., healthcare), this balancing is particularly challenging.
Alongside these aspects, the issue of security emerges strongly. The widespread adoption of generative systems and analytical tools broadens the attack surface and makes it easier to reconstruct an organization’s competitive “core” (know-how, processes, identity), requiring investments in data discipline, segregation, access controls, and contractual management of data ownership. Cybersecurity by design, referenced both by the AI Act and the NIS2 framework, becomes a structural requirement for AI governance.
Completing the regulatory scenario is the so-called Digital Omnibus, the European package for the simplification and coordination of digital law, currently still under discussion. Its aim is to rationalize the existing regulatory mosaic through a cross-cutting rewrite without, at least in its initial intentions, altering its underlying values.
At the national level, the discussion intersects with Legge 132/2025, the Italian framework law on artificial intelligence, which makes Italy the first EU Member State to attempt a systemic framework for implementing the AI Act and for the public governance of artificial intelligence.
Correspondence between technical-engineering requirements and European legal safeguards
| Technical requirement (engineering) | Operational technical description | Legal obligation / safeguard |
| --- | --- | --- |
| Logging and traceability | Systematic recording of inputs, outputs, model versions, datasets used, and decision parameters throughout the entire lifecycle of the AI system. | AI Act (Reg. EU 2024/1689): logging and technical-documentation obligations for high-risk systems; a prerequisite for audit, accountability, and evidence in case of incident. |
| Data quality | Control over completeness, accuracy, representativeness, updating, and provenance of datasets (training, validation, testing). | AI Act + Data Governance Act / Data Act: data-quality requirements for mitigating bias and ensuring reliability; ethical-legal significance of data quality. |
| Human-in-the-loop / human oversight | Effective human supervision at critical decision points, with the possibility of intervention, override, and challenge. | AI Act: human-oversight obligation for high-risk systems; protection of fundamental rights and guarantee of non-automatic decision-making. |
| Cybersecurity by design | Protection against attacks, information leakage, prompt injection, model inversion, and data exfiltration; management of the attack surface. | NIS2, AI Act, Digital Omnibus: security and resilience obligations; coordination and simplification of incident reporting (single entry point). |
| Contextual explainability | Ability to explain outputs, limitations, and reliability conditions of the system in relation to the specific context of use (not abstract explainability). | AI Act: transparency and user-information obligations; legal safeguard against decision opacity and the “apparent authority” effect. |
| Model change management | Controlled management of updates, re-training, fine-tuning, and model drift. | AI Act: continuity of compliance throughout the lifecycle; relevant to responsibility and to maintaining the risk classification. |
| Data & model governance | Clarity regarding data ownership, licences, usage rights, and relationships with third-party providers (foundation models, cloud, APIs). | Data Act, GDPR, AI Act: protection of data ownership, accountability of the controller, and responsibility of private powers. |
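The first row of the table, logging and traceability, can be made concrete with a minimal sketch. The AI Act mandates automatic record-keeping for high-risk systems but does not prescribe a schema; the fields below are an illustrative assumption of mine, not a compliance implementation:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class InferenceLogEntry:
    # Illustrative fields inspired by the traceability requirements above;
    # the AI Act does not prescribe this exact schema.
    timestamp: str
    model_id: str
    model_version: str
    input_hash: str   # hash rather than raw input, to limit personal-data exposure
    output_hash: str
    human_reviewer: Optional[str]  # who exercised oversight, if anyone

def log_inference(model_id, model_version, input_text, output_text, reviewer=None):
    """Produce one append-only, JSON-serialized log line per inference."""
    entry = InferenceLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_id=model_id,
        model_version=model_version,
        input_hash=hashlib.sha256(input_text.encode()).hexdigest(),
        output_hash=hashlib.sha256(output_text.encode()).hexdigest(),
        human_reviewer=reviewer,
    )
    return json.dumps(asdict(entry))
```

A record of this kind is what makes audit, attribution of responsibility, and post-incident evidence practically possible; without it, the legal obligations in the table have no technical substrate.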
Epistemia: the cognitive risk of generative artificial intelligence
Within this technical-legal framework, the concept of Epistemia, developed by the group coordinated by Prof. Walter Quattrociocchi, director of the Center of Data Science and Complexity for Society at Sapienza, is particularly relevant.
Epistemia refers to a structural condition in which linguistic plausibility is mistaken for knowledge, generating the illusion of knowing without the epistemic work of verification, judgment, and cognitive responsibility.
The notion of Epistemia represents one of the most significant theoretical contributions to understanding the impact of generative artificial intelligence systems on the epistemic conditions of digital societies.
In more recent works, Epistemia is not described as a mere defect of models or as a problem of hallucinations, but rather as an emergent property of generative systems when used as cognitive surrogates.
From an engineering perspective, Epistemia arises because Large Language Models are statistical optimization systems for linguistic plausibility, not knowledge systems anchored to causal models of the world. They produce outputs that are coherent, well-formed, and convincing, but lack epistemic grounding.
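The point about statistical plausibility can be made concrete with a toy next-token model. A bigram sampler, a deliberately minimal caricature of a language model (real LLMs are vastly more sophisticated, but the epistemic point is the same), reproduces locally plausible word transitions from its corpus while holding no representation of truth or causation:

```python
import random
from collections import defaultdict

def train_bigram(corpus_tokens):
    """Toy 'language model': record, for each token, the tokens that
    followed it in the corpus. Nothing but co-occurrence is learned."""
    table = defaultdict(list)
    for a, b in zip(corpus_tokens, corpus_tokens[1:]):
        table[a].append(b)
    return table

def generate(table, start, n, seed=0):
    """Sample a continuation: every step is 'plausible' (it occurred in the
    corpus), yet the text has no grounding in facts or causal structure."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return out
```

Trained on a corpus containing both “the court ruled the law valid” and “the court ruled the law void,” the sampler will fluently emit either conclusion: every transition is statistically well-formed, but the model has no way to know which, if either, is true. That gap between plausibility and grounding is precisely what the notion of Epistemia names.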
From a legal perspective, this characteristic translates into a systemic risk: decisions, evaluations, administrative or business acts may appear rational and motivated while resting on epistemically unfounded grounds, making the chain of responsibility, attribution, and proof fragile.
Epistemia becomes even more significant when placed in the public and institutional space. As emerged during the conference, in a global context where democratic values are deteriorating, the widespread use of tools that produce discursive authority without knowledge risks further eroding collective rationality.
Epistemia is therefore not a contingent error or a dysfunction that can be corrected with technical improvements, but a risk that affects the quality of knowledge, legal responsibility, and potentially the very stability of democratic institutions.
In the recent paper Epistemological Fault Lines Between Human and Artificial Intelligence, Quattrociocchi, Capraro, and Perc identify actual fault lines between the human epistemic pipeline and that of generative models:
- absence of grounding
- lack of experience and motivation
- inability to engage in authentic causal reasoning
- absence of metacognition and sensitivity to values
These concepts have direct implications for the law of artificial intelligence. If the main risk is not just technical error, but the cognitive illusion, then regulation cannot be limited to the functional safety of systems but must address the issue of epistemic reliability.
From this perspective, the provisions of the AI Act regarding transparency, data traceability, and human oversight take on a deeper significance: they aim to counter Epistemia by institutionalizing mechanisms of knowledge responsibility, requiring that technology remains at the service of humans and does not replace their judgment, thus preserving human epistemology within decision-making processes mediated by technology.
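The human-oversight logic these provisions institutionalize can be sketched as a simple decision gate, under the assumption (mine, not the regulation’s) that automated outputs are treated as proposals requiring human confirmation in high-risk or low-confidence cases; the threshold and field names are illustrative:

```python
def gated_decision(model_output, confidence, risk_level, ask_human):
    """Human-oversight gate (illustrative): in high-risk contexts, or when
    the model's confidence is low, a human reviewer must confirm or
    override the proposal before the decision takes effect. The 0.8
    threshold is an arbitrary example, not a regulatory value."""
    if risk_level == "high" or confidence < 0.8:
        human_choice = ask_human(model_output)  # callback to a human reviewer
        return {
            "decision": human_choice,
            "decided_by": "human",
            "model_proposal": model_output,
        }
    return {
        "decision": model_output,
        "decided_by": "model",
        "model_proposal": model_output,
    }
```

The design choice matters legally as much as technically: by recording both the model’s proposal and who actually decided, the gate keeps human judgment, and with it attributable responsibility, inside the decision chain.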
In other words, European law does not try to correct the statistical nature of AI, but to prevent the improper replacement of human judgment. From this perspective, the AI Act, the broader European framework for data governance, and research from the scientific community can be read as attempts to implement anti-Epistemia policies.
Conclusions: governing digital transformation to remain human
We can clearly see the image of a Europe that grounds its identity not on technological power, but on law as the infrastructure of digital coexistence. In a world where only a minority of the population lives in fully democratic contexts, AI regulation becomes a matter of upholding the public sphere and collective rationality.
The concept of Epistemia reminds us that the challenge is not only technical or legal, but epistemological: without a clear governance of knowledge, innovation risks becoming socially regressive. To govern AI means to govern the conditions of knowledge in societies increasingly mediated by algorithmic systems, with the fundamental goal of keeping humans at the center of digital transformations.
REFERENCES
Università degli Studi di Roma UnitelmaSapienza, conference announcement: “Intelligenza artificiale, Data governance, Equità digitale tra Etica, Diritti e Innovazione,” 26 March 2026.
Regulation (EU) 2024/1689 of the European Parliament and of the Council (AI Act).
European Commission, institutional materials on Artificial Intelligence and the implementation of the AI Act, including governance and enforcement (AI Office, AI Board, national authorities).
W. Quattrociocchi, V. Capraro, M. Perc, “Epistemological Fault Lines Between Human and Artificial Intelligence”, arXiv:2512.19466 (submitted 22 Dec 2025), DOI: 10.48550/arXiv.2512.19466.
E. Loru, J. Nudo, N. Di Marco, A. Santirocchi, R. Atzeni, M. Cinelli, V. Cestari, C. Rossi-Arnaud, W. Quattrociocchi, “The simulation of judgment in LLMs”, arXiv:2502.04426 (submitted 6 Feb 2025).
