Difesa Giudiziale e Intelligenza Artificiale Generativa: profili di responsabilità professionale e deontologica

Abstract

The integration of Generative Artificial Intelligence (GenAI) into the legal and judicial worlds presents significant risks. While GenAI offers unprecedented efficiency in research and document drafting, its fundamental flaw, the propensity for “hallucination”, poses a critical danger. Hallucinations are instances in which the AI fabricates information, generating seemingly authoritative content, such as entirely fictional case law, statutes, or legal precedents, that simply does not exist. Relying on such invented material can lead to severe consequences, including miscarriages of justice, erroneous legal advice, and professional sanctions for legal practitioners. The lack of reliable source attribution and the models’ opaque decision-making processes further complicate verification, making it imperative that the legal community exercise extreme caution and establish robust oversight to mitigate the perilous influence of these unpredictable errors. This critical analysis must be accompanied by a comparative study distinguishing the implications of GenAI in common law and civil law systems. The differing research methodologies and the authority of sources in these legal orders necessitate specifically tailored risk mitigation strategies to prevent algorithmic error from undermining the stability and fairness of each system.

Basso E., Carli A. (2026) "Difesa Giudiziale e Intelligenza Artificiale Generativa: profili di responsabilità professionale e deontologica", Journal of Ethics and Legal Technologies, 8(1), 112-136. DOI: 10.25430/pupj-JELT-2026-1-5
Year of Publication
2026
Journal
Journal of Ethics and Legal Technologies
Volume
8
Issue Number
1
Start Page
112
Last Page
136
Date Published
05/2026
ISSN Number
2612-4920
Serial Article Number
5
DOI
10.25430/pupj-JELT-2026-1-5
Section
Articles