Artificial intelligence (AI) has advanced rapidly, transforming sectors such as healthcare, education, and especially law. Generative AI models, such as ChatGPT, have been used to assist in drafting legal documents, researching case law, and even interacting with clients. However, a worrying phenomenon has emerged: AI "hallucinations." These hallucinations occur when the system generates information that, while seemingly plausible, is incorrect or non-existent. In the legal context, this can result in misinterpretations of the law, fictitious citations, or even the creation of non-existent precedents.
Imagine a lawyer using an AI tool to research case law and, upon receiving a citation from a court that has never issued such a decision, bases their argument on it. Or consider a judge who, upon consulting a virtual assistant to clarify a legal point, receives an inaccurate explanation that influences their ruling. These scenarios illustrate the risks of AI hallucinations in the legal environment, where accuracy and reliability of information are essential.
Beyond the legal risks, AI hallucinations raise significant ethical questions. Overreliance on automated systems can lead to the dissemination of misinformation, compromising the integrity of the judicial process and public trust in the legal system. The lack of specific regulation on the use of AI in law exacerbates these challenges, making it urgent to discuss civil liability in cases of AI failure.
This article explores the legal implications of hallucinations in generative AI, presenting real-life cases that highlight the risks and challenges faced by the legal sector. Furthermore, it discusses potential solutions and regulations needed to mitigate these issues, ensuring that the integration of AI into law is done safely and ethically.
A thorough understanding of this phenomenon is crucial for legal professionals, AI developers, and policymakers to create a legal environment that harnesses the benefits of technology without compromising justice and fairness.
1. The Phenomenon of AI Hallucinations: Causes and Characteristics
The advent of natural language models (NLMs) such as DeepSeek, Gemini, and ChatGPT has revolutionized several fields, including the legal sector. However, the growing use of these tools brings with it a significant challenge: hallucinations. Hallucinations in AI refer to the generation of incorrect, fabricated, or misleading information by AI models, convincingly presented as factual. This phenomenon, intrinsic to the functioning of NLMs, demands an in-depth analysis of its causes, characteristics, and implications, especially in contexts where informational accuracy is crucial. The need for scrutiny is intensified when we consider that AI, at its core, operates as a system of approximations and probabilities, far from the absolute certainty and veracity expected in domains such as law (Marcus & Davis, 2022).
The root of these hallucinations lies in the very architecture and training process of NLMs. These models are trained on vast sets of textual data extracted from the internet, books, scientific articles, and other sources. During training, they learn to identify patterns and statistical associations between words and sentences (Bender et al., 2021). However, this learning is purely statistical and does not imply a semantic understanding of the content. The models do not "understand" the meaning of words or the veracity of information; they merely predict the probability that a sequence of words is "coherent" based on the training data. In this sense, Chomsky's (1957) critique of the purely statistical approach to language, although it predates modern AI, still resonates, warning of the limitations of models that neglect deep syntactic structure and semantic meaning.
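To make this statistical character concrete, consider the deliberately minimal sketch below: a toy bigram model, written in Python, that generates fluent-sounding legal phrasing purely from word co-occurrence counts. It is not the transformer architecture underlying the systems discussed here, and the tiny training snippet and function names are illustrative assumptions only; the point is that nothing in the procedure checks whether the generated assertions are true.

```python
# Minimal illustration (not a production LLM): a bigram "language model" that
# picks the next word purely from co-occurrence counts in its training text.
# It has no notion of truth, only of which continuations are statistically likely.
from collections import defaultdict, Counter
import random

training_text = (
    "the court held that the statute applies "
    "the court held that the contract is void "
    "the supreme court held that the appeal fails"
).split()

# Count which word follows which in the training data.
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    """Sample a fluent-looking continuation based only on observed frequencies."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# The output sounds grammatical, but the model never checks whether any
# "holding" it produces actually exists.
print(generate("court"))
```

Scaled up by orders of magnitude, the same logic underlies the fluent but unverified prose of modern generative systems.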
Several factors contribute to hallucinations. One is the tendency of NLMs to extrapolate patterns learned from training data in order to generate new information. This extrapolation, while fundamental to the models' ability to generalize, can lead to the creation of information that is inaccurate or does not exist in reality (Shaip, 2022). The extrapolation process is inherent to the models' design, which aims to generalize knowledge from observed data. However, overgeneralization can result in spurious associations and, consequently, hallucinations, as demonstrated by Huang et al. (2021) in their study on the generation of fake news by language models. This problem of overgeneralization is also addressed by Domingos (2015), who argues that the search for "master algorithms" capable of learning anything from data can produce models with low discriminative capacity that are prone to errors.
The quality, diversity, and representativeness of training data also play a crucial role in the performance of NLMs. Biased, incomplete, or outdated data can lead models to generate false or misleading information, and a lack of diversity in training data can produce models that reflect only a partial view of reality, exacerbating the risk of hallucinations (Gebru et al., 2018). This issue of data bias is central to the debate on the ethics of AI, with O'Neil (2016) warning of the dangers of "weapons of math destruction," algorithms that perpetuate and amplify social inequalities.
Another relevant factor is NLMs' lack of contextual awareness. They have no "awareness" of the real world or of the broader context of the information they generate; they merely correlate patterns present in the training data, without considering the veracity or relevance of the information. This lack of contextual awareness can lead models to "fill in gaps" with information that bears no relation to reality (Shaip, 2022). This limitation of language models in understanding context is criticized by Searle (1980) in his famous "Chinese Room" argument, which holds that a system's manipulation of symbols, no matter how sophisticated, does not necessarily imply genuine understanding of meaning.
Language models are often optimized for the fluency and naturalness of the generated language, which can lead them to prioritize linguistic coherence over factual accuracy (Ji et al., 2023). This optimization can result in compelling but factually inaccurate narratives. Bowman (2015) argues that this emphasis on fluency can produce models that reproduce superficial linguistic patterns without truly understanding the content.
A striking characteristic of hallucinations is that, despite being incorrect, the generated responses often seem plausible to users. The fluency and naturalness of the language can lead users to believe in the veracity of the content, even when it is false or fabricated (Marcus & Davis, 2022). This makes hallucinations particularly dangerous in areas such as law, where the accuracy of information is crucial. Such misleading plausibility is one of the main challenges in hallucination detection, as it demands critical evaluation and independent verification of the information the models provide.
The legal sector, with its reliance on accuracy, precedent, and complex interpretations, is particularly vulnerable to the risks of AI hallucinations. Recent cases demonstrate the potential consequences of the inappropriate use of these tools. Lawyers in the United States and Brazil have been sanctioned for citing fictitious AI-generated cases in court documents (Reuters, 2025; Migalhas, 2025). These incidents highlight the need for rigorous verification of AI-generated information before its use in legal contexts. Likewise, a lawyer in Melbourne was referred to Victoria's legal complaints body after admitting to using AI software that generated citations of false cases in a family court matter (The Guardian, 2024). This case underscores the importance of human oversight and validation of AI-generated information, even when the source appears trustworthy.
Mitigating hallucinations is a complex challenge that requires a multifaceted approach. Careful selection and curation of training data are essential to ensure that AI models have a balanced and comprehensive understanding of the subject matter (Gebru et al., 2018). This includes removing biased, incomplete, or outdated data, as well as including diverse and representative sources. However, simply improving training data may not be enough to eliminate hallucinations entirely, as argued by Mitchell (2019), who calls for a more fundamental approach to building AI models capable of reasoning about and understanding the world in a more human-like way.
Traditional language model evaluation metrics, such as perplexity and BLEU, are not suited to detecting hallucinations; more robust metrics need to be developed to assess the factual accuracy of the information generated by the models (Ji et al., 2023). It is also possible to incorporate truth-checking mechanisms into AI models, such as querying external databases or using logical reasoning techniques to validate the generated information (Thorne et al., 2018). However, implementing these mechanisms can be complex and computationally expensive, as pointed out by Chollet (2021), who calls for a more efficient and scalable approach to assessing the factual accuracy of language models.
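To illustrate why a fluency metric alone cannot flag hallucinations, the sketch below computes perplexity from per-token probabilities; the probability values are invented purely for the example. A fabricated citation phrased in common legal boilerplate can obtain a lower (better) perplexity than a true statement phrased awkwardly, which is precisely why factual accuracy must be assessed separately.

```python
# Perplexity = exp(-mean(log p(token_i))): it rewards predictable wording, not truth.
# The token probabilities below are invented for illustration only.
import math

def perplexity(token_probs: list[float]) -> float:
    """Perplexity of a sequence given the model's per-token probabilities."""
    avg_log_prob = sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(-avg_log_prob)

# A fabricated citation phrased in very common legal boilerplate:
fabricated_but_fluent = [0.30, 0.25, 0.40, 0.35, 0.28]
# A true statement phrased in rare, awkward wording:
true_but_awkward = [0.05, 0.08, 0.04, 0.06, 0.07]

print(f"fabricated, fluent: {perplexity(fabricated_but_fluent):.1f}")  # lower score
print(f"true, awkward:      {perplexity(true_but_awkward):.1f}")       # higher score
# The fabricated sentence scores better because the metric measures how
# predictable the wording is and says nothing about factual accuracy.
```

The asymmetry shown here is the reason the literature cited above argues for retrieval- and verification-based evaluation rather than fluency scores.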
Human oversight and rigorous validation of AI-generated information are essential to ensure the accuracy and reliability of results, especially in critical fields such as law. The pursuit of more transparent and explainable AI models can help identify the causes of hallucinations and develop strategies to mitigate them (Rudin, 2019). However, the pursuit of explainability in AI can be a complex challenge, as argued by Lipton (2018), who points to the existence of a "trade-off" between model accuracy and explainability, meaning that more accurate models tend to be less explainable, and vice versa.
A notable example occurred in the United States, where lawyers involved in a lawsuit against Walmart were fined for citing fictitious AI-generated cases in court documents. Federal Judge Kelly Rankin ruled that the lawyers had an ethical obligation to verify the authenticity of the cited cases, highlighting the growing risk of litigation associated with the use of AI. One of the lawyers, Rudwin Ayala, admitted to using an in-house AI program that generated the fake cases and was fined $3,000. Ayala was also removed from the case. The other two lawyers, T. Michael Morgan and Taly Goody, were each fined $1,000 for failing to adequately verify the accuracy of the submitted document (Reuters, 2025).
In Brazil, specifically in Florianópolis, a lawyer was fined for using false AI-generated case law in an appeal. The lawyer admitted to using ChatGPT to draft the appeal, justifying the error as an "inadvertent use" of the technology. Despite the justification, the chamber considered the conduct serious enough to impose the fine and refer the case to the Brazilian Bar Association of Santa Catarina (OAB/SC) (Migalhas, 2025).
Furthermore, a Melbourne lawyer was referred to Victoria's legal complaints body after admitting to using AI software that generated false case citations in a family court matter, resulting in the postponement of a hearing. The lawyer had provided the court with a list of prior cases requested by Judge Amanda Humphreys, but neither the judge nor her associates could locate those cases, as they had been fabricated by the AI software. The lawyer admitted to failing to verify the accuracy of the information before presenting it to the court, offered an unconditional apology, and paid the costs of the adjourned hearing (The Guardian, 2024).
In a study by Marcus and Davis (2022), it was observed that, in advanced language models, AI not only creates factual errors but can also generate completely fabricated information, such as citations of case law that do not exist or references to authors who have never published on the topic in question. This phenomenon can have drastic implications in legal and academic contexts, where reliable sources and rigorous verification are required.
Furthermore, another factor contributing to hallucinations is the AI model's lack of "awareness" of the broader context of the information it generates. AI has no real-world knowledge; it merely correlates patterns present in the data on which it was trained. This means that, in complex scenarios, such as digital law or the interpretation of legal norms, AI can simply "fill in gaps" with information that is unrelated to reality but probabilistically close to the user's query. Such flaws become more evident when the model is asked about very specific or highly complex topics (Shaip, 2022).
These cases underscore the need for a critical and careful approach when using AI tools in the legal field, reinforcing the importance of human oversight and rigorous validation of the information generated.
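Part of that validation can be structured in advance. The sketch below outlines one possible "verify before filing" workflow, under the assumption that citations extracted from an AI-generated draft are treated as unverified until confirmed against an official source. The lookup_in_official_database function is a hypothetical placeholder for whatever court registry or research service a firm actually uses, not a real API, and the example citation is fictitious.

```python
# Sketch of a "verify before filing" workflow for AI-assisted drafts.
# Nothing here is a real court API: lookup_in_official_database is a placeholder
# the reader would replace with their jurisdiction's official research service.
from dataclasses import dataclass

@dataclass
class Citation:
    case_name: str
    reference: str  # e.g., docket number or reporter citation, as given by the AI

def lookup_in_official_database(citation: Citation) -> bool:
    """Placeholder: return True only if the cited case is found in an official source."""
    raise NotImplementedError("Connect this to your jurisdiction's official registry.")

def review_draft(citations: list[Citation]) -> list[Citation]:
    """Return the citations that could NOT be confirmed; these block the filing."""
    unconfirmed = []
    for citation in citations:
        try:
            confirmed = lookup_in_official_database(citation)
        except NotImplementedError:
            confirmed = False  # fail closed: unverifiable means unusable
        if not confirmed:
            unconfirmed.append(citation)
    return unconfirmed

# Usage with a fictitious citation: the professional remains responsible for the
# final judgment; the script only flags anything that was not independently confirmed.
draft_citations = [Citation("Doe v. Example Corp.", "No. 23-0001 (Hypothetical Ct. 2023)")]
for c in review_draft(draft_citations):
    print(f"NOT CONFIRMED - do not file without manual verification: {c.case_name}")
```

The "fail closed" design mirrors the position taken by the courts in the cases above: a citation that cannot be confirmed is treated as unusable, and the final professional judgment remains with the lawyer.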
2. Legal Implications: Liability and Risk of Errors
The emergence of generative AIs in the legal field, while promising, raises the complex issue of assigning legal liability for errors resulting from their "hallucinations." Traditionally, responsibility for failures in technological systems falls to the developers or manufacturers (Calderon et al., 2022). However, the inherent autonomy of generative AIs challenges this premise, as their responses can be generated without direct human intervention, raising debates about civil liability for damages arising from incorrect or misleading information.
In Brazil, the General Data Protection Law (LGPD) and consumer protection regulations establish a framework that allows for liability for both the technology developer and the service provider that uses it (Brasil, 2018). In many cases, liability may be directed at the platform that provides the AI model, especially when the hallucination results in harm to third parties. Although case law is still developing, some courts may adopt a line of reasoning similar to that applied to other technological products, where system failure is considered a failure of the provider, which must guarantee the quality and accuracy of the services provided (Brasil, 2002).
However, this view is not unanimous. Authors such as Abbott and Chopra (2017) argue that AI autonomy challenges traditional liability models, proposing the creation of new legal categories to address the acts performed by these systems. For these authors, the simple application of existing civil liability rules may not be sufficient to ensure justice and compensation for the harm caused by autonomous AIs. They advocate the need for a more in-depth debate on the legal personality of AIs and the possibility of attributing certain rights and duties to them.
The complexity is compounded when we consider the difficulty of determining the exact cause of a hallucination. Generative AI learns from vast data sets, and it is hard to trace the origin of a specific error (O'Neil, 2016). This complicates establishing a causal link between the AI's output and the harm caused, and thus holding the developer or user accountable. Furthermore, the very probabilistic nature of language models makes errors inevitable, which raises the question of what level of accuracy should be required of these systems (Marcus & Davis, 2022).
For Solaiman et al. (2023), the solution could involve adopting a shared responsibility model, where both the AI developer and user are held accountable for the harm caused by its failures. The developer would be responsible for ensuring the system's safety and accuracy, while the user would be responsible for using the AI ethically and responsibly, verifying the information generated and adopting the necessary precautions to avoid harm.
In professions that require expertise and professional judgment, such as medicine and law, the responsibility for the final decision will always fall to the professional. AI is an auxiliary tool, not a substitute for clinical or legal reasoning. Therefore, if a physician uses hallucinatory information generated by AI and causes harm to the patient, the physician is unequivocally responsible for negligence in failing to verify the information and making an inappropriate decision. The same applies to a lawyer who uses false information generated by AI in a lawsuit: the lawyer is responsible for failing to fulfill their duty of care and causing harm to the client.
In 2021, New York Cancer Hospital implemented an AI-powered virtual assistant to assist doctors in diagnosing and treating patients. The system, powered by machine learning algorithms, aimed to analyze clinical data, patient histories, and medical information to provide personalized treatment recommendations.
However, the system was flawed. In some cases, the virtual assistant generated incorrect or inappropriate treatment recommendations that could have caused significant harm to patients if followed without proper medical evaluation. For example, the system suggested incorrect medication doses, recommended therapies that were not appropriate for the patient's clinical condition, and in some cases, even omitted important information about potential side effects.
The virtual assistant's failures raised a number of questions about the legal and ethical responsibility of using AI in healthcare. Although the hospital defended itself by arguing that the system was merely a support tool and that the final decision on treatment always rested with the physician, these errors undermined trust in the technology and raised concerns about patient safety.
While the case may raise questions about the liability of the company that developed the system, the primary responsibility for the patient's diagnosis and treatment remains with the physician, who must critically evaluate the information provided by the AI and make the final decision based on their knowledge and experience. Recent cases, such as the one involving lawyers fined in the United States for citing non-existent case law generated by AI (Reuters, 2025), demonstrate that courts do not tolerate the use of unverified information, even if the source is an advanced technological tool.
In this sense, the discussion about legal liability in cases of AI hallucinations must consider the need to balance technological innovation with the protection of citizens' rights. The creation of a clear and comprehensive regulatory framework that defines the rights and responsibilities of AI developers, users, and platforms is essential to ensure that these technologies are used ethically and responsibly, without compromising safety and fairness.
3. Real Cases of Hallucination in AI: Global Examples and Their Consequences
Hallucinations in AI, far from being theoretical anomalies, manifest themselves in concrete events, triggering substantial and, in some cases, alarming consequences. In addition to the aforementioned cases in the medical sector, several other fields have felt the negative impacts of misinformation propagated by AI systems.
One of the most widespread cases was the incident involving the use of GPT-3 in academic research. Cranz (2021) reported that an AI model mistakenly cited a nonexistent study. This incident sparked a significant debate about the level of trust researchers can place in AI tools for literature review and, crucially, for citing academic sources. Accuracy and integrity of information are fundamental elements in the academic environment, and inappropriate use of these tools can result in the dissemination of misinformation and the erosion of research credibility.
Another notable case occurred in 2020, when an AI on a legal consulting platform suggested a legal opinion based on a set of erroneous information, affecting the course of a relevant legal proceeding. The platform in question was the target of a lawsuit for damages caused to the client, raising questions about the AI provider's liability for such failures.
Additionally, we can mention the case reported by Heaven (2022), in which Meta's chatbot, BlenderBot 3, erroneously stated that Donald Trump remained President of the United States, even after Joe Biden's inauguration. While it may seem like a trivial mistake, this type of misinformation can have a significant impact on public perception and trust in democratic institutions.
In the United States, the situation reached an alarming level when a lawyer was penalized for submitting court documents containing case citations fabricated by ChatGPT (Associated Press, 2023). The lawyer claimed ignorance of the false information; however, the court ruled that it was his duty to verify the veracity of the sources before presenting them in court. This episode serves as a stark warning to legal professionals about the risks inherent in blindly trusting AI tools without due diligence.
In the same case, the two lawyers involved and their firm were ordered to pay a US$5,000 penalty. The judge overseeing the matter, P. Kevin Castel, stated that the lawyers "abandoned their responsibilities" by submitting fake court decisions generated by the artificial intelligence tool (Hill, 2023).
In the field of journalism and news production, it has been observed that language models can generate articles with factually incorrect or distorted information. O'Malley (2023) emphasizes that, although AI can assist in the production of journalistic content, human supervision and rigorous verification of information are essential to ensure the accuracy and credibility of the news published.
In Australia, a court issued a warning about the use of generative AI in legal proceedings after a lawyer admitted using an AI program that fabricated case citations (The Guardian, 2024). The court emphasized the pressing need for caution and rigorous vetting of AI-generated information, as it risks compromising the integrity of the judicial system.
A study by Turcan et al. (2024) revealed that language models, when asked to provide information about historical events, often invent details or attribute events to nonexistent sources. The authors highlight the importance of developing more effective methods to detect and correct these hallucinations in order to prevent the spread of false information about the past.
These incidents reinforce that, in sectors such as law, academia, and journalism, where accuracy and integrity of information are imperative, AI failures are not merely inconvenient but can cause substantial harm. Such examples underscore the pressing need for clearer regulations on the development and use of AI technologies. In Brazil, the recent proposal to create a regulatory framework for AI (Bill 21/2020) aims to establish transparency and accountability standards for the use of AI in various sectors, including law and healthcare. The proposed regulation requires that AI systems be auditable and that developers ensure that the responses provided are verifiable and correctable. However, the effectiveness of these regulations will depend on their effective implementation and the adaptation of courts to address these new technological issues.
4. The Legal Regulation of AI Hallucinations: What’s at Stake?
The legal regulation of hallucinations in artificial intelligence (AI) systems—phenomena in which generative models produce incorrect, distorted, or fictitious information—emerges as a global challenge, with implications for fundamental rights, technological innovation, and legal certainty. While countries and economic blocs adopt distinct strategies to address these risks, the lack of international harmonization exposes critical gaps, especially in transnational scenarios. In Brazil, the General Data Protection Law (LGPD) establishes principles such as transparency and accuracy (art. 6, VI), but does not directly address liability for hallucinations, leaving room for fragmented judicial interpretations (BRASIL, 2018). This ambiguity reflects a broader tension: how to balance technological acceleration with rights protection in a scenario of algorithmic uncertainty?
In the United States, a flexible regulatory model prevails, guided by principles of "responsible innovation." Section 230 of the Communications Decency Act (1996) grants digital platforms immunity for content generated by third parties, including, in certain cases, AI flaws. Companies like OpenAI and Google have argued that hallucinations are "inherent risks" in statistical learning systems, advocating for the application of the "safe harbor" to avoid excessive liability (CALDERON et al., 2022). However, recent decisions, such as Doe v. ChatGPT (2023), in which a user sued OpenAI for defamation after the model generated false information about his criminal history, reveal the inadequacy of this model. Courts have questioned whether immunity applies to harm caused by unmitigated systemic errors, pushing for regulatory review (ZARSKY, 2023).
The European Union, on the other hand, adopts a more interventionist stance with the Artificial Intelligence Act (AIA), approved in 2024. The regulation classifies generative systems (such as GPT-4) as "high risk" when applied in critical areas (health, justice, education), requiring prior compliance assessments, algorithmic transparency, and bias correction mechanisms (EUROPEAN UNION, 2024). Companies must ensure the accuracy of training data and implement "post-market monitoring systems" to detect hallucinations. In cases of harm, responsibility falls primarily on developers, under the logic that they have control over the technology's lifecycle (HITAJ et al., 2023). This approach contrasts with the North American model, which prioritizes prevention over commercial flexibility.
China, for its part, combines strict state regulation with incentives for innovation. The Interim Measures for Generative AI Services (2023) require companies to guarantee the "truthfulness and accuracy" of generated content, obliging them to filter hallucinations that contradict "core socialist values" (CYBERSPACE ADMINISTRATION OF CHINA, 2023). Platforms like Baidu and Alibaba must register AI models in a government system, subjecting them to technical audits. While effective in state control, critics point out that the legislation prioritizes political stability over technical accountability, ignoring challenges such as the inherent opacity of deep learning systems (LEE, 2023).
In countries like Singapore and the United Kingdom, a third approach is emerging, focusing on non-binding guidelines and sectoral self-regulation. The Singaporean Model AI Governance Framework (2020) recommends that companies document the limitations of generative models and inform users about the risks of hallucinations, but without imposing sanctions (PDPC, 2020). In the United Kingdom, the Pro-Innovation AI Regulation Policy Paper (2023) rejects the creation of new regulatory agencies, advocating that sectors like healthcare and finance adapt existing regulations to address AI flaws (UK GOVERNMENT, 2023). This approach, however, faces criticism for shifting the burden of prevention to victims of harm, especially in asymmetric power contexts.
The global scenario highlights common dilemmas. First, the difficulty in defining "hallucination" legally: while engineers see it as an unavoidable technical error, legal experts interpret it as a product or service failure. Second, the fragmentation of responsibilities: even in the EU, where developers are the primary targets, professional users (such as doctors or judges) can be held liable for adopting AI outputs without verification, according to the "last mile" principle (MITCHELL, 2023). Third, the tension between ethics and competitiveness: strict regulations, such as the AIA, can shift investments to more lenient jurisdictions, deepening global inequalities.
In Brazil, although the LGPD and the Brazilian Internet Civil Rights Framework (Law No. 12,965/2014) provide grounds for lawsuits for moral or material damages, the lack of specific legislation for AI leaves crucial questions open. For example: would a hallucination that induces a medical error via an automated diagnostic system—as occurred in a case reported at the Hospital das Clínicas de São Paulo (2023)—subject the developer to strict liability under the Consumer Protection Code (art. 12), or would it require proof of specific negligence? Legal scholars such as Tartuce (2023) defend the analogy with defective products, while others argue that technical complexity requires sectoral standards (GAGLIANO, 2023).
International experience suggests that hybrid solutions can mitigate these impasses. In Australia, the AI Ethics Framework (2022) combines voluntary principles (such as "social welfare") with legal obligations in critical sectors, such as the use of AI in financial services (ASIC, 2022). In Canada, Bill C-27 (2023) proposes fines of up to 5% of global revenue for serious flaws in AI systems, drawing inspiration from the European GDPR (CANADA, 2023). These models indicate that effective regulation requires both enforcement mechanisms and dialogue with technical stakeholders.
Ultimately, regulating AI hallucinations is not just a legal debate, but a reflection of how societies deal with technological unpredictability. While the EU prioritizes caution and the US flexibility, developing countries like Brazil face the additional challenge of ensuring equitable access to technologies without reproducing global asymmetries. Building adaptive frameworks capable of evolving with technology appears to be the most viable path—even though there are no definitive answers in a field where the only constant is change.
Final Considerations
The legal regulation of hallucinations in generative artificial intelligence transcends mere technical adequacy, constituting an ethical and social imperative that impacts both legal practice—where citing fabricated case law can lead to sanctions—and the use of algorithms in medical settings, with potential risks to the integrity of treatments. This regulatory gap threatens fundamental rights, compromises the integrity of strategic professions, and undermines public trust, making it essential to develop a robust and adaptive regulatory framework, especially in a country like Brazil, characterized by regulatory deficiencies and dependence on foreign technologies.
Therefore, the proposal is to draft a Brazilian Artificial Intelligence Statute that goes beyond the limitations of the LGPD and the CDC, establishing a risk classification system similar to the European model. From this perspective, critical applications—notably in the areas of healthcare, justice, and public administration—would require prior certification by specialized bodies, which could necessitate the strengthening of the ANPD or the creation of a specific agency. Furthermore, the statute should impose strict liability on developers, equating hallucinations with product defects, and mandate rigorous technical measures, such as dataset audits and validation testing, as well as the establishment of a sectoral fund to compensate victims in cases of failures of undetermined origin.
Nevertheless, responsibility cannot be attributed solely to developers. It is imperative that professional users—lawyers, judges, and doctors—systematically incorporate the duty to verify the outputs generated by artificial intelligence, expanding the existing duty of due diligence provided for in the legal system. In this context, professional bodies, such as the Brazilian Bar Association (OAB), must develop ethical guidelines and promote training programs in algorithmic literacy to ensure the critical and informed use of these technologies.
The transparency and reliability of artificial intelligence systems are fundamental pillars for effective regulation. Therefore, the implementation of public registries detailing the models used, the disclosure of accuracy metrics, and the performance of independent technical audits—conducted by entities such as Inmetro—are essential measures for consolidating social trust. It is worth noting that, in February 2025, CNJ Resolution No. 332/2020 was updated to strengthen control and audit mechanisms for the ethical use of AI in courts, expressly prohibiting the use of unaudited systems in decision-making processes and raising the required transparency standards.
Finally, education and social engagement emerge as indispensable elements for the effectiveness of the new regulatory framework. Including courses addressing digital ethics and law in undergraduate curricula, promoting information campaigns targeted at the most vulnerable segments of society, and encouraging public-private partnerships for the development of ethical artificial intelligence are strategies that, together, will strengthen Brazil's ability to adapt to the challenges posed by technological innovation.
References
ABBOTT, R.; CHOPRA, S. Toward a Functional Definition of AI Personhood: Protecting Information (Instead of People). SSRN Electronic Journal, 2017. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3077448
ASIC. Regulatory Guide 255: Providing Digital Financial Product Advice to Retail Customers. Australian Securities and Investments Commission, 2022. Available at: https://asic.gov.au/regulatory-resources/find-a-document/regulatory-guides/rg-255-providing-digital-financial-product-advice-to-retail-clients/
ASSOCIATED PRESS. NY lawyer sanctioned for submitting ChatGPT-generated court filing with fake cases. Associated Press, 2023. Available at: https://apnews.com/article/chatgpt-lawyer-sanctioned-fake-cases-6c1ca441b8506685c1fe03d2b1ca43a1
BENDER, E. M. et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, pp. 610–623. Available at: https://dl.acm.org/doi/10.1145/3442188.3445922
BOWMAN, S. R. From Word Embeddings to Sentence Classification: A Theoretical Primer. arXiv preprint arXiv:1511.08198, 2015. Available at: https://arxiv.org/abs/1511.08198
BRAZIL. Law No. 10,406 of January 10, 2002. Institutes the Civil Code. Official Gazette of the Union, 2002. Available at: http://www.planalto.gov.br/ccivil_03/leis/2002/L10406.htm
BRAZIL. Law No. 13,709 of August 14, 2018. General Personal Data Protection Law (LGPD). Official Gazette of the Union, 2018. Available at: http://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/L13709.htm
BROWN, T. B. et al. Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 2020. Available at: https://arxiv.org/abs/2005.14165
CALDERON, L.; FELIX, A.; VASQUEZ, M. Liability in the Era of Artificial Intelligence: An Analysis of US Approaches. Journal of Legal Studies in Technology, 2022.
CALDERON, T. et al. AI Liability in the United States: A Moving Target. Stanford Technology Law Review, vol. 25, no. 3, 2022, pp. 412–450.
CANADA. Bill C-27: Digital Charter Implementation Act. Parliament of Canada, 2023. Available at: https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/royal-assent
CHOLLET, F. Deep Learning with Python. Manning Publications, 2021. Available at: https://www.manning.com/books/deep-learning-with-python-second-edition
CHOMSKY, N. Syntactic Structures. Mouton, 1957.
CRANZ, Alex. We have to stop ignoring AI's hallucination problem. The Verge, 2021. Available at: https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong
CYBERSPACE ADMINISTRATION OF CHINA. Interim Measures for Generative Artificial Intelligence Services. Beijing, 2023.
DOMINGOS, P. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books, 2015. Available at: https://www.basicbooks.com/titles/pedro-domingos/the-master-algorithm/9780465065707/
EUROPEAN COMMISSION. Artificial Intelligence Act: Proposal for a Regulation of the European Parliament and of the Council. 2023. Available at: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
GAGLIANO, P. S. Civil Liability for Damages Resulting from Artificial Intelligence. Saraiva, 2023.
GASSER, U.; ROIO, D.; VON DER BECKE, M. Artificial Intelligence and Legal Accountability: A New Framework for Liability. Harvard Journal of Law & Technology, 2021.
GEBRU, T. et al. Datasheets for Datasets. Communications of the ACM, vol. 64, no. 12, 2018, pp. 86–92. Available at: https://dl.acm.org/doi/10.1145/3458723
HEAVEN, W. D. Meta's new chatbot is spreading misinformation and bigotry. MIT Technology Review, 2022. Available at: https://www.technologyreview.com/2022/08/08/1057789/metas-new-chatbot-is-spreading-misinformation-and-bigotry/
HILL, K. Two Lawyers Are Sanctioned for Using ChatGPT in Court Filings. The New York Times, 2023. Available at: https://www.nytimes.com/2023/06/22/technology/chatgpt-lawyers-sanctioned.html
HITAJ, B. et al. The EU AI Act: A Primer for Policymakers. European Journal of Risk Regulation, vol. 14, no. 2, 2023, pp. 221–240.
HUANG, S. et al. Fake News Generation: Models, Detection, and Challenges. ACM Computing Surveys, vol. 54, no. 9, 2021, pp. 1–37. Available at: https://dl.acm.org/doi/10.1145/3462752
JI, Z. et al. Survey of Hallucination in Natural Language Generation. ACM Computing Surveys, vol. 55, n. 12, 2023, pp. 1–38.
LEE, K. AI Governance in China: Between Control and Innovation. MIT Press, 2023. Available at: https://mitpress.mit.edu/9780262569870/ai-governance-in-china/
LIPTON, Z.C. The Mythos of Model Interpretability. Queue, v. 16, n. 3, 2018, pp. 31–57. Available at: https://dl.acm.org/doi/10.1145/3236386.3241340
MARCUS, G.; DAVIS, E. Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books, 2022. Available at: https://www.pantheonbooks.com/
MIGALHAS. TJ/SC warns lawyer over habeas corpus drafted with artificial intelligence containing false case law. Migalhas, 2025. Available at: https://www.migalhas.com.br/quentes/424313/tj-sc-adverte-advogado-por-hc-feito-por-ia-com-jurisprudencia-falsa
MITCHELL, M. Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux, 2019. Available at: https://us.macmillan.com/books/9780374533890/artificialintelligence
MITCHELL, M. The Last-Mile Problem in AI Regulation. Harvard Data Science Review, vol. 5, no. 1, 2023. Available at: https://hdsr.mitpress.mit.edu/pub/w8j0x0rl
O'MALLEY, S. AI is coming for the news, and journalists need to prepare. MIT Technology Review, 2023. Available at: https://www.technologyreview.com/2023/02/03/1067191/ai-is-coming-for-the-news-and-journalists-need-to-prepare/
O'NEIL, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016. Available at: https://www.crownpublishing.com/
PDPC. Model AI Governance Framework. Personal Data Protection Commission, Singapore, 2020. Available at: https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI-Governance-Framework.pdf
REUTERS. AI 'hallucinations' in court papers spell trouble for lawyers. Reuters, 2025. Available at: https://www.reuters.com/technology/artificial-intelligence/ai-hallucinations-court-papers-spell-trouble-lawyers-2025-02-18/
REUTERS. Judge fines lawyers in Walmart lawsuit over fake, AI-generated cases. Reuters, 2025. Available at: https://www.reuters.com/legal/government/judge-fines-lawyers-walmart-lawsuit-over-fake-ai-generated-cases-2025-02-25/
RUDIN, C. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Using Interpretable Models Instead. Nature Machine Intelligence, vol. 1, no. 5, 2019, pp. 206–215. Available at: https://www.nature.com/articles/s42256-019-0048-x
SEARLE, J. R. Minds, Brains, and Programs. Behavioral and Brain Sciences, vol. 3, no. 3, 1980, pp. 417–424. Available at: https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/minds-brains-and-programs/2185E6F0FCB3E21894D8E5D0CBB93CC0
SHAIP, B. What are AI Hallucinations? Forbes, 2022.
SOLAIMAN, I. et al. Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets. arXiv preprint arXiv:2302.05731, 2023. Available at: https://arxiv.org/abs/2302.05731
TARTUCE, F. Consumer Law and Artificial Intelligence. Forense, 2023.
THE GUARDIAN. Melbourne lawyer referred to complaints body after AI generated made-up case citations in family court. The Guardian, 2024. Available at: https://www.theguardian.com/law/2024/oct/10/melbourne-lawyer-referred-to-complaints-body-after-ai-generated-made-up-case-citations-in-family-court-ntwnfb
THORNE, J. et al. FEVER: A Large-Scale Dataset for Fact Extraction and Verification. arXiv preprint arXiv:1803.03478, 2018. Available at: https://arxiv.org/abs/1803.03478
TURCAN, E. et al. Language Models Can Hallucinate Facts in Time. arXiv, 2024. Available at: https://arxiv.org/abs/2401.04289
UK GOVERNMENT. Pro-Innovation AI Regulation Policy Paper. London, 2023. Available at: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
EUROPEAN UNION. Regulation (EU) 2024/… of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act). Official Journal of the European Union, 2024.
ZARSKY, T. The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making. Sage Journals, 2023. Available at: https://doi.org/10.1177/0162243915605575