1. Introduction
On December 16, 2024, the Federal Supreme Court (STF) launched MarIA – the Artificial Intelligence Writing Support Module – a tool based on generative artificial intelligence aimed at optimizing the drafting of procedural documents. Developed with support from Elogroup Consulting Ltda., which granted the rights to the AI source code and components, the initiative represents a milestone in the modernization of the Brazilian Judiciary. However, its implementation also raises sensitive questions about transparency, accountability, and the impact of automation on due process.
The MarIA tool has three main functions: (i) automatically generating decision summaries (headnotes) based on the justices' votes and decisions; (ii) preparing procedural reports for extraordinary appeals (REs) and extraordinary appeals with interlocutory appeal (AREs); and (iii) conducting an initial analysis of complaints received by the Court, assisting in identifying repetitive demands. Chief Justice Luís Roberto Barroso emphasized that the initiative aims to reduce procedural overload and ensure faster processing of cases, given the backlog of more than 83.8 million cases pending in the Brazilian Judiciary.
MarIA's promising procedural optimization capabilities, however, raise challenges related to the transparency of the tool's methods and the need for effective human oversight. Although the Supreme Court's Technology and Innovation Secretariat has emphasized that all decisions will continue to be reviewed by justices and staff, the use of artificial intelligence in the Judiciary requires an in-depth debate on its implications for the principles of adversarial proceedings, full defense, and the justification of judicial decisions.
Furthermore, the adoption of MarIA reflects a global trend toward incorporating technological solutions into the law, driving structural changes in the way the Judiciary interacts with procedural information. This scenario reinforces the need to establish governance mechanisms that ensure equity, impartiality, and respect for rights.
2. Ethical and Legal Challenges of Automation
Automating judicial tasks through MarIA presents ethical and legal challenges that go beyond simply modernizing the Judiciary. One of the main obstacles concerns the opacity of the algorithms used, the so-called black-box algorithms. When a report or procedural summary is generated by a system whose logic remains unknown, doubts arise about whether the criteria used comply with constitutional principles such as transparency, predictability, and due process.
The lack of explainability in artificial intelligence models has been widely discussed in various areas of law and public administration. An emblematic case occurred in the Netherlands, where the government used an automated system, SyRI (System Risk Indication), to detect fraud in social benefits. The algorithm operated without transparency, cross-referencing data on thousands of citizens without disclosing the analysis criteria. Following allegations of discrimination against immigrants and socioeconomically vulnerable groups, the system was declared unlawful by the District Court of The Hague in 2020, on the grounds that it violated fundamental rights such as privacy and non-discrimination.
Another relevant example occurred in the United Kingdom, where an algorithm was used to grade students during the COVID-19 pandemic, when in-person exams were canceled. The system, designed to replace traditional assessments, ended up harming public school students, unfairly reducing their grades based on historical academic performance patterns. After a wave of protests and legal challenges, the British government was forced to abandon the methodology and reconsider the assessment criteria.
These examples demonstrate that, without clear auditing and explainability mechanisms, algorithms can reinforce inequalities and make problematic decisions without providing those affected with effective means of appeal. In the case of MarIA, although its objective is to assist in the systematization of procedural information rather than replace human judgment, the opacity of its operation can generate similar risks.
The neutrality of algorithms is also open to question. Without continuous evaluation, automation can reproduce or exacerbate existing biases, challenging the very idea of judicial impartiality. Finally, it is essential to clearly define responsibility for the reports generated: if an inconsistency or error is identified, what level of oversight and correction is possible? This question raises the debate on the limits of technical autonomy and highlights the need for constant oversight, ensuring that technology is a tool for improving justice, not reinforcing structural inequalities.
3. Algorithmic Transparency: A Democratic Imperative
The implementation of MarIA in the Federal Supreme Court (STF) represents a significant leap forward in the modernization of the Judiciary, using generative artificial intelligence to assist justices and staff in analyzing and synthesizing procedural information. This innovation, while not granting decision-making autonomy, plays a crucial role in the formulation of votes, the preparation of reports, and the preliminary analysis of complaints, directly and indirectly influencing the judges' interpretations. Given this scenario, algorithmic transparency is essential to ensuring the ethical use of technology, in strict compliance with the principles of due process.
To ensure that MarIA acts as a reliable support tool, it is imperative that the criteria, data, and parameters that guide its responses are widely disseminated to legal practitioners. This disclosure does not necessarily require unrestricted access to the source code, but rather the establishment of mechanisms that allow for an understanding of the tool's operating logic. This way, potential biases can be identified and corrected proactively, avoiding distortions that could compromise judicial practice.
The need for algorithmic transparency is supported by several real-life cases that highlight the risks of opaque systems. For example, academic research and analyses of predictive policing systems employed in urban centers in the United States have shown that, when criteria and parameters are not widely disclosed, algorithms can direct resources in a discriminatory manner. Such studies point to increased scrutiny of historically marginalized communities, demonstrating that a lack of transparency and auditing can exacerbate social inequalities and reproduce structural biases.
The UK grading episode mentioned above is another relevant example: during the COVID-19 pandemic, the algorithm used to determine student grades disproportionately affected public school students. The case illustrates how a lack of clarity in assessment criteria can undermine trust in the decision-making process, limiting the possibility of appeal and significantly compromising fairness.
The predictability of the results generated by MarIA must be guaranteed through constant audits and a robust monitoring system. Only through continuous monitoring will it be possible to improve the data synthesis and organization mechanisms, mitigating the risk of algorithmic biases going undetected and unduly influencing the preparation of legal documents. This approach not only strengthens the reliability of the technology but also safeguards the impartiality of the judicial process, ensuring that technological innovation supports judicial activity and does not compromise justice.
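By way of illustration only, the sketch below shows one form such continuous monitoring could take: comparing AI-generated drafts with the versions ultimately approved by human reviewers and flagging cases in which the divergence exceeds a chosen threshold. All names, metrics, and thresholds are hypothetical assumptions, since MarIA's internal interfaces have not been publicly disclosed.

```python
# Hypothetical audit check for AI-generated case summaries (illustrative only;
# it does not describe MarIA's actual interfaces, which are not public).
from dataclasses import dataclass
from difflib import SequenceMatcher

DIVERGENCE_THRESHOLD = 0.30  # flag cases where more than 30% of the text diverges (illustrative)

@dataclass
class AuditRecord:
    case_id: str
    ai_summary: str        # draft produced by the generative tool
    reviewed_summary: str  # version approved by a justice or staff member

def divergence(ai_text: str, human_text: str) -> float:
    """Return 1 minus the similarity ratio between the AI draft and the reviewed text."""
    return 1.0 - SequenceMatcher(None, ai_text, human_text).ratio()

def audit(records: list[AuditRecord]) -> list[str]:
    """Return the case ids whose AI drafts diverged beyond the threshold."""
    return [r.case_id for r in records
            if divergence(r.ai_summary, r.reviewed_summary) > DIVERGENCE_THRESHOLD]

if __name__ == "__main__":
    sample = [
        AuditRecord("RE 0001", "Appeal dismissed on procedural grounds.",
                    "Appeal dismissed on procedural grounds."),
        AuditRecord("ARE 0002", "Appeal granted; precedent applied.",
                    "Appeal denied; precedent distinguished."),
    ]
    print(audit(sample))  # cases whose drafts required substantial human correction
```

Periodic reports built from checks of this kind would give auditors a concrete, measurable signal of how often, and how much, the tool's drafts need to be corrected before they are incorporated into proceedings.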
Therefore, algorithmic transparency emerges not only as a technical requirement, but as an indispensable ethical commitment to strengthening the democratic rule of law. By ensuring that MarIA operates in a clear and scrutinizable manner, the Supreme Federal Court (STF) reaffirms its leading role in building a modern judiciary that integrates innovation and accountability. This stance promotes a digital justice system that respects fundamental rights, inspires trust in society, and becomes a true ally in promoting equity and transparency in decision-making processes.
4. Algorithmic Governance and Decision Transparency
The introduction of MarIA in the Supreme Federal Court (STF) poses challenges that go beyond the technological aspect, requiring the adoption of an algorithmic governance model that regulates its use and establishes guidelines for its oversight. Although the tool does not make judicial decisions, its influence on the preparation of rulings, reports, and preliminary analyses requires strict control over its implementation, preventing errors or structural biases from compromising the neutrality of the judicial system.
MarIA's algorithmic governance must ensure that its use is aligned with constitutional principles and fundamental rights, and that AI-generated content is always reviewed by judges and civil servants before being incorporated into proceedings. Furthermore, a continuous audit protocol must be established, regularly assessing whether the information presented by the tool accurately reflects the jurisprudential understanding and precedents applicable to each case.
Transparency in content generation is crucial for MarIA to fulfill its supporting role without compromising the legitimacy of the judicial process. To achieve this, the parameters used by artificial intelligence in formulating summaries and reports must be documented and accessible to legal professionals, allowing lawyers and parties involved to understand and challenge any inconsistencies.
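A minimal sketch of what such documentation might contain is shown below. The fields and values are assumptions made for illustration, not a description of MarIA's actual configuration, which has not been made public.

```python
# Illustrative "transparency record" for an AI-generated procedural summary.
# All field names and values are hypothetical.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class GenerationRecord:
    case_id: str
    model_name: str               # underlying model used for the draft
    prompt_template_id: str       # which standardized prompt produced the text
    source_documents: list[str]   # docket entries the model was allowed to read
    precedents_cited: list[str]   # precedents the system surfaced to the drafter
    human_reviewer: str           # who reviewed and approved the final text
    parameters: dict = field(default_factory=dict)  # e.g. temperature, maximum length

record = GenerationRecord(
    case_id="ARE 0000",
    model_name="hypothetical-model-v1",
    prompt_template_id="summary-template-v3",
    source_documents=["petition.pdf", "lower-court-ruling.pdf"],
    precedents_cited=["Tema 000"],
    human_reviewer="chamber-staff-01",
    parameters={"temperature": 0.2, "max_tokens": 800},
)

# Persisting a record like this alongside each document would let the parties
# and their lawyers inspect how a given draft was produced.
print(json.dumps(asdict(record), ensure_ascii=False, indent=2))
```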
Furthermore, it is crucial that MarIA governance be flexible and adaptable to technological and legal transformations. As new challenges arise in the application of artificial intelligence in the Judiciary, the standards and protocols governing its use must be continually improved to ensure the tool remains reliable, ethical, and transparent. This way, the modernization of the Judiciary through artificial intelligence can occur safely, ensuring that technology serves justice, not the other way around.
5. The COMPAS Case: Lessons on Transparency and Algorithmic Discrimination
The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system, developed by Northpointe (now Equivant), has been widely used in the U.S. justice system to assess defendants' risk of recidivism. However, its implementation has raised significant questions about algorithmic bias and transparency in AI-assisted judicial decision-making.
State v. Loomis, 881 N.W.2d 749 (Wis. 2016), was a landmark in the debate over the use of COMPAS in sentencing. Eric Loomis was sentenced to six years in prison based, in part, on the risk assessment generated by the algorithm. The defense argued that COMPAS's lack of transparency violated the right to due process, since the exact criteria used to classify the defendant were treated as the developer's intellectual property and could not be challenged by the defense. The Wisconsin Supreme Court ruled against Loomis, holding that COMPAS could be used as an adjunct to sentencing as long as it was not the sole determining criterion. Even so, the decision raised widespread concerns about the reliability and impartiality of algorithms in the criminal justice system.
In addition to the Loomis case, a 2016 study conducted by ProPublica[1] uncovered evidence that COMPAS exhibited a systematic bias against Black defendants. Investigative journalists analyzed more than 7,000 cases in Broward County, Florida, and found that the algorithm tended to classify Black defendants as "high risk" of reoffending far more often than white defendants, even when their criminal histories were similar.
The study highlighted emblematic cases such as those of Brisha Borden and Vernon Prater. Borden, an 18-year-old Black woman, was classified as high risk after being arrested for taking a bicycle and a scooter left on the street, with no clear intent to steal them. Prater, a 41-year-old white man previously convicted of more serious crimes, including armed robbery, was classified as low risk. Two years later, Borden had committed no new crimes, while Prater was serving a prison sentence for a subsequent theft. Cases like these showed that COMPAS's recidivism predictions were skewed to the detriment of racial minorities.
Furthermore, the analysis revealed that white defendants were more likely than Black defendants to be misclassified as low risk. The lack of transparency about how the algorithm worked prevented defense attorneys and judges from fully understanding how these scores were produced, reinforcing the need for stricter regulation of the use of AI in the justice system.
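To make the reasoning behind these findings concrete, the following sketch reproduces the type of error-rate comparison ProPublica performed, using made-up illustrative records rather than the actual Broward County data: the false positive rate (non-reoffenders labeled high risk) and the false negative rate (reoffenders labeled low risk) are computed separately for each group.

```python
# Group-wise error rates in the spirit of the ProPublica analysis.
# The records below are invented for illustration; they are not real data.
# Each record: (group, predicted_high_risk, reoffended_within_two_years)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("white", False, True), ("white", False, False),
    ("white", True, True), ("white", False, True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders labeled high risk."""
    negatives = [r for r in rows if not r[2]]
    return sum(1 for r in negatives if r[1]) / len(negatives) if negatives else 0.0

def false_negative_rate(rows):
    """Share of reoffenders labeled low risk."""
    positives = [r for r in rows if r[2]]
    return sum(1 for r in positives if not r[1]) / len(positives) if positives else 0.0

for group in ("black", "white"):
    rows = [r for r in records if r[0] == group]
    print(group, f"FPR={false_positive_rate(rows):.2f}", f"FNR={false_negative_rate(rows):.2f}")
```

An asymmetry in these two rates between groups, of the kind ProPublica reported, is precisely what independent auditing of a scoring system is meant to detect.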
ProPublica's investigation also pointed out that the COMPAS methodology was protected as a trade secret by Northpointe, hindering any independent assessment of its criteria and operation. This sparked a debate about the need for greater transparency and auditability of algorithms used in judicial decisions, especially those that influence the deprivation of individuals' liberty.
The experience with COMPAS illustrates the risks of algorithmic opacity in the justice system and raises similar concerns about the implementation of MarIA by the STF. In the case of COMPAS, the lack of transparency in its scoring criteria led to questions about its influence on the deprivation of defendants' liberty, highlighting how reliance on automated systems can directly affect the fairness of decisions. Although MarIA does not make judicial decisions, its role in structuring legal research and organizing case law can indirectly influence judges' reasoning.
As with COMPAS, the lack of transparency in how data is processed and presented can result in biases that affect the conduct of judgments. If MarIA prioritizes certain precedents over others or structures legal research selectively, it could create a false impression of jurisprudential consensus, limiting the range of interpretations considered by justices. The fundamental difference between the two systems does not eliminate the risk that algorithmic opacity compromises the impartiality and predictability of the law.
The correlation between the two cases highlights the need for mechanisms that ensure the transparency and auditability of algorithms used in the Judiciary. COMPAS demonstrated how reliance on opaque models can lead to distortions in the administration of justice, and MarIA, albeit in a different context, can also indirectly shape decisions if clear criteria are not established regarding its operation and its influence on the interpretation of the law. If in COMPAS the lack of explainability raised doubts about the fairness of sentences, in MarIA, the lack of transparency can compromise the reliability of the information used by judges, reinforcing the need for control and scrutiny over its application.
6. Final Considerations
MarIA represents a transformative milestone in the modernization of the Brazilian Judiciary, promising to streamline processes and alleviate the burden that has long challenged the judicial system. However, this innovation must be accompanied by a firm commitment to transparency and collaborative governance. If the algorithms that generate decision summaries, reports, and procedural analyses remain opaque, the risk is that automation will end up reproducing existing biases, compromising the impartiality and predictability of judicial decisions. Therefore, it is crucial that the criteria, data, and parameters that guide MarIA be clearly disclosed, allowing legal practitioners, lawyers, and the public to monitor and question its operation.
This openness does not necessarily mean full disclosure of the source code, but rather the implementation of robust mechanisms for continuous auditing and democratic oversight. Creating an environment where technology is subject to periodic evaluation and constant feedback can transform MarIA into a dynamic system, capable of adapting and evolving as ethical and legal challenges arise. Experience with other automated systems reinforces that a lack of transparency can lead to injustice and discrimination, demonstrating the importance of rigorous oversight when applying technological solutions in the judicial sphere.
For MarIA to fulfill its supportive role without compromising fundamental rights, it is essential that its governance be built collaboratively, involving not only the Judiciary, but also representatives of the Legislature, experts in artificial intelligence, ethics, and civil society. This coordination among diverse stakeholders is crucial to ensuring that the technology serves justice, preventing it from becoming an instrument of exclusion or exacerbating inequalities. Furthermore, creating feedback channels that enable the correction of inconsistencies and the continuous improvement of the tool will foster an environment of constant learning and improvement.
Finally, encouraging research and development of new methodologies to improve the explainability of algorithms is vital for the Judiciary to position itself not only as a consumer of technology, but also as a key player in defining ethical guidelines for artificial intelligence. In this way, innovation can go hand in hand with democratic values, transforming MarIA into a symbol of digital justice that strengthens society's trust in the judicial system and contributes to building a future where technology and ethics intertwine harmoniously and fairly.
References
ALMEIDA, Virgílio et al. Algorithmic Institutionalism: the changing rules of social and political life. Cambridge: Cambridge University Press, 2023.
ANGWIN, Julia; LARSON, Jeff; MATTU, Surya; KIRCHNER, Lauren. Machine Bias: There's software used across the country to predict future crimes. And it's biased against blacks. ProPublica, May 23, 2016. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed on: February 2, 2025.
BRAZIL. Constitution (1988). Constitution of the Federative Republic of Brazil. Brasília, DF: Federal Senate, 1988. (Reference to the principles of publicity of judicial acts and due process of law: art. 5, LX; art. 93, IX).
MEDINA, Damares. MarIA: Technology, Opacity, and the Future of Constitutional Jurisdiction – The Challenges of an Algorithmic Revolution in the Judiciary. December 23, 2024.
STF. STF launches MarIA, an artificial intelligence tool that will make the Court's services more agile. Available at: https://noticias.stf.jus.br. Accessed on: December 16, 2024.
EUROPEAN UNION. Artificial Intelligence Act: proposal for a regulation laying down harmonized rules on artificial intelligence. Available at: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence. Accessed on: December 17, 2024.
[1] ANGWIN, Julia; LARSON, Jeff; MATTU, Surya; KIRCHNER, Lauren. Machine Bias: There's software used across the country to predict future crimes. And it's biased against blacks. ProPublica, May 23, 2016. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed on: February 2, 2025.