This blog post confronts one of the core concepts of COHUBICOL, that of legal protection, through an analysis of a set of provisions recently issued by the President of the Brazilian Conselho Nacional de Justiça (National Council of Justice – CNJ): the Resolução nº 332, de 21 de agosto de 2020 (English translation by Gianmarco Gori & Tatiana Duarte). This Resolution, which addresses the challenges and opportunities that Artificial Legal Intelligence (ALI) poses for the administration of justice, presents several issues of interest:

Firstly, its positioning, in both a geographical and a conceptual sense. On the one hand, the Brazilian legal system is emerging as a pioneer in the field of law and technology. In the absence of concrete initiatives by the legislator, the judiciary has shown a particular interest in the exploration of ALI: on the basis of locally felt needs and of the technical resources and know-how available within each judicial district (Brehm et al., 2020), Brazilian courts have developed at least 72 different ALI tools (Freitas, 2020).

On the other hand, the resolução fits into a transnational debate on AI and justice. By recalling the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment (CEPEJ, 2018), the resolução transforms the debate into a dialogue that takes place within the community of practising jurists. This community aims to advance answers to those conceptual and practical questions that are left open or unaddressed by the current normative framework. It is the judiciary itself, through its organs of self-government, that undertakes an effort to identify a more fine-tuned set of rules and principles concerning, in particular, the role of data science and AI in the administration of justice.

What is ALI for?

Secondly, it is worth focusing attention both on the human-machine interaction paradigm that has been adopted and on the assumptions and expectations related to the potentialities of ALI. The resolução understands ALI as capable of providing both a form of knowledge and a set of tools which, once put in the hands of the judiciary, can be used (i) to deepen “the understanding between law and human action, between freedom and judicial institutions” (art. 1), (ii) to advance the fairness of the judicial system (art. 2), (iii) to guarantee “the equal treatment of absolutely equal cases”, (iv) to secure legal certainty (art. 5), (v) to ensure the respect of human rights (art. 4), and (vi) to combat discrimination (art. 7).

Besides being an instrument to acquire and manage information, ALI is identified as a tool to enhance the autonomy of legal decision-makers (art. 17, II). In this regard, the direction taken by the resolução is that of delimiting the role of ALI to that of supporting, not supplanting, the decision-making process carried out by the competent human authority. For this purpose, a fundamental requirement is identified in the explainability of models and the accessibility of data, which enable the judge to analyse and supervise the decision recommended by the machine, and the external user (especially the lawyer) to contest that decision.

The “communitarian model” and its checks and balances

Thirdly, the approach adopted in relation to the role of the judiciary in the development of ALI. Through a judiciary-centred “communitarian model” (art. 10, II), the resolução bets on a bottom-up approach aimed at promoting the spontaneous exploration of the potentialities of ALI under a framework of procedural and substantive obligations whose respect is supervised by the CNJ.

With the exception of facial recognition tools (art. 22, § 2), which require prior authorization from the CNJ, the research, development, and implementation of ALI models can all be carried out autonomously by individual courts.

This wide freedom of initiative and action is accompanied by an obligation, undertaken by the whole judicial community, to ensure the protection of human rights (art. 4). Furthermore, that freedom is counterbalanced by a set of provisions aimed at making the ALI chain a virtuous circle grounded in methodological integrity, transparency, and user control, and backed by in itinere supervision by the CNJ and the valorisation of feedback and contestation mechanisms.

In this light, one can read the obligation to report immediately to the CNJ (art. 10; 22) both at the start of an ALI project and whenever any adverse event occurs (art. 26; 27). The CNJ, in turn, as a means of both avoiding duplication of efforts and ensuring transparency (art. 10, II-III; 11), is required to publish a list of the ALI models in use or under development.

The ideal of ongoing supervision is also expressed by the provisions requiring that any ALI model be adjusted as soon as it fails to meet the other requirements of the resolução (art. 22, § 1), in particular in the case of verified discriminatory bias (art. 7, § 2-3). Where compliance cannot be achieved via adjustment, any activity related to the ALI tool is to be terminated. Non-compliance with the rules and principles set forth in the resolution and other applicable provisions exposes members of the judiciary not only to potential civil, administrative and criminal consequences, but also to disciplinary liability.

Such a scenario is, however, forestalled by a set of measures aimed at ensuring that, at all stages of the research, development and implementation of ALI, the composition of the teams is informed by diversity (art. 20, § 1), representative participation and interdisciplinarity (art. 20, § 4). Moreover, some provisions set out the security (art. 13-16), openness (art. 24), and interoperability (art. 12; 24) requirements of the systems adopted, while others establish a risk-based paradigm and define transparency and accountability measures, such as the identification of users, of the costs imposed, and of partnerships and cooperation with non-public entities (art. 25).

One of the most interesting aspects of the resolução relates to the way in which legal protection and methodological integrity emerge as tied by an indissoluble bond, and to how this connection is articulated across the ALI chain, from the attention to research design (Hildebrandt, 2018b, p. 8; Hofman et al., 2017; Mitchell et al., 2019) to the critical valorisation of the experience gained through implementation.

In this perspective, together with the requirements related to the quality of data and the representativeness of the samples (art. 6), which should preferably come from governmental sources (art. 12), the resolução contains two relevant sets of provisions. Firstly, researchers are required, for each ALI model, to provide the CNJ with information concerning the objectives pursued (art. 8, II; 10, I) and to report the results effectively achieved (art. 25, § IV). Secondly, researchers are required, before putting ALI models into production, to submit them to an approval procedure intended to identify whether their development has been influenced by biases or generalizations leading to discriminatory tendencies in their functioning (art. 7, § 1). Moreover, researchers are also required to report to the CNJ, for any project terminated due to discriminatory features, the reasons that led to that decision (art. 7, § 3).

The latter requirements can inform a research methodology that tends not only towards a form of integrity that rules out forms of malpractice such as p-hacking (Gollnick, 2018) or P-hacking (Hildebrandt, 2018c), but also towards a form of robustness that does not deny past failures, but acknowledges and fosters their epistemic value.

As for the former, it should be noticed that, although it often overlaps with the purpose limitation principle, the requirement to state the objective of ALI research has a value that goes beyond the data protection framework. This reflects the fact that not all that counts for legal protection in ALI is necessarily visible and relevant through the lens of personal data. The requirement to articulate the objectives of, and the results expected from, the use of ALI establishes a set of design constraints with significant conceptual and methodological implications. If taken seriously, this requirement compels researchers and developers not only to provide a technical description of “how to formalize” a certain task in machine-readable language, but also to articulate a theoretical account of “what has to be formalized”, i.e. the assumptions about the functioning and purposes of the legal system and of decision-making processes on which, more or less consciously, the design and implementation of an ALI tool is grounded. Far from operating only as a negative limit, such a constraint constitutes the frame, at once technical and conceptual, within which considerations of Legal Protection by Design (Hildebrandt, 2018a, b, c) can be articulated.

In this perspective, it is relevant that the resolução entrusts the judiciary with a duty of both promotion and control in the development and implementation of ALI tools. On the one hand, the engagement of the judiciary throughout the whole ALI chain, together with the accountability framework set forth by the resolution, may contribute to preventing the stabilization of an incentive structure that pushes ALI towards asking the wrong questions or producing solutions that are dangerous or simply “looking for a problem to solve”. At the same time, the “by design” involvement of the judiciary may serve to avert a passive reception of ALI tools, thereby mitigating the potential risk of automation bias.

Conclusions

The encounter between Law and AI presents the juristic community with a challenge: on the one hand, taking full advantage of a form of knowledge that can be used to increase the certainty and efficiency of the judicial system; on the other, safeguarding and valorising that form of understanding which is vital to ensure a fair and case-centred administration of justice, driven by the legal protection of human rights.

Of course, the resolução does not and cannot by itself solve the issues related to such challenges. Nonetheless, it provides resources that, if carefully handled by the juristic community, can be used to make the challenge of ALI a positive-sum game: a legal system that encourages the exploration and exploitation of the affordances of ALI to promote efficiency and certainty and, in doing so, does not sacrifice legal protection but, on the contrary, fosters it.

While all too often AI finds its place under a framework of techno-regulation, inscribing the perspective of practising jurists in the shaping of ALI seems to represent an important step towards preventing the transgression of those procedural and substantive guarantees whose safeguard is assigned to the Judiciary by the architecture of the Rule of Law (Santoro, 2007).


References

  • Brehm K., Hirabayashi M., Langevin C., Rivera Munozcano B., Sekizawa K., Zhu J., ‘The future of AI in the Brazilian Judicial System’ 2020 https://itsrio.org/wp-content/uploads/2020/06/SIPA-Capstone-The-Future-of-AI-in-the-Brazilian-Judicial-System-1.pdf
  • CEPEJ (European Commission for the Efficiency of Justice), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, Strasbourg, 3-4 December 2018
  • Freitas H., ‘Judiciário brasileiro tem ao menos 72 projetos de inteligência artificial nos tribunais’ (2020) Jota 09.07.2020 https://www.jota.info/coberturas-especiais/inova-e-acao/judiciario-brasileiro-tem-ao-menos-72-projetos-de-inteligencia-artificial-nos-tribunais-09072020
  • Gollnick C.A., ‘Induction is not robust to search’, Bayamlioglu E., Baraliuc I., Janssens L., Hildebrandt M., (eds) Being Profiled: Cogitas Ergo Sum (Amsterdam: Amsterdam University Press, 2018) 106
  • Hildebrandt M., ‘Algorithmic regulation and the rule of law’ (2018a) 376 Philosophical Transactions of the Royal Society A
  • Hildebrandt M., ‘Law as computation in the era of artificial legal intelligence: Speaking law to the power of statistics’ (2018b) 68 University of Toronto Law Journal 12-35
  • Hildebrandt M., ‘Preregistration of machine learning research design. Against P-hacking’ (2018c), in Bayamlioglu E., Baraliuc I., Janssens L., Hildebrandt M., (eds) Being Profiled: Cogitas Ergo Sum (Amsterdam: Amsterdam University Press, 2018) 102
  • Hofman J.M., Sharma A., Watts D.J. ‘Prediction and explanation in social systems’ (2017) 355 Science 486–488
  • Mitchell M., Wu S., Zaldivar A., Barnes P., Vasserman L., Hutchinson B., Spitzer E., Raji I.D., Gebru T., ‘Model Cards for Model Reporting’ (2019) arXiv, DOI 10.1145/3287560.3287596
  • Santoro E., ‘The Rule of Law and the “Liberties of the English”: The Interpretation of Albert Venn Dicey’, Costa P., Zolo D., (eds) The Rule of Law. History, Theory and Criticism (Dordrecht: Springer, 2007) 153