On 3–4 December 2020 COHUBICOL holds its second Philosophers’ Seminar, bringing together lawyers, computer scientists and philosophers to engage in an in-depth study of interpretability in machine learning and its relevance to legal protection and the rule of law in the context of computational law. The event is meant to practice slow science: it is based on draft papers that have been shared in advance and read by all participants. No presentations, only in-depth discussion of each paper.

The event will not be streamed or recorded, and the draft papers will not be shared, but we hope they will be submitted to our new Journal of Cross-Disciplinary Research in Computational Law in the course of 2021. Those accepted will become available under the Creative Commons BY-NC license of the Journal. To give you a taste of what we are working on, please see some of the abstracts below.

Computer science

Nico Formanek

Computational reliabilism and the interpretability problem

Two requirements of a satisfactory explanation of an automated (decision) process are transparency and understandability. Computational methods such as machine learning have the property of being epistemically opaque, meaning that no human being could possibly trace all of their steps in a lifetime. It seems, then, that getting a humanly interpretable explanation from such processes is futile. In this article I’ll show how a combination of factors – expert judgement, history of implementation, theoretical warrants and so on – can contribute to the overall reliability of a computational (e.g. machine learning) method. While this does not make the method transparent in the traditional sense, it could suffice to produce an acceptable degree of understanding – this shift in epistemic focus is called computational reliabilism. I’ll present historical cases of computational methods that preceded the machine learning paradigm in which similar justificatory strategies were employed with great success. Drawing on those examples and utilizing computational reliabilism, I’ll argue that the interpretability problem can be contextually solved.

Anna Ronkainen

Reality check: Interpretability and its alternatives in quality assurance of computational legal reasoning

Interpretability is now often highlighted as a necessary feature for discovering errors in the legal analysis performed by intelligent systems. I argue that interpretability must be seen in its proper context, namely overall quality assurance of the legal analysis, and as such it may not be sufficient by itself and in some circumstances not even useful or practicable. I consider the role of interpretability in light of extensive practical experience with the quality assurance processes for the MOSONG and TrademarkNow systems.

Interpretability must be anchored in existing legal arguments used within the domain in order to be comprehensible and useful. When considering the viability of an ML-based system for recommending (or even performing) decisions within the judiciary, the quality of its decisions must be benchmarked against current practices, namely the decisions of human judges. Here, quantitative data suitable for use as a baseline is generally nonexistent.

As a tentative baseline, I will attempt to generate some quantitative quality metrics based on administrative and court decisions from the same domain as the two example systems, that is, opposition and cancellation proceedings related to the likelihood of confusion for trademarks. I will also consider the argumentation used in a sample of such cases qualitatively to assess their suitability as anchoring points for system interpretability.

Law

Julie Cohen

Scaling Interpretability

In this contribution, I will explore whether interpretability needs to be individualized in order to accord with rule-of-law sensibilities. In particular, in order for us to be able to say that a data-driven, algorithmic system is sufficiently transparent that it might be possible to devise systems for holding it accountable, will it be sufficient to say that its operations are interpretable at scale, i.e., over population groups and subgroups? Or does interpretability require more granular self-accountings? In the modern era, law has found the process of triangulation between aggregate explanations, individualized explanations, and legal conclusions increasingly uneasy. It is at least arguable that a rule of law rubric for the networked digital era will need to accommodate aggregates in a less grudging way; if so, it becomes necessary to understand when and why that might be the case.

Christopher Markou and Simon Deakin

Evolutionary Interpretation: Law and Machine Learning

We approach the issue of interpretability in AI and law through the lens of evolutionary theory. From this perspective, blind or mindless ‘direct fitting’ is an iterative process through which a system and its environment are mutually constituted and aligned. The core case is natural selection. Legal reasoning can be understood as a step in the ‘direct fitting’ of law, through a cycle of variation, selection and retention, to its social context. Machine learning, in so far as it relies on error correction through ‘backpropagation’, can be used to model the same process. It may therefore have value for understanding the long-run dynamics of legal and social change. This is distinct, however, from any use it may have in predicting case outcomes. Legal interpretation in the context of the individual or instant case requires use of the higher-order cognitive capacities which human beings have evolved to understand their world and represent it through natural language. This type of forward propagation is unlikely, by its nature, to be well captured by machine learning approaches.
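
The ‘error correction through backpropagation’ invoked here can be pictured with a toy sketch. The snippet below is purely illustrative and not the authors’ model: the data, the single weight and the learning rate are invented, and the one-parameter predictor stands in for backpropagation in its simplest form, a repeated cycle of prediction, mismatch and adjustment against the environment.

```python
# Purely illustrative sketch: learning as iterative error correction.
# Data, weight and learning rate are all invented for the example.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed outcome)
w = 0.0     # the system's current "rule" relating input to outcome
lr = 0.05   # learning rate: how strongly each error corrects the rule

for step in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y      # mismatch with the environment
        w -= lr * error * x         # retain a slightly corrected rule

print(f"weight after repeated error correction: {w:.2f}")
```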

Frank Pasquale

Explanation as a Constitutive Practice in Law

Sophisticated work at the intersection of machine learning and law tends to assume a reformist teleology: to enhance the fairness, predictability, or transparency of legal processes. The implied ideal is one of gradual improvement of adjudication and administration according to external criteria (such as faster case processing, less disparate impact on underprivileged groups, or focused decision making based on explicit criteria). This paper will explore an alternative ground: namely, that explanation may be constitutive of law.

To say an activity is constitutive of a practice means that the practice (here, law) does not really exist without the activity (explanation). To pursue this argument, I will historicize the current algorithmic turn in law as part of a long series of efforts to replace language with numbers or more ostensibly “objective” criteria, going back at least to Bentham, and most recently advanced in cost-benefit analysis. Responding to one of Bentham’s intellectual heirs (Richard Posner), the legal and literary scholar James Boyd White observed that “It is in fact the genius of law that it is not a set of commands, but a set of texts meant to be read across circumstances that are in principle incompletely foreseeable. This is what it means to pass a piece of legislation, or to decide a case—or even to draft a contract—at one point in time, with the knowledge that it will in the future be brought to bear by others (or ourselves) in contexts, and with meanings, that we cannot wholly imagine.”

White went on to describe legal training as just as much a “structured experience of the imagination” as it is a set of propositions. While decisions (and even the opinions justifying them) may eventually be automated, a social process of interaction among accountable actors trained to respect fundamental rights to due process and standards of evidence is the core reason why attorneys’ expertise is respected in society. Following Gil Eyal’s sociological account of expertise, I will argue that this process of interaction is the core normative feature of explainability, rather than propositional rules purporting to set forth abstract conditions of interpretability.

Our researcher Emilie van den Hoven had a short discussion with Frank about his contribution, which you can watch here.

Geneviève Vanderstichele

Interpretable AI, explainable AI and the sui generis method in adjudication

In earlier research, I have claimed that the outcome of a machine learning algorithm with case law as its input is neither a proposition of fact, nor one of law, nor a treatise commenting on the law, but a concept and method of its own kind, sui generis, allowing and even obliging courts to engage with such an outcome when it is used in a court case. In addition, I have argued that such an outcome cannot have normative value in a particular court case, as machine learning algorithms do not generate reasons for their outcome, whereas giving reasons in a judicial decision is essential, as this is part of the rule of law.

In developing these thoughts on the sui generis concept and method in the interaction between humans and ML systems in adjudication, the question arises of how interpretable ML algorithms relate to the fundamental legal obligation to justify a court decision, in light of the rule of law – assuming the interpretability problem can be solved.

The paper first examines the judge’s obligation to motivate a decision and to give reasons for it. It argues that the motivation of court judgments serves different purposes. First, it helps the parties involved to understand the decision. In addition, the obligation to motivate allows the decision to be reviewed by higher courts and by members of society. Finally, it can enhance respect for the rule of law by allowing the public to discuss a decision.

The contribution continues with a short analysis of ‘interpretable machine learning’ and the (absence of a) distinction from ‘explainable ML’. The paper analyses the question of to whom an ML model and algorithm should be interpretable, and for which task.

In the third part, the article confronts and balances these insights on interpretable ML with the obligation to give reasons in adjudication. Among other things, I shall discuss the impact of interpretable machine learning on legal reasoning and analyse the relationship between interpretable ML and the sui generis method, in light of the rule of law.

The methodology of the paper is interdisciplinary: it combines elements of constitutional and procedural law with aspects of interpretable machine learning and with elements of a nascent theory of dispute resolution by humans assisted by digital systems.

Philosophy

Patrick Allo

ML and Networks of Abstraction

Classifiers (as well as other learned functions) implement a level of abstraction: they classify entities by taking into account certain features and ignoring other features. When we want to interpret the workings of a classifier, the initial task is always to make explicit the level of abstraction it implements. We want to know on what the classifier bases its prediction that a given entity belongs to a certain class.
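
As a minimal, hypothetical sketch of what ‘implementing a level of abstraction’ amounts to (not from the paper; the entities, features and thresholds are invented), consider two rules that classify the same entities while attending to different features and ignoring the rest. Making explicit which features each rule observes is precisely making its level of abstraction explicit.

```python
# Hypothetical illustration: two classifiers over the same entities, each
# implementing a different level of abstraction by observing different
# features and ignoring the rest. All values are invented for the example.

applicants = [
    {"income": 52000, "debt": 4000, "postcode": "B1"},
    {"income": 31000, "debt": 9000, "postcode": "C7"},
]

def classify_by_finances(entity):
    # Level of abstraction 1: only income and debt are taken into account.
    return "accept" if entity["income"] - 3 * entity["debt"] > 20000 else "reject"

def classify_by_postcode(entity):
    # Level of abstraction 2: only the postcode is taken into account.
    return "accept" if entity["postcode"].startswith("B") else "reject"

for a in applicants:
    print(classify_by_finances(a), classify_by_postcode(a))
```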

But classifiers are never introduced in a void. They are introduced in a world where other decisions are being made relative to different levels of abstraction. In this talk I’d like to reframe, mostly as a thought experiment, the interpretability problem by taking into account that classifiers (and the levels of abstraction they implement) are always introduced in a context where other levels of abstraction are already at work.

The initial structure of this reframing exercise is roughly as follows:

  1. When we interpret a classifier, we make the level of abstraction it implements explicit.
  2. When we justify a classifier, we also want to show that the level of abstraction it implements is appropriate in view of its intended purpose.
  3. When we contest the working of a classifier (or some of its effects), we somehow challenge its justification (the justification that is given, the contention that it is appropriate for its intended purpose, etc.).

Relative to these three broadly explanatory tasks, I want to explore the consequences of the fact that these tasks can be understood narrowly (as if a classifier were introduced in a void rather than in a context where other levels of abstraction are already at work) as well as broadly (relative to a pre-existing ecology of levels of abstraction). In particular, I want to consider the implications of the fact that different levels of abstraction, each sensible in its own right, need not always behave nicely when they operate in a shared context.

Sylvie Delacroix

Diachronic Interpretability & Automated Systems

This contribution focuses on the challenges raised by preserving interpretability over time. Addressing these challenges requires a sophisticated understanding of the way in which different forms of automation may impact (human) normative agency. Understood as the capacity to stand back and question the way things are, potentially calling for better ways of doing things, normative agency is central to our interpretive capabilities in a morally loaded context (such as law). Much of the ‘interpretability debate’ to date has focused on the types of explanations that may support the interpretive process. Far less attention has been paid to the other side of the interpretability question. If, instead of being taken as a given, normative agency is acknowledged as being sustained by a rich and dynamic background of socio-cultural expectations, one must consider the effect that various automation tools will have upon the continued, dynamic weaving of that fabric of socio-cultural expectations. Considered in this light, the ‘interpretability problem’ is less one of formalising the type of explanations that may support our need for interpretability, and more one of building human-computer interaction modalities that actively support, rather than compromise, our capacity for normatively loaded interpretations. In this respect, there is much to learn from philosophical debates about the nature of ethical expertise, as well as from more technical forays into so-called ‘interactive machine learning’ methods.

Our researcher Emilie van den Hoven had a short discussion with Sylvie about her contribution, which you can watch here.

Christoph Durt

Why Explainability is not Interpretability: Machine Learning and its Relation to the World

The concepts of interpretability and explainability of computationally generated output are often used synonymously, but their connotations differ on a pivotal point. Human action and things produced by humans can be interpreted, whereas natural phenomena can be explained but not interpreted. Hence, the concept of interpretability seems to require an account of ML that sees ML in analogy to human beings. The essay recognizes that anthropomorphic explanations are indeed common in accounts of ML and other forms of AI, yet it argues that machine-generated output is not interpretable analogously to human actions and human-generated things. Rather, the relation of ML to the world needs to be reconceptualized. The essay does so by considering the relation of ML to the world in the context of its relations to humans and data. This is fundamental for showing how ML can be explained, and it allows for a reconceptualization of the problem of interpretability in ML.

ML integrates in a novel way into the world as it is experienced and understood by humans: neither like a conventional object nor like a humanoid subject. The question of the interpretation of human-generated things arises not because machine output is interpretable analogously to human-generated things, but because ML changes human interpretation processes. Rather than simply replacing human interpretation, ML can become integrated into interpretation processes in which humans still play a role. ML increasingly contributes to, disrupts, and transforms such processes, which had previously been reserved for humans. A better understanding of the specifics of human interpretation lays the groundwork for a better understanding of how computation can sensibly become part of activities that involve human sense-making.

Elena Esposito

Explainability or Interpretability? The role of Ambiguity in Legal AI

I address the interpretability issue by questioning the adequacy of the classic metaphor of artificial intelligence for analysing recent developments in digital technologies and the web. The new generation of algorithms, based on machine learning and using big data, does not try to artificially reproduce the processes of human intelligence – it does something different. This, I argue, is neither a renunciation nor a weakness, but the basis of its efficiency in information processing and in the interaction with users. It also exacerbates the problem of interpretability: algorithms do not need to understand their materials and are often incomprehensible. The challenge of interpretability is to establish a relationship with algorithms as communication partners that can be explainable without necessarily being understandable. What users understand from the machine’s explanation does not have to be the machine’s processes. This often happens in human explanations as well, which offer clues for making sense of communication without giving access to the psychic processes of the partner – and this is the direction in which the design of advanced algorithms is currently moving.
