This project will tackle head-on the assumptions of data-driven and code-driven law, to unearth their implications for the protection that law and the Rule of Law aim to offer. To situate, understand and describe such assumptions, the COHUBICOL research team will have to start a conversation with those versed in the theoretical underpinnings of computer science. Only by interacting with computer science will we be able to come to terms with crucial differences between the text-driven nature of modern positive law (and the legal protection it provides) and algorithmic decision-making.

Last week I was a guest speaker at the Simons Institute for the Theory of Computing at UC Berkeley, a world-renowned center of excellence, during a workshop on ‘Beyond Differential Privacy’. This was a wonderful opportunity to zoom in on the different ways of thinking around the topic of privacy, with a host of noted computer scientists working on differential privacy and fair computing and a selection of speakers from other disciplines. One of the issues that repeatedly surfaced in the discussions was the need to define the concepts of privacy or fairness in a way that allows their formalisation, in order to start solving the problems of privacy and fairness. From a computer science perspective this move is obvious and a dire necessity to get down to work. The need for precisely circumscribed conceptualisation, however, clashes with the pervasive ambiguity of legal terms. This ambiguity relates to the concept of legal certainty, which does not refer to rigid definitional terms but to reasonable foreseeability, depending on legitimate expectations in light of the circumstances of the case at hand. Both legislation and case law play a crucial role in providing this legal certainty, which is ‘by definition’ adaptive, depending on both the fluidity and the stability of human interaction.
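To give a sense of what such formalisation looks like, consider a minimal sketch of the Laplace mechanism, the canonical way of achieving differential privacy for a counting query (this is my own illustration, not an example from the workshop; the function names and the toy data are mine):

```python
import math
import random

def sample_laplace(scale, rng):
    """Draw one sample from the Laplace(0, scale) distribution
    via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon, rng):
    """Release a counting query under epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one record changes
    it by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + sample_laplace(1.0 / epsilon, rng)

# Toy example: how many people in the dataset are 40 or older?
ages = [23, 35, 41, 29, 52, 67, 31]
rng = random.Random(42)  # fixed seed only for reproducibility of the demo
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```

The point of the sketch is precisely the one made above: ‘privacy’ here has been reduced to a single parameter, epsilon, that bounds how much any one individual's record can shift the distribution of the output. That reduction is what makes the problem tractable for computer science, and it is exactly the kind of precise circumscription that legal terms resist.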

In my article on ‘Law as Computation’ I included a discussion on two different interpretations of legal certainty. I wrote:

Nevertheless, achieving legal certainty as to one’s legal obligations under tax law is a crucial condition for individual freedom, as Jeremy Waldron has insisted:

‘There may be no getting away from legal constraint in the circumstances of modern life, but freedom is possible nevertheless if people know in advance how the law will operate and how they have to act if they are to avoid its application. Knowing in advance how the law will operate enables one to make plans and work around its requirements.’

However, whereas Alarie seems to believe that quasi-mathematical predictive accuracy is the final goal of legal advice, Waldron continues:

‘The institutionalized recognition of a distinctive set of norms may be an important feature. But at least as important is what we do in law with the norms that we identify. We don’t just obey them or apply the sanctions that they ordain; we argue over them adversarially, we use our sense of what is at stake in their application to license a continual process of argument back and forth, and we engage in elaborate interpretive exercises about what it means to apply them faithfully as a system to the cases that come before us.’

Waldron refers to the renowned legal philosopher Neil MacCormick as

‘pointing out that law is an argumentative discipline and no analytic theory of what law is and what distinguishes legal systems from other systems of governance can afford to ignore this aspect of our legal practice, and the distinctive role it plays in a legal system’s treating ordinary citizens with respect as active centers of intelligence.’

Legal certainty, in other words, should not turn into a kind of definitional clarity or predictive accuracy that resists contestability. Legal certainty should afford argumentation and contestation, which are core to the Rule of Law. In view of the rapid advance of various types of algorithmic decision-making in public administration, adjudication and legislation, we must face the danger that law’s domain-specific particularity will be washed away by the altogether different domain-specificity of computer science. This is not to say that computer science should incorporate legal certainty (which would be utter nonsense), but to advocate a better mutual understanding of the methodological integrity of each domain.

In a brilliant blog post on ‘Structural disconnects between algorithmic decision-making and the law’, Suresh Venkatasubramanian (theoretical computer science) explains with keen acuity how text-driven legal requirements clash with algorithmic decision-making systems. Suresh points out two crucial stumbling blocks. First, he opposes the procedural core of the law to the outcome-oriented approach of machine learning:

What I fear is that in order to implement AI-driven systems in such a setting, designers will settle for a kind of illusory precision—where a system will be built with arbitrary precise choices made partly by programmers, but that the resulting black box system will be described as having the desirable broader normative properties. The problem is then one of transparency and contestability: the black box can no longer be interrogated to understand the nature of its arbitrary precision and its interpretation cannot be challenged later on. For more on this, I’d strongly recommend reading Danielle Citron’s work on Technological Due Process.

Second, he discusses the ‘vagueness’ or ‘constructive ambiguity’ of legal text:

One might argue that the vagueness in these terms is by design: it allows for nuance and context as well as human expert judgement to play a role in a decision, much like how the discretion of a judge plays a role in judging the severity of a sentence. Another view of this ‘vagueness by design’ is that it allows for future contestability: if commanders are forced to defend a decision later on, they can do so by appealing to their own experience and judgement in interpreting a situation. In the context of international law, an excellent piece by Susan Biniaz illustrates the value of constructive ambiguity in achieving consensus. There is extensive literature in the philosophy of law defending the use of vagueness in legal guidelines and arguing that precision might not serve larger normative goals and might also shift the balance of decision-making power away from where it needs to be.

I think that Venkatasubramanian has started a conversation that we must continue. COHUBICOL is after precisely these kinds of incompatibilities, frictions and tensions. Not in order to resolve them by succumbing to an algorithmic understanding of law, but in order to acknowledge these structural disconnects and begin a difficult but hopefully productive interaction between law and computer science.

To be continued…