In November 2021 COHUBICOL will hold its third Philosophers’ Seminar, bringing together lawyers, computer scientists and philosophers for an in-depth study of rule formalisation and self-execution (e.g. using blockchain or ‘rules as code’ (RaC) approaches), asking whether and on what basis computer code may have the intended legal effect of binding a constituency as if it were legislation.

With this third seminar, which follows the success of our first two seminars on text-driven and data-driven law (2019 and 2020, respectively), we hope to further the understanding of the roles of formalisation and ‘effectiveness’ in the domains of law and computer science.

The event is meant to practice slow science: it is based on draft papers that have been shared in advance and read by all participants. There will be no presentations, only in-depth discussion of each paper.

The event will not be streamed or recorded, and the draft papers will not be shared, but we hope they will be submitted to our new Journal of Cross-Disciplinary Research in Computational Law in the course of 2022. Those accepted will become available under the Creative Commons BY-NC license of the Journal. To give you a taste of what we are working on, please see some of the abstracts below.

Computer science

Denis Merigoux

The specification problem of legal expert systems

Automated legal decision-making relies on computer programs called legal expert systems, which are executed on machines not themselves capable of legal reasoning. Rather, it is up to the programmer to ensure that the behavior of the computer program faithfully captures the letter and intent of the law. This situation is merely an instance of the more general “specification problem” of computer science. Indeed, the way programs are written and executed requires the programmer to express their intention in a particular form of logic or statistical model imposed by the programming language or framework. On the other hand, the intended behavior of the program, or “specification”, here communicated through the law, is usually described using natural language or domain-specific insights. Hence, every software endeavor begins with a “requirements analysis”, which consists of extracting from the specification corpus a set of requirements that the computer system must obey.

In the case of automated legal decision-making and legal expert systems, the members of this set of requirements are the pieces of legal reasoning that the computer program is expected to perform. Viewing the problem through this lens immediately allows for identifying the key questions for assessing the safety and correctness of legal expert systems. First, when and how is it possible to express legal reasoning as a set of requirements for a computer system? Second, how can we check that these requirements are correctly translated into computer code? Third, can we ensure that the computer code does not introduce unwanted, unlawful behavior? In this article, we take a tour of the general computer science answers to these three questions and assess their efficiency in the particular situation of legal expert systems. To do so, we introduce the distinction between result-constrained and process-constrained legal specifications. From this distinction naturally stem different software solutions, ranging from machine-learning-based to algorithm-based. Finally, we conclude with a discussion of the “critical software” qualification for legal expert systems, and what this qualification could entail in terms of technical and organizational change.
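The distinction between a specification and its implementation can be illustrated with a minimal sketch in Python. The benefit rule, its names and its thresholds are invented for illustration and are not drawn from any real statute: a result-constrained specification only fixes required input/output pairs, while a process-constrained implementation also mirrors the statute's intermediate reasoning steps.

```python
# Toy legal expert system for a hypothetical housing benefit.
# The (invented) statute is assumed to say: income below 2000 is
# eligible, and each dependent raises the threshold by 300.

def benefit_eligible(income: float, dependents: int) -> bool:
    """Process-constrained implementation: mirrors the statute's steps."""
    threshold = 2000 + 300 * dependents   # step 1: compute the threshold
    return income < threshold             # step 2: compare against income

# A result-constrained specification is just a set of required
# input/output pairs extracted during requirements analysis:
requirements = [
    ((1500, 0), True),
    ((2500, 0), False),
    ((2500, 2), True),   # threshold is 2600 with two dependents
]

for (income, deps), expected in requirements:
    assert benefit_eligible(income, deps) == expected
```

Checking the implementation against the requirement set is exactly the second question above; the third question asks whether behavior outside the listed pairs can still be unlawful.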


Sibylle Schupp

Mechanically Verifying Purpose Limitation: Why, How, and How Often?

Generally, awareness of privacy issues is growing and more and more people are concerned about protecting their private data. On the other hand, end users especially often do not fully exercise the rights they have been granted. One regulation for the processing of private data is Art. 5 of the European “General Data Protection Regulation” (GDPR), which addresses purpose limitation, along with its Art. 9, which permits processing of private data only with the “data subject’s” consent. While such a proviso might appear to be a good instrument in an individual’s hand, soliciting consent in practice is often perceived rather as a nuisance, so that individuals commonly resort to default configurations—even if default values run against their general privacy needs (the “privacy paradox”). Organizations or activities that are subject to privacy laws might be better prepared to integrate purpose limitation with their own interests, but they, too, largely consider any required compliance check extra work, not a benefit. Moreover, neither individual users nor organizations are fond of repeating tasks, especially those they are only half-heartedly convinced of: an individual does not like checking consent boxes too often, nor is an organization eager to prove its compliance with Art. 5 anew if the business logic or stakeholders have changed only slightly.

Even though the GDPR allows consent to carry over as long as the new purpose is “not to be considered to be incompatible” (Art. 5), deciding on such compatibility kicks off a new business process and, as practice shows, can easily take a couple of days—at least if done by hand. If, however, compatibility or compliance were decided by mechanical verification, the answer would be quick to obtain. Further, mechanical verification would save people’s time and, like any algorithmic solution, allow for reproducible answers that do not depend on contingencies, e.g. of staffing.

Yet in this essay, we advocate the use of mechanical verification less for the automation it allows than for the artifacts it comes with: its models, inference chains, and counterexamples. Admittedly, models are constructed because the verification process requires them, yet once there, they also enable non-experts to assess the scope and constraints of a formal verifier’s verdict. Similarly, inference rules may form the core of a formal verification system, but they also allow non-experts to understand which path the underlying logic engine took when it arrived at the final judgment. And lastly, counterexamples equip the verifier’s binary decision with a justification in the failure case. Using an example from purpose limitation in the medical domain, we show that models and inference systems provide the abstractions needed so that domain experts, legal scholars, and the working logician can communicate.
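A minimal sketch of such a check, with its counterexample artifact, might look as follows. The purposes and the compatibility relation are invented for illustration; a real verifier would derive them from a formal model of Art. 5 rather than from a hand-written table.

```python
# Toy mechanical check of purpose limitation: data collected for one
# purpose may only be processed for purposes declared compatible.
# The compatibility relation below is invented for illustration.
COMPATIBLE = {
    "treatment": {"treatment", "billing"},
    "billing":   {"billing"},
    "research":  {"research"},
}

def check_processing(consented: str, uses: list[str]):
    """Return (True, None) if every intended use is compatible with the
    consented purpose, otherwise (False, counterexample). The returned
    counterexample is the inspectable artifact a human can act on."""
    for use in uses:
        if use not in COMPATIBLE.get(consented, set()):
            return False, use
    return True, None

ok, why = check_processing("treatment", ["treatment", "billing"])
assert ok
ok, why = check_processing("treatment", ["research"])
assert not ok and why == "research"
```

Repeating the check after a slight change to the business logic is then a re-run, not a new multi-day business process.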


Jurgen J. Vinju

Panta rei: on the evolvability of formalized law

The very moment that a law text is formalized into a (semi-)executable software language, it becomes subject to maintenance. Mistakes were made in the initial formalization, and they must be corrected. Or the interpretation of the natural-language law text has evolved and the formalization must change accordingly. Sometimes other formalized laws on which the current formalization depends have changed, and the current one needs to be updated. And finally, the underlying language in which the law was formalized has changed and the current version must be adapted. As a result, a single formalized law undergoes a process of evolution during its lifetime, from the initial prototype, through a period of active maintenance and renovation, until finally the software can be pronounced “dead”. It is this nature of formalizations, that they are “under maintenance”, which distinguishes them from their original law texts. The changes generate impact. There are two sides to this coin. Firstly, the possibility of change means the possibility of maintaining or improving qualitative aspects. Secondly, the effect of change is downstream impact. Since formalized law texts have multiple target goals (simulation, verification, documentation, implementation, enforcement, cross- and back-linking into law texts), formalizations have dependencies, and artifacts generated from the sources may be scattered across many locations, the impact of changes is plentiful and hard to predict, and thus each change needs sceptical investigation.
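The downstream impact described above can be sketched as reachability in a dependency graph (a Python sketch; the artifact names and the graph itself are invented for illustration, not taken from any real system):

```python
from collections import deque

# Edges point from a formalization to the artifacts that depend on it.
DEPENDENTS = {
    "tax_law_v1":       ["benefit_calc", "simulation_model"],
    "benefit_calc":     ["citizen_portal", "audit_reports"],
    "simulation_model": [],
    "citizen_portal":   [],
    "audit_reports":    [],
}

def impact_of(changed: str) -> set[str]:
    """All artifacts transitively affected by a change (breadth-first search)."""
    seen, queue = set(), deque([changed])
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Changing the root formalization touches everything downstream.
assert impact_of("tax_law_v1") == {
    "benefit_calc", "simulation_model", "citizen_portal", "audit_reports"}
```

A change-first environment of the kind proposed would compute such impact sets for hypothetical edits before they are committed.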

In this position paper we investigate the requirements, design and implementation of a law formalization system which puts change first, supporting from the ground up the evolution of the language, of the law, of the law’s interpretation, and of dependent formalizations. The tool support we envision is multi-layered and human-centered: it enables authors of formalized law to interact with their formalizations on the language level, on the content level, and on the (simulated) impact level. We allow them to explore possible alternatives and to analyse the positive, negative or neutral impact of hypothetical changes.

A change-first development environment for formalized law should have a positive impact on the relevance of formalized law, because it gives due consideration to an inevitable reality of law interpretation, implementation and enforcement: panta rei.



Jason G. Allen

Doing Things With Algorithms: Law, Language, and Agency in the Digital Real

Science fiction writer and futurist Arthur C. Clarke’s “third law” holds that any sufficiently advanced technology is indistinguishable from magic. Indeed, many of the technologies we take for granted today would only have made sense to our grandparents by reference to powers that operate above nature, outside the normal rules that govern actions in the physical universe. Of course, we know that computers are not “magical”, but it is both entertaining and informative to take a step back and ask what it is that “magic” does, how it does so, and what that might teach us about both law and digital technology. In this article, we examine “code-driven law” in both the private law and public law contexts through the lens of the anthropology of magic, namely the idea that a person can effect changes in the world through the proper incantation of words. This speaks to some of the central themes of the COHUBICOL project insofar as it examines the effect of “dynamic” documents that appear, at least, to perform acts in the world—and acts-in-the-law in particular. It points to the need for (i) an enhanced understanding of documents and “documentary acts” (as a category analogous to speech acts) and (ii) a better schematic for modelling the deontic and capacitative logic behind “automated acts-in-the-law”.


Gabriele Margarete Buchholtz

Conceptual differences between ‘law’ and ‘code’

In this article, operating principles of ‘law’ and ‘code’ will be juxtaposed, in order to detect problems and regulatory needs. Technical and dogmatic aspects must be considered equally. Attention will also be drawn to the sociological question as to what it really means to set and apply law.

Application of the law as a social act

First of all, attention shall be drawn to the seemingly trivial fact that (written) legal norms are ‘human work’ and ‘social acts’ – and so is the application and interpretation of law in every individual case. Applying a legal norm is a demanding process: first, a concretisation step is required, because legal norms are usually drafted in general terms. This process is more challenging the more abstract a provision is. Consider, for example, the principle of ‘good faith’ codified in § 242 of the German Civil Code. After concretisation the norm can be applied to a particular case.

It is fundamental to realise that law can only exist ‘in language and through language’, which brings with it an openness to interpretation. However, the act of statutory interpretation is not a straightforward mechanical operation of textual analysis. It is far more complex and requires complementary knowledge, especially where there is room for discretion. Take, for example, the field of legal risk prevention, which relies on technical expertise, or immigration and asylum law, which cannot do without social-scientific knowledge. In administrative law in particular, the real-life implications of a legal provision must be given serious consideration: if the law is to serve as an instrument of behavioural control, the lawyer must pay attention not only to the immediate effects, but also to the long-term social consequences. In other words, legal practitioners and lawmakers must not turn a blind eye to the consequences of law. To conclude, the application of law is an independent act of ‘legal production’. In each individual case, law is recreated as a ‘social product’.

Code as a technical act

How do basic assumptions about the application of law change when computers replace lawyers? Of course, algorithms are made by humans; they are ‘social acts’ in the first place. This seemingly obvious fact has far-reaching consequences. Apart from that, ‘law’ and ‘code’ differ significantly in their application. Algorithms are written not in a natural but in a technical language: binary code maps information through sequences of the two symbols ‘1’ and ‘0’. Thus, coding legal tech software involves two key translation challenges: first, ‘law’ must be converted into binary code, and secondly, it must be translated back into natural language. Driven by this logic, algorithms translate social reality into binary code; drawing on statistical inferences, however, they can only identify correlations, not causation. All AI-driven software is limited to this binary logic. However, advanced forms of AI-driven systems – so-called learning systems – are able to transform new data (input) into decisions (output) without significant human intervention. ‘Responsive systems’ can even dynamically modify their previous decision patterns. Thus, the decision-making process is conditioned by the learning experiences of an AI-driven system. That, in turn, can lead to structurally unpredictable decisions. However, the impossible remains impossible: algorithms lack ‘common sense’ and ‘soft’ decision factors such as intuition, value judgement and holistic thinking. Machines can neither think nor answer deep philosophical questions, best expressed in prose by Goethe: ‘Only mankind can do the impossible: he can distinguish, he chooses and judges […]’.

Misconceptions of ‘code’

As already indicated, legal tech is based on the notion that legal norms can be formalised and fully translated into computer language. To assess this claim, one must lay bare the different operating principles of ‘law’ and ‘code’. Traditionally, the application of law is not perceived as a strictly formalised process, especially given the increasing discretionary power of lawyers and judges. A lawyer or a judge is not a ‘Subsumtionsautomat’ who applies law in a formal-mathematical sense, but rather in a dialectical sense. The process of applying and interpreting a legal norm requires value judgements, intuitive knowledge and holistic thinking. Algorithms, however, lack all of these human qualities – and there is little prospect that software programmers will ever be able to bridge this gap. While machines may indeed one day perform some of the tedious, repetitive legal tasks, we are far from replacing nuanced value judgement and expertise, and perhaps we never will.

Another critical question arises: can natural language be transformed into the binary language of a computer system at all? Although natural language has a certain inherent logic due to its grammar, the meaning of a word may vary significantly depending on the context (‘context variance’). Linguistic distinctions are not entirely predictable or programmable. Only in straightforward, simple cases is formalisation imaginable (and it is hard to determine ex ante whether a case is easy or complex); in a difficult legal or factual situation, formalisation fails. In conclusion, the formalisation of legal language is neither semantically possible nor desirable. The law needs to be flexible to cope with complex technical or social phenomena – in the best interests of society. The necessary degree of flexibility is provided by human language. At this point, a categorial difference between ‘law’ and ‘code’ becomes manifest – calling for new forms of regulation.

Need for regulation

Why is legal regulation necessary in the digital world? The answer is simple. The societal function of law is, above all, to serve the common good and to protect minorities. But neither category matters in digital ‘code’. As long as this situation persists, public law remains an indispensable instrument of control and regulation. It is a pressing need of modern societies to bridge these gaps, first of all the conceptual differences between ‘law’ and ‘code’.


Pompeu Casanovas

Legal isomorphism and the meta-rule of law

Rules as Code (RaC) has been broadly defined as ‘the process of translating rules in legislation, regulation, policy into code so they can be consumed and interpreted by computers’.1 This actually entails a set of assumptions about the relationship between natural and formal languages in law that should be clarified. In my contribution, I will focus only on one of these assumptions, the link between the sources of law and their characterisation into representation languages. It is generally known as legal isomorphism, i.e. the supposition that ‘there should be a one-to-one correspondence between the rules in the formal model and the units of natural language text which express the rules in the original legal sources’.
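Stated as code, the isomorphism assumption amounts to requiring a one-to-one mapping between units of legal text and formal rules. The following is a minimal sketch; the provision names and rule identifiers are invented for illustration:

```python
# One-to-one correspondence between units of legal text and formal rules.
# Provision names and rule identifiers are hypothetical.
text_to_rule = {
    "s. 4(1)": "rule_eligibility",
    "s. 4(2)": "rule_threshold",
    "s. 5":    "rule_exception",
}

def is_isomorphic(mapping: dict) -> bool:
    """The mapping is one-to-one iff it is injective: no two text
    units collapse into the same formal rule."""
    return len(set(mapping.values())) == len(mapping)

assert is_isomorphic(text_to_rule)

# Counterexample: formalizing two provisions with one combined rule
# breaks the one-to-one correspondence the assumption demands.
merged = dict(text_to_rule, **{"s. 5": "rule_threshold"})
assert not is_isomorphic(merged)
```

Real formalizations routinely break this correspondence, which is one reason the assumption needs the clarification argued for here.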

The notion of ‘legal isomorphism’ has been conceived in different ways in the literature on legal knowledge engineering. I will explain these different meanings with the aid of the framework set by the meta-rule of law, i.e. the embedding of the substantive protections of the rule of law into legal web services. The analysis will differentiate between two axes, three dimensions, and four clusters to classify the sources of law. Some examples will be provided.

I will also contend that private or public legal web services require a hybrid, pragmatic approach to be effective. Systems of substantive rights cannot be completely coded, as rules cannot be extracted and implemented without extended pre- and post-modelling work on norms and the context of norms. Knowledge acquisition, rule extraction and rule implementation pose specific problems and challenges.

Technology is changing the way we think about law. But law should be understood not only as a set of constraints on functional requirements but as a full-fledged regulatory toolkit. From this point of view, policies are not safely implementable in digital environments without at the same time putting in place tools of legal governance to monitor, control and take care of the whole regulatory process.

1. Governatori, G., Casanovas, P., de Koker, L. (2020). “On the Formal Representation of the Australian Spent Conviction Scheme”. In International Joint Conference on Rules and Reasoning, RuleML+RR 2020, Lecture Notes in Computer Science 12173. Cham: Springer, pp. 177-185; Governatori, G., Barnes, J., de Koker, L., Hashmi, M., Poblet, M., Zeleznikow, J., Casanovas, P. (2020). “‘Rules as Code’ will let computers apply laws and regulations. But over-rigid interpretations would undermine our freedoms”. The Conversation, 25 November; Casanovas, P., Hashmi, M., de Koker, L., Barnes, J., Governatori, G., Lam, H.P., Zeleznikow, J. (2020). Comments on Cracking the Code: Rulemaking for Humans and Machines (August 2020 draft), comments on the draft OECD White Paper on Rules as Code, submitted on 27 August 2020 to the authors.


Andrea Leiter and Delphine Dogot

Tech-driven governance and law: of hyperformalism and hyperfactualism

That digital technologies govern our lives in many ways has become a truism hardly worth mentioning. But precisely what kind of ordering capacity do digital technologies appeal to? In this paper, we examine the dynamics through which tech-driven law and governance are advancing, and we unpack the rationalities invoked to deploy code to govern the world. On the one hand, there is an appeal to hyperformalism, committing to the strict formality of code as language. In response to the fogginess and inefficiencies of law, the assurance of an interoperable, effective, and predictable set of decisions and instructions re-enchants formal, objective, and efficient ordering, at last purging normativity of interpretation. Such an appeal is embodied, for instance, by blockchain technology, where smart contracts, pieces of code that are automatically executed on a decentralized network of computers, are described as irreversible. On the other hand, the appeal of data-driven real-time complex decision-making rests on the promise of hyperfactualism, able to redress the bulkiness of law and its dated, slow, and over-encompassing categories and procedures. Machine learning and data science hold out the allure of a world in which decisions and qualifications are purely empirical and give refined insights into the functioning of the world and its anticipated future. Data-driven governance promises to know more and better, and to finally see the world as it is, enabling optimal normative intervention. As lawyers and legal theorists, we can read through these promises and recognize them as familiar struggles grounding major disciplinary legal projects. The formalist project seeks harmony and consistency in a system of rules and forms, while the realist project advances an empirical account of the world informing normative intervention through facts. By using the prefix hyper-, however, we want to indicate a difference in the scale and reach of contemporary claims of the rule of code.
As rehearsing familiar reactions to formalist and realist projects might overlook the newness of the rule of code, we suggest taking seriously this difference in scale and considering the particular practices of normativity and their effects. This might allow us to begin to grasp what it would mean to engage with code-driven governance.


James Grimmelmann

The Structure and Legal Interpretation of Computer Programs

This is an essay about the relationship between legal interpretation and software interpretation, and in particular about what we gain by thinking about computers and programmers as interpreters in the same way that lawyers and judges are interpreters. In some ways, programs written in programming languages are like legal texts written in natural language. In other ways, they are not.

I wish to propose that there is something to be gained by treating software as another type of law-like text, one that has its own interpretive rules, and that can be analyzed using the conceptual tools we typically apply to legal interpretation.


Megan Ma

A pragmatics of code: contextualizing law in computation

As law has language at its core, interpretation has centered on the linguistic exercise. This has led to a heavy reliance on translation when reconciling human with machine-readability. However, lessons from core linguistics suggest that natural language is composed of three underlying components: syntax, semantics, and pragmatics. Curiously, the enduring focus on the syntax and semantics in computational models has led to a subsequent neglect of pragmatics, an arguably essential pillar in meaning-making. Consequently, this impedes the capacity to appropriately understand and contextualize legal concepts.

Pragmatics is the field of linguistics that reflects on intention, using the tools of implicature and inference. Consider the phrase: “There is an elephant in the tree.” Semantics is helpful to the extent that it can conjure a prototypical example of an elephant. As elephants are not typically found in trees, this is immediately a sign that the sentence may have a different meaning. Could this be a metaphorical idiom (cf. the elephant in the room), or is there perhaps some implicit understanding that the elephant in question is a paper elephant? Pragmatics also raises the issue of reference. Consider the following sentences: “Jane is speaking with Joanne. She is a renowned legal scholar.” (Inspired by an example in Betty Birner, Language and Meaning (2017), 109.) The referent of “she” is not clear. Without context, semantics alone cannot usefully provide information as to the meaning of these sentences.

There are parallels to the shortcomings of semantics revealed in propositional logic. Computational systems that use propositional logic reflect the limitations present in semantics: propositional logic can validate some statements but cannot in itself establish the truth of all statements. So why must pragmatics be considered in computational law?
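The point can be made concrete with a small Python sketch (the formulas are illustrative): a propositional engine can certify that a formula is valid, i.e. true under every interpretation, but for a contingent formula it can say nothing until an interpretation, supplied from outside the logic, fixes the facts. That external supply of facts is exactly where context, and hence pragmatics, enters.

```python
from itertools import product

def is_valid(formula, atoms):
    """True iff the formula holds under every truth assignment."""
    return all(formula(dict(zip(atoms, values)))
               for values in product([True, False], repeat=len(atoms)))

# "P or not P" is valid: its truth needs no facts about the world.
assert is_valid(lambda v: v["P"] or not v["P"], ["P"])

# "P" alone is contingent: the logic cannot settle it by itself;
# only an interpretation supplied from outside (context) can.
assert not is_valid(lambda v: v["P"], ["P"])
```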

Contrary to the rhetoric on clarity and precision, ambiguity is revered as an inherent property of legal drafting. While this is not necessarily novel, legal documents are not independent artifacts and instead belong to a broader ecosystem. The aforementioned issues of pragmatics in natural language are integrated into the fabric of law and legal text and powered by literary tools of metaphor and analogy that outline context.

Interestingly, code is not quite as transparent or reducible as assumed. Mark C. Marino argues that code, like other systems of signification, cannot be removed from context. Code is not the result of mathematical certainty but “of collected cultural knowledge and convention” (Critical Code Studies (2020), 8).

While code appears to be ‘solving’ the woes of imprecision and lack of clarity in legal drafting, the use of code is, in fact, capturing meaning from a different paradigm. Rather, code is “frequently recontextualized” and meaning is “contingent upon and subject to the rhetorical triad of the speaker, audience (both human and machine), and message.” (id., 4) It follows that code is not a context-independent form of writing. The questions become whether there could be a pragmatics of code, and if so, how could code effectively communicate legal concepts?

Having understood the complexities and pitfalls of natural language, there is now a rising demand to understand the ways code acquires meaning and how shifting contexts shape and reshape this meaning. Currently, few scholars have addressed code beyond its operative capacity. This mirrors the focus on syntax and semantics as primary drivers of using code for legal drafting. Yet, learning how meaning is signified in code enables a deeper analysis of how the relationships, contexts, and requirements of law may be rightfully represented. From the science of (natural) language arises the science of code. The paper, therefore, intends to propose a combinatory method of semiotic analysis with pragmatics for a more fruitful engagement of legal knowledge representation. In this way, the author hopes to extend beyond the arithmetic lens of clarity and precision to account for temporal management and formal ontological reference.


Monica Palmirani

Law as Code

In the last two decades the legal informatics community has produced significant results for managing law and legislation in machine-readable format, using the Semantic Web (Casanovas 2016), Open Government Data (Casanovas 2017, Francesconi 2018), the Free Access to Law Movement (Greenleaf 2011) and the LegalXML community (Akoma Ntoso, AKN4UN, AKN4EU, LegalRuleML, ELI/ECLI). Additionally, the AI and Law community has provided a broad literature on legal reasoning, machine learning and legal analytics, with tools for extracting legal knowledge from texts and predictive models (Genesereth 2015, Ashley 2017). Finally, legal design (Hagan 2020) proposes new patterns for smart visualization of legal content while preserving the principles of legal theory. Despite these successes, the New Zealand Government started in 2018 a project named “Rules as Code” (RaC), and in 2020 it proposed to the OECD-OPSI to codify a new approach: the idea is to use coding methodology to create a macro-schema of law, legally binding, that produces as output the legal text in natural language.

The idea of producing law algorithmically is fascinating: it can reduce trivial errors, unclear norms, wrong normative citations and inconsistent definitions, and simplify the legalese of legislation. But there is also the risk of reducing legal language to a purely syntactical serialization, crystallizing interpretation and so compressing legislation and execution into a single step. The main weakness of this approach, however, is that it does not integrate into the whole picture legal theory, interpretation theory, legal linguistics, semiotic principles and the last 30 years of results from the AI & Law academic community (Bench-Capon 2012; Verheij 2020; Hildebrandt 2018, 2020; Greenleaf 2020). Legal language, which is not limited to text, is not only an instrument of communication or execution (enforceability) of norms; it has a poietic role in the constitutive generation of normativity (Lorini 2020).

This paper proposes a theoretical and technical model based on Semantic Web technologies, LegalXML standards, and legal reasoning for making norms machine-consumable without neglecting legal and hermeneutic theory or the principle of technological neutrality. Secondly, the emerging applications of AI in the legal domain are classified in the EU Commission’s proposed AI Act as high-risk, posing potential challenges with respect to transparency, neutrality, impartiality, and non-discrimination.

Legal formalism and logical positivism (reductionism and textualism), used for decades, are not fully satisfactory for coding law that is resilient to the passage of time and open to digital innovation. Flexibility must be maintained so that the approach remains applicable across different jurisdictions, contexts, historical periods and societal changes. Neither radical legal hermeneutics nor subjectivism, used in the legal area, is a good approach for the Web of Data. The paper defines a methodology for developing computable legal information systems compliant by design with the theory of law and with the explicability principle. We propose an architecture that reconciles legal theory and the philosophy of law with emerging technologies that are deeply modifying current society.


  • Ashley K. (2017). Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age, CUP
  • Bench-Capon T., Ashley, K. (2012). A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law. Artif Intell Law, 20(3):215-319
  • Casanovas P., (2017). A linked democracy approach for regulating public health data. Health and Technology, 7(4):519-537
  • Casanovas P., Palmirani M., Vitali F. (2016). Semantic Web for the Legal Domain: The next step. Semantic Web, 7(3):213-222
  • Francesconi E. (2018). On the Future of Legal Publishing Services in the Semantic Web. Future Internet, 10(6):48
  • Genesereth M. (2015). Computational Law: The Cop in the Backseat, CodeX- Stanford
  • Greenleaf G. (2011). Free access to legal information, LIIs, and the Free Access to Law Movement, IALL:201-228
  • Greenleaf G., (2020). Strengthening Development of Rules As Code. Submission to the OECD’s OPSI on Cracking the Code
  • Hagan M. (2020). Legal Design as a Thing: A Theory of Change and a Set of Methods to Craft a Human-Centered Legal System, Design Issues 36(3):3-15
  • Hildebrandt M. (2018). Law as computation in the era of artificial legal intelligence. speaking law to the power of statistics. Univ. of Toronto Law Jour., 68:12–35
  • Hildebrandt M. (2020). Code-driven law: freezing the future and scaling the past, in Is Law Computable? Critical Perspectives on Law and Artificial Intelligence, eds. Christopher Markou, Simon Deakin, Hart Publishing.
  • Lorini G., Moroni S. (2020). How to Make Norms with Drawings. An Investigation of Normativity beyond the Realm of Words. Semiotica.
  • Verheij B. (2020). Artificial intelligence as law. Artif Intell Law, 28(2):181-206



Mazviita Chirimuuta

Rules, Judgement, and Mechanisation

This paper is an exploration of the capacity of judgment, which stands in contrast to the ability to employ rules. The starting point is a short passage from Kant’s Critique of Pure Reason (A133/B172):

If the understanding in general is explained as the faculty of rules, then the power of judgment is the faculty of subsuming under rules, i.e., of determining whether something stands under a given rule… [T]he power of judgment is a special talent that cannot be taught but only practiced. Thus this is also what is specific to so-called mother-wit, the lack of which cannot be made good by any school; … [T]he faculty for making use of … [the rules] correctly must belong to the student himself, and in the absence of such a natural gift no rule that one might prescribe to him for this aim is safe from misuse.*

*The lack of the power of judgment is that which is properly called stupidity, and such a failing is not to be helped.

In The Promise of Artificial Intelligence, Brian Cantwell Smith has argued that current deep learning technologies are reckoning systems that lack judgment. That is to say, they are adept at following rules (executing algorithmic procedures), but without understanding the significance of these processes. I concur with Cantwell Smith that this lack of judgment places a hard restriction on the kinds of situations in which current AI can responsibly be used: such systems can only make adequate decisions in situations that have been “registered in advance” (Cantwell Smith 2019:112) – conceptually mapped so thoroughly that questions about whether and how rules are to be applied have long been settled.

While Cantwell Smith is sanguine about the potential of future generations of AI technology to acquire the capacity for judgment, in this paper I present an argument for there being an inherent tension between judgment and the mechanised processes at the heart of digital computers. I here draw on writings by Simon Schaffer (1994) and Lorraine Daston (2018) on the early history of computation, and its relationship to ideas honed with the development of industrial manufacture. Mechanisation is possible within the controlled environment of the factory, such that the capacity to improvise in response to changing circumstances is not demanded of the machines. Judgment, it is argued, is the intellectual version of this capacity to improvise, and there may be no way to replicate it mechanically.


Markus Krajewski

Source Code Criticism, Commented: On the Cultural Technique of Programming

Code is not easy to read. Although it consists of clearly defined commands in a linear sequence, code tends to display a high level of abstraction owing to its underlying data structures and the functions linked to them. Once compiled into binary code that the computer can execute, it becomes further obfuscated, contained as if in a black box. Obfuscation and abstraction are obstacles to reading code. Given our increasing reliance on code and algorithmic structures, it is nevertheless crucial that scholars, jurists and other non-computer experts be able to provide critical readings not only of complex philosophical and literary texts but also of algorithms.

Despite the growing body of research into cultural techniques, questions regarding the digital, such as the operation of algorithms, have remained underexplored in this field of media studies. I will address this gap by examining programming as a cultural technique. Similar to methods in Critical Code Studies (e.g., those of Mark Marino), this involves situating algorithms in a discursive and historical context, elucidating code through systematic commentary and making software transparent through critical analysis.

Source code criticism therefore brings together classic hermeneutic scholarship and historical source criticism, linking the examination of material, i.e. algorithms, with a theoretically informed, commentary-based reading of program structures. The latter in particular is the central media practice of “programming as a cultural technique”. Since late Antiquity, in its classic application in theology and law, commentary has been used to uphold, determine, and vindicate the text. Whether the law is religious or judicial, commentary keeps it from becoming inert or incomprehensible; it keeps arguments alive by underlining particular statements and bringing others into the discourse. Commentary has a similar function in philological text analysis, in the creation of critical editions, and in critique génétique: it points out where the text is unclear or ambiguous, and where there are variations or deletions in the original, so as to make transparent the genesis and construction of the text. And, finally, commentary is used in digital philology, which also applies the methods of philology to software. In the context of programming as a cultural technique, commentary has all of these functions. I will, however, go one step further and discuss commentary in its epistemic range as a tool for opening the black boxes of code in order to resolve obfuscation and abstraction.

Unlike most approaches in the digital humanities (DH), source code criticism is a decidedly hermeneutic method whose object of investigation is the digital code itself. This means that algorithms are in turn made understandable and plausible through explanation, reflection, references and, if necessary, modification, in order to gain transparency and intelligibility. What is more, on the epistemic level – which goes far beyond the effects intended by Donald Knuth’s principle of literate programming (1984) – my approach aims to narrativise, historicise, and discursivise code by means of extensive commentary, in order to provide a lever with which to open the black boxes of both AI processes and the deterministic execution of code.
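The contrast the abstract trades on can be made concrete with a toy example (mine, not the author’s): the same computation written once in a terse, uncommented form and once with the kind of systematic, step-by-step commentary described above, in the spirit of Knuth’s literate programming. The function names and the median example are purely illustrative assumptions.

```python
# A deliberately terse function: correct, but its intent is opaque to a reader.
def f(xs):
    return sorted(xs)[len(xs) // 2] if xs else None

# The same logic, re-read through commentary: each step is named and justified,
# so the program becomes a text that can be critically read, not just executed.
def median_element(values):
    """Return the middle element of the sorted values (upper median)."""
    if not values:              # an empty input has no median
        return None
    ordered = sorted(values)    # order the observations
    middle = len(ordered) // 2  # index of the upper median
    return ordered[middle]

print(f([3, 1, 2]))               # 2
print(median_element([3, 1, 2]))  # 2
```

Both functions behave identically; only the second is open to the commenting, hermeneutic reading that source code criticism proposes.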
