
CONFERENCE PROGRAMME
20-21 November 2023
U-Residence, Vrije Universiteit Brussel, Boulevard Général Jacques 271, Brussels 1050
and online
All times are CET


20 Nov: Symposium on The Future of Computational 'Law'

Day 1 (20 November) opens an in-depth discussion among key scholars in the domain of computational 'law', in the broad sense of that term. To ensure an animated and informed discussion, we have three panels with a star line-up of speakers. The panel sessions are preceded by a presentation of the draft research results of the COHUBICOL project, under the heading of 'The Future of Computational "Law"'.

8.30 - 9.15

Registration and coffee

9.15 - 9.30

Welcome to the Conference


9.30 - 10.00

Pauline McBride and Laurence Diver: 'Research Study on Computational Law (draft)'

Abstract

In this presentation we summarise the findings of the COHUBICOL Research Study on Computational Law, in advance of its publication at the end of 2023. The paper builds on our research in the first Research Study on Text-Driven Law, applying those ideas to both data- and code-driven legal technologies, and incorporating the insights gleaned from our work on the Typology of Legal Technologies. The study will pave the way for legal protection by design in legal tech, bridging the gap between legal theory and computer science practice.

Authors' bios

McBride and Diver are postdoctoral researchers in law on the COHUBICOL project, 'Counting as a Human Being in the Era of Computational Law'.

cohubicol.com/about/research-team


10.00 - 12.00

Mireille Hildebrandt: 'Introductory Address: Legal Technologies and the Rule of Law'

Abstract

In this introductory position paper I will argue that lawyers need to come to terms with the advent of a rich variety of legal technologies and set out a series of challenges that the position papers aim to identify and address.

Author bio

Hildebrandt is a Research Professor on 'Interfacing Law and Technology' at Vrije Universiteit Brussel (VUB), appointed by the VUB Research Council. She is co-Director of the Research Group on Law, Science, Technology and Society (LSTS) at the Faculty of Law and Criminology. She also holds the part-time Chair of Smart Environments, Data Protection and the Rule of Law at the Institute for Computing and Information Sciences (iCIS) in the Science Faculty of Radboud University Nijmegen.

cohubicol.com/about/research-team/#mireille-hildebrandt

Sarah Lawsky: 'Respecting both Law and Technology in Legal Technology'

Abstract

Effectively creating and evaluating legal technology requires both legal expertise and technological expertise specific to the project in question. An “interesting” computer science application built on a faulty understanding of the law is not useful and in fact may be dangerous. Tools or projects developed without an understanding of the practice of law may answer questions that are not important or solve problems that are not pressing or do not even exist.

Reading statutes, for example, may seem straightforward, but even when statutory language seems clear, accurately understanding the statute in a way that allows formalization may require knowledge of specialized statutory language, the statutory context in which the provision operates, and other guidance such as regulations or rulings that may clarify, augment, or even effectively change the law. Law resides in details that only specialists understand.

Lawyers in turn may misuse technological tools they do not understand, waste money and time on tools that do not live up to the descriptions supplied by salespeople, and fail to work as efficiently as they might otherwise if they cannot use technology effectively. Lawyers throughout the United States have a duty of technological competence, but as technology becomes more complex, and as accounts of the potential of technology become more fevered, living up to this duty becomes more difficult.

It is the rare person who understands both the particular law at issue and the technology well enough to develop, or for that matter evaluate, legal technology. Developing and evaluating legal technology should therefore be a team project, bringing together, with mutual respect, true legal experts and those whose expertise lies in technology.
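
To see how much can hinge on context, consider a minimal sketch in Python (purely illustrative; the provision, the figures and the 'excluding regulation' are all invented for this example):

    # A hypothetical provision: "a deduction of 10% of income, capped at
    # $5,000". Read in isolation, it formalizes in one line; read together
    # with a (hypothetical) regulation that excludes certain income from
    # the base, the "clear" statutory words no longer tell the whole story.

    def deduction_naive(income: float) -> float:
        """The statutory text formalized on its own."""
        return min(0.10 * income, 5_000.0)

    def deduction_informed(income: float, excluded_income: float) -> float:
        """The same provision once the regulation narrows the income base."""
        base = max(income - excluded_income, 0.0)
        return min(0.10 * base, 5_000.0)

    # The two readings agree only when nothing is excluded:
    assert deduction_naive(40_000) == deduction_informed(40_000, 0)
    assert deduction_naive(40_000) != deduction_informed(40_000, 12_000)

The point is not the arithmetic but that the second function's extra parameter comes from guidance the statutory words never mention.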

Author bio

Sarah Lawsky is the Stanford Clinton Sr. and Zylpha Kilbride Clinton Research Professor of Law at Northwestern Pritzker School of Law. She studies tax law, computational law, and the intersection of the two. Her recent work focuses on the formalization of tax law. Her doctoral research argued for using a particular nonstandard logic to formalize tax law; this research formed the conceptual foundation for the domain-specific programming language Catala, which was created by a team of computer scientists and lawyers. Before entering academia, she worked as a tax lawyer for large law firms.

www.law.northwestern.edu/faculty/profiles/sarahlawsky

Arvind Narayanan, Sayash Kapoor and Peter Henderson: 'Promises and pitfalls of AI for legal applications'

Abstract

Are AI tools set to redefine the landscape of the legal profession? We argue that the current state of evaluations of AI does not allow us to answer this question. We dive into the increasingly prevalent roles of three distinct types of AI used in legal settings: generative AI, AI for automating legal judgment, and predictive AI. While generative AI could help with routine legal tasks, concerns surrounding contamination, construct validity, and prompt sensitivity warrant attention. On the other hand, applications of AI for automating legal judgment range widely in their usefulness. Some helpful interventions include finding common trademark or patent filing errors. Others are inaccurate, hard to evaluate, and suffer from common machine learning errors, such as predicting the outcome of court decisions. Finally, predictive AI is often touted as a groundbreaking tool, but it encounters serious limitations in research and real-world applications. These limitations call into question the validity of such research and applications. Diving into a series of case studies, we highlight potential pitfalls and outline necessary guardrails for evaluating AI in legal contexts.
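
One of the named pitfalls, prompt sensitivity, is easy to make concrete. The sketch below is an editor's illustration rather than the authors' protocol: it runs paraphrases of the same legal question through a model and reports how often the answers agree.

    from collections import Counter
    from typing import Callable

    def answer_stability(ask_model: Callable[[str], str],
                         paraphrases: list[str]) -> float:
        """Fraction of paraphrases that yield the modal answer (1.0 = stable)."""
        answers = [ask_model(p).strip().lower() for p in paraphrases]
        _, top_count = Counter(answers).most_common(1)[0]
        return top_count / len(answers)

    # `ask_model` stands in for whatever system is under evaluation; a stub
    # is used here so the sketch runs as-is.
    paraphrases = [
        "Is a verbal agreement to sell land enforceable?",
        "Can an oral contract for the sale of land be enforced?",
        "Does an unwritten land-sale agreement bind the parties?",
    ]
    print(f"stability: {answer_stability(lambda q: 'No.', paraphrases):.0%}")

A stable score across paraphrases is necessary, though nowhere near sufficient, for trusting benchmark numbers.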

Authors' bios

Arvind Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy. He co-authored a textbook on fairness and machine learning and is currently co-authoring a book on AI snake oil. He led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), twice a recipient of the Privacy Enhancing Technologies Award, and thrice a recipient of the Privacy Papers for Policy Makers Award.

www.cs.princeton.edu/~arvindn/

Sayash Kapoor is a Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research examines the societal impacts of artificial intelligence, with a focus on reproducibility, transparency, and accountability in AI systems. Kapoor's research provides insights into challenges in responsible computing, such as algorithmic legitimacy, privacy, and disinformation. He has previously worked on AI at Facebook, Columbia University, and EPFL Switzerland. His work has been recognized with awards at top computing conferences, including a best paper award at ACM FAccT and an impact recognition award at ACM CSCW. Kapoor is currently co-authoring a book titled AI Snake Oil with Arvind Narayanan, which provides a critical analysis of AI capabilities, separating the hype from the true advances.

www.cs.princeton.edu/~sayashk/

Peter Henderson is an incoming assistant professor at Princeton University with appointments in the Department of Computer Science and the School of Public and International Affairs, as well as the Center for Information Technology Policy. He previously received a J.D. from Stanford Law School and a Ph.D. in Computer Science from Stanford University. His research focuses on the safe deployment of machine learning algorithms for the public good, emphasizing an interdisciplinary agenda at the intersection of AI, law, and public policy.


Watch the full session

Mireille Hildebrandt from 0:00
Sarah Lawsky from 34:20
Arvind Narayanan, Sayash Kapoor and Peter Henderson from 55:40


Lunch (12.00 - 13.00)

13.00 - 15.00

Natalie Byrom: 'Legal technologies and access to justice: Towards a research agenda'

Abstract

Increasingly, claims are made about the potential for computational technology to address the global access to justice crisis. Advocates for AI argue that these tools can extend the protection of the law to the estimated 5.1 billion people worldwide who are unable to secure meaningful access to justice, whilst also creating efficiency savings and reducing the cost of administering justice. However, many of these claims suffer from an impoverished understanding of what access to justice entails and fail to engage sufficiently with the nature of the crisis, specifically the practical and attitudinal barriers that people face in attempting to secure their rights, protections and fair treatment under law.

This paper will present a framework for conceptualising access to justice and show how this framework can be used to identify the access to justice challenges computational law is well placed to address. It will also demonstrate how this framework can be used to evaluate claims made by proponents of computational technology, and argue for a research agenda that moves beyond assessing the performance of individual tools to explore the cumulative impact of these tools on the justice system as a whole.

Author bio

Dr Byrom is Honorary Senior Research Fellow at UCL Laws. Her research focuses on evaluating the impact of data-driven technologies on access to justice and on mechanisms for increasing public participation in justice data governance. Between 2018 and 2020 she was expert advisor to the UK Ministry of Justice, where she led the development of a data strategy to underpin their £1bn programme of digital court reform. She sits on the Civil Justice Council and the Ministry of Justice Senior Data Governance Panel, which advises the Lord Chancellor and Lord Chief Justice on novel and contentious uses of justice data. She is part of the BBC Expert Women network and her research has been published in the legal and national press.

ucl.ac.uk/laws/people/visiting-staff-and-honorary-appointments/natalie-byrom

Denis Merigoux: 'Scoping AI and Law projects: Wanting it all is counterproductive'

Abstract

The intersection of law and computer science has been dominated for decades by a community that self-identifies with the pursuit of "artificial intelligence". This self-identification is not a coincidence; many AI & Law researchers have expressed their interest in the ideologically-charged utopia of government by machines, and the field of artificial intelligence aligns with the pursuit of all-encompassing systems that could crack the very diverse nature of legal tasks. As a consequence, a lot of theoretical and practical work has been carried out in the AI & Law community with the objective of creating logic-based, knowledge-based or machine-learning-based systems that could eventually "solve" *any* legal task. This "want-it-all" research attitude echoes some of the debates in my home field of formal methods around the formalization of programming languages and proofs, and this position paper is the occasion for me to expand the line of reasoning developed in my PhD dissertation.

Hence, I will argue here that the quest for an unscoped system that does it all is counterproductive for multiple reasons. First, because such systems generally perform poorly on everything rather than being good at one task, while most legal applications have high correctness standards. Second, because it yields artifacts that are very difficult to evaluate, making it hard to build a sound methodology for advancing the field. Third, because it nudges towards technological choices that require large infrastructure-building (sometimes on a global scale) before reaping benefits and encouraging adoption. Fourth, because it distracts efforts away from the basic applications of legal technologies that have been neglected by the research community.

The critique presented in this paper is mostly technical. However, I also believe that a shift towards smaller-scale and domain-specific systems and tooling can foster genuine cross-disciplinary collaborations. These collaborations could form the basis for bottom-up approaches that respect the rule of law rather than adapting it for the needs of the system.

Author bio

Denis Merigoux is a junior researcher at Inria Paris in the Prosecco team, which specializes in the study and design of programming languages and formal verification. His research lies at the intersection of computer science and law: his goal is to improve how the law, turned into computer code, is applied automatically by the information systems of government agencies and companies. In particular, he focuses on systems that levy taxes or distribute social benefits. After co-designing the Catala domain-specific language with lawyers for this purpose, he now collaborates with the French tax authority (DGFiP) on an experimental rewrite of the income tax computation algorithm in Catala.

merigoux.ovh

Giovanni Sartor: 'Formal argumentation: the logic for AI & law?'

Abstract

Among the computable models of the law, formal argumentation deserves particular interest, being most suitable to capture the content of legal knowledge and the process of legal reasoning. In comparison with other approaches to modelling the law, argumentation has the advantage of accepting, and indeed emphasising, the dialectical nature of legal problem-solving, capturing the adversarial interaction of arguments and counterarguments. Formal argumentation also provides a framework for explaining the outcomes of AI applications in the legal domain in a way that promotes critical understanding.

In this contribution, the key approaches to the formal modelling of legal argument will be briefly presented, including both rule-based and case-based approaches. Then the potential and limitations of formal argumentation will be critically discussed, with regard to enabling perspicuous and practicable computational models of the law. The connections between formal argumentation and natural language processing will also be examined.
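
For readers new to the formalism, the core of abstract argumentation (Dung 1995) is small enough to state executably. This sketch, illustrative only and not code from the contribution, computes the grounded extension, the most sceptical set of acceptable arguments, for a toy exchange of claim, exception and counter-exception:

    def grounded_extension(arguments: set[str],
                           attacks: set[tuple[str, str]]) -> set[str]:
        """Least fixed point of F(S) = the set of arguments defended by S."""
        def defended_by(s: set[str]) -> set[str]:
            return {a for a in arguments
                    if all(any((c, b) in attacks for c in s)
                           for b in arguments if (b, a) in attacks)}
        s: set[str] = set()
        while (t := defended_by(s)) != s:
            s = t
        return s

    # A: the claim succeeds; B: an exception defeats A; C: the exception
    # is excluded on these facts. C defeats B and thereby reinstates A.
    print(grounded_extension({"A", "B", "C"}, {("B", "A"), ("C", "B")}))
    # prints a set containing A and C; B is defeated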

Author bio

Giovanni Sartor is professor in Legal Informatics at the University of Bologna, professor in Legal Informatics and Legal Theory at the European University Institute in Florence, and visiting professor of Artificial Intelligence and Law at the University of Surrey. He coordinates the CIRSFID - AI for Law and Governance unit at the Alma-AI research center of the University of Bologna. He holds an ERC Advanced Grant (2018) for the project Compulaw (2019-2025).

He obtained a PhD at the European University Institute (Florence), was a researcher at the Italian National Research Council (ITTIG, Florence), held the chair in Jurisprudence at Queen's University Belfast, and was Marie Curie professor at the European University Institute. He has been President of the International Association for Artificial Intelligence and Law. He is co-director of summer schools on “Artificial Intelligence and Law” and on “Law and Logic”, and teaches at Bocconi University (Milan), the Universidade Católica (Lisbon) and the University of Surrey.

He has published widely in legal philosophy, computational logic, computer law, and AI & law. He is co-director of the Artificial Intelligence and Law journal and co-editor of the Ratio Juris journal. His research interests include legal theory, early modern legal philosophy, logic, argumentation theory, modal and deontic logics, logic programming, multiagent systems, computer and Internet law, data protection, e-commerce, and law and technology.

giovannisartor.net


Watch the full session

Natalie Byrom from 0:00
Denis Merigoux from 34:15
Giovanni Sartor from 1:05:28


Coffee and tea (15.00 - 15.30)

15.30 - 17.30

Lyria Bennett Moses: 'Legal education in an era of convergence'

Abstract

The current model of education, generally speaking, puts students on a disciplinary track where they learn a particular approach to solving particular types of problems. A well-trained lawyer will be accomplished in doctrinal reasoning as practiced in their jurisdiction, with (hopefully) sufficient understanding of history, theory, comparative law, and critical perspectives. Currently, only a small proportion will understand computing sufficiently to debate issues such as the affordances and limitations of legal technology (now and in the future) or the benefits and problems of approaches such as ‘rules as code’. Conversely, most computer scientists and software engineers (with a few notable exceptions) are not trained to understand the nature of law or the rule of law. Further, neither (most) lawyers nor (most) computer scientists will be in a position to approach policy challenges such as cyber security holistically, combining the channelling possibilities of law with the technical sense of what might most effectively be done. It is perhaps unsurprising that technology often leaves a trail of poorly designed legal interventions in its wake.

Few problems come neatly in disciplinary packaging, leaving many to make ‘rookie errors’ and, more broadly, producing practices and policies that are widespread yet poorly conceived. Given the increasing reliance on computer tools in legal institutions and legal practice and the emergence of legislation represented in computer code (among other things), we need to reconsider the bounds of a legal education. This can and should include offering specialist courses (including hands-on ‘legal tech’ courses and courses offering a more critical perspective). But we need to go beyond this, both in breadth (i.e. what concepts beyond law should be considered a core component of a ‘legal’ education) and in the composition of classrooms. We need to extend such initiatives (in adapted formats) to existing lawyers, particularly those working as judges and policymakers. Essentially, we need to move nuanced understanding of, and debates around, computational law, facilitated through real discussions between lawyers and computer scientists, from conferences such as this to the mainstream.

Author bio

Lyria is Director of the Allens Hub for Technology, Law and Innovation and a Professor and Associate Dean (Research) in the Faculty of Law and Justice at UNSW Sydney. She is also co-lead of the Law and Policy theme in the Cyber Security Cooperative Research Centre and Faculty Lead, Law and Justice in the UNSW Institute for Cyber Security. Lyria's research explores issues around the relationship between technology and law, including the types of legal issues that arise as technology changes, how these issues are addressed in Australia and other jurisdictions, and the problems of treating “technology” as an object of regulation. Recently, she has been working on legal and policy issues associated with the use of artificial intelligence and the appropriate legal framework for enhancing cyber security. In addition to editorial duties (including the Journal of Cross-Disciplinary Research in Computational Law), she is on the NSW Information and Privacy Advisory Committee, the Executive Committee of the Australian Chapter of the IEEE’s Society for the Social Implications of Technology, and is a Fellow of the Australian Academy of Law.

unsw.edu.au/staff/lyria-bennett-moses

Floris Bex: 'Transdisciplinary research as a way forward in AI & Law'

Abstract

The field of Artificial Intelligence & Law is a community of law and computer science scholars, with a focus on AI applications for the law and law enforcement. Such applications, however, have lately become the subject of much debate. On one side of the debate, the techno-pessimists focus mainly on the bad effects of AI and big data, seeking to regulate and restrict them. On the other side stand the techno-optimists, who argue that the ethical and social problems that AI might bring can be solved by technical innovations. What is the role of the (largely techno-optimistic) AI & Law community in this debate, and how can we investigate AI for the law without getting caught up in the drama? I will argue for a way forward consisting of three points. First, we need to combine research on data-driven systems, such as generative AI, with research on knowledge-based AI: use new deep learning techniques without forgetting about good old-fashioned AI. Second, we must put AI into (legal) practice, working together with courts, the police, law firms and, most importantly, citizens. Finally, we need to work together across disciplines: bringing together those who think about how to build AI and those who think about how to govern and regulate it, and going beyond AI & Law by reaching out to other disciplines such as public administration, philosophy and media studies.

Author bio

Floris Bex is full Professor of Data Science and the Judiciary at Tilburg University, and Associate Professor of AI at Utrecht University. Floris is co-founder and scientific director of the National Police Lab AI, a unique collaboration between Dutch universities and the Netherlands National Police, where research and development of state-of-the-art AI for law enforcement go hand in hand. He is the acting President of the International Association for AI and Law (IAAIL). He has a PhD in AI & Law, and his research focuses on AI techniques (mainly argumentation and NLP) for law enforcement and the legal field, as well as on legal and societal aspects of such AI systems.

florisbex.com

Frank Pasquale and Gianclaudio Malgieri: 'Generative AI, Explainability, and Score-Based Natural Language Processing in Benefits Administration'

Abstract

Administrative agencies have developed computationally-assisted processes to speed benefits to persons with particularly urgent and obvious claims. One proposed extension of these programs would score claims based on the words that appear in them (and relationships between these words), identifying some set of claims as particularly like known, meritorious claims, without depending on the scoring system understanding the meaning of any of these legal texts or words within them. Score-based natural language processing (SBNLP) may expand the range of claims that may be categorized as urgent and obvious, but its practitioners may not be able to offer a narratively intelligible rationale for why it does so. However, practitioners may now use generative AI to attempt to fill this explanatory gap, offering a rationale for decision that is a plausible imitation of past, humanly-written explanations of judgments in cases with similar sets of words in their claims.

This article explains why such generative AI should not be used to justify SBNLP decisions in this way. Due process and other core principles of administrative justice require humanly intelligible identification of the grounds for action. Given that “next-token-prediction” is distinct from understanding a text, generative AI cannot perform such identification reliably. Moreover, given current opacity and potential bias in leading chatbots based on large language models, there is a good case for entirely excluding these automated outputs in administrative and judicial decisionmaking settings. Nevertheless, SBNLP may be established parallel to or external to justification-based legal proceedings, for humanitarian purposes.
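
A caricature helps fix what 'score-based' means here. The sketch below (invented data, no relation to any agency's actual system) ranks a new claim purely by lexical similarity to previously granted claims; nothing in it represents what any of the words mean:

    import math
    from collections import Counter

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    granted = [  # bags of words from claims previously found urgent
        "terminal diagnosis unable to work urgent care required",
        "total disability no income dependents urgent",
    ]

    def urgency_score(claim: str) -> float:
        bag = Counter(claim.lower().split())
        return max(cosine(bag, Counter(g.split())) for g in granted)

    print(round(urgency_score("urgent terminal illness cannot work"), 2))

Any 'explanation' later attached to such a score is generated after the fact; the score itself never depended on one.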

Authors' bios

Frank Pasquale is Professor of Law at Cornell Law School and Cornell Tech. His books include The Black Box Society (Harvard University Press, 2015) and New Laws of Robotics (Harvard University Press, 2020). He has published more than 70 journal articles and book chapters, and co-edited The Oxford Handbook on the Ethics of Artificial Intelligence (Oxford University Press, 2020) and Transparent Data Mining for Big and Small Data (Springer-Verlag, 2017). He has held chaired professorships at the University of Maryland, Seton Hall University, and Brooklyn Law School. He has also served as a distinguished visiting faculty member at the University of Toronto Faculty of Law, visiting professor at Yale Law School, and visiting fellow at Princeton's Center for Information Technology Policy. His work on "algorithmic accountability" has helped bring the insights and demands of social justice movements to AI law and policy. In privacy law and surveillance, his work is among the leading legal research on regulation of algorithmic ranking, scoring, and sorting systems.

lawschool.cornell.edu/faculty-research/faculty-directory/frank-pasquale/

Gianclaudio Malgieri is Associate Professor of Law & Technology at Leiden University in the Netherlands, as well as Co-Director of the Brussels Privacy Hub.

gianclaudiomalgieri.eu


Watch the full session

Lyria Bennett Moses from 0:00
Floris Bex from 22:54
Frank Pasquale and Gianclaudio Malgieri from 45:33



21 Nov: Conference Papers and Roundtable on the Future of Legal Method

9.00 - 9.30

Registration and coffee

9.30 - 9.45

Opening by Pieter Ballon, Vice-Rector for Research, Vrije Universiteit Brussel


9.45 - 10.15

'Not-So-Ethical Governors, (Dis)obeying Robots, and Minotaurs: Legal Automation and Technological Disaffordances regarding the Use of Force' by Johannes Thumfart (Philosophy and History)

Reply by Jessica Dorsey (Law)

Abstract

In this contribution, we discuss technological disaffordances regarding the use of force as a case study of legal automation from the perspective of legal philosophy. We focus on regulatory frameworks such as international law, and international humanitarian law (IHL) specifically, and human and civil rights. We assess possibilities to materialize some of the core principles of these frameworks in technological disaffordances, most notably the prohibition on wars of aggression and occupation in international law and the distinction between civilians and combatants in IHL. Moreover, we pay attention to the domestic use of heavy weapons to suppress the right to peaceful assembly. In an extended literature review, we critically discuss earlier similar conceptions: ‘ethical governors’ as envisioned by Arkin et al. in 2009, ‘disobeying robots’ as envisioned by Grimal and Pollard in 2021, and ‘minotaurs’ as envisioned by Sparrow and Henschke in 2023. We conclude that technological disaffordances regarding the use of force represent a case of dangerous and unrealistic technological solutionism if they are intended to replace the liability and judgement of human actors. However, we recommend specific technological disaffordances to enhance in-field legal reasoning, mostly involving human-machine teaming. These are (in order of their technical feasibility): the recognition of cultural sites, blocking the use of Autonomous Weapon Systems (AWS) in densely populated areas, restrictions regarding the defense of occupied territories with AWS, restrictions regarding the suppression of peaceful protests with heavy weapons, the differentiation between civilian and military infrastructure, and the differentiation between civilians and combatants.

10.15 - 10.45

'Regulatory Acceptance of Test-Driven Development for Digital Compliance', by Anna Huggins, Nicholas Godfrey, Lauren Bellamy and Mark Burdon (Law)

Reply by Asia Biega (CS)

Abstract

In this paper, we synthesise insights from computer science and regulatory theory to examine the opportunities and limits of a novel interdisciplinary approach to acceptance test-driven development (‘ATDD’). We propose ‘regulatory ATDD’ as a new variant of ATDD which seeks to incorporate shared understandings of regulatory standards, and varying levels of compliance risk appetite and values among regulatees. We argue that the construction of regulatory user stories can be informed by a range of concrete legal and regulatory interpretive reference points, which provide a more holistic account of best practice standards from the broader regulatory environment. These regulatory user stories can then be translated into acceptance criteria and tests that can be tailored depending on a regulatee’s compliance risk appetite. We concretise these arguments by illustrating the construction of regulatory user stories and acceptance tests for an environmental, social and governance digital compliance case study: the regulatory guidance provided by the Task Force on Climate-related Financial Disclosures (‘TCFD’). The paper concludes by contending that the capturing of regulatory user stories and acceptance criteria and tests can contribute to macro rule of law values of transparency and accountability through the recording of micro legal coding choices and practices.
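
To give a flavour of the approach (the story, field names and values below are invented for illustration; this is not the authors' TCFD case study code), a regulatory user story such as 'as a reviewer, I need every disclosed climate metric to state its reporting period and methodology' might be rendered as an executable acceptance test:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DisclosedMetric:
        name: str
        value: float
        period: Optional[str]       # e.g. "FY2022"
        methodology: Optional[str]  # e.g. "GHG Protocol"

    def test_every_metric_states_period_and_methodology():
        report = [
            DisclosedMetric("scope_1_emissions", 1200.0, "FY2022", "GHG Protocol"),
            DisclosedMetric("scope_2_emissions", 800.0, "FY2022", "GHG Protocol"),
        ]
        for m in report:
            assert m.period, f"{m.name}: reporting period missing"
            assert m.methodology, f"{m.name}: methodology missing"

    test_every_metric_states_period_and_methodology()

A regulatee with a stricter compliance risk appetite would tighten the assertions; a more permissive one might downgrade some to warnings, which is exactly the tailoring the paper describes.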


Coffee (10.45 - 11.15)

11.15 - 11.45

'Governance by Algorithms: From social norms to laws, to numbers and to code', by Zhenbin Zuo (Law)

Reply by Noura Al-Moubayed (CS)

Abstract

This paper proposes an institutionalist theory of algorithmic governance in contrast to standard law and economics theorisations, arguing that ‘governance by algorithms’ is, in practice, a hybrid consisting of four distinct modes of governance: social norms, laws, numbers and computer code. These different modes of governance are diachronically layered in terms of both their historical evolution and current inter-dependence. Their evolution describes a movement which the paper calls the ‘algorithmic turn’ of governance: from social norms to laws, from laws to numbers (or statistics), and from numbers to code (computational algorithms). In this process, governance has become increasingly algorithmic and formalised in its reliance on computation, while expanding its scale effects over populations and territories. However, this ‘scaling effect’ is countered and constrained by a ‘layering effect’: each successive layer is conditioned by those preceding it. Thus today’s code is cognitively layered by reference to statistics, laws and social norms, thereby exhibiting institutional path-dependence. ‘Learning’ code can only function by using statistical analysis; statistics, in turn, is defined by reference to non-computational frames, including those of laws and social norms. To achieve complementarity and overall effectiveness in governance, lawmakers should be aware of the different nature and limitations of each mode, and consciously avoid potential freezing or lock-in effects induced by over-reliance on just one, in particular ‘code’ and ‘numbers.’


11.45 - 12.15

'Validation of Slovenian national and local elections in 2022', by Andrej Bauer, Katja Berčič, Saša Zagorc (CS)

Reply by Jurij Toplak (Law)

Abstract

The year 2022 was a super election year in Slovenia, with national elections in April, presidential elections in October, and local elections in November. On this occasion, the State Election Commission partnered with the Faculty of Mathematics and Physics at the University of Ljubljana to carry out a mathematical analysis of electoral laws and to implement independent testing and validation of a new software component for calculating election results. We summarize the analysis, describe how we approached testing and validation, and report the obstacles we faced and the solutions we found. Our activities expanded beyond software testing to verification of election results and public outreach. We also provide a legal analysis of the issues involved and formulate recommendations for their resolution.
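
One standard pattern behind such validation work is independent reimplementation: write the result-calculation rule a second time, separately, and compare outputs on many inputs. A minimal sketch (the d'Hondt method appears here only as a familiar example of a seat-allocation rule, not as a statement of the Slovenian rules, and a real validator must also pin down statutory tie-breaking):

    import random

    def dhondt(votes: dict[str, int], seats: int) -> dict[str, int]:
        """Reference implementation: award each seat to the party with the
        highest quotient votes / (seats_won + 1)."""
        alloc = {party: 0 for party in votes}
        for _ in range(seats):
            winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
            alloc[winner] += 1
        return alloc

    def cross_validate(system_under_test, trials: int = 1_000) -> None:
        for _ in range(trials):
            votes = {f"P{i}": random.randint(0, 100_000) for i in range(5)}
            assert system_under_test(votes, 11) == dhondt(votes, 11), votes

    cross_validate(dhondt)  # stand-in: a real run would pass the production code

Disagreements on random inputs localise bugs long before they can surface on real ballots.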


12.15 - 12.30

On the Journal of Cross-Disciplinary Research in Computational Law (CRCL, pronounced 'circle'), by Mireille Hildebrandt and Frank Pasquale


Lunch (12.30 - 13.30)

13.30 - 17.00

Roundtable on The Future of Legal Method

Introductory keynote: Scott Veitch (University of Hong Kong), 'Normativities and the Rule of Law: Realignments and Blindspots'

Roundtable participants:

  • Sophia Adams Bhatti (Simmons & Simmons)
  • Willy Van Puymbroeck (European Commission)
  • Anna Drozd (Council of Bars and Law Societies of Europe)
  • Martijn Loth (Taylor Wessing)
  • Cari Hyde-Vaamonde (King's College London)
  • Denis Merigoux (Inria)
  • Lyria Bennett Moses (University of New South Wales)

(Coffee break 14.45 - 15.00)


Watch the full roundtable

Keynote from 0:00
First round of panelist comments from 49:45
Panelist presentations from 1:03:47
Second round of panelist comments from 2:12:55
Audience Q&A from 2:33:40


17.00

Conference close

