In December, I was fortunate to be able to attend the NLLP Workshop (the recorded livestream is here). The Workshop is a forum for showcasing original work in natural language processing in the domain of law, or natural legal language processing (NLLP). It offers a snapshot of recent developments and an up-close encounter with some of the leading figures in the field. Now in its fourth year, the NLLP Workshop (8 December 2022) was hosted in Abu Dhabi and sponsored by Bloomberg; LBox, South Korea’s largest legal search service; and the European Research Council. The day’s very full proceedings were smoothly managed by Catalina Goanta, Leslie Barrett, Nikolaos Aletras, Ilias Chalkidis and Daniel Preotiuc-Pietro.
Workshop aims and overview
The aim of the Workshop is to bring ‘researchers and practitioners from natural language processing (NLP), machine learning and other artificial intelligence disciplines together with legal practitioners and researchers’ (see the Call for Papers here). Catalina Goanta, Associate Professor in Private Law and Technology, was the sole lawyer among the organisers. The Discussion Panel was more balanced, with three lawyers (Michael Livermore, Matthias Grabmair, Frederike Zufall) and two computer scientists (Josef Valvoda, Jerry Spanakis). More importantly, my impression was that there were comparatively few lawyers among the authors of the 39 papers presented at the Workshop. Lawyers – get involved!
Our work on the typology has sensitised us to hype! The Workshop was refreshingly free of it. Several papers candidly report less than impressive results. The paper by Yu et al., provocatively titled ‘Legal Prompting: Teaching a Language Model to Think Like a Lawyer’, concedes that ‘While our analysis shows significant promise in prompt engineering for high-order LLM-based [Large Language Model-based] reasoning tasks, it is questionable whether prompting actually teaches a LM to “think like a lawyer” …’ Zhenwei An et al. report that their charge prediction model does not adhere to the principles embedded in the criminal law of China. Such reflection is encouraging. NLP practitioners commonly recognise that the legal domain is challenging because law uses technical language and its texts are long and often unstructured, in the sense that they do not follow a predetermined data model. It is less common to see sensitivity to the functions, values, and effects of law – one of many reasons for lawyers to get involved!
The papers tackle a wide range of tasks and topics – summarisation of legal texts, named entity recognition applied to legal documents, classification of documents, text analytics, text generation, prediction of judgment, disclosure of datasets, modelling legal reasoning, the deployment of large language and other ML models, to name but a few. Some seem, in Livermore’s words, to use the legal domain as a ‘sandbox’ for application of NLP techniques. Others were more directly focused on solving problems specific to the legal domain, carrying out ‘law jobs’ (such as prediction of judgment, identifying privacy or compliance issues) or extracting insights (political, normative, sociological) from legal texts.
Singling out some papers
It seems invidious to single out favourites. However, three papers attracted my attention: Gubelman et al., ‘On What it Means to Pay Your Fair Share: Towards Automatically Mapping Different Conceptions of Tax Justice in Legal Research Literature’; Perin et al., ‘Combining WordNet and Word Embeddings in Data Augmentation for Legal Texts’ (tackling a ‘real world’ problem for the application of NLP in the legal domain, namely the relative scarcity of high-quality annotated data); and Benatti et al., ‘Should I disclose my dataset? Caveats between reproducibility and individual data rights’. Why these? Perhaps they align with lawyerly interests – these papers do not use the legal domain as a mere ‘sandbox’ for exploring the capabilities and limitations of NLP, and they look to do more than mimic tasks carried out by lawyers. Gubelman et al.’s paper had a special appeal on account of its attempt to trace and classify legal arguments which invoke conceptions of justice.
For me, of all the papers, that by Perin et al., ‘Combining WordNet and Word Embeddings in Data Augmentation for Legal Texts’, was a ‘must read’. I first encountered WordNet (and FrameNet) in an extended analysis of the implications of the so-called ‘making available’ right. The authors’ approach combines reliance on WordNet’s database of synsets (sets of synonyms for words) with pre-trained word embeddings. They use this combination of symbolic and sub-symbolic approaches in the hope of improving on existing techniques for data augmentation – increasing the contents of a dataset by adding synthetic data generated from existing data. Here, for each sample sentence in the existing data, synthetic data is generated by finding and using replacement words which differ from, but are sufficiently similar in semantic content to, the words in the sample. Intuitively the authors’ approach makes sense. However, it is a hard (perhaps impossible) task to find acceptable substitutes for certain words or phrases, for example, the names of specific institutions or technical legal terms that are freighted with meaning. It is unsurprising, therefore, that the authors report that domain experts ‘emphasize that more work is needed to obtain satisfactory results.’
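The core replacement step can be sketched in a few lines of Python. To be clear, the synset dictionary and two-dimensional embedding vectors below are toy stand-ins invented for illustration (the paper uses the real WordNet database and pre-trained embeddings, and its pipeline is more elaborate); the point is the combination: WordNet proposes candidate synonyms, and an embedding-similarity threshold filters out candidates that drift too far in meaning.

```python
import random

# Toy stand-ins for WordNet synsets and pre-trained embeddings.
# Both are invented for illustration; real systems would query the
# WordNet database and load embeddings trained on a large corpus.
SYNSETS = {
    "court": ["tribunal", "bench"],          # "bench" is a synonym in the wrong sense
    "decision": ["ruling", "judgment"],
}
EMBEDDINGS = {
    "court": [1.0, 0.0], "tribunal": [0.9, 0.1], "bench": [0.2, 0.9],
    "decision": [0.0, 1.0], "ruling": [0.1, 0.95], "judgment": [0.05, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def augment(sentence, threshold=0.8):
    """Produce a synthetic variant of `sentence` by replacing each word
    with a WordNet synonym, but only if the synonym's embedding is close
    enough (cosine similarity >= threshold) to the original word's."""
    out = []
    for word in sentence.split():
        candidates = [
            syn for syn in SYNSETS.get(word, [])
            if word in EMBEDDINGS and syn in EMBEDDINGS
            and cosine(EMBEDDINGS[word], EMBEDDINGS[syn]) >= threshold
        ]
        out.append(random.choice(candidates) if candidates else word)
    return " ".join(out)
```

The threshold does the real work here: in the toy data, ‘bench’ is listed as a synonym of ‘court’, but its embedding sits far from ‘court’, so the filter discards it – exactly the kind of sense disambiguation the authors hope the embeddings will supply on top of WordNet’s symbolic synonym lists.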
Lawyers – get involved
This was an event free of hype, but it was also an expression of confidence in the future of NLLP. The panel accepted that hard problems remain – especially as regards explainability – and suggested that the solution may lie, in part, in hybrid systems combining symbolic and sub-symbolic reasoning. The panel also acknowledged some worries about the deployment of AI, for example, in the context of prediction of judgment. Nevertheless, as Grabmair maintained, like it or not, data-driven technologies and automation are here to stay, and lawyers need to get involved in the conversation.
How should lawyers get involved or, as Hildebrandt puts it, ‘get their act together’ in the brave new world of computational law? Members of the Workshop panel suggested various roles for lawyers in collaboration with NLP practitioners, computer scientists, machine learning experts and others working in the NLLP space. Zufall referred to the need for reflection on the effects of automated decision-making, on who gets to decide how law is represented in code, and on questions about the legitimacy of systems which encode the law in a manner that imperfectly represents the intentions of the legislature. She saw a need to build theories about how these systems should be designed and applied. Lawyers have a crucial role to play here. However, Valvoda’s comments were particularly insightful. Valvoda expressed uncertainty about whether the NLLP community had correctly formulated ‘its mission’, adding that for the purposes of NLLP we should identify what is unique about law. Lawyers must be part of this conversation. Their contribution should, in turn, be informed not only by a sound grasp of substantive law but also by a deep understanding of the ends of law, and of the practices and institutional arrangements which sustain law to deliver those ends. Part of the raison d’être of the COHUBICOL project is to foster such reflection and to encourage cross-disciplinary discussions about these issues. Our inaugural CRCL Conference (November 2022) and our Typology of Legal Technologies demonstrate our commitment to such an approach. I hope that the Typology will contribute to deliberations about the ‘mission’ of NLLP.
NLLP2022 was lively and thought-provoking. The papers were interesting, the panel discussion careful and well-informed, the tone confident but measured. I welcome the call for cross-disciplinary collaboration and the plea for greater involvement of lawyers. The task of communicating what is distinctive about law, its values and affordances, the practices and institutions that sustain it and make up its ‘regime of veridiction’ must fall to lawyers. We cannot be heard to complain that the mission of NLLP is at odds with law’s mission if we do not engage in that task.