Last year LawtechUK published a discussion paper on the adoption of AI technology in legal services, asking for responses to the following five questions:

  1. How do you assess the risks associated with the ML [machine learning] systems you deploy? Please provide use cases where available.
  2. What activities does your organisation undertake to assure itself that an ML system is appropriate and, where possible, prevent issues from arising?
  3. How do requirements around PII [personally identifiable information] impact the adoption or development of ML solutions by your organisation?
  4. What techniques do you carry out to ensure professional confidentiality is maintained when data associated with client matters is used to build and train ML applications and systems?
  5. Are there any other issues of concern relating to UK legal service regulation not covered by this discussion paper that are acting as a constraint on ML development and deployment?

We welcome such questions being asked. From a COHUBICOL perspective, we are particularly interested in the assessment and anticipation of risk, as well as the implications for lawyers’ professional duties of embedding machine learning in legal practice.

Here are our submitted responses to questions 1, 3 and 5:

Q1. How do you assess the risks associated with the ML [machine learning] systems you deploy? Please provide use cases where available.

The Law Society of Scotland has prepared a Guide to IT Procurement. The Guide briefly addresses a range of ethical considerations, including those relating to compliance with professional obligations. The questions set out in that section are designed to prompt reflection on risks relevant to solicitors’ professional obligations to their clients and others. The Law Society’s report on lawtech principles and ethics offers a detailed framework for assessing the compatibility of lawtech with solicitors’ professional obligations. Both the Guide to IT Procurement and the Report offer a useful starting point for the assessment of risks.

As part of our research under an ERC Advanced Grant project, we have developed a Typology of Legal Technologies. The Typology provides a method for the assessment of legal tech (including ML systems), taking account of (i) the claims made by the developers (the systems’ claimed essential features, rationale, benefits and design choices), (ii) the substantiation of those claims, and (iii) potential technical issues and potential impact on legal decision-making. We see a role for the method set out in the Typology in assessing risk. It is crucial that lawyers do not simply accept at face value the claims of legal tech providers, but rather assess the potential impacts of such technologies on their practice vis-à-vis the rule of law and their status as officers of the court. This is neither straightforward nor simple, but it is essential; methodologies such as our Typology provide a framework for asking the right questions of such technologies.

We will provide a tutorial on the method offered by the Typology at the prestigious ICAIL conference later this year. The tutorial’s acceptance reflects a recognition within the AI and Law community that frameworks and methods are needed for assessing the suitability of ML for tasks within the domain of law.

Q3. How do requirements around PII impact the adoption or development of ML solutions by your organisation?

This will presumably depend on what the ML solutions are designed to do. If the solution is employed for purposes unrelated to the giving of legal advice, then we expect requirements around PII to have no impact. If, on the other hand, ML is used to shape or deliver advice, then those requirements would be very relevant and, we expect, would act as a constraint. This seems entirely proper. Current ML may be useful, but it is incapable of offering more than statistical insights. It cannot offer advice that is properly responsive to both the particular needs of a client and the wider professional duties of legal practitioners and their commitments to the rule of law.

Q5. Are there any other issues of concern relating to UK legal service regulation not covered by this discussion paper that are acting as a constraint on ML development and deployment?

Our concern, rather, is that there will be a push towards de-regulation without a full examination of the ways in which ML is not (currently) appropriate for carrying out ‘law jobs’ (drafting, advising, predicting). There may also be a tendency to assume that ML can do things it is neither designed for nor capable of doing, especially in the legal domain. Legal service regulation ought to take into consideration the limits of what ML can legitimately do within the legal domain, in order to protect legal practitioners from predatory marketing that oversells technologies which may come to undermine their practice and expose them to professional liabilities in the future.

Wide-scale adoption of ML solutions in the domain of law will inevitably impact on the institutions of law, as well as practitioners’ and citizens’ conceptions and expectations of law. These issues should also be taken into account, as they bear directly on the commitment to the rule of law.
