THE AUTHORS:
Sergejs Dilevka, Senior Counsel at Galadari Advocates & Legal Consultants
Dimitriy Mednikov, Associate at Galadari Advocates & Legal Consultants
AI Revolution in Dispute Resolution
Artificial Intelligence (“AI”) has by now permeated the world of dispute resolution, both in litigation and arbitration. It has a plethora of uses, which range from the rather benign to the utterly nefarious:
- assisting with e-discovery,
- producing ever more accurate machine translation,
- assisting with the arbitrator appointment process,
- providing insight into what arguments and/or language a judge finds most persuasive,
- identifying anonymous parties in tens of thousands of decisions of the Swiss Federal Supreme Court in less than an hour,
- or creating deepfake evidence.
It is fair to say that the future of the legal profession has never looked more uncertain, and the question of whether — or, most importantly, when — AI will replace those who toil in the field of dispute resolution has never been more relevant. In no small part, this is due to the rather sudden but grand entry of ChatGPT in November 2022, shortly followed by Google Bard and a number of other advanced “generative” AIs or, more specifically, large language models (“LLMs”). Indeed, it is only a matter of time before we have LLMs built on top of LLMs.
But what are LLMs?
In a nutshell, LLMs work on a principle akin to noscitur a sociis (a word is known by the company it keeps): they understand context, grasp nuances in language, and then generate human-like text by statistically predicting the next word in a sentence. As LLMs are trained on massive datasets, learning from a wide range of Internet text, their predictions depend on the composition of these training datasets (e.g., an LLM specifically trained on a dataset of legal documents would be better at responding to distinctly legal prompts).
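The core idea of statistical next-word prediction can be shown with a toy sketch. Real LLMs use neural networks trained on vast corpora, not simple word counts, and the “training corpus” below is an invented example; only the predict-the-next-token principle carries over:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count, in a tiny invented
# "training corpus", which word most often follows which. Real LLMs
# learn far richer statistics with neural networks, but the principle
# -- predict the next token from context -- is the same.
corpus = (
    "the tribunal shall decide the dispute . "
    "the tribunal shall issue an award . "
    "the tribunal may decide the costs ."
).split()

# For each word, count the words observed immediately after it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("tribunal"))  # "shall" (seen twice, vs "may" once)
print(predict_next("the"))       # "tribunal" (the most frequent follower)
```

A model trained on legal text, as in the example corpus above, naturally becomes better at legal continuations, which is the point made about dataset composition.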
Roles and Uses of Generative AI in Dispute Resolution
We will consider how LLMs can be used in dispute resolution using the following three key features of LLMs covered by Damien Charlotin in August 2023 in his excellent article, “Large Language Models and the Future of Law”, as a framework:
LLMs Excel at Detecting and Replicating Patterns in Language Data
This makes LLMs particularly helpful in the legal domain, which is at its core about rules and their application to a set of factual circumstances. Thus, LLMs are powerful at generating documents based on templates, as well as boilerplate language, summarising text, or predicting the outcome of relatively straightforward cases. For example, one can use an LLM to:
- prepare contracts or procedural documents such as a request for an injunction or procedural order No. 1 based on precedents. This may be especially useful for small and medium-sized firms, which may not have a comprehensive in-house precedent library.
- add boilerplate language to a judgment or award, submission, expert report, etc. In September 2023, Lord Justice Birss of the English Court of Appeal publicly admitted that he used ChatGPT to generate a paragraph with “a summary of an area of law [he] was writing a judgment about” and thought that it was “jolly useful”;
- summarise lengthy documents such as judgments or awards;
- identify uncontested facts in the submissions;
- generate factual summaries or chronologies; or
- do spell-checking and light editing.
Essentially, the more routine a particular task is, the more it is susceptible to being automated using an LLM. On the other hand, LLMs oftentimes struggle where one must break from a pattern or established legal rule — as they are by nature “looking backwards” to their training data sets — which is where human lawyers come in.
LLMs Can Produce Large Volumes of Text Quickly and at a Relatively Low Cost
LLMs can be used to produce first — and many more — drafts of submissions and take care of the “technical” side of editing, such as proofreading and checking references. Thus, the focus of human lawyers shifts towards creative and strategic tasks, e.g., brainstorming arguments and ideas, which can then be quickly “put on paper” by an LLM, as well as towards reviewing and editing these drafts. This not only enhances the value of specialist knowledge but also means that younger lawyers would be expected to take on responsibilities traditionally assigned to their more senior colleagues, in addition to being able to skilfully generate appropriate prompts.
Last autumn, we conducted an experiment of our own, using ChatGPT to generate commentary on 24 sets of international arbitration rules and laws, which we then edited, as well as DALL·E to produce artwork for these publications. We found that, overall, ChatGPT was able to successfully dissect provisions, spot important nuances, and generate meaningful commentary on the relevant provisions. Specifically, once ChatGPT was presented with a provision for analysis, it would typically break down the text into separate aspects in a list or bullet-point format, tackle each in turn, and summarise the provision at the end of its analysis. The commentary produced using ChatGPT did display some shortcomings. For example, there were instances when ChatGPT would “hallucinate” (that is, make something up), which is a well-known limitation not only for OpenAI and ChatGPT users but also for other AI systems.
LLMs Are Particularly Designed for Interacting in a Manner That Closely Mimics Human Conversation
This drastically lowers the “technical” bar for interacting with a generative AI, and we now see a range of legal chatbots or virtual assistants, from off-the-shelf products to (semi)custom solutions developed by law firms.
Yet further, lawyers now have an invaluable tool for legal research: instead of relying on keywords, oftentimes augmented by Boolean operators, LLMs can perform semantics-based searches to identify the documents most closely related to the query, as well as summarise the results of such a search. One such tool — Jus-AI, which can do much more than just research — has recently been introduced by Jus Mundi.
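The difference between keyword and semantic search can be sketched in a few lines. Production tools such as Jus-AI use learned neural embeddings; in this toy sketch we hand-assign tiny vectors so that near-synonyms (e.g., “terminate”/“rescind”) land close together, which an exact keyword match would miss. All words, vectors, and documents below are invented for illustration:

```python
import math

# Toy semantics-based search: embed query and documents as the average
# of hand-assigned word vectors, then rank documents by cosine
# similarity to the query. Real tools learn these vectors from data.
word_vectors = {
    "terminate": (1.0, 0.1),
    "rescind":   (0.9, 0.2),   # near-synonym of "terminate"
    "contract":  (0.1, 1.0),
    "agreement": (0.2, 0.9),   # near-synonym of "contract"
    "weather":   (-1.0, -1.0), # unrelated concept
}

def embed(text):
    """Average the vectors of the known words in the text."""
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return tuple(sum(dim) / len(vecs) for dim in zip(*vecs))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

docs = ["rescind the agreement", "terminate the contract", "weather report"]
query = embed("terminate contract")
ranked = sorted(docs, key=lambda d: cosine(embed(d), query), reverse=True)
print(ranked)  # both contract documents rank above "weather report"
```

Note that “rescind the agreement” scores highly even though it shares no keyword with the query — the behaviour a Boolean search cannot reproduce.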
However, presently, it is irresponsible to rely exclusively on LLMs for legal research as they are still likely to overlook certain factual or legal nuances due to inevitable limitations in their training data, especially in languages other than English. As LLMs are prone to “hallucinate”, one may face dire consequences for failing to independently verify search results: last year was full of stories of lawyers misleading the courts with fake case law references in their submissions (e.g., in the US or Canada).
An LLM’s ability to interact with its users in natural language can also be used to bridge the gap between lawyers and software developers. Last year, we used ChatGPT to write Python scripts automating several mundane tasks, with the result that we could run the code to perform tasks such as the following in just a few seconds instead of many hours:
- processing incoming documents (by unpacking archives, removing file duplicates, renaming each file based on its creation date, etc.); or
- renumbering exhibits in footnotes in a submission based on a final exhibit list.
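The first of these scripts might look, in outline, like the sketch below. This is our reconstruction of the kind of code involved, not the actual script; the folder name is a hypothetical example, and duplicates are detected by content hash:

```python
import datetime
import hashlib
from pathlib import Path

def tidy_incoming(folder):
    """Sketch of a document-processing script: delete files with
    duplicate content (same SHA-256 hash) and prefix the rest with
    their modification date."""
    seen_hashes = set()
    for path in sorted(Path(folder).iterdir()):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen_hashes:
            path.unlink()  # duplicate content: remove the file
            continue
        seen_hashes.add(digest)
        stamp = datetime.date.fromtimestamp(path.stat().st_mtime)
        path.rename(path.with_name(f"{stamp:%Y-%m-%d}_{path.name}"))

# Hypothetical usage:
# tidy_incoming("incoming_documents")
```

Unpacking archives, the other task mentioned, would follow the same pattern with Python’s standard `zipfile` or `shutil` modules.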
In essence, ChatGPT acted as a translator, converting natural-language prompts into Python code. It also analysed the code and explained what each line, function, or variable did; advised how the code needed to be adapted to address a particular scenario; and addressed bugs and errors in the code, suggesting fixes and workarounds. The only technical background we had was a one-hour introductory course on Python.
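The exhibit-renumbering task mentioned above can likewise be sketched in a few lines. The exhibit-label format (“Exhibit C-12”) and the old-to-new mapping are hypothetical examples, not the actual numbering used in any submission:

```python
import re

# Sketch: renumber exhibit references in a submission's text according
# to a final exhibit list. A single re.sub pass avoids the cascading
# errors of sequential find-and-replace (e.g., C-3 -> C-1 colliding
# with an existing C-1).
renumbering = {"C-3": "C-1", "C-7": "C-2", "C-12": "C-3"}  # old -> new

def renumber_exhibits(text, mapping):
    """Replace each 'Exhibit <old>' with 'Exhibit <new>' per the mapping."""
    def swap(match):
        old = match.group(1)
        return f"Exhibit {mapping.get(old, old)}"  # leave unknown labels as-is
    return re.sub(r"Exhibit (C-\d+)", swap, text)

footnote = "See Exhibit C-12, read with Exhibit C-3."
print(renumber_exhibits(footnote, renumbering))
# "See Exhibit C-3, read with Exhibit C-1."
```

Small and deterministic as it is, a script like this replaces hours of error-prone manual renumbering — precisely the sort of routine task the article describes as ripe for automation.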
Compliance Is Key
While the integration of AI into dispute resolution has surged forward, it is essential to navigate the emerging landscape with a keen eye on compliance. Regulations governing AI usage are increasingly being adopted in various jurisdictions, including the EU, US, Canada, New Zealand, and the Dubai International Financial Centre (DIFC). These regulations aim to address concerns about privacy, data security, and ethical AI use. For instance, under the EU’s proposed Artificial Intelligence Act, high-risk AI systems, including those used in legal contexts, would be subject to stringent requirements.
A critical consideration is the confidentiality of data. Using a publicly available LLM for processing confidential information is fraught with risks. The common solution is the deployment of LLMs in-house, ensuring that sensitive data remains within the controlled environment of the law firm or legal department so as to minimise the risk of data breaches and comply with client confidentiality obligations.
Finally, it is paramount to remember that despite the capabilities of AI, the human lawyer retains ultimate responsibility for the end result. LLM tools are, for now, aids, not replacements. They can enhance the efficiency and accuracy of legal work but cannot yet independently exercise the professional judgment required in legal practice. Lawyers must remain vigilant in their oversight of AI-generated work, ensuring that it meets the requisite legal standards and ethical norms.
Looking Forward
Speculating about the future of AI in the legal profession is certainly intriguing. Some — including one of the authors of this article — argue that AI might eventually replace human lawyers, just as legal databases such as Westlaw or LexisNexis put case reporters out to pasture. However, there are significant limitations to the development of AI. One such barrier is the substantial energy consumption of advanced AI systems which could be addressed in the future by nuclear fusion. Another pressing issue is the surge in copyright claims against entities like OpenAI for alleged unlawful use of copyrighted materials in the training datasets.
Despite these challenges, the demand for efficiency in legal services is relentless. Clients increasingly expect their lawyers to deliver services that are both cost-effective and time-efficient. This puts more pressure on lawyers to balance the innovative potential of AI tools with the timeless values of the profession — diligence, confidentiality, and ethical integrity.
ABOUT THE AUTHORS:
Sergejs Dilevka is a Senior Counsel at Galadari Advocates & Legal Consultants in Dubai and a dual-qualified lawyer in England & Wales and the State of New York. Sergejs has over 15 years of experience in advising and representing multinational companies and high-net-worth individuals in a wide range of complex institutional (ICC, LCIA, DIFC-LCIA, LMAA, SCC, SCIA, DIAC, GCCCAC) and ad hoc international and domestic arbitration proceedings, as well as litigation proceedings at the DIFC Courts. Sergejs is a registered practitioner with the DIFC Courts and ADGM Courts.
Dimitriy Mednikov is an Associate at the Dispute Resolution department of Galadari’s Dubai office. Dimitriy’s practice focuses on complex commercial arbitration, particularly in the IT, engineering and construction, and M&A sectors, under various institutional rules (ICC, LCIA, SCC, HKIAC, and DIAC). Dimitriy has substantial experience in advising and acting for high-net-worth individuals in cross-border disputes and criminal proceedings involving allegations of money laundering. Dimitriy is a registered practitioner with DIFC Courts and ADGM Courts.