THE AUTHORS:
Benjamin Knowles, Partner & Chair of International Arbitration at Clyde & Co
Chris Williams, Partner at Clyde & Co
On 5 November 2024, Jus Mundi, at the offices of Clyde & Co, brought together a panel to discuss the impact of emerging tech on dispute resolution. The event was moderated by Benjamin Knowles, Partner & Chair of International Arbitration at Clyde & Co. The panellists included Chris Williams, Partner at Clyde & Co; Monica Crespo, Head of Product at Jus Mundi; Nora Jox Fredstie, Partner at DAC Beachcroft; and Lisa Dubot, Global Knowledge Counsel at Mayer Brown.
There was a healthy debate around the state of technology and, in particular, AI, in relation to dispute resolution, with an unusual amount of engagement from a very sophisticated audience (big shout out to all the knowledge lawyers in the room).
Chris Williams set the scene, providing some context and definitions for the discussion. All the panel members then shared their experiences of using some of the latest legal tech, much of which is AI-powered, taking a warts-and-all approach that covered both the good points and the less good ones. The panel went on to address some of the risks and concerns around the use of AI-related tech, and how those are starting to be addressed.
There were perhaps three key takeaways:
- There are undoubtedly AI/tech tools that will materially change the way in which disputes lawyers will operate in the future.
- At present, almost all the tools available come with something of a health warning. They are a work in progress.
- It isn’t clear, at present, what the endgame might look like, but it seems unlikely that AI/tech tools will replace the need for lawyers to analyse, strategise and provide considered advice, at least not any time soon!
This article sets out some of the topics and themes discussed at the event.
What is Generative Artificial Intelligence?
The panel discussion kicked off by acknowledging that artificial intelligence (“AI”) is reshaping today’s world, including the legal industry and, consequently, dispute resolution practices. The use of generative AI tools during the dispute resolution process can be valuable in providing efficiencies and cost savings.
AI is an umbrella term for technology that tries to simulate human intelligence. Large Language Models (“LLMs”) are trained on large volumes of data, enabling them both to understand and to generate different types of content, including natural language. The use of generative AI is increasing, but there are technological constraints, reflected in the number of tools that produce inaccurate or false information.
Large language models and generative AI legal products operate by balancing two key components. The first is training and fine-tuning, with a view to identifying certain patterns and trends. The second is choosing the right tools and information in order to generate an answer. Striking the right balance between these two factors is sometimes difficult.
Whilst AI is beneficial, it has its limitations. A study conducted by Apple concluded that large language models are unable to perform genuine logical reasoning, though they can reproduce reasoning steps. This study highlights two broad points which should underpin any approach to using generative AI tools in dispute resolution: firstly, AI tools do not always provide accurate results; and secondly, LLMs do not genuinely think.
AI Tools in Practice: Harvey AI and Jus AI
The discussion provided an in-depth look at users’ experience with Harvey AI, an AI-powered digital assistant for lawyers, alongside a live demonstration of Jus AI, Jus Mundi’s legal research AI tool. The panel shared their experiences of working with Harvey AI while the audience actively tested Jus AI in real-time. This interactive session led to a balanced debate on the role of AI in legal research, emphasizing the importance of proper supervision, both of the tools and their users, to mitigate the risk of overreliance on the output. A key concern was that less experienced lawyers might struggle to spot where AI tools had provided errant answers.
General Concerns on the Use of AI in Dispute Resolution
In recent years, AI technologies have been used more frequently in the legal sphere, with the aim of streamlining the arbitration process, improving efficiency and reducing costs. However, scepticism lies at the core of this clearly evolving landscape, leading both courts and legal bodies to tread carefully around the widespread adoption of such tools, particularly for ethical and procedural reasons.
The panel went on to discuss the core concerns surrounding the use of AI tools in arbitration.
- Regulation of AI tools in Dispute Resolution – Although the UK has no formal legislation regulating the use of AI tools in dispute resolution, this is anticipated to change before long (See Artificial Intelligence in Arbitration: Evidentiary Issues and Prospects – Global Arbitration Review). Institutions are beginning to introduce standard terms on the use of AI, suggesting that we may begin to see policies, and potentially procedural orders, setting out how AI can adequately and safely be deployed to support dispute resolution. In March 2023, a White Paper was published in the UK setting out principles which must be followed by regulators when developing guidance on AI (See A pro-innovation approach to AI regulation). The European Union has also taken steps to regulate AI technologies, most recently through the EU AI Act, which is due to apply from 2026 (See White Paper on Artificial Intelligence). The Act will apply to both developers and providers in the AI industry and adopts a risk-based classification approach defining four categories of risk, ranging from minimal-risk to unacceptable-risk systems (See 10 Things You Need to Know About the EU White Paper on Artificial Intelligence – Society for Computers & Law). In April 2024, the Silicon Valley Arbitration and Mediation Centre (“SVAMC”) released Guidelines on the Use of AI in Arbitration, setting out the limitations and risks of AI tools. These guidelines also analyse how AI tools may be used diligently whilst safeguarding client confidentiality and ensuring that the human element remains at the forefront.
- Disclosure – Arbitration proceedings in the UK are governed by the Arbitration Act 1996, which sets out procedural rules, including those relating to the appointment of arbitrators and the other procedures leading to the eventual issuance of an arbitral award. Since arbitration is more flexible than domestic court litigation and allows for the application of institutional rules, ad hoc tribunals may be free to decide issues relating to the use of AI tools within individual disputes. The question which ensues is whether the use of AI tools should be disclosed by one party to the other during arbitral proceedings, and whether the arbitrators’ own use of AI tools should also be disclosed to the parties. The latter may be important to avoid potential challenges on the grounds that an award is defective due to the use of AI.
- Bias – Another concern worth noting is the bias with which AI tools are inherently equipped, which may affect the reliability of results. For example, when prompting an AI tool to provide an image of a room of lawyers at a conference, the prompt will likely have to be amended several times before a satisfactory image is generated. Such cultural biases are particularly relevant in arbitration, given its international nature.
- Confidentiality – For AI tools to effectively assist lawyers with dispute resolution-related tasks, documents will need to be uploaded to the platform, raising concerns around confidentiality and privacy. Volumes of confidential data stored on such platforms may be exposed to cybersecurity breaches. To mitigate this risk, AI platforms such as Harvey AI use closed systems whereby the information submitted cannot be reproduced in future responses.
- All-party use – A common concern amongst legal professionals relates to the simultaneous use of the same AI tool by all parties to a dispute. The question which arises is whether the same or similar answers will be generated and produced by each side, burdening the arbitration process. It is argued that the result is highly dependent on the prompts given: the output will only be as good as the input. The quality of the answers therefore depends on the knowledge of the advisor using the AI tool, which reduces the likelihood of the parties obtaining similar results.
Conclusion
Whilst there is a consensus that the use of AI in dispute resolution can lead to consistency and predictability of rulings, and can further contribute to the uniform application of legal principles, it is believed that over-reliance on such tools can stifle the nuanced legal reasoning of professional advisors. Thus, although AI tools may significantly lessen a legal professional’s workload, a key takeaway remains that professionals cannot produce advice derived solely from such tools without carrying out independent checks. Law is still a regulated profession, meaning that humans will remain accountable. The availability of footnotes, along with the ability to verify the generated content, remains essential in ensuring that the output is accurate.
Overall, the panel maintained the view that whilst AI tools can and should be used to aid dispute resolution practices, robust procedures should be in place to ensure that AI serves as an aid rather than a substitute for human decision-making.
ABOUT THE AUTHORS
Benjamin Knowles is a Partner & Chair of International Arbitration at Clyde & Co. He has extensive dispute resolution experience, particularly in international arbitration, including at the ICC, ICSID, LCIA, PCA, CIETAC, and AAA-ICDR. His work comprises commercial arbitrations as well as investor-state cases. He is currently recognised by Legal 500 in the International Arbitration Powerlist and as a leading individual. Ben also sits on the editorial board of the Global Arbitration Review and on the ICC UK Committee.
Chris Williams is a Partner at Clyde & Co, where he specialises in media, technology, and intellectual property disputes both in court and arbitration. His practice involves defamation, reputation management, patent, trade mark, design rights, copyright, confidential information, trade secrets and passing off. Chris has a wealth of experience in crisis management and brand protection. He prefers to tackle issues using a mix of legal and non-legal solutions, in a manner which best protects the client’s interests.

*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.