This interview was conducted as part of this year’s survey by BCLP’s International Arbitration Group. The 2023 survey is about artificial intelligence (AI) and its impact on international arbitration. AI is a hot topic right now, and there have been many research pieces on its uses and implications. However, there has been little in-depth analysis of the ways in which AI is impacting the arbitration world. BCLP’s survey aims to assess the extent to which AI is used in the field, consider the perceived risks and benefits that come with its use, and canvass views on the need for regulation.
Can you briefly describe Jus AI, Jus Mundi’s new generative AI-powered tool?
Jus AI is Jus Mundi’s latest AI-powered tool. It addresses the challenges practitioners encounter when grappling with lengthy and complex arbitral awards and national judgments, providing concise summaries and key insights on specific decisions to our users in a single click.
What is the data source on which Jus AI’s models are trained? Are Jus AI models trained on documents contained in your databases only, or do they incorporate data from the public domain as well?
Jus AI is powered by GPT-4, a large language model provided by OpenAI. The summaries created by Jus AI are based solely on Jus Mundi’s legal data; it does not draw information from other websites. Our comprehensive database, together with the way we organize our data internally, plays a key role in how Jus AI functions.
What level of transparency over the model do you provide and how do you ensure that the data used to train the models is accurate and unbiased?
Jus Mundi’s legal data is extracted from the official PDFs of arbitral and national court decisions. The extraction is followed by a thorough internal verification process by our legal experts. Our AI-backed tools, notably Jus AI, refer only to this verified information, combined with our carefully crafted prompting recipes that allow us to avoid wrong answers, to deliver accurate summaries. What we use from GPT itself is only its capacity to read and comprehend large amounts of text; the rest is based purely on the data we store.
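To make this grounding pattern concrete, here is a minimal illustrative sketch, not Jus Mundi’s actual implementation or prompts: the model is instructed to summarize only the supplied decision text and to admit when a detail is absent. The function name `build_summary_messages` and the prompt wording are hypothetical.

```python
# Illustrative "grounded summarization" prompt: the model is told to rely
# only on the decision text it is given, never on outside knowledge.
# All names and prompt text here are hypothetical, not Jus Mundi's.

SYSTEM_PROMPT = (
    "You are a legal summarization assistant. Summarize ONLY the decision "
    "text provided by the user. Do not use outside knowledge. If a detail "
    "is not in the text, say it is not stated."
)

def build_summary_messages(decision_text: str) -> list[dict]:
    """Return a chat-completion message list grounding the model in one document."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": (
                f"Decision text:\n{decision_text}\n\n"
                "Summarize the key procedural, factual, and legal points."
            ),
        },
    ]

# These messages would then be sent to a model such as GPT-4 via a chat
# completions API; only the prompt construction is shown here.
messages = build_summary_messages(
    "The tribunal dismissed the claim for lack of jurisdiction."
)
print(messages[0]["role"])  # system
```

The key design point is that the document text travels inside the prompt, so the model’s answer is constrained to material the provider has already verified.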
You mentioned that Jus AI’s inaugural application consists of AI-generated summaries of arbitral awards and judgments. What should we expect for the future?
With Jus AI, Jus Mundi embarks on a new journey. We are currently developing the summary aspect of Jus AI further by increasing the scope of documents it is familiar with, improving the fundamental insights it showcases in the summary, and giving our users more flexibility to pinpoint the elements they would most like to see. We see this as a first step: testing one use case of large language models with our extensive database.
Lawyers can also turn to AI for many other reasons, as it can assist them in analyzing further documents, extracting relevant information, drafting submissions, keeping up to date with the state of the art, and more. We continue to explore different use cases to make our users’ legal research experience seamless and more efficient.
At the moment, Jus AI is a tool used for legal research. Do you anticipate it being used for other tasks performed in the context of arbitral proceedings in the future, for example, generating written arguments or awards?
Yes, we believe that artificial intelligence, and more specifically generative AI, will support legal practitioners in performing other tasks, e.g., e-discovery or brief drafting. However, it remains imperative for legal professionals to consistently check and validate the responses provided by AI, since it is not intended to replace their expertise.
In your view, what are the main benefits of using Jus AI and, more generally, AI in arbitration?
Summarisation with Jus AI allows users to save time by quickly assessing the relevance of documents for their strategy, to streamline their workflows by centralizing information, and to obtain the key insights on a decision, all while relying on the dependable data sources housed within the Jus Mundi database, which gives them access to accurate and up-to-date information.
More generally, AI has the potential to revolutionize the arbitration industry by enabling the collection, storage, and processing of legal data at scale. It can also efficiently assist lawyers in analyzing the relevance of case data, preparing briefs, aiding in e-discovery, and more, thereby significantly increasing the effectiveness of arbitration proceedings.
Do you expect AI to transform the arbitration industry due to the significant increase in productivity levels? For example, do you expect more in-house teams to subscribe to your products with a view to carrying out legal research and potentially handling their disputes in-house?
Yes, we believe that AI will transform the arbitration industry, and embracing it will lead to higher productivity for the arbitration market. It is already transforming the legal industry. The capacity of AI to pinpoint relevant information within a set of documents adds undeniable value to the profession. For in-house teams, AI is especially transformative. We expect in-house lawyers to gain a higher level of independence in general disputes and contract drafting, as these tools give them access to legal knowledge that is comprehensive enough to tackle a greater number of legal tasks in-house. For law firms, this means that their competitive advantage will predominantly arise from adequately leveraging unique human knowledge in combination with AI’s data-processing capabilities.
In your view, what are the main risks of using AI tools, and what steps are you taking as a company to mitigate those risks?
AI is undeniably being integrated into the arbitration industry’s general workflow. Its rapidly evolving nature brings multiple risks, and it is important to educate oneself on the subject. As the first legal-tech company providing AI-backed tools in arbitration, we are keen to spread the required knowledge and awareness around the topic, as responses generated by AI should always be verified. Another important risk concerns the privacy and confidentiality of data, and our current focus is on encrypting data as we release and expand the use of Jus AI. This is something we have already implemented for our partners.
We hear a lot about AI hallucination. What is it and what steps can be taken to mitigate the risk of AI hallucination?
Hallucinations are false responses that AI can generate. They occur for different reasons, for example vague prompting or contradictory, incomplete, or false training data. At Jus Mundi, we have trained the model on our unique database – the largest on international law and arbitration – and have defined clear, carefully crafted prompting that discourages hallucinations.
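Since AI-generated responses should always be verified, a simple automated guard can complement human review. The sketch below is a hypothetical example of such a check, not anything Jus Mundi has described: it flags direct quotations in a generated summary that do not appear verbatim in the source decision, one common symptom of hallucination.

```python
# Hypothetical post-hoc hallucination check (illustrative only): flag
# quoted spans in an AI-generated summary that are absent from the
# source decision text.
import re

def unsupported_quotes(summary: str, source: str) -> list[str]:
    """Return quoted spans from the summary that do not occur in the source."""
    quotes = re.findall(r'"([^"]+)"', summary)
    return [q for q in quotes if q not in source]

source = 'The tribunal held that "the claim is time-barred" under Article 45.'
summary = 'The tribunal found "the claim is time-barred" and "awarded full costs".'

# Only the fabricated quotation survives the check.
print(unsupported_quotes(summary, source))  # ['awarded full costs']
```

A check like this is deliberately conservative: it can only catch fabricated verbatim quotations, so the practitioner’s own verification remains essential.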
Will the Jus AI model give consistent answers to the same question?
Our team has trained the model to provide consistent answers to summarisation requests. However, we acknowledge that automated summaries have limitations and that quality improves through training. We have therefore enabled a feedback mechanism that allows users to vote on a specific summary; the results allow us to train the model and improve the summaries. So, although we value and strive for consistency, we are also looking to improve quality and add more value for our users.
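A feedback loop of this kind can be pictured with a minimal sketch, again hypothetical rather than Jus Mundi’s implementation: users upvote or downvote individual summaries, and the aggregate scores surface which summaries to re-prompt or retrain on first.

```python
# Minimal, hypothetical per-summary feedback tally: votes accumulate per
# summary ID, and the lowest-scoring summary becomes a retraining candidate.
from collections import defaultdict

class FeedbackTally:
    def __init__(self) -> None:
        self._votes: dict[str, dict[str, int]] = defaultdict(
            lambda: {"up": 0, "down": 0}
        )

    def vote(self, summary_id: str, up: bool) -> None:
        """Record one user's upvote or downvote for a summary."""
        self._votes[summary_id]["up" if up else "down"] += 1

    def worst(self) -> str:
        """Return the summary with the lowest net score (up minus down)."""
        return min(
            self._votes,
            key=lambda s: self._votes[s]["up"] - self._votes[s]["down"],
        )

tally = FeedbackTally()
tally.vote("award-123", up=True)
tally.vote("award-456", up=False)
tally.vote("award-456", up=False)
print(tally.worst())  # award-456
```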
Does Jus Mundi own the IP rights attached to the AI-generated summaries of awards available on your platform?
This is a tricky question, because IP rights depend on the jurisdiction and the topic is still evolving; IP regulations are trying to keep up with technological developments. All Jus Mundi summaries are the result of a carefully crafted, iterative process of training on our own database combined with careful prompting. The proprietary rights in the final summaries therefore belong to Jus Mundi.
The use of AI by judges and arbitrators raises questions as to its impact on the administration of justice and the rule of law. The current draft of the EU AI regulation classifies certain AI systems intended to be used in the context of the administration of justice as “high-risk” and contemplates subjecting the use of certain AI systems used by the judicial authorities and administrative bodies to strict requirements. Do you anticipate Jus AI being subject to regulations such as the EU AI Act and its use being restricted to specific tasks or stakeholders in the arbitral process?
At Jus Mundi, we are aware of AI’s impact when it is used for law enforcement, the administration of justice, and the rule of law – for example, in assisting judges with recidivism risk assessments. The functionalities Jus AI offers our users do not fall within the definition of high-risk AI, so we do not expect to be subject to such regulations. Rather, Jus AI is, in this first beta version, a summarisation tool that condenses the main procedural, factual, and legal matters described within a decision.
The launch of ChatGPT has already led to lawyers being sanctioned for using AI irresponsibly. This development has prompted courts to produce new practice notes on the use of AI in litigation to address concerns about the reliability and accuracy of AI-generated information. For example, the Court of King’s Bench of Manitoba in Canada issued a practice direction requiring disclosure of the use of any AI in the preparation of materials filed with the court. Do you anticipate arbitral tribunals likewise requiring parties to disclose the use of AI in arbitral proceedings, and your clients disclosing their use of Jus AI?
The evolution of AI tools and their use cases in law has naturally provoked greater interest in AI disclosure requirements. We foresee that, in the coming months and years, these requirements will be defined depending on the use case. For example, we foresee a requirement to disclose the use of AI to generate evidence submitted in a case – e.g., models aggregating data related to a construction case to predict the level of delay based on the proposed delay sources. In this use case, AI generates an output that is used as evidence and that seeks to describe the “truth” or act as facts. The counterparty and the arbitrators therefore have an interest in knowing about the use of AI, the data sources used for training, and (in some cases) the weight given to different factors or data points. In other words, there is an interest in knowing more about the factors leading to the output.
By contrast, a requirement to disclose the use of AI for legal research may distract from the decision-making process and burden the proceedings. AI can support legal research in many ways: machine-learning recommendation systems that find relevant results, relationship analysis, or large language models that answer questions over a database. In these cases, the output produced by AI can involve diverse types of content (documents, relationships, or AI-generated text). These outputs tend to be one of many resources lawyers use to inform legal arguments and drafting, and those arguments are liable to be contradicted by opposing counsel – especially if they contain fictitious cases or wrong facts. It is crucial for legal practitioners to identify and rectify erroneous responses generated by AI systems. Because AI algorithms are designed to learn from vast amounts of data, including legal texts, case precedents, and statutes, any inadequacies or biases in the training data can lead to flawed outcomes. By exercising their expertise and discernment, legal practitioners can effectively identify and rectify inaccuracies, reinforcing the notion that AI is not intended to replace their vital contributions but rather to complement their work in the legal domain. Given this back-and-forth process with AI, listing all the tools one party used for legal research may add more distraction than value.
As for Jus AI specifically, we do not see a need for parties to disclose its use within an arbitration case.
Do you consider that more guidance on these issues would be useful to Jus Mundi and its clients?
The formulation of useful guidance or regulations on the use of AI in arbitration proceedings requires the active involvement of practitioners, arbitrators, institutions, and technology providers in challenging conversations. We are all still discovering how these tools can be leveraged to streamline processes and bring more efficiency to legal services. Guidance on the use of AI in law should guide, but at the same time strive to foster continued innovation; it is only through this pursuit of innovation that we can continue developing technologies like AI and addressing the issues they face.
Will Jus Mundi offer training so counsel and arbitrators can educate themselves on how the tool works, its limitations, and potential ethical implications?
As part of our subscription, we offer continuous support and training to our users on all our features, including Jus AI. The training covers both the tool’s benefits and its limitations. We are also engaging with different lawyers’ associations to continue exploring the topic and educating about the benefits, limitations, and ethical implications of leveraging AI in law.
How can people find out more about Jus AI? Do you run any demo sessions arbitration practitioners can sign up for?
Yes! If you would like to test Jus AI for yourself you can request a trial here, or book a demo here.