THE AUTHORS:
Jean-Rémi de Maistre, CEO and Co-Founder of Jus Mundi and Jus Connect
Tiffany Lam, Communications Officer at Jus Mundi
As arbitration practitioners integrate Artificial Intelligence (“AI”) into research, drafting, and case preparation, a fundamental question arises: what are the ethical boundaries of AI in international arbitration?
This issue was at the forefront of the discussion at the China Arbitration Summit 2025, co-organised by the China International Economic and Trade Arbitration Commission (CIETAC), the United Nations Commission on International Trade Law (UNCITRAL), the Inter-Pacific Bar Association (IPBA), and the All China Lawyers Association (ACLA).
In a panel moderated by Mr Cao Lijun, I had the privilege of contributing as the sole legal tech representative, alongside distinguished speakers from leading arbitration institutions and law firms, including the American Arbitration Association – International Centre for Dispute Resolution (AAA-ICDR), the Astana International Financial Centre International Arbitration Centre (AIFC IAC), the Saudi Center for Commercial Arbitration (SCCA), the Chamber of Business Mediation and Arbitration – Brazil (CAMARB), the Hong Kong International Arbitration Centre (HKIAC), and White & Case.
The conversation confirmed that we are past debating whether AI belongs in arbitration. The pressing task is determining how it can be deployed in a way that improves efficiency and quality while preserving fairness, impartiality, and trust.
The Promise and Perils of AI Adoption in Arbitration
The use of AI directly touches on the legitimacy and integrity of arbitration itself. Three critical risks stand out:
- Quality issues from inappropriate use or overreliance;
- Confidentiality and security concerns; and
- Potential challenges to the impartiality and independence of the arbitral tribunal.
Despite these risks, the benefits of AI in arbitration are undeniable. When used appropriately, AI improves efficiency, reduces human error, and enhances accessibility. Institutions have already begun implementing AI tools, such as the AAA-ICDR’s ClauseBuilder AI and AAAi Chat Book and the HKIAC’s Case Digest powered by Jus AI, as part of their commitment to transparency, efficiency, and innovation.
The 2025 International Arbitration Survey by White & Case and the Queen Mary School of International Arbitration shows that while 90% of respondents expect to use AI for research, data analytics, and document review, more than half cited the risk of error and bias as the primary obstacle. The survey also reflects a concern that AI could interfere with an arbitrator’s fundamental mandate: respondents generally accept arbitrators using AI to assist with administrative and procedural tasks, but view its use with great skepticism where tasks require the exercise of discretion and judgment (2025 International Arbitration Survey, pp. 3, 31).
The Ethical Stakes in Arbitration: AI’s Impact on Decision-Making
International frameworks increasingly acknowledge this reality. The EU’s Artificial Intelligence Act, Regulation (EU) 2024/1689 (“EU AI Act”), categorises AI systems used by arbitral tribunals in researching and interpreting facts and the law, or in applying the law to a concrete set of facts, as “high-risk”, underscoring the critical nature of arbitral decision-making.
Institutional guidelines echo this, emphasizing the auxiliary role of AI and stressing that arbitrators must not delegate or relinquish their decision-making to AI tools. Examples include the Vienna International Arbitral Centre (“VIAC”) Note on the Use of AI in Arbitration Proceedings, the Silicon Valley Arbitration & Mediation Center (“SVAMC”) Guidelines on the Use of AI in Arbitration, the Chartered Institute of Arbitrators (“CIArb”) Guidelines on the Use of AI in Arbitration, and, more recently, the CIETAC Provisional Guidelines on the Use of AI in Arbitration, all of which reaffirm that the tribunal bears full responsibility for the award’s reasoning and outcome.
In practice, AI tools can influence decision-making, or at least shape reasoning, for instance when used to compare counsel’s arguments or assess entitlement to damages. There is consensus that a system’s suggestions must not dictate the outcome: arbitrators’ verification, judgment, and independent analysis must remain central. The challenge, however, lies in making this principle concrete in practice, ensuring it is not only followed but demonstrably followed.
Building Trust Through Transparency
To preserve confidence and due process, transparency has emerged as a promising response. Arbitrators have begun disclosing AI use in their first procedural orders, much as they disclose the appointment of tribunal secretaries, and directing parties to discuss and agree on AI usage before case management conferences, including whether and how to disclose that usage. Some institutional guidelines also recommend specific approaches to disclosing AI use.
At Jus Mundi, we believe that transparency on AI use is key to safeguarding the integrity of the arbitration process. This led us to propose a model AI disclosure clause that provides a practical framework for tribunals and parties to address AI use proactively, adaptable across different AI tools and use cases. Drawing on best practice, the clause helps establish parameters for AI use at the outset of a case, reducing the risk of challenges and maintaining trust in the integrity of both the process and the resulting award.
Our governance approach reflects this commitment to transparency. As a mission-driven company, we have embedded ethical AI development as a core pillar of our mission and established an independent Mission Committee with diverse stakeholder representation. We are the first legal tech company to obtain ISO 42001 certification, the global standard for AI governance, reinforcing our aim to provide responsible and trustworthy AI to the arbitration community.
Keeping Humans in Command
In navigating this frontier, arbitration practitioners must stay firmly in command. That responsibility extends not only to how AI is used and how outputs are scrutinized, but also to selecting the right AI tools in the first place. The reliability of AI in arbitration depends on deliberate design choices. Practitioners should ask the following non-exhaustive, critical questions:
- Does the AI have domain mastery? Arbitration has unique complexities, nuances, and global stakes. As the Stanford CodeX-Jus Mundi White Paper reveals, generic tools are not built to meet this bar, whereas arbitration-specific AI delivers better-quality outcomes.
- Is the AI’s reasoning transparent? Practitioners should be able to trace answers back to verifiable sources with line-level citations, enabling users to evaluate and challenge outputs.
- How is bias addressed? All AI systems contain bias, reflecting their training data and design choices. The question is how the provider minimises that bias, for example through diverse datasets and human oversight.
- What protections safeguard confidentiality? Enterprise-grade security, zero data retention, and models not trained on user data are non-negotiable.
At Jus Mundi, our latest Jus AI was designed with precisely these standards in mind, embodying our commitment to building AI that amplifies human expertise and facilitates arbitration.
Looking Forward: Partnership, Not Replacement
The most valuable insights from the China Arbitration Summit came from practitioners sharing real experiences with AI in their cases. Their observations reveal both the technology’s potential and its limitations. Arbitrators noted improvements in efficiency and accessibility while emphasizing that no algorithm can replicate their role and independent judgment.
The practical guidance emerging from these discussions is remarkably consistent: prefer enterprise-grade tools, verify every AI-powered analysis against its sources, disclose AI use in proceedings where appropriate, and recognize bias rather than assume neutrality. Importantly, practitioners are learning to ask better questions about AI tools. The question facing every arbitration practitioner today is not whether to use AI, but which tools to trust and how to use them responsibly. As technology developers, we have an equal responsibility to build domain-specific tools that deserve confidence.
When implemented thoughtfully, AI strengthens rather than weakens arbitration’s foundations. Through ongoing collaboration between institutions, practitioners, and technology providers, AI can enhance accessibility, quality, and efficiency while preserving the integrity that makes international arbitration effective.
About Jus Mundi
Founded in 2019 and recognized as a mission-led company, Jus Mundi is a pioneer in the legal technology industry dedicated to powering global justice through artificial intelligence. Headquartered in Paris, with additional offices in New York, London, and Singapore, Jus Mundi serves over 150,000 users from law firms, multinational corporations, governmental bodies, and academic institutions in more than 80 countries. Through its proprietary AI technology, Jus Mundi provides global legal intelligence, data-driven arbitration professional selection, and business development services.
Press Contact: Helene Maïo, Senior Digital Marketing Manager, Jus Mundi – [email protected]
*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.