THE AUTHOR:
Reza Eftekhar, Senior Legal Adviser at the Iran-United States Claims Tribunal (“IUSCT”)
Introduction
Utilising Artificial Intelligence (“AI”) in arbitration offers significant benefits but also presents notable challenges. The first challenge faced by an arbitral tribunal when considering the use of AI is determining, as a threshold matter, whether such use is authorised and permissible in arbitration. More specifically, an arbitral tribunal may ask itself: Am I allowed to deploy AI in the first place? If so, to what extent can AI be utilised in this arbitration? What obligations do I have when using AI? All these questions should be answered by reference to the applicable legal framework.
It could be argued that there is no inherent issue with a tribunal using AI in arbitrations. AI, as a recent technological advancement, has the potential to enhance efficiency by making proceedings speedier and less costly. Beyond reliance on an arbitral tribunal’s broad discretion in conducting proceedings, certain recent developments may further encourage a tribunal to use AI.
- First, the recent experience during the COVID-19 pandemic, when various communicative technologies were employed to overcome the vagaries of restrictions on in-person meetings (hearings and deliberations), may have paved the way for integrating newer technologies into arbitration.
- Second, certain updated institutional arbitration rules now explicitly promote the use of technology to streamline arbitration proceedings.
These factors, along with the current widespread use of AI in the legal industry, may collectively establish a practical basis for tribunals to use AI in arbitrations without prior scrutiny and analysis regarding the applicable legal framework.
However, AI is not merely a communicative technology that facilitates the arbitration process. AI possesses unique characteristics that enable it to enter the realm of analysis, interpretation, and decision-making. This capability introduces various legal, ethical, and practical challenges. For this reason, it is not immediately clear whether the encouragement to use technology found in certain institutional arbitration rules also extends to authorising the use of AI in arbitration. An arbitral tribunal should therefore act prudently and conduct thorough scrutiny before using AI in the arbitration process. There is always a risk that unauthorised or out-of-scope use of AI may be deemed a violation of due process or an irregularity in the proper conduct of the proceedings, thus jeopardising the integrity and the enforceability of the eventual arbitral award.
The permissibility of utilising AI in arbitration should, therefore, be determined in accordance with the relevant applicable law. Deploying AI in an arbitration proceeding is a question pertaining to the discretion of an arbitral tribunal concerning the conduct of the proceedings and, as such, is governed by the law applicable to the procedure of arbitration (the lex arbitri). This blog post explores four key components of this legal framework: legislation and rulemaking, agreement by the parties, arbitration institutions’ rules and guidelines, and the arbitral tribunal’s discretion.
For the sake of clarity, this piece only addresses the use of AI by an arbitral tribunal, not by the parties to arbitration proceedings.
Legislation and Rulemaking
To date, there is no international treaty to regulate the use of AI by arbitral tribunals. Furthermore, the overwhelming majority of national arbitration laws and broader domestic civil procedure laws remain silent on the use of AI in arbitration or judicial proceedings. While some judiciaries have issued limited guidelines (e.g., New South Wales, Australia; Victoria, Australia; New Zealand; DIFC Courts, UAE; the United Kingdom; Delaware, USA; and Illinois, USA), there is a glaring lack of specific legislation addressing this issue.
A notable exception in this regard is Regulation (EU) 2024/1689, commonly known as the “European Union Artificial Intelligence Act”, adopted by the European Parliament on 13 March 2024, which entered into force on 1 August 2024. Among other things, this Regulation sets forth specific rules on applying AI in judicial and arbitration proceedings, which are directly applicable in all Member States with binding force, without requiring domestic ratification. The provisions that may directly impact international arbitration will apply as of 2 August 2026 (Article 113).
The precise scope of application of this Regulation is debatable (for a detailed treatment, see here). Article 2(1)(b) of the Regulation provides that it applies to “deployers of AI systems that have their place of establishment or are located within the Union”. Consequently, it is arguable that the Regulation applies to arbitrations seated within the European Union. If so, arbitrations conducted in popular seats of international arbitration, such as Paris, The Hague, Stockholm, and Vienna, would be required to comply with this Regulation.
Recital 61 and point 8(a) of Annex III of this Act classify the use of AI by judicial authorities and alternative dispute resolution mechanisms (which presumably includes arbitration) as high-risk when “researching and interpreting facts and the law” and when “applying the law to a concrete set of facts” and, specifically concerning alternative dispute resolution, “when the outcomes of the alternative dispute resolution proceedings produce legal effects for the parties”. The Regulation thus does not prohibit the use of AI in arbitration altogether. Rather, in order to maintain the integrity and enforceability of the eventual arbitral award, when utilising AI for the purposes mentioned above, an arbitral tribunal must comply with a number of regulatory requirements set forth in Article 26 of the Regulation with respect to high-risk activities (e.g., “log retention”). In contrast, when AI is applied to less significant tasks, such as “a narrow procedural task” (e.g., editing a draft procedural order), the arbitrators are not bound by the strict obligations set forth in Article 26. In such circumstances, the arbitrators need only comply with the more general obligations set out in Article 4 of the Act (i.e., AI literacy).
Agreement by the Parties
As noted above, the overwhelming majority of national legal systems do not directly regulate the use of AI by arbitral tribunals. In the absence of legislation regulating the use of AI in arbitration, arbitral tribunals must turn to alternative sources. Almost all national arbitration laws uphold the principle of “party autonomy” on procedural matters, allowing the parties to agree on the procedure to be followed by arbitral tribunals in conducting the proceedings (e.g., Article 19(1) of the UNCITRAL Model Law, Article 182(1) of the Swiss PIL, and Section 1(b) of the English Arbitration Act). In their arbitration agreement, or indeed at the very commencement of the proceedings, the parties may directly regulate the use of AI or agree to apply arbitration rules/guidelines that address this question.
If the parties expressly prohibit the use of AI in their arbitration agreement or at the commencement of the proceedings, the tribunal must comply and refrain from using AI. Conversely, if the parties expressly permit the use of AI, the tribunal may avail itself of this technology to the extent agreed upon by the parties and in compliance with any limitations set forth by the mandatory provisions (e.g., fundamental procedural protections such as the principles of equal treatment, the right to be heard, and the fair conduct of the proceedings) and laws (e.g., data protection laws) of the lex arbitri.
Arbitration Institutions’ Rules and Guidelines
As noted, another possibility is that the parties do not directly determine the use of AI by the tribunal but instead refer to arbitration rules/guidelines that address the use of AI by arbitral tribunals.
Though no arbitral institution has yet revised or supplemented its rules to account for specific AI regulations, certain arbitration centres and institutions have adopted specific guidelines for using AI in arbitration. If the parties refer to them, such guidelines will govern the utilisation of AI by the tribunal in the proceedings. For instance, in April 2024, the Silicon Valley Arbitration & Mediation Center (“SVAMC”) published the “Guidelines on the Use of Artificial Intelligence in Arbitration” in order to ensure the ethical and effective use of AI in arbitration proceedings. These Guidelines include a model clause that allows their adoption in arbitration proceedings. Similarly, on 16 October 2024, the SCC Board adopted the “SCC Guide to the use of artificial intelligence in cases administered under the SCC rules”. In addition, on 14 March 2025, Ciarb announced the launch of its “Guideline on the Use of AI in Arbitration”.
However, the regulation of AI use in arbitration remains at a fledgling stage, which is why the above-mentioned guidelines are drafted in very broad terms. Coupled with the extraordinary speed at which AI-related technology is evolving, this means that many pertinent issues, whether currently at stake or likely to arise in the future, have been left unaddressed by these guidelines. Where the guidelines fail to address a specific AI matter, such as the tribunal’s specific obligations when using AI, reference must be made to the general principles of the lex arbitri.
Certain other arbitration institutions have not adopted specific guidelines on using AI in arbitration but promote using technology to make arbitration faster and more cost-effective (e.g., Article 14(6)(iii) of the LCIA Arbitration Rules (2020) and Article 13(1) of HKIAC Rules (2024)). As noted, encouraging the use of technology does not automatically equate to authorisation for using AI by an arbitral tribunal.
Arbitral Tribunal’s Discretion
In the absence of a treaty, legislation, or an agreement between the parties regarding the use of AI by the arbitral tribunal, the tribunal may resolve this issue by relying on its broad discretion to conduct the proceedings in a manner “to avoid unnecessary delay and expense and to provide a fair and efficient process for resolving the parties’ dispute” (e.g., Article 17(1) of the UNCITRAL Arbitration Rules). However, given the unique features of AI in comparison to other technologies, and the array of concerns it may raise regarding due process, confidentiality, and transparency, a prudent course of action in such circumstances would be (a) to ensure that the mandatory procedural norms of the lex arbitri are complied with; and (b) to keep the parties diligently informed on this issue. Accordingly, in the absence of a specific regulatory framework or the parties’ agreement regarding the use of AI, the tribunal’s intention to utilise AI should be disclosed to the parties to the dispute, ensuring that their right to be heard in this regard is observed. The parties should be informed about the tribunal’s intended use of AI and be able to comment on the very deployment of AI, the type of AI to be used (both name and version), the scope of the tasks to be assigned, the measures to safeguard confidentiality, and the level and modality of arbitrator oversight to prevent flaws, errors, and biases.
ABOUT THE AUTHOR
Dr. Reza Eftekhar is a Senior Legal Adviser at the IUSCT. In this capacity, he deals with public international law and contractual interstate arbitrations. He is also a practitioner in international investment and commercial arbitrations. He has written widely on international commercial and investment treaty arbitration and has been a speaker at numerous seminars on these subjects. He is the author of the book “The Role of the Domestic Law of the Host State in Determining the Jurisdiction ratione materiae of Investment Treaty Tribunals: The Partial Revival of the Localisation Theory?” (Brill 2021). He holds a Ph.D. in International Dispute Settlement from Leiden University.
*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.