THE AUTHORS:
Christoph Lüttenberg, Senior Associate at YPOG
Ilka Beimel, Senior Associate at Noerr
Simon J. Heetkamp, LL.M., Professor at the TH Cologne University of Applied Sciences
Personal Scope of Application
Pursuant to Article 2(1)(b) and (c) of the Artificial Intelligence Act – Regulation (EU) 2024/1689 (“AI Act”), the Act applies if arbitral tribunals are deployers of AI systems. Article 3(4) of the Act defines deployers as “a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”
Members of an arbitral tribunal are natural persons with a personal mandate and cannot act as legal entities. Arbitral institutions should also be considered to fall within the personal scope of application, although their administrative work is less likely to fall within the category of high-risk AI systems.
Territorial Scope of Application
Territorially, Article 2(1) of the EU AI Act states that the Act applies to
- “Deployers of AI systems that have their place of establishment or are located within the Union” (point (b)) or
- “Providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union” (point (c)).
At first glance, Article 2(1)(b) of the EU AI Act could lead to the impracticable result that, in a three-member panel, the Act applies only to those members of the arbitral tribunal domiciled or based in the Union. To avoid this unequal treatment, a collective link to the seat of the arbitral tribunal would be appropriate. However, if the members of the arbitral tribunal come from EU Member States but the seat is located outside the EU, the AI Act would then not apply. It is questionable whether the intention was to link the Act’s application exclusively to the seat. The wording of Article 2(1)(b), which addresses deployers “located within the Union”, contradicts this. In addition, recital 1 states that the Act aims “to protect against the harmful effects of AI systems in the Union”. Treating the entire Union as the protected territory likewise contradicts an intent to link the Act’s applicability exclusively to the arbitral tribunal’s seat. The possible unequal treatment within a tribunal under Article 2(1)(b) of the Act can be partially offset by Article 2(1)(c).
Regarding Article 2(1)(c) of the EU AI Act, the meaning of the phrase “the output produced by the AI system is used in the Union” is ambiguous. Various interpretations are possible:
- Is it sufficient that one of the parties has its place of business or residence in the EU? This interpretation is supported by Article 2(1)(g) of the EU AI Act, which brings “affected persons that are located in the Union” within the Act’s scope.
- Is it sufficient that the arbitral award may be enforced with regard to property or real estate located in the EU?
- Is it sufficient that the arbitral institution’s seat is located in the EU?
Do Arbitrators Use High-Risk AI Systems?
According to Article 6(2) of the EU AI Act, the AI systems listed in Annex III are considered high-risk. Point 8(a) of Annex III includes AI systems “intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution”.
Thus, this risk category also targets alternative dispute resolution bodies. The third sentence of recital 61 clarifies this: “AI systems intended to be used by alternative dispute resolution bodies for those purposes [i.e., assisting in researching and interpreting facts and the law and in applying the law to a concrete set of facts, see recital 61, second sentence, AI Act] should also be considered to be high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects for the parties.”
Arbitration proceedings are a prime example of alternative dispute resolution. The award of an arbitral tribunal is generally binding on the parties and therefore produces “legal effects for the parties.”
Accordingly, AI systems that the tribunal uses as outlined in recital 61 need to be categorized as high-risk systems. However, the rationale for categorizing these AI systems as high-risk is their potentially significant impact on democracy, the rule of law, individual freedoms, and the right to a fair trial. Therefore, AI systems that are “intended for purely ancillary administrative activities, […] such as the anonymization or pseudonymization of court judgments, documents or data, communication between staff or administrative tasks” do not qualify as high-risk AI systems under recital 61, fifth sentence, of the AI Act.
Exemptions Under Article 6(3) EU AI Act
Exceptions to the categorization as a high-risk AI system may result from Article 6(3) of the Act. According to this provision, an AI system referred to in Annex III shall not be considered high-risk “where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. The first subparagraph shall apply where any of the following conditions is fulfilled:
- (a) the AI system is intended to perform a narrow procedural task;
- (b) the AI system is intended to improve the result of a previously completed human activity;
- (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
- (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III. […]”.
These exceptions likely cover many of the AI systems already in use, e.g.:
- Point (a): using AI systems for the payment of advances or for calculating and monitoring deadlines, where adopted by the arbitral tribunal.
- Point (b): the tribunal using AI to improve the presentation of finalized orders or to review previously prepared chronologies.
- Point (c): arbitral tribunals checking an award for consistency with previous decisions; if deviations are found, this may entail further review by the tribunal.
- Point (d): AI systems creating a chronology or summary of the facts, or searching for specific case law and literature references on a case-related legal issue.
Since the wording of Article 6(3) of the EU AI Act refers to “the health, safety or fundamental rights of natural persons”, it is unclear how the Act deals with legal entities. One could examine whether the individuals behind the legal entity are affected by the arbitral award. The lower need for protection of legal entities with regard to the aforementioned protected interests generally favors a broad application of the exception.
Legal Consequences of Categorizing an AI System as a High-Risk AI System
Article 26 of the EU AI Act lists the (extensive) obligations of those deploying high-risk AI systems. The following obligations, among others, could arise for an arbitral tribunal if the said exceptions do not apply:
- Arbitral tribunals must take appropriate technical and organizational measures to ensure that high-risk AI systems are used properly and are overseen by individuals with the necessary competence and training.
- Arbitral tribunals must retain, for at least six months, the logs automatically generated by the high-risk AI system used.
- Arbitral tribunals must inform the parties about the use of high-risk AI systems if natural persons are affected. The AI Act does not provide for equivalent transparency towards legal entities, but such transparency is partly recommended by other rules (e.g. the Silicon Valley Arbitration & Mediation Center (SVAMC) Guidelines on the Use of Artificial Intelligence in Arbitration).
Violations of these obligations are subject to penalties that still need to be specified by the Member States. According to Article 99(4)(e) of the Act, breaches of the obligations listed in Article 26 can be punished with administrative fines of up to €15 million or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Conclusion and Outlook
As far as relevant for arbitral tribunals, the graded risk system of the EU AI Act, including the aforementioned exceptions, can be summarized as follows:
- AI systems used for (preparatory) activities that do not concern “researching and interpreting facts and the law” or “applying the law to a concrete set of facts” do not qualify as high-risk AI systems;
- AI systems used to assist in “researching and interpreting facts and the law and in applying the law to a concrete set of facts” are not high-risk if they meet the conditions set out in Article 6(3) of the Act;
- all other AI systems are high-risk AI systems.
AI systems that replace the human decision-making and personal mandate of an arbitrator are most likely prohibited under the AI Act, even though this is not explicitly stated in its wording.
Typically, AI systems used by arbitral tribunals fall under the exemptions of Article 6(3) of the Act. Nevertheless, given the potentially severe penalties, arbitral tribunals are well advised to examine whether the AI Act applies to the AI systems they use and to consider the relevant obligations and legal consequences. The same applies when deciding whether, and which, AI systems should be used in the future, taking the specific use case into account.
ABOUT THE AUTHORS
Dr. Christoph Lüttenberg works as a lawyer at the law firm YPOG in the field of litigation and arbitration. His practice focuses on corporate litigation, cross-border post-M&A disputes and commercial litigation matters. Christoph studied law in Cologne and at the University of Paris 1 (Panthéon-Sorbonne).
Dr. Ilka Beimel is a senior associate in Noerr’s dispute resolution team, based in Dusseldorf. She specialises in national and international dispute resolution before arbitral tribunals and state courts, and advises and represents clients on complex cross-border disputes, especially corporate, liability and commercial disputes. She has particular experience in handling disputes involving international sales law.
Prior to joining Noerr, Ilka worked as a tribunal secretary in various arbitral proceedings. In this role, as well as acting as party representative and as arbitrator, she gained experience in arbitration proceedings governed by the DIS, CAM-CCBC, ICC, LCIA and VIAC Rules.
Prof. Dr. Simon J. Heetkamp, LL.M. is Professor of Commercial Law, Mobility and Insurance Law at the TH Cologne University of Applied Sciences. He previously worked as a judge at the Regional Court of Cologne and the Local Court of Cologne in the North Rhine-Westphalian judiciary. Before becoming a judge, Simon worked for several years in the litigation department of a large German international business law firm. Simon is trained as a mediator. His current research focuses on legal issues related to artificial intelligence, virtual reality and alternative dispute resolution.
*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.
* This article is an abbreviated version of an article published by the authors in the SchiedsVZ | German Arbitration Journal, 2024, pp. 225–228.