THE AUTHOR:
Nariat Pashaeva, Associate Lawyer at Advokatfirma Haakstad & Co DA
Introduction
When does a tool become the decision-maker? LaPaglia v. Valve, a 2025 petition to vacate an AAA (American Arbitration Association) award in which the claimant alleged that the award was drafted with the assistance of Artificial Intelligence (“AI”), is one of the first cases to force courts to confront that question under the Federal Arbitration Act (FAA).
FAA §10(a)(4) allows a court to vacate an award “where the arbitrators exceeded their powers”. The principle is echoed internationally: Article 34(2)(a)(iii) of the UNCITRAL Model Law on International Commercial Arbitration (1985, with amendments as adopted in 2006) similarly permits setting aside an award where the tribunal exceeded its authority, underscoring that non-delegation is a universal principle. By mid-2025, a few professional bodies, such as the CIArb (Chartered Institute of Arbitrators), SVAMC (Silicon Valley Arbitration & Mediation Center), and AAA-ICDR (American Arbitration Association – International Centre for Dispute Resolution), had issued guidance on arbitrators’ use of AI. In contrast, major institutions, such as the ICC (International Chamber of Commerce), ICSID (International Centre for Settlement of Investment Disputes), LCIA (London Court of International Arbitration), and SIAC (Singapore International Arbitration Centre), lack binding rules. The institutional landscape is thus mixed: soft-law guidance exists, but binding rules remain limited.
CIArb’s non-binding, risk-based approach mirrors the EU AI Act (2024) in logic: both evaluate AI by the consequences of its use and converge on the same safeguards, such as human oversight, transparency, and record-keeping when adjudicative reasoning is involved.
Across these different frameworks, the idea remains the same: arbitrators may use tools, but they cannot outsource judgment. Under FAA §10(a)(4), U.S. courts set aside awards only when arbitrators exceed the powers given to them by the parties. CIArb’s recent guidelines, while still soft law, echo that principle by urging arbitrators to assess AI use by its consequences and to keep a human in the decision-making loop. The EU AI Act introduces a broader regulatory framework that extends to all AI systems in the EU, not just those used in judicial settings or arbitration, but it applies similar principles in these areas: AI systems used in adjudication or alternative dispute resolution fall within its “high-risk” category, triggering requirements of human oversight, transparency, and record-keeping.
Each of these three regimes approaches the issue from a different angle, whether domestic statute, professional guidance, or supranational regulation. Still, they all converge on the same point: once AI crosses from assistance into delegation of judgment, its use becomes impermissible.
This blog post does not attempt a doctrinal deep dive into each regime. Instead, it asks a narrower question: when does AI use by arbitrators cross the line from permissible assistance to impermissible delegation?
Exceeding Powers Under FAA §10(a)(4)
U.S. courts interpret §10(a)(4) narrowly, and awards are rarely vacated. Courts intervene only when arbitrators stray from the role the parties have assigned them, not when that role is performed “poorly”. Challenges based solely on the merits of the arbitrator’s interpretation therefore fail. In Stolt-Nielsen S.A. v. AnimalFeeds Int’l Corp., the Supreme Court vacated an award where the arbitrators imposed their own policy views rather than interpreting the parties’ agreement. In effect, the arbitrators were deciding what the contract should have meant, not what it did mean.
In LaPaglia, the claimant argues that AI-generated reasoning may have supplanted the arbitrator’s independent reasoning and that, by relying on AI, the arbitrator exceeded his authority and delegated his judgment. If true, that would mean the arbitrator handed core decision-making to a tool rather than exercising independent judgment. Courts have long drawn a line between arbitrators interpreting the contract themselves and handing that task to another. Where an AI replaces the arbitrator’s decision-making, the award becomes vulnerable.
LaPaglia v. Valve Corp.
As of September 2025, all claims in the LaPaglia filings remain allegations; no court has found that the award was AI-generated. While there are prior cases in which adjudicators were sanctioned for delegating tasks to other humans, this case marks a new threshold (see Move, Inc. v. Citigroup Global Mkts., 840 F.3d 1152 (9th Cir. 2016); Stivers v. Pierce, 71 F.3d 732 (9th Cir. 1995); and Bassett’s Adm’r v. Cunningham’s Adm’r, 50 Va. 684 (1853)).
During proceedings, the arbitrator reportedly disclosed having used ChatGPT for private matters and informed the parties of his intention to conclude the arbitration before an upcoming trip. The claimant points to timing (a 10-day hearing, a 2,000-page record, and the issuance of a 29-page award 15 days later, over the Christmas and New Year’s holidays) as circumstantial evidence of AI use.
Timing and stylistic similarity alone are insufficient for §10(a)(4) relief; whether they could support vacatur depends on further evidence of actual reliance on AI. AI tools are not inherently prohibited. They become problematic when they replace the arbitrator’s exercise of judgment; in AI governance, this is known as maintaining a ‘human-in-the-loop’ (Ioannidis has argued that AI could eventually replace arbitrators altogether: Dimitrios Ioannidis, ‘Will Artificial Intelligence Replace Arbitrators under the FAA?’ (2022) 28 Rich. J. L. & Tech. 505).
EU AI Act and ADR
The EU AI Act distinguishes AI systems by risk; this post focuses on “high-risk” AI systems. By contrast, tools of a narrow and limited nature that pose only limited risks, such as those used to “improve the result of a previously completed human activity” (Recital 53), are excluded from the high-risk category. This is because purely ancillary administrative activities, such as AI systems that structure unstructured data or categorize incoming data, do not affect legal reasoning.
Article 26 imposes specific obligations on deployers of high-risk AI systems. Annex III(8) classifies as high-risk any AI system used by a judicial authority, or on its behalf, to assist in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or used in a similar way in alternative dispute resolution.
The AI Act regulates ‘providers’ (developers), ‘deployers’ (users), and distributors and importers of AI systems in the EU. In theory, arbitrators using AI could fall within the deployer category (Article 3(4)), but it remains unclear how often arbitral practice would meet that definition.
Under the Act, a deployer must follow the instructions for use, assign human oversight to suitably competent and trained persons, ensure that input data is appropriate, maintain logs, and inform affected persons where a high-risk system is used (Article 26). These are only some of the obligations, but they all resonate with core arbitration principles, such as human control of adjudication, procedural transparency, and confidentiality.
EU obligations underline the same non-delegation principle at issue under FAA §10(a)(4): adjudicative judgment must remain human.
Towards a Risk-Based Approach
The FAA does not prohibit the use of tools, but it requires that arbitrators retain independent judgment. Delegating that judgment, whether to a person or a machine, can invalidate an award under §10(a)(4).
While there is no formal standard for AI use in arbitration under U.S. law, guidance can be drawn from the CIArb’s risk-based thinking. CIArb defines “high-risk AI use” not by the type of tool, but by its potential consequences, such as “breach of confidentiality” or “non-human influence on the award.” The EU AI Act follows the same logic: AI systems are classified not by their form but by the extent to which they affect adjudicative reasoning.
Building on CIArb’s risk-based logic, this blog post suggests a three-tier functional framework:
- Low-risk tools improve efficiency without materially influencing the arbitrator’s judgment or the outcome of decision-making (e.g., spellcheckers, formatting, and transcription assistants);
- Mid-risk tools assist in legal research but leave legal conclusions to the arbitrator; they do not affect the arbitrator’s judgment, provided a human is kept in the loop (e.g., case law search engines);
- High-risk tools interpret law, weigh evidence, draw legal conclusions, generate legal reasoning, or suggest outcomes.
When third-tier tools replace an arbitrator’s judgment, they may cross into impermissible delegation under §10(a)(4). Under the EU AI Act, such tools fall within Annex III(8), triggering the Article 26 requirements (human oversight, information to natural persons, logging, etc.).
Implications of LaPaglia and Conclusion
For arbitrators, LaPaglia signals the need for transparency in AI use. For parties, it underscores the importance of including AI-use clauses or disclosures in arbitration agreements. One solution is for arbitrators to conduct risk assessments and to disclose AI use to the parties, balancing confidentiality with procedural transparency.
Practical takeaways for practitioners include:
- Human judgment must remain decisive: do not use AI to weigh evidence, apply law to facts, or to draft dispositive reasoning.
- For EU-seated arbitrations, align with the AI Act: enable human oversight, inform the relevant parties, and retain logs.
- Develop an internal AI policy: identify high-risk AI systems, adopt safeguards, and incorporate AI clauses in your arbitration agreements and other important contracts.
LaPaglia may test the boundaries of §10(a)(4) in the context of modern technologies. The key issue is not whether arbitrators may use AI, but when such use crosses into the delegation of judgment. Parties and institutions would be well-advised to anticipate this question by clarifying acceptable uses of AI in their rules and agreements.
ABOUT THE AUTHOR
Nariat Pashaeva is an associate lawyer at Advokatfirma Haakstad & Co DA in Norway, where she works on a broad range of matters, including litigation, dispute resolution, and contractual issues. She holds an LL.M. in Public International Law alongside a Master of Laws from the University of Oslo, with specializations including arbitration, international trade, and technology. Her background spans research and practice across jurisdictions, giving her a global outlook.
*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.



