THE AUTHOR:
Dr. Nikolaus Pitkowitz, Founding Partner and Head of Dispute Resolution at Pitkowitz & Partners and President of the Vienna International Arbitral Centre (VIAC)
Beijing, 17 September 2025 – At the China Arbitration Summit, Dr. Nikolaus Pitkowitz, President of the Vienna International Arbitral Centre (VIAC) and founding partner of Pitkowitz & Partners, called on arbitration institutions worldwide to take the lead in addressing the use of Artificial Intelligence (“AI”) in arbitration. In a speech to an audience of arbitration professionals, Pitkowitz emphasized that AI in arbitration is “no longer theoretical but already shaping our practice,” and urged institutions to proactively set standards to ensure that this technology strengthens rather than undermines arbitral proceedings.
Institutions Over Legislators: Filling the AI Guidance Gap
Pitkowitz noted that international arbitration, being a private and consensual form of dispute resolution, is well-positioned for self-regulation on emerging issues like AI. He argued that arbitral institutions – not state legislators – should spearhead the development of AI guidelines to find the sweet spot between lawless uncharted territory and paralysis by overregulation. “We don’t want a regulatory vacuum, but we also don’t want heavy-handed state intervention,” he stated, suggesting that institutions can fill the gap through soft-law instruments such as notes, guidelines, and rules tailored to arbitration’s needs. By taking initiative, institutions can create balanced frameworks that keep pace with technology while preserving the fairness and integrity of the arbitral process.
As a forward-looking idea, Pitkowitz even floated the possibility of arbitral institutions offering AI tools directly to parties and tribunals as part of their services. If both sides and the tribunal have equal access to the same AI-driven resources (for tasks like organizing evidence or managing case data), it could “level the playing field” and preempt concerns about unequal access or secret use of AI. In this way, institutions would not only set rules but also embed transparency and fairness into the very tools used in proceedings.
VIAC and CIETAC Pioneering AI Guidelines
As examples of institutional leadership, Pitkowitz highlighted recent advancements by VIAC and the CIETAC (China International Economic and Trade Arbitration Commission) in integrating AI guidance:
- VIAC’s Note on AI in Arbitration (2025): Released at the beginning of this year as part of VIAC’s 50th anniversary initiatives, this non-binding guidance was developed by VIAC’s Legal Tech Think Tank. “The VIAC Note on AI aims to ensure that AI strengthens rather than impairs arbitration,” Pitkowitz explained, underscoring that arbitrators must use AI responsibly and ethically. The VIAC Note encourages tribunals and parties to discuss any intended use of AI at an early case management stage, and it empowers arbitrators to require disclosure of AI involvement where appropriate. Notably, it even addresses a hotly debated topic – the use of AI in generating evidence – by affirming that arbitrators have discretion to ask for disclosure of AI-produced evidence and to determine its admissibility.
- CIETAC’s AI Guidelines (2025): In July, CIETAC became the first major arbitral institution in the Asia-Pacific region to issue AI guidelines. These provisional guidelines provide high-level advice for parties, counsel, and tribunals on using AI in arbitration, aligning with China’s drive to lead in AI by 2030. CIETAC’s rules echo a principle also central to VIAC’s Note: AI can support procedural efficiency, but it must not replace human decision-making. The guidelines emphasize that arbitrators retain full authority over decisions and must ensure that the parties’ right to be heard is never compromised by the use of AI. CIETAC also addresses practical safeguards – for example, it cautions that using AI does not excuse parties from guaranteeing the authenticity and legality of the evidence they submit. This proactive stance has made CIETAC a regional forerunner in advancing AI use in arbitration.
Core Principles for AI Use in Arbitration
Dr. Pitkowitz distilled the approach of these initiatives into three core principles – essentially the “features” that any use of AI in arbitration should uphold:
- Authority: Human arbitrators must remain the ultimate decision-makers. AI tools may assist with analysis or efficiency, but under no circumstances should an arbitrator delegate or surrender their judgment to an algorithm. Both VIAC and CIETAC stress that the tribunal bears full responsibility for the award’s reasoning and outcome.
- Transparency: Disclose AI usage to all stakeholders. To preserve trust and due process, arbitrators (and even counsel or experts) should inform parties if AI tools are used and clarify for what purpose. Using AI to draft a simple scheduling order is very different from using it to evaluate evidence or draft substantive portions of an award. Pitkowitz noted that greater disclosure allows parties to understand and, if needed, challenge how AI has influenced a proceeding. VIAC’s guidance stops short of mandating disclosure in every instance, but it recommends that tribunals and parties discuss and agree on the use of AI at the outset of a case – including whether and how to disclose its use.
- Integrity and Fairness: AI should enhance efficiency without undermining due process. Any AI application must respect the fundamental procedural rights of the parties. For instance, if an AI tool summarizes evidence or suggests a legal analysis, the parties must still have the opportunity to respond to that output. Both institutions’ guidelines underline that arbitration’s fairness – equality of arms, the right to be heard, reasoned decision-making – cannot be compromised by opaque or untested AI outputs. Pitkowitz warned that so-called “black box” AI systems (whose internal logic is not transparent) pose a risk to the integrity of awards if their reasoning cannot be explained or verified. Ensuring explainability of any AI influence is thus crucial to maintaining confidence in the outcome.
Safeguarding Due Process in an AI-Augmented Arbitration
One of the central themes of Pitkowitz’s address was the “due process dilemma” posed by AI. He observed that the legitimacy of any arbitral award hinges on the parties’ ability to understand and trust the decision-making process. Introducing AI into that process raises challenging questions:
- Disclosure and the Right to Be Heard: If an arbitrator uses an AI tool in evaluating evidence or crafting a decision, how do the parties know to what extent AI provided decisive input? “If a party doesn’t know AI was involved, they can’t meaningfully engage with the resulting output,” Pitkowitz noted. Transparency is not just an ethical nicety but an integral component of due process – parties must be aware of all factors influencing the tribunal’s reasoning so they can exercise their right to be heard in response. It follows that at least some level of AI disclosure is necessary whenever AI meaningfully shapes the tribunal’s views.
- The “Black Box” Challenge: Advanced AI systems (especially generative AI based on large language models) often operate opaquely, without providing human-readable reasons for their conclusions. Pitkowitz questioned how this can be reconciled with a core principle of arbitration: that arbitrators are expected to issue reasoned awards. “Is it enough to say ‘the AI said so’? Of course not,” he quipped. If an arbitrator were to rely on an AI’s suggestion, there must be a way to explain and justify that decision in the award. Otherwise, an award that quietly relies on AI’s unseen logic could be attacked for lacking transparency and reasoning. In fact, he pointed out that European courts might even refuse to enforce an award tainted by such opacity: under the New York Convention (1958), an award can be denied recognition on public policy grounds if fundamental procedural fairness (such as the requirement of a reasoned decision) is breached.
- Ensuring Equality of Arms: Pitkowitz also raised concern about resource asymmetry. If one party has access to cutting-edge AI tools and the other does not, it could “distort procedural fairness”. He suggested that clear rules – or institutional provisioning of AI tools to all sides – are needed to prevent AI from becoming an invisible source of unfair advantage in proceedings.
To address these issues, Pitkowitz advocated developing standards for minimum disclosure of AI use, avenues for parties to test and challenge AI-generated inputs, expectations for explainability in any AI-derived analysis, and measures to prevent inequality of arms. Without such guardrails, he warned, “we risk undermining the very fairness that gives arbitration its legitimacy.”

A Global Issue Requiring a Global Approach
While speaking in Beijing, Pitkowitz drew on developments in both China and Europe to illustrate that the intersection of AI and dispute resolution is a global concern. He pointed to the EU’s Artificial Intelligence Act (Regulation (EU) 2024/1689), the world’s first comprehensive legal framework for AI. Adopted in 2024, the EU AI Act classifies AI systems used in judicial or quasi-judicial decision-making as “high-risk”, subjecting them to stringent requirements for transparency, documentation, testing, and human oversight. For example, AI tools that help judges or arbitrators analyze facts or draft decisions fall into the high-risk category under this law.
Pitkowitz posed a provocative question: What happens if an arbitral tribunal’s use of AI doesn’t meet these new standards? Could a losing party invoke the EU AI Act to challenge an award or resist its enforcement in court? He noted that these scenarios are no longer far-fetched. In fact, legal experts have already begun contemplating whether non-compliance with AI transparency and oversight obligations might violate public policy or due process to such a degree that an award is imperiled, and in the worst case set aside.
“These are not just academic questions,” Pitkowitz remarked. They strike at the heart of what makes arbitration credible and enforceable across borders. As AI technologies and regulations evolve at remarkable speed, the arbitration community must keep pace with these developments and adapt its approach accordingly. Pitkowitz urged collaboration across jurisdictions and institutions in developing coherent guidelines: a fragmented approach could lead to uncertainty, whereas a broadly harmonized set of best practices would ensure AI is used responsibly worldwide.
Embracing Innovation with Guardrails
Pitkowitz concluded his address with a balanced call to action. He warned against both uncritical embrace of AI and fear-driven rejection. Instead, he advocated for “thoughtful integration” – welcoming AI’s efficiency gains and innovative potential with proper guardrails in place. Those guardrails, as he outlined, will come largely from arbitral institutions taking initiative: through guidelines like those of VIAC and CIETAC, through updating arbitration rules, and through offering tools that promote transparency and equality.
“If we embrace AI without thinking critically, we risk turning arbitration into a technological façade,” he said. “If we reject AI out of fear, we risk falling behind the demands of commercial reality.” The solution lies in finding a middle path: embrace the benefits of AI – such as faster analysis, better management of large case files, and cost savings – but always prioritize human judgment, party consent, and fairness standards as core principles in arbitration.
ABOUT THE AUTHOR
Dr Nikolaus Pitkowitz is the founding partner and head of dispute resolution at Pitkowitz & Partners and President of the Vienna International Arbitral Centre (VIAC). Recognized as one of Austria’s leading arbitration practitioners, he has acted as counsel and arbitrator in over 130 international cases across Europe, the US, and Asia, including Austria’s largest-ever arbitration. He is a Fellow of the Chartered Institute of Arbitrators (CIArb), a court member of the Casablanca International Arbitration and Mediation Centre (CIMAC), and co-editor of the Austrian Yearbook on International Arbitration. He frequently lectures and publishes, with more than 50 works on arbitration and dispute resolution, including a monograph on Setting Aside Arbitral Awards and the Handbook on Third Party Funding in International Arbitration. Contact: [email protected].
*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.