THE AUTHORS:
Natalie Armstrong, Associate at Clyde & Co
Lucy Larner, Trainee Solicitor at Clyde & Co
Clyde & Co’s Young Arbitration Group provides a unique insight into international arbitration issues through the lens of young international arbitration practitioners working across different jurisdictions. In this series with Daily Jus, Clyde & Co explores the evolving landscape of artificial intelligence in arbitration, analysing recent developments, legislative changes, and their impact on dispute resolution worldwide.
Whilst the debate about the use of artificial intelligence (“AI”) in international arbitration has to date largely focused on its use by legal practitioners, less attention has been paid to how arbitrators can appropriately leverage AI and Generative AI (“GenAI”) tools to perform judicial tasks, if at all. Advances in machine learning and predictive analytics present opportunities, but also legal challenges, for those exercising adjudicative authority or judicial functions.
There is currently no formal legislation in the jurisdiction of England & Wales governing the use and integration of AI in arbitral proceedings. Instead, parties and arbitrators must rely on soft-law guidance published by various arbitral institutions. Much is left to the parties’ discretion, and the Arbitration Act 1996, as amended by the Arbitration Act 2025 with effect from 1 August 2025 (the “AA 1996”), which is the relevant legislation for arbitral proceedings in England & Wales, is, for all intents and purposes, silent on the topic. This regulatory environment creates a flexible landscape for discussion of how arbitrators could utilise AI and GenAI tools: in decision-making, in performing specific legal functions and administrative tasks, and in the appointment of arbitrators.
Decision-Making
Decision-making starts with the constitution of an arbitral tribunal. The regulatory scope for an AI arbitrator to form part of that tribunal, or to adjudicate proceedings alone, varies across jurisdictions; some legal systems specifically require arbitrators to be natural persons. In Scotland, only an individual may act as an arbitrator (Schedule 1, Part 1, Rule 3, Arbitration (Scotland) Act 2010), while in France the role of arbitrator can only be performed by a natural person (Article 1450, French Code of Civil Procedure (Code de procédure civile)). In other jurisdictions, the language of the relevant rules only implies that arbitrators will be natural persons. In Germany, for example, in appointing a sole arbitrator or a third arbitrator, the court is to consider whether the appointment of an arbitrator of a nationality other than those of the parties may serve the intended purpose (Sections 1035(5), 1036, German Code of Civil Procedure (Zivilprozessordnung); see also the requirements regarding disclosure, impartiality and independence).
The jurisdiction of England & Wales falls into the latter category of implication: there is no explicit requirement within the AA 1996 for arbitrators to be natural persons, but the language of its provisions implies that this must be the case. For instance, Section 24, AA 1996 allows a party to arbitral proceedings to apply to the court to remove an arbitrator on the ground that they do not possess the qualifications required by the arbitration agreement, that they are physically or mentally incapable of conducting the proceedings, or that there are justifiable doubts as to their capacity to do so. That language clearly contemplates human arbitrators. This ambiguity is the starting point for debates as to whether the current law, without amendment, could accommodate the appointment of non-human arbitrators to a tribunal in any arbitration subject to the laws of England & Wales.
Even if there is such scope, whether this is permissible further depends on the ethical consideration of whether an AI arbitrator could adjudicate an arbitration in a way that accords with the obligations of fairness and impartiality, which are founding principles of the AA 1996 (as outlined in the parliamentary debates during the Bill’s passage through Parliament, Hansard, 2 May 1996, Arbitration Bill [Lords]). That discourse is ongoing, and we will be following developments closely.
Using AI Tools to Select Arbitrators
AI arbitrator-selection tools already exist which allow users to predict the decision of a particular candidate based on the facts of the dispute and the arbitrator’s track record of previous decisions. The reputation of arbitrators in the investor-state dispute settlement system is one of perceived bias, since a reputation as either investor- or host-state-friendly can result in ongoing appointments, a phenomenon documented in academic research examining bias patterns in international investment arbitration. If properly trained, AI tools could help to encourage diversity within tribunals and provide a solution to the perceived bias in certain types of arbitration proceedings.
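By way of illustration only, the sketch below shows, in Python, the general shape of the predictive exercise such selection tools perform. Everything here is hypothetical: the features, the data and the candidate’s record are invented, and real products would use far richer inputs and methods.

```python
# Illustrative sketch only: a toy predictor of how a candidate arbitrator
# might decide a dispute, trained on a hypothetical record of past awards.
# All data, features and figures below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row encodes one historic award by the candidate:
# [claimant_is_investor, dispute_value_usd_m, treaty_based]
past_awards = np.array([
    [1, 120.0, 1],
    [1,  45.0, 1],
    [0, 300.0, 0],
    [1,  80.0, 0],
    [0,  15.0, 1],
    [0, 220.0, 0],
])
# 1 = award favoured the claimant, 0 = award favoured the respondent
outcomes = np.array([1, 1, 0, 1, 0, 0])

model = LogisticRegression().fit(past_awards, outcomes)

# Encode the new dispute with the same features and estimate the
# probability that this candidate would find for the claimant.
new_dispute = np.array([[1, 150.0, 1]])
p_claimant = model.predict_proba(new_dispute)[0, 1]
print(f"Estimated probability of a claimant-friendly outcome: {p_claimant:.0%}")
```

The sketch also makes the bias concern concrete: a model trained only on a candidate’s past appointments will reproduce whatever patterns, including any perceived partiality, that record contains.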
Specific Legal Functions and Administrative Tasks
In contrast to the debate around AI arbitrators, it is more widely accepted that a variety of legal functions can be delegated by arbitrators to AI and GenAI tools in order to assist the arbitral process. Arbitral guidance often follows that of the courts and tribunals, and updated AI guidance from the England & Wales Courts and Tribunals Judiciary for judicial office holders confirms the introduction of Microsoft’s ‘Copilot Chat’ on all judicial office holders’ devices, suggesting that the use of AI for straightforward tasks (whilst retaining confidentiality and privacy) is set to become the everyday norm within the judiciary. In the arbitral world, the use of such tools directly aligns with the principle of fair resolution of disputes without unnecessary delay or expense under section 1(a) of the AA 1996. Caution, however, is advised to ensure the accountability and accuracy of any information provided by an AI or GenAI tool, for reasons we explore below.
For arbitrations brought under the London Court of International Arbitration (“LCIA”)’s Arbitration Rules 2020 (the “LCIA Rules”), the arbitral tribunal’s powers include making any procedural order as to the employment of technology (Article 14.6, LCIA Rules), which presumably could include orders as to the arbitrators’ own use of technology. As to the authority for tribunals to do so, an analogy may be drawn with the delegation of certain tasks to a tribunal secretary following approval by all parties, including as regards the tasks that may be carried out (Article 14A, LCIA Rules). Although delegation could include tasks such as conducting legal and procedural research, producing document summaries or chronologies, reviewing and analysing evidence, assessing legal merits and correcting typographical or damages-calculation errors, soft-law guidance has warned against any delegation which may influence procedural or substantive decisions (see CIArb (Chartered Institute of Arbitrators) Guideline on the Use of AI in Arbitration (2025), para 8.2). If such decisions are made without reverting to the primary sources, and contain inaccuracies or false information, the arbitral decision could be left open to challenge, for example for serious irregularity or on a point of law (sections 68, 69, AA 1996).
A tribunal adopting procedures needs to do so in a way which provides a fair means for the resolution of the relevant dispute (section 33, AA 1996). Moreover, whilst parties are free to agree how their disputes are resolved, this is subject to safeguards being in place in the public interest (section 1, AA 1996). Both fair means and the public interest therefore require guardrails to be in place. In delegating tasks, the tribunal would need to retain ultimate responsibility for the AI or GenAI work product, ensuring that all tasks are carried out to the required standard and under its supervision (paragraphs 201 and 202, Section 10, LCIA Guidance Note for Parties and Arbitrators). The obvious obstacle to achieving this is the “black box” problem: the inherent lack of understanding of the internal workings of AI and GenAI systems (Maxi Scherer, “Artificial Intelligence and Legal Decision-Making: The Wide Open?”, Journal of International Arbitration (2019)).
The less contentious, lower-risk application of AI and GenAI tools is their use for certain administrative tasks, such as communicating on behalf of the tribunal, organising documents and procedural matters, proofreading, and dealing with matters relating to invoices.
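For a concrete, low-risk example, the following minimal Python sketch (ours, not drawn from any institutional tool) assembles a document chronology deterministically from date metadata. Because its output can be checked line by line against the underlying documents, it avoids the black-box concern discussed above; the document names and fields are hypothetical.

```python
# Illustrative sketch only: a deterministic helper for one of the low-risk
# administrative tasks mentioned above, namely building a document chronology.
# The document titles and metadata fields are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    title: str
    doc_date: date
    source: str  # e.g. which party produced the document

def build_chronology(documents: list[Document]) -> str:
    """Return a date-ordered chronology for the tribunal to review."""
    lines = []
    for doc in sorted(documents, key=lambda d: d.doc_date):
        lines.append(f"{doc.doc_date.isoformat()}  {doc.title}  [{doc.source}]")
    return "\n".join(lines)

bundle = [
    Document("Notice of Arbitration", date(2024, 3, 1), "Claimant"),
    Document("Response to Notice", date(2024, 4, 2), "Respondent"),
    Document("Procedural Order No. 1", date(2024, 5, 10), "Tribunal"),
]
print(build_chronology(bundle))
```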
In any event, it is generally recommended that arbitrators include provisions within early procedural orders that restrict or limit the use of AI/GenAI tools, or that reflect the parties’ agreed parameters for their deployment, so as to protect their decisions from challenge over the purported wrongful use of AI.
Conclusions
The evolving role of AI and GenAI in arbitration presents opportunities for tribunals operating under the legal framework of England & Wales. Whilst the law of the jurisdiction neither explicitly mandates nor prohibits the use of AI by arbitrators, its emphasis on fairness, impartiality, and procedural integrity imposes boundaries on how such technologies could be adopted.
At this point, arbitrators could seek party agreement to use AI/GenAI tools to assist with administrative tasks to facilitate the efficient conduct of proceedings, but their use for more technical tasks without robust supervision, or in decision-making, is unlikely to be endorsed. The lack of transparency in AI reasoning, combined with the statutory implication of human judgment and accountability, means that arbitrators must exercise caution. The prevailing recommendation remains that AI may support, but not supplant, a tribunal’s adjudicative role.
To mitigate risk, arbitrators should clearly define the scope of AI and GenAI use in procedural orders and ensure any deployment is transparent, supervised, and party-approved. As the legal community continues to adapt, maintaining the integrity of arbitral proceedings must remain the priority.*
* This conclusion was prepared by Microsoft Copilot on 4 August 2025 and was refined once by prompting to make it more concise. The drafting was thereafter manually refined for style and to correct inaccuracies. This highlights the risks of relying entirely upon AI tools for drafting, but it does demonstrate that, with human oversight, these tools can be beneficial for summarising or producing conclusions.
ABOUT THE AUTHORS
Natalie Armstrong is an Associate in the International Arbitration and Commercial Litigation Group of Clyde & Co in London and has assisted on various complex and high-value international and domestic disputes and advisory matters spanning a range of sectors, including the commodities, trade, shipping, energy, precision engineering, construction, technology, and pharmaceuticals industries. Natalie is fluent in English and German and has worked for clients with a presence in the UK, US, Europe, and the Middle East.
Lucy Larner is a trainee solicitor on secondment in the International Arbitration and Commercial Litigation Group of Clyde & Co in London.

*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.