Insights from Richard Susskind, the LCIA, and Practitioners on AI Implementation, Ethics, and What Comes Next
THE AUTHOR:
Clémence Prévot, Director of Publications at Jus Mundi
Building effective AI for arbitration requires more than technological capability. It demands honest conversation between those building the tools and those using them daily. That’s why on December 2, 2025, during London Arbitration Week, Jus Mundi and the London Court of International Arbitration (LCIA) brought together the perspectives that matter most.
The discussion, titled “All in Favour: Say AI,” assembled an exceptional panel: Professor Richard Susskind CBE KC, one of the most prominent legal technology evangelists; Kevin Nash, Director General of the LCIA; Ilona Logvinova, Global Chief AI Officer at Herbert Smith Freehills; Tara Waters, Legal Innovation Strategist and VALS AI Project Lead; and Jean-Rémi de Maistre, CEO and Co-Founder of Jus Mundi. The discussion was moderated by Alexandre Vagenheim, VP of Global Legal Data at Jus Mundi.
In this article, we’ve gathered some of the panelists’ key insights on what works in practice, common pitfalls, where to draw ethical lines, and where arbitration is heading, so you can future-proof your teams for the next wave of AI innovation.

Beyond Automation: Rethinking What AI Is For
Professor Susskind opened with a provocative challenge: Are we on the verge of a new form of dispute resolution, one that could actually be better than what we have today?
His central thesis confronted a common misconception. “The first 65 years of legal technology have been devoted to automation, to grafting technology onto our old ways of working,” he observed. “But when I look across other sectors for truly game-changing uses of technology, rarely are the examples I find about automating someone’s existing way of working.”
Using Black & Decker as an example (a company that realized it doesn’t sell drills, it sells holes in walls), Susskind urged the arbitration community to ask: Are we in the business of conducting hearings and producing awards, or are we in the business of resolving disputes? The distinction matters profoundly when considering AI’s potential.
Drawing from medicine, he noted how innovation extends beyond robotic surgery to include non-invasive therapy and preventative medicine, outcomes that look nothing like traditional practice but serve patients better. “What’s the equivalent in law?” he asked. “What’s preventative lawyering? We now have tools emerging that help identify early markers of disputes before they escalate.”
AI in Practice: From Research to Award Drafting
The conversation quickly moved from theory to implementation. Jean-Rémi de Maistre shared concrete examples from Jus Mundi’s experience deploying AI across approximately 200 clients, spanning the entire arbitration lifecycle: pre-arbitration market analysis, early case assessment for corporates and funders, arbitrator due diligence, legal research and argument analysis during submissions drafting, and even award preparation by arbitrators themselves.
“Notably, arbitrators and trainees are among the heaviest AI users,” Jean-Rémi revealed. “Applications range from drafting procedural histories to analyzing legal issues, though arbitrators maintain full judgment authority.”
Kevin Nash reinforced these benefits with an institutional perspective, recalling a counsel who spent two weeks manually extracting 44,000 entity names for conflict checks. “That would have been a situation where an AI tool could have helped tremendously and gotten us to the things that require actual thoughtfulness.”
Ilona Logvinova introduced a reframing of AI’s role, citing Microsoft CEO Satya Nadella: “I think with AI, and I work with humans.” Rather than viewing AI as a tool humans use, this formulation positions it as a strategic thinking partner. “AI can help you surface better quality output. It can help you tap into an intelligence layer you’re not getting if you’re just iterating in your own mind,” she explained.
Ethics and Governance: Drawing the Lines
Kevin Nash posed a direct question to the room: “How comfortable are you right now with arbitrators using AI without disclosing it?” The discomfort was palpable.
He continued: “What you don’t want is shadow AI. As long as it’s being disclosed and the scope is clearly delineated, that’s perfectly fine.”
Jean-Rémi de Maistre addressed the thorny issue of AI bias head-on. Recounting feedback from a user who suspected Jus AI was “investor-biased,” he acknowledged the complexity: “Each AI carries biases from the models we use, from the data we use, and potentially from the system design itself. Should we be transparent about that? Absolutely.”
He outlined Jus Mundi’s approach to explainability across three levels:
- Upstream data transparency: What data does the system examine?
- Reasoning transparency: Why did it select certain materials?
- Downstream citation verification: Can users trust the specific extracts?
“No AI system will reach 100% quality,” de Maistre acknowledged, “the same way no human reaches 100% quality. The explainability is what allows you to trust an AI system.”
Tara Waters added perspective from her evaluation work, noting that AI systems often display less bias than humans, even when they’re less explainable. “That absence of human bias, even though there’s less explainability for now, may actually lead to more effectiveness at the end of the day.”
Future Horizons: New Forms of Dispute Resolution
Professor Susskind shared a striking statistic: In 2022, leading AI developers predicted Artificial General Intelligence (AGI) systems would match human performance across all tasks in 20-40 years. Today, the mainstream answer is 5-20 years.
“I don’t think we’re ready,” Susskind stated bluntly. “The idea that we could develop systems matching human performance across the entire white-collar workforce is something for which we are completely ill-prepared.”
Kevin Nash discussed the AAA-ICDR (American Arbitration Association International Centre for Dispute Resolution)’s AI arbitrator for construction disputes and outlined the LCIA’s vision: AI-powered award review for enforceability, predictive analytics for cost estimation, and more transparent sharing of institutional data.
Perhaps most provocatively, Jean-Rémi de Maistre shared a real-world example: startup founders facing a potentially company-killing dispute appointed three AI systems each, fed them agreed facts, and accepted the solution supported by two of three systems. “They wanted a quick solution they could trust. In their case, it was the only solution for survival.”
Windmills, Not Walls
Professor Susskind closed with a Chinese proverb: “When the winds of change blow, some people build walls, others build windmills.” His assessment was direct: “In law, we’re tending to put up walls. I think we need a shift in mindset toward finding opportunities.”
He challenged the room with a final reframing:
“The question isn’t ‘What does the future hold?’ I put it back to you: In this remarkable era of AI, what future are you going to create?”
The evening concluded with a fitting announcement: the LCIA and Jus Mundi formalized a partnership to collaborate on arbitration workflows powered by AI, thought leadership, and training initiatives, signaling that some leading voices in the arbitration community are choosing windmills.
The Path Forward
This session reflected a maturing relationship between arbitration and technology. The days of pure optimism and fear-mongering are giving way to nuanced discussion about implementation, ethics, governance, and value redefinition.
As Ilona Logvinova observed, younger generations already treat AI differently: “Gen Z and Alpha are using it as an operating system, a first stop for absolutely everything.” The arbitration community faces a choice: lead this transformation or be disrupted by those who think differently about dispute resolution.
The panel at Inner Temple didn’t provide easy answers. Instead, it offered something more valuable: honest dialogue about where arbitration stands today, the ethical lines that must be drawn, and the futures worth building. For those of us working at the intersection of arbitration and technology, these conversations reinforce a simple truth: we’re not developing tools in isolation. We’re building with the practitioners, institutions, and thought leaders who use them daily.
Interested in seeing how AI can support your arbitration practice? Explore Jus AI to learn more.
About Jus Mundi
Founded in 2019 and recognized as a mission-led company, Jus Mundi is a pioneer in the legal technology industry dedicated to powering global justice through artificial intelligence. Headquartered in Paris, with additional offices in New York, London, and Singapore, Jus Mundi serves over 150,000 users from law firms, multinational corporations, governmental bodies, and academic institutions in more than 80 countries. Through its proprietary AI technology, Jus Mundi provides global legal intelligence, data-driven arbitration professional selection, and business development services.
*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.