6 Pillars for Responsible Adoption
THE AUTHOR:
Nate Baker, Key Account Executive at Jus Mundi
The Ethical Crossroads: What AI Means for Legal Practice Today
I recently had the opportunity to moderate a panel at ILTACON on Ethics and Accountability in AI & Legal Technology with distinguished leaders from Reed Smith, Levenfeld Pearlstein, and Paul Hastings.
This conversation came at a critical time. AI tools are being rapidly adopted across legal workflows, from research and document review to due diligence and contract automation. While the possibilities are exciting, the ethical obligations are ever present.
What emerged from our discussion was a roadmap for how forward-thinking legal professionals can harness AI’s transformative power while maintaining the highest ethical standards.
The Current State
The American Bar Association’s Formal Opinion 512, released in July 2024, remains the primary guidance framework for ethical AI use in US legal practice. More than a year later, the core challenges persist: AI systems frequently produce incomplete or inaccurate results, confidentiality risks have evolved beyond traditional cloud security concerns, and the profession still lacks robust frameworks for bias detection and mitigation.
But here’s what’s changed: 80% of AmLaw 100 firms have now established AI governance boards, moving from experimental adoption to enterprise-wide transformation.
The 6 Critical Elements of Responsible AI Implementation
Competency
The ABA made clear that lawyers don’t need to become AI experts, but they must develop a “reasonable understanding of the capabilities and limitations of the specific AI technology” they use.
This distinction shifts the focus from technical mastery to strategic application. While you don’t need to understand the underlying architecture, you do need to understand how the system functions, what questions the tool can and cannot reliably answer, what biases and blind spots might exist, what risks it introduces, and when to verify its outputs.
For firms, this means AI literacy should be embedded in professional development. Some are already building internal training programs where attorneys learn not only to use the tools but also to challenge them, asking “what is missing?” or “what biases might be at play?” Creating an “AI tool card” for each system your firm uses, with documented capabilities, limitations, and appropriate use cases, builds competency and ensures consistent application across your practice.
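To make this concrete, here is a minimal sketch of what such a tool card could look like as a structured record. Python is used purely for readability; the field names and example values are illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass
class AIToolCard:
    """Illustrative 'AI tool card': one record per AI system the firm uses.
    All field names and example values here are hypothetical."""
    name: str
    vendor: str
    capabilities: list[str]        # what the tool reliably does
    limitations: list[str]         # known failure modes and blind spots
    approved_use_cases: list[str]  # where attorneys may rely on it
    requires_verification: bool = True  # outputs must be checked before use
    last_reviewed: str = ""             # date of the most recent internal review

# Example card for a hypothetical research assistant
research_tool = AIToolCard(
    name="Example Research Assistant",
    vendor="Example Vendor, Inc.",
    capabilities=["case-law summarization", "first-draft research memos"],
    limitations=["may cite non-existent authorities", "weak on recent decisions"],
    approved_use_cases=["internal research", "issue spotting"],
    last_reviewed="2025-01-15",
)
```

Kept in a shared repository and reviewed on a schedule, a set of cards like this gives every attorney the same answer to “what can this tool be trusted with?”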
Validation
AI systems are increasingly sophisticated at producing outputs that appear authoritative and well-reasoned, even when they contain errors. Outputs can be incomplete, inaccurate, or simply fabricated.
To address this, systematic verification protocols are the first step. This means understanding not just what the AI concluded, but how it reached that conclusion. It also means treating AI as a tool that accelerates the first draft but isn’t the final word.
Several firms represented in our audience are already creating layered review protocols, for example requiring that AI-assisted research memos be checked against primary sources before they are circulated. Others are drafting internal guidelines mandating that every AI-generated clause in a contract be validated against precedent or templates in the firm’s document management system. These processes take time, but they protect against ethical breaches.
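As a simple illustration, the sketch below shows how such a clause check might be wired into a review workflow. It assumes a hypothetical lookup into the firm’s document management system; the function names and return shapes are placeholders, not any real DMS API.

```python
def validate_ai_clause(clause_text, clause_type, dms_lookup):
    """Layered-review step for an AI-generated contract clause.

    `dms_lookup` is a hypothetical callable standing in for a query to
    the firm's document management system; it returns any approved
    templates or precedent clauses for the given clause type.
    """
    precedents = dms_lookup(clause_type)
    return {
        "clause_type": clause_type,
        "precedent_found": bool(precedents),
        # No precedent means nothing to validate against: escalate to a human.
        "status": "ready_for_attorney_review" if precedents else "escalate_no_precedent",
        "clause_preview": clause_text[:80],
    }

# Usage with a stand-in lookup returning canned templates
templates = {"indemnification": ["Template A", "Template B"]}
result = validate_ai_clause(
    "Supplier shall indemnify Buyer against...", "indemnification",
    lambda t: templates.get(t, []),
)
print(result["status"])  # ready_for_attorney_review
```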
Bias Mitigation
We also examined the issue of bias, a challenge that, unlike hallucinations, cannot be solved with simple fact-checking. AI models are trained on vast datasets, many of which reflect historical inequities or skewed representation. Without safeguards, outputs may inadvertently perpetuate bias in areas like employment law, housing disputes, or even international arbitration, where regional disparities in published awards already exist.
Implementing bias detection protocols and involving diverse review teams can go a long way towards mitigating this risk. Some firms are beginning to conduct regular audits of AI outputs to see whether certain demographics or jurisdictions are consistently underrepresented or mischaracterized. Others are working with vendors who can provide transparency about training data and allow custom fine-tuning on curated legal sources.
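As a rough illustration of what such an audit might involve, the sketch below counts how often each jurisdiction appears in a sample of AI-cited authorities and flags those falling well below a baseline share. The data, labels, and threshold are all hypothetical.

```python
from collections import Counter

def jurisdiction_audit(ai_citations, baseline_share, tolerance=0.5):
    """Flag jurisdictions whose share of AI-cited authorities falls well
    below their share in a baseline corpus. Inputs are illustrative:
    `ai_citations` is a list of jurisdiction labels extracted from AI
    outputs; `baseline_share` maps jurisdiction -> expected share."""
    counts = Counter(ai_citations)
    total = sum(counts.values())
    flags = []
    for jurisdiction, expected in baseline_share.items():
        observed = counts.get(jurisdiction, 0) / total if total else 0.0
        if observed < expected * tolerance:  # e.g., under half the expected share
            flags.append((jurisdiction, observed, expected))
    return flags

# Hypothetical sample: awards cited by an AI assistant vs. the firm's baseline
citations = ["EU"] * 70 + ["US"] * 25 + ["LATAM"] * 5
baseline = {"EU": 0.5, "US": 0.3, "LATAM": 0.2}
print(jurisdiction_audit(citations, baseline))  # flags LATAM as underrepresented
```

Even a crude count like this can surface patterns worth escalating to a human review team.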
Confidentiality
For many law firms, AI governance strategies are in early development stages, leaving vulnerabilities that extend beyond traditional data breach concerns.
AI systems process information in ways that can create unexpected connections. The risk most firms miss is that AI tools can inadvertently provide information from one attorney’s input to another’s. This isn’t a hypothetical concern; it’s an architectural reality of how many AI systems function.
The response to this is twofold: choosing the right vendors and enforcing the right internal protocols. Enterprise-grade AI solutions should provide separation of client data, encryption, audit logs, and compliance with frameworks like SOC 2 and ISO 27001. Internally, firms should treat AI inputs with the same diligence they apply to email or cloud storage and “leverage internal ethical walls”: not every attorney should have the same level of access, and role-based permissions should be enforced.
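A minimal sketch of how role-based permissions and an ethical wall might gate access to an AI tool appears below; the roles, users, matters, and wall list are purely hypothetical.

```python
# Hypothetical ethical-wall check run before an attorney's prompt reaches an AI tool.
ETHICAL_WALLS = {
    "matter-123": {"blocked_users": {"associate.b"}},  # e.g., a conflicted attorney
}
ROLE_PERMISSIONS = {
    "partner": {"research", "drafting", "due_diligence"},
    "associate": {"research", "drafting"},
    "staff": {"research"},
}

def may_use_ai(user: str, role: str, matter_id: str, task: str) -> bool:
    """Allow the request only if the role permits the task and the user
    is not walled off from the matter."""
    wall = ETHICAL_WALLS.get(matter_id, {})
    if user in wall.get("blocked_users", set()):
        return False
    return task in ROLE_PERMISSIONS.get(role, set())

assert may_use_ai("associate.a", "associate", "matter-123", "drafting") is True
assert may_use_ai("associate.b", "associate", "matter-123", "research") is False
assert may_use_ai("staff.c", "staff", "matter-999", "due_diligence") is False
```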
Several firms have gone a step further by mandating NDAs with vendors that explicitly cover AI-related risks, ensuring clients’ confidential information never becomes model training data.
Informed Consent
Increasingly, clients are asking whether AI is being used on their matters, what data is being processed, and what safeguards exist. Failing to disclose AI use risks eroding their trust.
Forward-thinking firms are beginning to draft AI usage policies that are shared with clients upfront. These documents outline when AI may be used (e.g., for research, summarization, or first-draft generation), what data will and will not be processed, and how confidentiality is safeguarded. These disclosures often build confidence by showing that the firm has thought through potential implications.
Governance
Successful AI adoption requires treating AI as a fundamental shift in how legal services are delivered. This shift necessitates new policies, new training programs, new risk management approaches, and new client communication strategies.
Firms without governance frameworks tend to adopt AI as a series of individual decisions: one attorney tries a tool, another department experiments with a different platform. This fragmented approach creates inconsistent practices, increases security risks, and prevents firms from capturing the full benefits of systematic AI integration.
Effective governance starts with recognizing that AI implementation is an enterprise-wide transformation that requires coordinated oversight. Establish an AI governance board with real authority to make binding decisions about AI adoption, budget allocation for AI tools and training, and enforcement of AI policies across all practice groups. Create clear escalation procedures for AI-related ethical concerns that provide attorneys with concrete steps when they encounter problems, and regularly update your policies to reflect technological and regulatory changes.
The Path Forward for Ethics in AI
What emerged from our panel is that while tools will continue to evolve, the real determinant of ethical AI use is culture. A firm that fosters accountability will be better equipped to integrate new technologies responsibly. By anchoring adoption in competence, verification, bias awareness, confidentiality, informed consent, and effective governance, firms can innovate confidently without sacrificing trust and integrity.
About Jus Mundi
Founded in 2019 and recognized as a mission-led company, Jus Mundi is a pioneer in the legal technology industry dedicated to powering global justice through artificial intelligence. Headquartered in Paris, with additional offices in New York, London, and Singapore, Jus Mundi serves over 150,000 users from law firms, multinational corporations, governmental bodies, and academic institutions in more than 80 countries. Through its proprietary AI technology, Jus Mundi provides global legal intelligence, data-driven arbitration professional selection, and business development services.
Press Contact
Helene Maïo, Senior Digital Marketing Manager, Jus Mundi – [email protected]
*The views and opinions expressed by authors are theirs and do not necessarily reflect those of their organizations, employers, or Daily Jus, Jus Mundi, or Jus Connect.