Skip to main content
Industry Security

AI Security for Law Firms: Protecting Client Confidentiality

BT

BeyondScale Team

AI Security Team

15 min read

A federal court in New York ruled in February 2026 that communications a defendant entered into a consumer AI platform were not protected by attorney-client privilege. The platform's terms of service permitted data collection and disclosure to government regulators. One prompt to the wrong AI tool and a decade of privilege protection evaporated.

Law firms now face a security challenge with no clear precedent: AI tools that promise to save thousands of billable hours also create new pathways for privilege waiver, client data exposure, and regulatory sanction. This guide explains the four AI security risks specific to legal practice, what bar associations require, and the controls that protect client confidentiality without blocking your attorneys from using the tools that make them competitive.

Key Takeaways

    • The average law firm data breach now costs $5.08 million, and 45 ransomware attacks hit law firms in 2024, a record high
    • US v. Heppner (SDNY, February 2026) established that consumer AI platforms can destroy attorney-client privilege through their own data retention terms
    • ABA Formal Opinion 512 requires informed client consent (not engagement letter boilerplate) before client confidences are input into any AI tool
    • 69% of individual legal professionals now use AI, but 54% of firms have no formal AI policy, creating a structural shadow AI problem
    • Indirect prompt injection in document review is ranked the #1 LLM attack vector by OWASP; an adversary can embed malicious instructions inside a discovery document that corrupts your AI's legal analysis
    • Purpose-built legal AI platforms (Harvey, CoCounsel, Lexis+ AI) with zero-retention architectures are not equivalent to consumer AI tools from a privilege and security standpoint

Why Law Firms Are High-Value AI Attack Targets

Law firms hold a concentration of valuable, sensitive information that exists almost nowhere else: privileged case strategy, M&A deal terms before public announcement, litigation exposure assessments, witness identities, and confidential client financial data. This makes them targets for both opportunistic ransomware and sophisticated adversaries seeking competitive intelligence.

The numbers reflect that reality. The average cost of a law firm data breach reached $5.08 million in 2024, up more than 10% year-over-year. Ransomware attacks hit 45 firms in 2024, a record, compromising 1.5 million records and costing firms an average of $1.85 million per incident in remediation. Orrick, Herrington & Sutcliffe settled four consolidated class action lawsuits for $8 million after attackers accessed files belonging to 637,000 individuals. Berkeley Research Group was hit during a $700 million LBO, compromising intelligence across hundreds of concurrent deals.

AI adoption has expanded the attack surface at a moment when governance has not kept pace. Individual legal professional AI adoption reached 69% by early 2026, more than double the 31% recorded in 2025. Firm-wide adoption is at 42%. But 54% of firms still provide no AI training, and an equal proportion have no formal AI policy. The tools are in the building. The controls are not.

The AI Tools Law Firms Are Deploying

Understanding the security implications requires knowing the actual tools in use:

Harvey AI is the dominant purpose-built legal AI platform, now valued at $11 billion. CMS Law deployed Harvey across 7,000 lawyers in 50+ countries. Harvey's security posture is meaningfully better than consumer alternatives: SOC 2 Type II certified, ISO 27001, annual third-party penetration testing (Schellman, NCC Group, Bishop Fox), contractual prohibition on using client inputs for model training, and zero data retention with model providers. These controls matter for privilege analysis.

Thomson Reuters CoCounsel / Westlaw Precision AI integrates legal research with document drafting. Thomson Reuters holds ISO 42001 certification (the international AI management systems standard) and operates a zero-retention architecture for client data. A 2025 Stanford study in the Journal of Empirical Legal Studies found Westlaw AI hallucinated 33% of the time, which is a competence issue under ABA Rule 1.1 regardless of confidentiality controls.

Lexis+ AI integrates with Microsoft 365 Copilot via the Lexis Create Plugin, which brings AI-assisted drafting directly into Word. The Microsoft integration introduces the Copilot security surface.

Microsoft 365 Copilot is the most widely deployed AI tool in law firms simply because most firms already run M365. It carries a specific risk that legal AI platforms do not: Copilot inherits every permission the signed-in user holds. Research has found that 16% of business-critical data in M365 environments is overshared, with an average of 802,000 files per organization accessible to anyone with organizational access. The U.S. House of Representatives banned Copilot for congressional staff over data security concerns. Deploying Copilot without a Purview sensitivity labeling and permission audit is a material risk for any firm.

Consumer ChatGPT, Claude.ai, Gemini are not appropriate for any client-related work. Prior to enterprise tiers, these platforms retained prompts for model improvement and reserved the right to share data with third parties. The Heppner ruling directly addresses this scenario.

Risk 1: Attorney-Client Privilege Waiver Through AI Vendor Terms

The most direct legal threat is privilege destruction. In United States v. Heppner (S.D.N.Y., February 2026), Judge Jed S. Rakoff ruled that communications a criminal defendant sent to a consumer version of Anthropic's Claude were not protected by attorney-client privilege. Three reasons drove the outcome:

  • The AI tool is not an attorney and bears no fiduciary duty
  • The platform's privacy policy explicitly permitted data collection, model training, and disclosure to third parties including government regulators
  • The defendant was not using the tool at counsel's direction
  • The New York State Bar Association noted this ruling "shook the legal community." The Duane Morris firm issued a March 2026 alert flagging vendor terms that allow government disclosure as a specific privilege risk. The ABA's take in Formal Opinion 512: disclosure of client confidential information to a public AI platform that retains data for training constitutes a waiver, and the standard engagement letter consent clause is not sufficient.

    The practical implication: every AI tool used in connection with a client matter must be evaluated for its data retention terms before use, not after. Zero-retention, no-training contractual guarantees from enterprise-tier tools reduce but do not eliminate the risk. An in-house protocol for obtaining client informed consent to AI use should be part of every firm's standard engagement process.

    Risk 2: Indirect Prompt Injection in Document Review

    Indirect prompt injection is ranked the #1 attack vector in the OWASP Top 10 for LLM Applications 2025. In legal document review, the attack is straightforward and difficult to detect without specific defenses.

    The attack works as follows: an adversary embeds hidden instructions inside a contract, discovery document, deposition transcript, or opposing counsel filing. The instructions are invisible to the human reader but processed by the AI when it ingests the document. When the firm's AI assistant analyzes that document, the embedded instruction can redirect the AI's output. Depending on the system's tool access, it can cause the AI to:

    • Include the opposing party's preferred framing in a privileged case summary
    • Fabricate legal citations that the attorney forwards to a court
    • Attempt to exfiltrate matter details through any tool-calling capability the agent holds
    • Corrupt the firm's AI knowledge base if the document is indexed into a RAG system
    Microsoft identified indirect prompt injection as one of the most used AI attack techniques in 2025. Palo Alto Unit 42 confirmed web-based indirect prompt injection operating autonomously in the wild.

    A law firm's exposure is higher than most enterprises because the source documents are supplied by adversarial parties: opposing counsel, counterparties in deals, and witnesses in litigation. Any document that enters an AI-assisted review pipeline is a potential injection vector.

    The OWASP mitigation baseline: input sanitization and content validation before AI processing; allowlisting trusted document sources; human-in-the-loop review before agentic systems take action on AI-generated analysis; output monitoring for anomalous instructions. For law firms specifically, no AI agent should execute file access, communications, or external actions based solely on analysis of adversarial-sourced documents.

    Risk 3: Shadow AI by Associates

    79% of legal professionals use AI tools in their work. Only 21% of firms have formal AI governance in place. The gap between those two numbers is shadow AI: associates and attorneys using free consumer tools on personal phones, personal laptops, or firm devices to avoid the friction of approved systems or to work around outright bans.

    The shadow AI problem in law firms has a specific dynamic that generic enterprise guidance misses. Associates under time pressure and billing targets use consumer AI to:

    • Summarize deposition transcripts overnight
    • Draft client-facing communications
    • Analyze merger agreements against comparable transactions
    • Research case strategy and look for litigation precedent
    The NC Bar Association noted in January 2026 that banning AI without providing approved alternatives directly causes shadow AI. ALPS Insurance (a legal malpractice carrier) flags shadow AI as an active malpractice and bar complaint trigger. Clio's 2025 research found 44% of firms have not implemented formal governance despite widespread individual use.

    The Heppner analysis applies to every one of these use cases. If an associate summarizes a deposition in free ChatGPT and the platform retains that prompt under terms permitting government disclosure, the privilege analysis is unfavorable. The firm is liable for the associate's conduct under ABA Rule 5.3 (supervision), and the associate may face individual bar discipline for confidentiality violations under Rule 1.6.

    The control is not another ban. It is providing a firm-approved AI tool that meets the privilege and security standards of the matter, training associates on the specific ethics rules that govern AI use, and using network-layer DLP to prevent client data from reaching unauthorized AI endpoints. Zscaler, Microsoft Purview, and Netskope all provide AI traffic monitoring capabilities.

    Risk 4: Agentic AI and Autonomous Access to Privileged Files

    Agentic AI systems are now entering legal workflows. Harvey's agent layer, Thomson Reuters' agentic CoCounsel, and the Epiq Agentic Platform operate in observe-orient-decide-act loops: autonomously reviewing documents, drafting correspondence, extracting data, updating case management systems, and interacting with other agents through standardized protocols.

    The security posture of agentic AI is fundamentally different from a query-response legal research tool. An agent granted persistent access to iManage or NetDocs can access thousands of matter files across every client. If that agent is compromised through prompt injection or a jailbreak attack, an adversary can exfiltrate files at scale without triggering the access-volume alerts that traditional SIEM rules would catch.

    Multi-turn jailbreak attacks against open-weight agentic models achieved 92% success rates in 2025 testing. An attorney who directs an agent to review all correspondence on a specific matter has, in practice, created an autonomous process that holds broad file access for an indefinite period.

    The privilege dimension is equally concerning. When an AI agent takes an action autonomously, the question of whether attorney-client privilege and work product protection cover that action depends on whether a lawyer directed it and whether documentation of that direction exists. The Heppner reasoning about AI tools lacking fiduciary duties applies to agentic systems operating beyond direct attorney oversight.

    Controls for agentic AI in law firms: principle of least-authority for agent permissions (an agent reviewing a single matter should not hold firm-wide document access); human approval requirements before any agent action involving privileged matter files or external communications; complete audit logs of all agent actions with matter and client identifiers; and documented attorney direction for any agent task where privilege protection is anticipated.

    Bar Association Ethics: What the Rules Require

    The following ethics obligations apply in any US jurisdiction where an attorney uses AI with client data:

    ABA Formal Opinion 512 (July 2024) is the baseline. Key obligations:

    • Competence (Rule 1.1): Lawyers must understand how the AI tool handles data, including data retention policies, training data use, and security certifications. This understanding must be periodically updated as tools change.
    • Confidentiality (Rule 1.6): Inputting client confidential information into a self-learning AI tool requires informed client consent, not boilerplate engagement letter language. The lawyer must explain the specific data risks in terms the client understands.
    • Supervision (Rule 5.3): Lawyers are responsible for the work product AI generates and for supervising non-lawyer staff who use AI tools on client matters.
    • Cross-matter contamination: When multiple firm attorneys use the same AI tool, there is a risk the tool surfaces one client's confidential information in responses about another matter. This requires governance at the tool and data-segmentation level.
    State bar guidance has accelerated since Opinion 512. Florida Opinion 24-1 requires reasonable precautions to protect client information plus disclosure of AI use when it affects billing. New York Formal Opinion 2025-6 requires client consent and confidentiality safeguards before using AI to record or transcribe client meetings. California's Board of Trustees issued Practical Guidance in November 2023 requiring competence in LLM limitations before use.

    UK and EU firms operate under SRA Standards and Regulations Rule 5.1 (client confidentiality regardless of whether work is done by a solicitor or technology), GDPR cross-border transfer restrictions on cloud AI processing, and Law Society guidance recommending against inputting confidential information into tools without direct control over development and deployment. The Jeffries v. Harcros Chemicals case (2026) restricted eDiscovery to closed AI tools specifically citing GDPR.

    Security Controls Checklist for Law Firms

    The following controls address the four risk areas above. They map to NIST AI RMF 1.0 functions (Govern, Map, Measure, Manage) and the OWASP LLM Top 10.

    Governance

    • Establish an AI governance committee with cross-functional representation (IT/security, ethics, compliance, knowledge management, risk)
    • Publish a formal AI use policy: approved tool list, prohibited tools, matter-level restrictions for sensitive transactions, mandatory human review requirements for AI-generated work product
    • Document attorney-directed use of any agentic AI system
    Vendor Due Diligence (required before deployment)
    • Minimum: SOC 2 Type II annual audit, ISO 27001 certification, contractual zero-retention and no-training-with-client-data clause, annual third-party penetration test report, incident response SLA of 72 hours or less, subprocessor disclosure
    • Review all AI vendor terms for data collection clauses that could enable government disclosure (apply Heppner analysis)
    Microsoft 365 Copilot (if deployed)
    • Run SharePoint permission audit before enabling Copilot; eliminate "everyone in organization" default sharing
    • Apply Microsoft Purview sensitivity labels to all matter files; configure Purview DLP for Copilot to block processing of labeled documents
    • Enable Microsoft Purview Data Security Posture Management for Copilot
    Shadow AI Prevention
    • Deploy network-layer DLP (Zscaler, Netskope, or Microsoft Purview) to detect and block client matter data going to unauthorized AI endpoints from firm devices and networks
    • Provide attorneys with approved AI alternatives before enforcing any AI ban
    Document Review and Agentic AI
    • Validate all AI-processed documents before analysis, particularly documents sourced from adversarial parties
    • Require human approval before any AI agent executes file access, communications, or external actions
    • Implement principle of least-authority for agent tool permissions
    • Maintain complete audit logs of all AI interactions with matter and client identifiers
    Client Consent
    • Develop a formal AI disclosure and consent process for client engagement; per ABA Opinion 512, engagement letter boilerplate is not sufficient
    • Document informed consent before AI tools process client confidential information on any matter
    Training
    • Provide associate and attorney training covering ABA Formal Opinion 512, applicable state bar opinions, the distinction between consumer and enterprise AI tools, and shadow AI risks
    • Update training when relevant bar opinions or vendor data handling terms change

    What a Law Firm AI Security Assessment Covers

    A formal AI security assessment for a law firm covers four domains that standard IT security assessments do not:

  • Privilege risk analysis: Reviewing vendor data handling terms against the Heppner framework; documenting where informed consent processes are absent; assessing whether current tools meet the contractual standards required for privilege protection
  • Configuration review: M365 Copilot permission sprawl, Purview DLP gaps, SharePoint oversharing, AI audit log coverage
  • Agentic AI posture: Mapping what agents have persistent access to, evaluating tool permissions against least-authority principles, reviewing audit log coverage of agent actions
  • Shadow AI discovery: Network traffic analysis to identify client data going to unauthorized AI endpoints; gap analysis between firm policy and actual practice
  • The output is a prioritized remediation plan mapped to ABA ethics obligations and applicable bar association guidance, not generic IT security findings.

    Protecting Client Trust in the Age of LLMs

    The stakes for law firm AI security are distinct from every other industry. A data breach at a retail company costs money and creates regulatory exposure. A breach at a law firm also destroys the confidentiality that is the foundation of every client relationship, potentially waives privileges that took years to build, and can trigger bar complaints against individual attorneys.

    The tools that are reshaping legal practice, including Harvey, CoCounsel, and Copilot, can be deployed securely. The path requires proper vendor evaluation, governance controls that address the specific risks of legal practice, and a client consent process that satisfies the intent of ABA Formal Opinion 512 and state bar guidance.

    Law firms that invest in AI security now will be better positioned as bar associations accelerate formal rule-making and courts develop clearer standards for AI-related privilege claims.

    Ready to assess your firm's AI security posture? Book an AI security assessment tailored for law firms, or run a free Securetom scan to identify AI-related exposure in your current environment.

    For broader context on AI security controls, see our guides on indirect prompt injection defense and AI security for enterprise.

    Share this article:
    Industry Security
    BT

    BeyondScale Team

    AI Security Team, BeyondScale Technologies

    Security researcher and engineer at BeyondScale Technologies, an ISO 27001 certified AI cybersecurity firm.

    Want to know your AI security posture? Run a free Securetom scan in 60 seconds.

    Start Free Scan

    Ready to Secure Your AI Systems?

    Get a comprehensive security assessment of your AI infrastructure.

    Book a Meeting