
AI and Attorney-Client Privilege After US v. Heppner: What Lawyers Must Know (2026)

US v. Heppner held AI chats are not privileged. Learn the three-part test Judge Rakoff applied, how Heppner differs from Gilbarco, and what lawyers should do.

AI and Attorney-Client Privilege: What US v. Heppner Held

TL;DR: In United States v. Heppner, Judge Jed S. Rakoff of the Southern District of New York ruled on February 10, 2026, that a criminal defendant's chats with a public version of Anthropic's Claude were not protected by attorney-client privilege or the work product doctrine. The court applied the traditional three-part privilege test and found that public AI tools fail every prong: the AI is not an attorney, the platform's privacy policy disclaims confidentiality, and independent client use lacks counsel's direction. The ruling does not ban AI in legal practice, but it sets a clear marker: anything a client types into a public chatbot about their case is potentially discoverable.

For lawyers, the message is operational. Public consumer AI tools cannot be treated as confidential channels for client communications. Enterprise tiers with contractual confidentiality protections may fare better, but no court has directly held that enterprise AI is privileged. Until that happens, the safest postures are explicit counsel direction, contractually confidential tools, and — for any AI that touches privileged audio or text — architectures that keep the data on the lawyer's own machine.

Key Takeaway

Public AI chats failed all three prongs of the attorney-client privilege test in Heppner. The AI is not an attorney, the privacy policy disclaimed confidentiality, and independent client use lacked counsel's direction.

Key Takeaways: Heppner at a Glance

| Element | What the Court Said | Why It Matters |
| --- | --- | --- |
| Court & Judge | S.D.N.Y., Judge Jed S. Rakoff | Influential federal trial court; Rakoff is widely cited on evidence and procedure |
| Decision date | Bench ruling Feb 10, 2026; written memorandum Feb 17, 2026 | First federal ruling squarely on AI chat privilege |
| AI tool involved | Public version of Anthropic's Claude | Consumer tier, not enterprise — privacy policy permitted training and disclosure |
| Documents at issue | ~31 AI-generated reports outlining defense strategy | Created independently by defendant, not at counsel's direction |
| Privilege ruling | Not protected by attorney-client privilege | Failed all three prongs of the traditional test |
| Work product ruling | Not protected by work product doctrine | Not prepared by or at the direction of counsel |
| Companion case | Gilbarco (E.D. Mich., Feb 10, 2026) — work product preserved for AI-assisted pro se filings | AI-as-tool reasoning may save work product even when privilege is lost |

Disclosure: Voibe is our product. We make this article educational first; the practical recommendations apply regardless of which dictation or AI vendor a lawyer chooses.

The Heppner Case Background: A Defendant, a Subpoena, and 31 Claude Documents

The Heppner case background illustrates how casual AI use becomes evidence. Bradley Heppner was indicted in October 2025 on federal securities and wire fraud charges and arrested in November 2025. According to filings summarized by the Debevoise Data Blog and the Inside Privacy analysis from Covington, federal agents seized approximately 31 documents from his electronic devices that had been created using a publicly available version of Anthropic's Claude.

The timing matters. Heppner had already engaged counsel and received a grand jury subpoena before he turned to Claude. He used the tool to generate reports analyzing the government's likely theories, weighing potential defenses, and outlining legal arguments about facts and law. He then included the AI-generated material on his privilege log. The Government moved to compel production. Judge Rakoff granted the motion.

Two facts about how Heppner used Claude framed everything that followed. First, he consulted the tool independently — his lawyers did not instruct him to use Claude, supervise the prompts, or direct the analysis. Second, he used the consumer-facing version of Claude, governed by Anthropic's standard consumer privacy policy. Both choices proved decisive when the court ran the privilege analysis.

Info

The actual order is publicly available as a PDF: see Reuters' hosted copy of Judge Rakoff's order at fingfx.thomsonreuters.com. The original Hacker News discussion is at news.ycombinator.com/item?id=47778920.

Judge Rakoff's Three-Part Privilege Test

Judge Rakoff's three-part privilege test is the traditional federal common-law standard, applied unchanged to AI: communications are privileged only when they are (1) between a client and his or her attorney, (2) intended to be and kept confidential, and (3) for the purpose of obtaining or providing legal advice. The Inside Privacy summary records the court's exact framing of these three requirements.

The court found Heppner's Claude chats failed every prong. The novelty was not the test — it was applying decades-old privilege doctrine to an AI exchange and finding nothing fit:

  • Prong 1 — Attorney involvement. Claude is not an attorney. It holds no law license, owes no fiduciary duty, and cannot form a representation. The communication was between Heppner and a third-party software provider, not between Heppner and counsel.
  • Prong 2 — Confidentiality. Anthropic's consumer privacy policy permits collection of prompts and outputs for model training and reserves the right to disclose user data to governmental regulatory authorities and other third parties. The court found this defeated any reasonable expectation of confidentiality. As Inside Privacy summarizes the holding, the platform's terms made the inputs effectively "equivalent to discussing legal issues with a third party."
  • Prong 3 — Purpose of legal advice. Even if the first two prongs had been satisfied, Heppner was not seeking legal advice from counsel — he was independently consulting a software tool that explicitly disclaims providing legal advice. The lack of attorney direction broke the chain.

The court also rejected the retroactive privilege theory: simply forwarding non-privileged AI-generated documents to a lawyer afterward does not transform them into privileged communications. As the Chapman and Cutler analysis notes, this distinguishes Heppner from Upjohn-style fact-pattern questions: the AI documents never achieved privileged status in the first place, so there was nothing to preserve.

Why Anthropic's Privacy Policy Defeated Confidentiality

Anthropic's privacy policy defeated the confidentiality prong because it expressly contemplates the kind of third-party access that privilege requires the parties to prevent. The court relied on three categories of disclosure that the policy permits: collection of prompts and outputs for model training, sharing with service providers and affiliates, and disclosure to governmental regulatory authorities. The Proskauer Rose alert notes that consumer-tier terms of this kind generally provide "no reasonable expectation of confidentiality."

This is the operative legal principle: confidentiality under privilege law is not just a subjective hope. It is an objectively reasonable expectation, judged by the agreed terms governing the channel. When a vendor's terms reserve broad rights to read, retain, train on, or hand over content, no party can reasonably expect that content to stay private — regardless of whether the vendor in fact exercises those rights.

That principle generalizes beyond Claude. Most consumer AI products — ChatGPT's free tier, Google Gemini, Microsoft Copilot's consumer offering, Meta AI, Perplexity's free tier — operate under broadly similar policies. Each reserves rights to use inputs for training, share with subprocessors, and respond to law enforcement. Under the Heppner reasoning, every one of those channels carries the same privilege risk.

The contrast is enterprise tiers. Anthropic's commercial agreement for API and enterprise customers, OpenAI's enterprise terms, and Google's Workspace AI all typically include no-training commitments and stricter confidentiality undertakings. As Perkins Coie's analysis observes, enterprise tools "should bolster privilege claims" — but no court has yet directly held that enterprise AI is privileged, and even an enterprise tier does not save a workflow where the client uses the AI independently of counsel.

Heppner vs. Gilbarco: Two AI Rulings, Two Different Outcomes

Heppner and Gilbarco were decided within a week of each other and reached different conclusions because they involved different protections and different facts. Understanding both is essential for any lawyer relying on AI in litigation.

In Gilbarco (E.D. Mich., Feb. 10, 2026), summarized in the same Perkins Coie analysis, a pro se plaintiff used generative AI tools to help prepare litigation materials. The defendants argued the materials lost work product protection because they had been disclosed to the AI provider. The court disagreed, treating the AI as a tool rather than a third-party adversary. Work product waiver requires disclosure to an adversary or under circumstances likely to enable an adversary to obtain the material — using software does not, on its own, meet that bar.

The two rulings together suggest a working principle: privilege is harder to preserve than work product when AI is involved. Privilege requires confidentiality plus an attorney relationship, and public AI breaks both. Work product turns on adversarial disclosure, which AI does not automatically create.

| Factor | Heppner (S.D.N.Y.) | Gilbarco (E.D. Mich.) |
| --- | --- | --- |
| Protection asserted | Attorney-client privilege + work product | Work product |
| AI tool | Public Claude (consumer tier) | Generative AI used in pro se drafting |
| User | Represented criminal defendant | Pro se litigant |
| Counsel direction | None — independent use | Litigant-initiated work product |
| Court's view of AI | Third party — disclosure defeats confidentiality | Tool — not an adversary, no waiver |
| Outcome | No privilege, no work product | Work product preserved |
| Date | Bench Feb 10, 2026; written Feb 17, 2026 | Feb 10, 2026 |

The two cases are reconcilable. Heppner is about creating privileged communications — and AI is not the right channel for that. Gilbarco is about whether existing work product is waived by AI use — and it generally is not, absent adversary involvement. A lawyer who keeps both holdings in mind can use AI productively without forfeiting either protection.

The AI Privilege Risk Spectrum: Public, Enterprise, On-Device

The AI privilege risk spectrum runs from highest risk (public consumer chatbots) to lowest risk (on-device tools that never transmit data). Where a tool sits on this spectrum determines how much weight it can carry in a privileged workflow.

Public consumer AI (highest risk). ChatGPT free tier, Claude.ai consumer, Gemini consumer, Meta AI, Perplexity free, Copilot consumer. Privacy policies permit training and broad disclosure. Under Heppner, these channels do not support privilege. Use them for non-privileged research, public-domain analysis, and sanitized drafting only.

Enterprise AI (medium risk, improving). Claude for Work, ChatGPT Enterprise/Team, Gemini for Workspace, Microsoft 365 Copilot. Contractual no-training commitments, stricter confidentiality, and DPAs are typical. Heppner's reasoning is consistent with stronger privilege claims here, but courts have not yet directly ruled. Pair with explicit attorney direction, written client consent under ABA Formal Opinion 512, and documented prompts that note the work is being done at counsel's direction.

On-device AI (lowest risk). Locally hosted models — open-source LLMs running on a firm's own servers, Apple Intelligence's on-device tier, on-device Whisper for transcription, on-device dictation tools. No data leaves the lawyer's machine, so the third-party disclosure problem the Heppner court identified does not arise at all. The trade-off is capability — on-device models are typically smaller, narrower, or more specialized than frontier cloud models — but they are well-suited to specific tasks like dictation, summarization, and structured drafting. For broader background, see our explainer on cloud vs local dictation and on why offline processing matters.

What Heppner Means for Lawyers' Workflows: A Practical Checklist

What Heppner means for lawyers' workflows is concrete and immediate: tighten how clients interact with AI, document attorney direction, and pick tools that match the work. Below is a practical checklist drawn from the post-Heppner commentary by Husch Blackwell, Ogletree Deakins, and the New York State Bar Association:

  1. Update intake and engagement letters. Add explicit AI provisions: ask clients whether they have used any AI tools to discuss the matter, instruct them not to use public AI for case-related queries, and document the firm's approved AI tools.
  2. Tell clients in writing not to use public chatbots about their case. Husch Blackwell's recommended client message: any AI use about the matter should occur only at counsel's direction with approved tools. Independent client AI use creates discoverable evidence.
  3. Audit the firm's existing AI footprint. Identify which lawyers and staff use which AI tools, on which tiers, for what kinds of work. Migrate any privileged-content workflows off consumer tiers.
  4. Document attorney direction in the prompt itself. When AI is used at counsel's direction, prompts should state that fact. Save the prompt history. This supports both privilege and work product claims if challenged later.
  5. Choose tools by sensitivity tier. Public consumer AI for non-privileged research and public-domain drafting. Enterprise AI with no-training terms for sensitive but non-privileged work. On-device tools for anything that touches client confidences, deposition prep, draft motions, or privileged audio.
  6. Add AI to discovery and depositions. Per Husch Blackwell's guidance: ask deponents about AI usage, request AI conversation histories where appropriate, and consider whether opposing parties' AI usage has waived their own privilege.
  7. Train compliance and operations staff. Paralegals, secretaries, and operations staff often touch privileged content but may not understand the AI privilege implications. Brief them on the firm's AI policy.
  8. Get informed consent under ABA Opinion 512. If the firm uses AI on client matters, obtain informed consent that meets the ABA standard — boilerplate engagement letter language is not enough.

The throughline is operational discipline. Heppner did not change privilege doctrine. It applied the existing doctrine to a new technology and showed that the doctrine has teeth. Lawyers who treat AI like any other communications channel — choosing it based on confidentiality terms, looping counsel into the workflow, and documenting the work — can use AI productively without putting privilege at risk.

The Hidden Privilege Risk: AI Dictation and Voice Transcription Tools

The hidden privilege risk most post-Heppner commentary misses is voice. The Heppner ruling focuses on text chats, but the court's reasoning generalizes to any AI tool that processes privileged content under terms permitting third-party access — including the AI dictation, transcription, and meeting-summary tools that have become routine in modern legal practice.

Consider how voice tools touch privileged content in a typical practice:

  • Dictation of memos, motions, and case notes. A lawyer dictates a privileged work product memo. If the dictation app sends audio to a cloud transcription service whose terms permit training or third-party disclosure, the same Heppner problem applies — this time to the audio recording rather than a text prompt.
  • AI meeting note-takers on client calls. Otter, Fireflies, and similar tools join client meetings and transmit audio to cloud servers for transcription and summary. The terms of service typically permit broad data use.
  • Voice memos transcribed by cloud apps. A lawyer dictates a voice memo about a deposition strategy on the way back from court. If the transcription happens in the cloud, the audio of that strategy session is on a third-party server.
  • Browser-based dictation and Web Speech APIs. Chrome's built-in dictation has typically routed audio to Google's servers for recognition, and most browser-based dictation works the same way.

The same three-prong analysis applies. The audio is shared with a third party that is neither the attorney nor the client. The vendor's terms of service often disclaim confidentiality. And even when counsel directs the dictation, the confidentiality prong still fails. Under Heppner's logic, transmitting privileged audio to a vendor whose terms permit broad use weakens any later claim that the recording or transcript is privileged.

The architectural fix is the same as for text AI: keep the processing on the lawyer's own machine. On-device dictation tools like Voibe run OpenAI's Whisper models locally on Apple Silicon Macs. Audio is converted to text in memory on the lawyer's M1, M2, M3, or M4 chip and discarded immediately. No audio leaves the device, no transcript leaves the device, and no vendor terms of service are implicated. The third-party disclosure problem the Heppner court identified does not arise because there is no third party.

For lawyers evaluating dictation tools through the Heppner lens, see our deeper guides on dictation software for lawyers, cloud vs local dictation, and voice data privacy. For HIPAA-adjacent considerations applicable to lawyers handling medical-legal matters, see our HIPAA dictation guide. And for the broader case for offline processing, see why offline dictation matters.

Tip

Quick Heppner audit for voice tools: (1) Does the dictation or transcription app transmit audio off the device? (2) Do the terms of service permit training, sharing with subprocessors, or disclosure to authorities? (3) If yes to either, treat it like a public AI tool — do not use it for privileged content.
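For firms auditing many tools at once, the three questions above reduce to a simple decision rule, which can be sketched in a few lines. This is an illustrative sketch only, not legal advice; the tool names and attributes below are hypothetical examples, and each vendor's actual terms must be checked before relying on any classification.

```python
from dataclasses import dataclass

@dataclass
class VoiceTool:
    name: str
    transmits_off_device: bool    # Q1: does audio leave the lawyer's machine?
    terms_permit_broad_use: bool  # Q2: training, subprocessor sharing, or disclosure?

def heppner_risk(tool: VoiceTool) -> str:
    """Apply the quick audit: if audio leaves the device OR the vendor's terms
    permit broad use, treat the tool like a public AI tool."""
    if tool.transmits_off_device or tool.terms_permit_broad_use:
        return "do-not-use-for-privileged-content"
    return "suitable-for-privileged-content"

# Hypothetical examples -- verify against each vendor's real terms.
cloud_notetaker = VoiceTool("cloud meeting note-taker", True, True)
local_dictation = VoiceTool("on-device dictation", False, False)

print(heppner_risk(cloud_notetaker))  # do-not-use-for-privileged-content
print(heppner_risk(local_dictation))  # suitable-for-privileged-content
```

Note that the rule is deliberately conservative: a "yes" to either question is disqualifying, mirroring the tip's guidance that either off-device transmission or permissive terms is enough to create the Heppner problem.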

Frequently Asked Questions About AI and Attorney-Client Privilege

The Heppner Ruling: Basics

Q: Is US v. Heppner binding on other federal courts? No. As an S.D.N.Y. district court ruling, Heppner is not binding on other district courts or other circuits. It is, however, highly persuasive given Judge Rakoff's stature in evidence and federal procedure, the clarity of the reasoning, and the absence of contrary authority. Most federal courts addressing similar facts will likely reach the same result.

Q: Does Heppner apply in state courts? Heppner applies federal common law on attorney-client privilege. Most state privilege rules are functionally similar, requiring an attorney-client relationship, confidentiality, and a legal advice purpose. State courts addressing AI privilege questions are likely to follow analogous reasoning, though the specific contours of state privilege law vary.

Q: What is the exact citation? United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026) (Rakoff, J.). The bench ruling was delivered February 10, 2026, with a written memorandum following on February 17, 2026. The order is publicly available; Reuters hosts a copy of Judge Rakoff's order.

Tool Choice and Compliance

Q: Are enterprise versions of public chatbots safe to use? Enterprise tiers (Claude for Work, ChatGPT Enterprise, Microsoft 365 Copilot) typically include no-training commitments and stricter confidentiality terms. Multiple post-Heppner analyses suggest these should bolster privilege claims, but no court has yet directly held enterprise AI is privileged. The safest posture combines an enterprise tool with explicit attorney direction, written client consent under ABA Opinion 512, and documented prompts.

Q: What about open-source local LLMs? Locally hosted models — Llama, Mistral, GPT-OSS, or other open-weights models running on a firm's own hardware — eliminate the third-party disclosure problem because the model never communicates with a vendor. They are an attractive option for privileged work, with the trade-offs being capability, infrastructure, and IT overhead.

Q: Do I need to update my AI policy? Yes. Most law firm AI policies were written before Heppner and treat AI as a general productivity question. Post-Heppner, AI policies should explicitly address consumer-tier prohibitions on privileged work, approved enterprise tools by sensitivity tier, prompt documentation requirements, and client communication standards. See our dictation software for lawyers guide for a tool-by-tool breakdown that maps to these tiers.

Client Communication and Discovery

Q: What should I tell clients about using AI for their case? Tell clients in writing — ideally in the engagement letter and in a follow-up email — not to use any public AI tool (ChatGPT, Claude, Gemini, Copilot, Perplexity, or any consumer chatbot) to discuss the matter, draft case theories, summarize evidence, or analyze legal exposure. Any AI use should happen at counsel's direction with approved tools.

Q: Should I include AI questions in deposition prep? Yes. Per Husch Blackwell's post-Heppner guidance, depositions and discovery should include questions about whether the deponent or party has used AI tools to analyze the matter. Independent AI use by an opposing party may reveal both substantive content and arguably waive their privilege over similar communications.

Q: Can prosecutors get my client's AI chats from the AI provider directly? Generally yes, with appropriate legal process. Anthropic's, OpenAI's, and Google's privacy policies all reserve the right to disclose user data to governmental authorities in response to valid legal process. The Heppner court relied on this disclosure right as part of the confidentiality analysis.

Voice, Dictation, and Adjacent Tools

Q: Are AI meeting note-takers like Otter or Fireflies a Heppner problem? Cloud-based meeting transcription tools that join client calls and transmit audio to vendor servers under broad terms of service raise the same third-party disclosure concern. For privileged calls, either skip the AI note-taker, use an enterprise tier with appropriate contractual confidentiality, or use an on-device alternative.

Q: What about dictating motions or memos with cloud dictation tools? Cloud dictation tools transmit audio of whatever the lawyer dictates — including privileged work product — to vendor servers. Under Heppner's reasoning, this creates the same disclosure problem. On-device dictation tools that process audio locally avoid the issue entirely.

Q: Is Apple Dictation safe for privileged work? Apple Dictation processes most speech on Apple Silicon Macs locally but can fall back to cloud processing for complex requests, and may also share audio samples with Apple if the "Improve Siri & Dictation" setting is enabled. Apple does not sign Business Associate Agreements. For consistent privilege protection, a fully on-device tool is more defensible than Apple's hybrid model. See our Apple Dictation privacy analysis for the detailed comparison.

Conclusion: Heppner as a Forcing Function

The conclusion lawyers should draw from Heppner is straightforward: privilege doctrine has not changed, but the channels lawyers and clients use have. Public AI is a third party. Vendor terms of service govern confidentiality. Independent client use does not produce privileged communications. Forwarding non-privileged AI output to counsel does not retroactively create privilege.

Practically, Heppner is a forcing function for three operational changes:

  • Update client communication. Every new engagement letter and intake conversation should address AI use. Existing clients should receive a written advisory.
  • Tier the tools. Public consumer AI for non-privileged work. Enterprise AI with confidentiality terms and counsel direction for sensitive but non-privileged work. On-device or locally hosted models for anything touching privilege — including dictation, transcription, and meeting capture.
  • Document the workflow. Prompts that note attorney direction, saved chat histories, written client consent, and an updated firm AI policy together create the record needed to defend privilege if it is challenged.

For lawyers thinking through the dictation and voice piece specifically — the area most often overlooked in post-Heppner commentary — Voibe is an on-device dictation tool for Mac that runs OpenAI's Whisper models locally on Apple Silicon. Audio is processed on the lawyer's own M1, M2, M3, or M4 chip and discarded immediately. No audio leaves the device, no transcript leaves the device, no vendor terms of service are implicated. Try Voibe for free, or read our deeper guides on dictation software for lawyers, cloud vs local dictation, and why offline dictation matters.

Heppner is unlikely to be the last AI privilege ruling. The next round of cases will almost certainly address enterprise tiers directly, test the boundaries of work product after Gilbarco, and probably reach voice and meeting-capture tools. The lawyers best positioned for those rulings are the ones who already treat AI like the third-party communications channel it is.

Ready to type 3x faster?

Voibe is the fastest, most private dictation app for Mac. Try it today.