Your AI Conversations Are Not Confidential — And a Federal Court Just Said So

On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled from the bench in United States v. Heppner that documents a criminal defendant generated using the consumer version of Anthropic's Claude were protected by neither the attorney-client privilege nor the work product doctrine. A week later, he issued a written opinion calling it a matter of "nationwide" first impression.

I think parts of the court's reasoning are wrong — or at least underdeveloped — in ways that matter. But the opinion landed on a real problem. Lawyers, clients, and judges are making consequential decisions about AI tools without fully understanding how those tools handle data. Heppner is worth examining less for the doctrine it announces than for the knowledge gap it reveals.

This post lays out what happened in Heppner, explains what I think the opinion gets right and wrong, and then walks through what Anthropic's data-handling policies actually say across Claude's consumer and commercial tiers — the very policies the court relied on but did not examine closely. The same structural divide exists across every major LLM provider, and the legal implications extend well beyond this one case.

What Heppner held

Bradley Heppner, the founder and former CEO of Beneficient, a financial services company, faces a five-count federal indictment for securities fraud, wire fraud, conspiracy, making false statements to auditors, and falsification of records — charges arising from an alleged scheme to defraud investors in the publicly traded company GWG Holdings through self-dealing transactions involving Beneficient. After receiving a grand jury subpoena and learning he was a target of the investigation, but before his November 2025 arrest, Heppner used the consumer version of Claude to analyze his legal exposure and develop defense theories. When federal agents executed a search warrant at his home, they seized numerous documents and electronic devices. Defense counsel later identified approximately thirty-one of the seized materials as AI-generated documents. The government moved for a ruling that the documents were not privileged; Heppner resisted, invoking attorney-client privilege and the work product doctrine.

Judge Rakoff rejected both claims on multiple grounds. On privilege, the court articulated three independent reasons for denial:

First, Claude is not an attorney. It has no law license, owes no fiduciary duties, and cannot form an attorney-client relationship. Privilege requires a "trusting human relationship" with "a licensed professional" — and an AI tool is not one.

Second, Heppner had no reasonable expectation of confidentiality. The court pointed to Anthropic's privacy policy, which stated that user inputs and outputs could be used for model training and disclosed to third parties, including government authorities.

Third — which the court acknowledged "perhaps presents a closer call" — Heppner did not communicate with Claude for the purpose of obtaining legal advice from an attorney. Claude's terms of service disclaim providing legal advice, and Heppner's lawyers neither directed nor supervised his use of the tool. The court noted that had counsel directed Heppner to use Claude, it might have "functioned in a manner akin to a highly trained professional" who could act within the privilege under the Kovel doctrine — but because Heppner acted on his own, the question became whether he intended to obtain legal advice from Claude itself, an intent difficult to credit when Claude's terms expressly disclaim providing any.

On work product, defense counsel conceded that Heppner created the documents "of his own volition" and that the legal team "did not direct" him to use Claude. The court held that materials not prepared by or at the behest of counsel do not qualify as work product — expressly disagreeing with Shih v. Petal Card, Inc., 565 F. Supp. 3d 557 (S.D.N.Y. 2021), which recognized work product protection for a party's own litigation-preparation materials regardless of attorney direction.

Where I think the reasoning falters

Not all of the reasoning falters. The first and third grounds — no attorney-client relationship, no communication for the purpose of obtaining legal advice from an attorney — are sound, and each is independently sufficient to defeat the privilege claim. An AI tool is not a lawyer, and Heppner was not seeking legal advice from an attorney when he typed queries into Claude. Full stop.

The work product holding is correct on these facts — defense counsel conceded that Heppner acted without direction — but the court's reasoning adopted a narrower view of the doctrine than the weight of authority supports. The traditional Second Circuit formulation protects "materials prepared by or at the behest of counsel in anticipation of litigation or for trial," but the civil analog, Fed. R. Civ. P. 26(b)(3)(A), protects materials prepared "by or for another party or its representative" — language broad enough to cover a party acting on its own initiative. The court's express rejection of Shih on this point signals that the question remains open, and future courts should not treat Heppner's narrow formulation as settled.

The confidentiality analysis in the second ground is where things get shaky, and it is the part of the opinion that has generated the most commentary — and the most anxiety.

Judge Rakoff treated Anthropic's consumer privacy policy as establishing that Heppner could have "no reasonable expectation of confidentiality" in his AI conversations. But the court's analysis has significant gaps. The opinion cited an archived version of Anthropic's privacy policy dated February 2025 — a version that predated the August 2025 consumer terms update giving users the ability to control model training. Because Heppner used Claude in 2025 before his November arrest, his conversations may have been governed by either the old or the new terms depending on when they occurred. The court never asked what version of the terms governed Heppner's use, whether he had opted out of training, or what his actual settings were. It treated the broadest possible reading of the consumer terms as conclusive without examining what the user actually agreed to or configured.

This matters because the confidentiality holding — which was not necessary to the result — is the part of the opinion most likely to be cited broadly. And it rests on an incomplete factual record. As the policy comparison below demonstrates, Anthropic's consumer terms create meaningfully different data-handling regimes depending on whether a user has opted in or out of model training. The court did not grapple with that distinction.

There is also a subtler problem. The opinion conflates a platform's contractual permission to use data with the practical likelihood that any human will ever see it. Consumer AI privacy policies reserve broad rights, but the actual probability of a specific conversation being reviewed by a person — absent a safety flag or legal process — is vanishingly low. Whether that distinction should matter for privilege purposes is a genuinely hard question. Heppner does not engage with it.

None of this means the opinion is unimportant. It is the first federal decision to address AI and privilege head-on, and it will shape how courts and litigants think about these issues going forward. But its broadest holding — that consumer AI use necessarily destroys confidentiality — rests on reasoning that future courts should scrutinize carefully.

What the case gets right: a knowledge problem

Where Heppner is most valuable is as a signal. Whatever one thinks of the doctrinal analysis, the case exposes a widespread failure to understand how consumer AI tools handle data. Heppner apparently did not know — or did not care — that his AI conversations were governed by terms that reserved broad data-use rights for the platform provider. His lawyers did not anticipate that their client's independent AI use would create a discovery problem. And the court itself did not dig into the specific settings or tier the defendant used.

This is not an isolated failure. Most lawyers I talk to cannot articulate the difference between a consumer and enterprise AI deployment. Most clients do not read privacy policies. And most courts have not yet had to think carefully about how AI data handling intersects with privilege doctrine.

Heppner should change that — not because its reasoning is airtight, but because it demonstrates what happens when no one in the room understands the technology well enough to ask the right questions.

What Anthropic's policies actually say

Since Heppner turned on Anthropic's terms, this is the right place to start. I went through Anthropic's published policies — the Consumer Terms of Service, the Commercial Terms of Service, the Privacy Policy, and the Privacy Center — to compare what Claude's consumer and commercial tiers actually promise. What follows is a synthesis of that research.

The core divide: consumer terms vs. commercial terms

Anthropic's policies split along two fundamental lines: Consumer Terms (Free, Pro, Max) and Commercial Terms (Team, Enterprise, API, Education, Government). This distinction — not the price paid — determines virtually every data right the user holds. The Commercial Terms state explicitly: "Services under these Terms are not for consumer use. Our consumer offerings (e.g., Claude.ai) are governed by our Consumer Terms of Service instead."

This means a Pro or Max subscriber paying $20 or $100 per month operates under the same legal framework as a free user. Paying more buys additional model access and features, but it does not change how Anthropic treats your data.

Model training: the sharpest divide

For Free, Pro, and Max users, Anthropic may use conversations to train its models. In August 2025, Anthropic updated its consumer terms to give users the ability to control whether their data would be used for model training. Existing users had until October 8, 2025, to accept the new terms and select their preference. The operative contractual language states that Anthropic may use user materials for model training "unless users opt out" — placing the default in Anthropic's favor — though Anthropic's own blog post announcing the change described it as "allowing users on Claude Free, Pro, and Max plans to opt-in for data usage," framing the default in the opposite direction. The tension between the legal text and the public announcement underscores the difficulty of determining any individual user's training status based on the terms alone. Opting out remains available through Claude's settings.

For Team, Enterprise, API, and Education/Government users, Anthropic contractually prohibits itself from training on customer content. The Commercial Terms are unambiguous: "Anthropic may not train models on Customer Content from Services" — with no exceptions and no reliance on user-level toggles.
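
By way of illustration, here is a minimal sketch of what "API" usage looks like in practice, using Anthropic's Python SDK: a programmatic request made under the Commercial Terms rather than a chat typed into the consumer app. The model name and prompt are placeholders, and the sketch assumes an API key issued under a commercially governed account; the point is only that the deployment route, not the subject matter, determines which data-handling regime applies.

```python
# Sketch of a request made under the Commercial Terms via Anthropic's API.
# Assumes ANTHROPIC_API_KEY is set for a commercially governed account.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    messages=[
        # Hypothetical prompt; the same question typed into the consumer app
        # would fall under the Consumer Terms instead.
        {"role": "user", "content": "Summarize the data-retention obligations in this draft vendor agreement."},
    ],
)
print(response.content[0].text)
```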

Data retention: a 60× gap

Retention periods are directly tied to training status for consumer plans, creating a striking disparity:

Consumer users who have opted in to training (or failed to opt out) face retention of up to five years for de-identified conversation data. Consumer users who have opted out see their conversations retained for 30 days before deletion. In either case, content flagged for safety or policy violations can be retained for up to seven years, regardless of the user's training preference.

On the commercial side, API input and output logs are retained for seven days. Enterprise accounts default to 30 days, with the option to negotiate Zero Data Retention — under which inputs and outputs are processed in real time and not stored at all. No consumer plan, regardless of price, offers true zero retention.
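
To make the "60×" in the subheading concrete, the back-of-the-envelope arithmetic, using only the retention figures described above:

```python
# Retention figures as summarized above.
consumer_opted_in_days = 5 * 365   # up to five years of de-identified conversation data
consumer_opted_out_days = 30       # 30-day deletion window after opting out
api_default_days = 7               # default API log retention

print(consumer_opted_in_days / consumer_opted_out_days)  # ~61: the "60x" gap
print(consumer_opted_in_days / api_default_days)         # ~261 relative to the API default
```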

Data ownership and IP

The Commercial Terms contain an unusually strong ownership clause absent from the consumer terms. They provide that the customer "retains all rights to its Inputs, and owns its Outputs," that "Anthropic disclaims any rights it receives to the Customer Content under these Terms," and that Anthropic "hereby assigns to Customer its right, title and interest (if any) in and to Outputs."

Consumer users have no equivalent contractual assignment. Under the consumer framework, Anthropic holds a license to use inputs and outputs for model improvement unless the user opts out.

Data controller vs. data processor

This distinction carries significant weight under GDPR and analogous privacy regimes. For consumer plans, Anthropic acts as the data controller — it determines the purposes and means of processing user data. For Enterprise and API accounts, Anthropic functions as a data processor operating under a Data Processing Addendum, with the commercial customer serving as the controller.

The practical consequence: a consumer user's data is governed by Anthropic's privacy choices. An enterprise customer's data is governed by the customer's own policies, with Anthropic acting under instruction.

Employee access and confidentiality

For consumer plans, Anthropic employees may access conversations only if the user explicitly consents via feedback, or if access is required for Usage Policy enforcement — in which case only the Trust & Safety team may view content on a need-to-know basis.

For commercial plans, customer content is contractually designated as Confidential Information under the Commercial Terms. Anthropic may use it only to exercise its rights under the contract and must protect it with at least the same care it applies to its own confidential information.

Two further protections — Zero Data Retention and HIPAA Business Associate Agreements — are available exclusively on commercial tiers. Under ZDR, inputs and outputs are not stored; the sole exception is User Safety classifier results retained for Usage Policy enforcement. A BAA imposes specific configuration requirements and excludes certain features (web search, for instance, falls outside BAA coverage). Neither protection is available on any consumer plan at any price point.

The comparison distills to a structural reality: consumer Claude users — whether free or paying $100 per month — operate under terms that allow Anthropic to train on their data by default, retain it for up to five years, and act as the data controller with broad discretion. Commercial Claude users operate under a contractual regime that prohibits model training, treats their content as confidential information, assigns them ownership of outputs, and offers zero-retention options.

The pattern holds across providers

Anthropic's tiered structure is not an outlier. OpenAI's ChatGPT follows the same pattern. On Free and Plus plans, OpenAI's Data Usage for Consumer Services FAQ states that it "may use" consumer content to improve its models unless the user disables training — while retaining the right to log interactions for safety and abuse monitoring regardless. On Edu and Enterprise plans, OpenAI commits not to train on business data, provides admin-controlled retention windows, and offers Zero Data Retention and configurable data residency.

The structural divide is the same: consumer terms grant the provider broad data-use rights with an opt-out toggle; commercial terms prohibit model training by contract and give the customer control over retention, residency, and access. Google's Gemini, Meta's Llama-based offerings, and other major LLM providers follow similar patterns. The consumer-versus-commercial distinction is an industry-wide architectural choice, not a quirk of any single provider.

This matters for the Heppner analysis because the court's reasoning — resting on the provider's privacy policy and terms of service — would apply with equal force to any consumer LLM deployment, not just Claude.

What this means going forward

Heppner will be cited for the proposition that consumer AI conversations are not confidential. That proposition is probably too broad as stated — it ignores user training preferences, conflates contractual permission with practical disclosure risk, and was not necessary to the holding. But it captures something real: consumer AI platforms operate under terms that were not designed with legal privilege in mind, and users who rely on those platforms for sensitive work are taking risks they may not understand.

The practical response is not to avoid AI tools. It is to understand what you are agreeing to when you use them — and to recognize that paying for a subscription does not, by itself, change the legal framework governing your data. For lawyers, that means learning the difference between consumer and commercial deployments and advising clients accordingly. For organizations, it means treating AI procurement as a legal risk question, not just an IT question. And for courts, it means doing the factual work that Heppner did not: examining the specific terms, settings, and tier a user actually employed before concluding that confidentiality has been waived.

The gap between consumer and commercial AI products is wide, it is well-documented, and it is consistent across every major provider. The problem is not that the information is unavailable. The problem is that almost nobody — lawyers, clients, and judges included — reads it.


The Anthropic policy comparison in this post draws on Anthropic's Consumer Terms of Service, Commercial Terms announcement, consumer terms and privacy policy update, and Privacy Center. OpenAI policy references draw on the Data Usage FAQ, platform documentation, and privacy policy.