I have written previously about what United States v. Heppner held and what it got wrong, and about why moving to an API does not, by itself, constitute a compliance strategy. This post turns to a different audience: not organizations choosing AI tools, but practicing lawyers whose clients are already using them.
The core question is straightforward. Heppner established — on reasoning I have criticized but that is now on the books — that a client who feeds privileged materials into a consumer AI platform may forfeit the privilege over those materials. That is now a known hazard. And when a known hazard exists that threatens the integrity of the attorney-client relationship, existing rules of professional conduct impose obligations on the lawyer — not just the client.
No ethics rule says "warn your client about ChatGPT." But the obligation to do something very close to that is already embedded in the structure of Model Rules 1.1, 1.4, and 1.6, and their state counterparts. Heppner did not create that duty, but it did make the duty impossible to ignore.
A brief recap of what Heppner did
I covered the decision in detail in this prior post, so I will keep this short. Bradley Heppner, a criminal defendant, used the consumer version of Claude to analyze his legal exposure and develop defense theories after receiving a grand jury subpoena and learning he was a target of a federal investigation. He did this on his own, without his lawyers' knowledge or direction. Judge Rakoff of the S.D.N.Y. held the resulting documents were protected by neither the attorney-client privilege nor the work product doctrine — because Claude is not a lawyer, because Anthropic's consumer terms did not support a reasonable expectation of confidentiality, and because counsel had not directed the AI use.
Two things from the opinion matter for this post. First, Judge Rakoff observed that had counsel directed Heppner to use Claude, the tool "might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege" — a reference to the Kovel doctrine. That dictum rewards attorney supervision and penalizes its absence. Second, the privilege was lost in part because Heppner's lawyers never told him — one way or the other — anything about using AI tools in connection with his case.
The NYSBA's post-Heppner commentary drew the practical conclusion quickly: attorneys should "include robust disclaimers and warnings in engagement letters and email signatures alerting clients to the risks of using AI platforms in connection with their legal matters." That is a reasonable starting point. But I think the duty runs deeper than engagement-letter boilerplate, and that existing ethics rules already require it.
The rules that get you there
Three Model Rules, read together, create an affirmative obligation to advise clients about AI-related privilege risks — even though none of them mentions AI by name.
Competence: Rule 1.1
Model Rule 1.1 requires lawyers to provide competent representation, defined as "the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation." Since 2012, Comment 8 has specified that competence includes keeping "abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." Forty states have now adopted this language or its equivalent.
After Heppner, the "relevant technology" a competent lawyer must understand includes consumer AI tools — not how to use them, but how they handle data and what the legal consequences of client use might be. A lawyer who does not know that consumer chatbot terms permit the provider to retain, train on, and disclose user inputs is missing knowledge that is now directly relevant to protecting the privilege. The duty of competence is not limited to a lawyer's own work product. It encompasses the "thoroughness and preparation" needed to protect the attorney-client relationship from erosion by foreseeable client conduct.
Communication: Rule 1.4
Model Rule 1.4(b) requires that a lawyer "explain a matter to the extent reasonably necessary to permit the client to make informed decisions regarding the representation." This is generally understood to encompass not just the substance of legal advice but the conditions under which the privilege protecting it might be forfeited. A client who does not know that pasting counsel's memorandum into ChatGPT may destroy the privilege over that memorandum has not been equipped to make an informed decision about managing privileged information.
The critical feature of Rule 1.4 is that it operates prospectively. The duty to communicate is a duty to give clients the information they need before they act — not a post-hoc damage-control obligation. After Heppner, the relevant information includes the fact that consumer AI use can waive the privilege.
Confidentiality: Rule 1.6
Model Rule 1.6(c) provides that a lawyer "shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." The operative word is "reasonable," and what counts as reasonable changes as risks become known.
State bars have interpreted this provision to require affirmative steps, not merely reactive ones, when digital communications create confidentiality risks. The principle is not new; what is new is the specific threat. A client's use of a consumer AI platform is precisely the kind of inadvertent disclosure that Rule 1.6(c) was designed to address.
The guidance landscape
The ABA's Formal Opinion 512, issued in July 2024, was the first comprehensive ABA guidance on generative AI in legal practice. It addressed competence, confidentiality, communication, candor, supervisory duties, and fees — all through the lens of existing Model Rules applied to AI. Formal Opinion 512 focused primarily on a lawyer's own use of AI tools, but its analysis of the confidentiality obligations under Rules 1.6 and 1.4 applies with equal force when the risk comes from the client's conduct rather than the lawyer's.
The New York City Bar's Formal Opinion 2024-5 addressed generative AI in legal practice directly, and Formal Opinion 2025-6 extended the analysis to AI tools used to record and transcribe client conversations — a context in which the duty to counsel clients about confidentiality implications is made explicit. California's State Bar has published practical guidance on generative AI grounded in the same competence and confidentiality obligations.
None of these authorities squarely addresses the specific scenario Heppner presented: a client, acting on his own, feeding privileged materials into a consumer chatbot. But they establish the framework within which that scenario falls. If a lawyer has a duty of technological competence that includes understanding AI data handling, a duty to communicate information necessary for informed decisions about the representation, and a duty to take reasonable steps to prevent inadvertent disclosure — then the obligation to warn a client about the privilege risks of consumer AI use follows from the conjunction of all three.
What "reasonable" looks like
Not every representation carries the same risk. The obligation to advise clients about AI-related privilege risks should be calibrated — as professional duties always are — to the circumstances.
The nature of the matter. A client facing a federal investigation, complex litigation, or a regulatory proceeding is more likely to receive extensive privileged communications and to be more acutely harmed by their disclosure. In high-stakes representations, the duty to counsel clients about AI risks should be treated as near-mandatory and documented. Routine advisory work still carries the obligation, but its urgency is proportional to the exposure.
The sophistication of the client. Sophisticated institutional clients with in-house counsel may understand the risk without detailed instruction. Individual clients, small business owners, and people facing their first serious legal proceeding probably do not. Heppner illustrates the gap precisely: the defendant was fluent enough to use Claude effectively but apparently had no appreciation of the legal consequences. Technological fluency and legal sophistication are not the same thing, and lawyers should resist treating them as interchangeable.
The attorney's reasonable belief about client conduct. A lawyer who knows or should know that a client is likely to use AI tools in connection with the matter — because the client has mentioned doing so, because the client works in a tech-forward industry, or simply because generative AI has become most people's first tool for understanding complex documents — bears a heightened responsibility to address the risk explicitly. This is not speculative. Consumer AI adoption has reached the point where assuming a client will not use these tools requires more justification than assuming they will.
These factors interact. A sophisticated client in a high-stakes criminal matter presents a different risk profile than a sophisticated client in a routine transaction. An unsophisticated client in any matter of consequence probably requires explicit, plain-language AI counseling as a baseline.
The structural remedy worth considering
Warning clients not to use consumer AI to understand their legal matters is, as a practical matter, unlikely to be fully effective. The impulse that drove Heppner to Claude is deeply human: complex legal advice is hard to understand, and AI tools offer an immediately accessible way to work through it. Telling clients not to do something genuinely useful — without offering an alternative — is an instruction destined to be ignored.
The more constructive path is to give clients a safe way to do what they are going to do anyway. Enterprise-grade AI deployments — tools operating under commercial terms that contractually prohibit the provider from retaining or training on user inputs — can be configured within a firm-controlled environment with appropriate confidentiality protections. A client who uses a firm-provided, privilege-preserving AI tool to work through counsel's advice is in a fundamentally different position than a client who pastes that advice into a consumer chatbot governed by terms that reserve broad data-use rights.
Judge Rakoff's Kovel dictum points in this direction. The court distinguished between unsupervised client use of a public AI platform and a hypothetical in which counsel directed the AI use. A firm-provided, counsel-supervised AI environment — deployed under commercial terms, subject to confidentiality agreements, and offered as part of the representation — positions the tool more like the Kovel professional the court described than the public chatbot it rejected. The privilege analysis is not guaranteed, but the structural argument is considerably stronger.
This is not a small undertaking, and I do not suggest it is costless. But the alternative — relying on engagement-letter warnings while clients continue to use consumer AI tools unsupervised — is a posture that grows harder to defend as the risk becomes more widely known.
Where this leaves practicing lawyers
Heppner did not create a new professional obligation. What it did was train a spotlight on one that already existed. The duty of competence requires understanding how consumer AI tools handle data. The duty of communication requires informing clients about risks to the privilege before those risks materialize. The duty of confidentiality requires reasonable efforts to prevent inadvertent disclosure. Together, these rules establish an obligation — variable in its intensity, sensitive to context, but real — to advise clients about the privilege risks of consumer AI use.
This post draws on the ABA Model Rules of Professional Conduct, ABA Formal Opinion 512, the New York City Bar's Formal Opinions 2024-5 and 2025-6, the NYSBA's post-Heppner commentary, and Judge Rakoff's written opinion in United States v. Heppner. The California State Bar's Generative AI Practical Guidance provides additional state-level context. The consumer-versus-commercial data-handling comparison referenced throughout is detailed in a prior post.