AI Docs Not Privileged: What the Ruling Means
Judge Rakoff ruled AI-generated legal documents lack privilege protection. What every firm using AI tools needs to know.

Shere Saidon

CEO & Founder at LlamaLab

Published February 28, 2026
Updated March 3, 2026
6 min read
Legal Updates

Judge Rakoff Rules AI-Generated Documents Are Not Protected by Attorney-Client Privilege

A federal judge has ruled for the first time that documents generated using a consumer AI chatbot are not protected by attorney-client privilege or the work-product doctrine. Judge Jed Rakoff of the Southern District of New York issued the ruling on February 10, 2026, in U.S. v. Heppner, with a written opinion following on February 17. The decision arrives as 40% of legal organizations now use AI tools organization-wide—nearly double the 22% reported in 2025—and it forces a direct question about whether the legal profession's rapid adoption has outpaced its risk management.

The gap between adoption and governance is stark. Despite the surge in AI usage, only 41% of law firms have established formal generative AI policies, according to Thomson Reuters. Judge Rakoff's opinion makes clear that the absence of such policies carries real consequences—not just for compliance, but for the fundamental protections attorneys rely on to represent their clients.

40% of legal organizations now use AI organization-wide (Thomson Reuters 2026)

78% of legal professionals expect GenAI to become central to their workflow within five years

41% of law firms have established formal GenAI policies

What Happened in U.S. v. Heppner

Bradley Heppner, a criminal defendant with retained counsel, used the consumer version of Anthropic's Claude AI to independently research legal issues related to his case—without his attorneys' direction or involvement. He entered information from his counsel into the chatbot, generated reports that outlined potential defense strategies, and then shared those AI-generated documents with his lawyers.

The government sought production of the AI-generated materials during discovery. Heppner's defense team argued the documents were protected under attorney-client privilege and the work-product doctrine. Judge Rakoff rejected both claims, finding that neither legal protection applied to communications between a human and a consumer AI tool.

Important

Key Legal Distinction

The ruling turned on a critical fact: Heppner used the consumer version of Claude AI, not an enterprise deployment. Anthropic's consumer privacy policy states that user data may be disclosed to third parties and used for model training—eliminating any reasonable expectation of confidentiality.

The Court's Reasoning

No Attorney-Client Privilege

Judge Rakoff's privilege analysis was direct. Attorney-client privilege protects confidential communications between a client and counsel for the purpose of obtaining legal advice. An AI chatbot, the court held, does not satisfy that requirement. Claude is not an attorney. It holds no license, owes no fiduciary duties, and cannot form the "trusting human relationship" that privilege is designed to protect.

The court acknowledged that Heppner eventually shared the AI-generated reports with his lawyers. But the privilege inquiry looks at where the communication originated—and a conversation with a machine is not a conversation with counsel. The information Heppner fed into Claude was disclosed to a third party before it ever reached his attorneys, defeating privilege at the threshold.

No Work-Product Protection

The work-product doctrine fared no better. Work-product protection applies to materials prepared "in anticipation of litigation" by or for a party. Rakoff found that Heppner's independent use of Claude—without attorney direction—did not qualify as work prepared by or at the behest of counsel. The AI-generated reports were Heppner's own initiative, not part of a litigation strategy directed by his legal team.

No Reasonable Expectation of Confidentiality

Separate from the privilege and work-product analyses, the court addressed confidentiality head-on. Anthropic's privacy policy for the consumer version of Claude states that user data may be shared with third parties and used to train the model. A defendant who inputs case strategy into a tool governed by those terms, the court reasoned, has no reasonable expectation that the information will remain confidential.

Consumer AI vs. Enterprise AI

The distinction between consumer and enterprise AI deployments sits at the center of this ruling—and likely at the center of the litigation that will follow it.

Consumer AI tools like the free or standard-subscription versions of ChatGPT, Claude, and Gemini typically include terms of service that allow the provider to access, store, and in some cases train on user inputs. That data-sharing framework is what allowed Judge Rakoff to conclude that no confidentiality expectation existed.

Enterprise deployments operate under different terms. Major AI providers offer business and enterprise tiers with contractual commitments that user data will not be used for training, will not be shared with third parties, and will be processed under strict access controls. The Heppner opinion does not address whether documents generated through such deployments would receive different treatment—but it draws a bright line that consumer-grade tools fall outside privilege protection.

For legal technology platforms that process sensitive case materials—medical records, litigation strategy, client communications—the architecture matters. HIPAA-compliant platforms with enterprise-grade data isolation, encryption, and contractual confidentiality protections operate in a fundamentally different category than a consumer chatbot with a broad data-sharing policy.

What This Means for Law Firms

Key Points

Essential takeaways from this article

Establish a formal AI usage policy now — only 41% of firms have one, and Heppner shows the cost of operating without clear guardrails
Distinguish consumer AI from enterprise AI in firm protocols — the ruling hinged on consumer terms of service that allow data sharing
Audit AI tools for confidentiality terms — any platform that trains on user data or shares inputs with third parties creates privilege risk
Direct all case-related AI work through attorney-supervised, enterprise-grade tools — privilege requires attorney involvement and confidentiality protections

The 78% of legal professionals who expect generative AI to become central to their workflow within five years cannot ignore the privilege implications of how they deploy it. A tool that accelerates legal research but strips privilege protection from the output is not a net gain—it is a liability.

Law firms handling sensitive litigation—personal injury, mass torts, criminal defense—face the highest stakes. Medical records, treatment histories, and case strategy documents are exactly the kind of materials that privilege is designed to protect. Routing that information through consumer AI tools, in light of Heppner, risks waiving that protection entirely.

The Bottom Line

U.S. v. Heppner is the first major judicial opinion on AI and legal privilege, and it will not be the last. As AI adoption accelerates across the legal industry, courts will inevitably be asked to draw finer lines—between consumer and enterprise tools, between attorney-directed and client-initiated AI use, and between platforms that protect confidentiality and those that do not.

The ruling does not prohibit law firms from using AI. It establishes that how firms use AI—and which version they use—determines whether the legal profession's most fundamental protections survive. For the 59% of firms that still lack a formal AI policy, the time to act was before this opinion. The next best time is now.

Secure, HIPAA-Compliant Medical Record Analysis

LlamaLab's enterprise platform retrieves and analyzes medical records with full data isolation, encryption, and confidentiality protections — no consumer AI risk.


Sources: Debevoise & Plimpton: Judge Rakoff Issues Written Opinion on AI-Generated Documents, CompleteAI Training: AI-Generated Documents Aren't Privileged, Lexology: SDNY Addresses AI-Generated Documents, Lexology: AI Not a Lawyer, Thomson Reuters: AI in Professional Services 2026, Thomson Reuters Survey: GenAI in Legal Workflow.

This article is for informational purposes only and does not constitute legal advice. Consult with qualified legal professionals for advice specific to your situation.
