A federal judge in Manhattan has ruled that documents generated by an artificial intelligence tool and later shared with a defense attorney are not protected by attorney-client privilege and can be used by prosecutors at trial. The decision, delivered by U.S. District Judge Jed S. Rakoff during pretrial proceedings, squarely addresses a fast-emerging question in the AI era: when do chatbot outputs become privileged legal communications, if ever?
The ruling arises from the case against Bradley Heppner, the CEO of Beneficient, who faces securities and wire fraud charges in an alleged $150 million scheme. According to courtroom comments reported by Law360, investigators seized 31 documents that Heppner drafted using Anthropic’s Claude before he provided them to his attorney. Prosecutors argued the materials are fair game: they were created through a third-party service that does not guarantee confidentiality and should be treated as non-privileged work product. Judge Rakoff agreed there was no basis for attorney-client privilege over the AI-generated files.

Defense counsel countered that the documents reflected information drawn from protected conversations with lawyers and warned their use could entangle the defense team as witnesses. While Rakoff rejected the privilege claim, he acknowledged the risk of a witness-advocate conflict that could complicate trial management and even trigger a mistrial if not carefully handled.
Why This Ruling Matters For Privilege Law
Attorney-client privilege generally shields confidential communications made for the purpose of obtaining or providing legal advice. The protection can be lost if communications are shared with third parties, unless an exception applies—most famously the Kovel doctrine, which can extend privilege to necessary intermediaries like accountants working at a lawyer’s direction. Consumer-facing AI platforms, however, are not retained professionals, and their standard terms often reserve the right to store, review, or use prompts for safety, troubleshooting, or service improvements.
Courts have generally accepted lawyers’ use of cloud vendors when reasonable steps are taken to preserve confidentiality, but generative AI tools add layers of uncertainty: opaque data retention, model training on user inputs, and potential law enforcement or corporate access to logs. Absent explicit contractual assurances and technical controls, using a public chatbot can look less like a privileged channel and more like disclosing strategy to a third party.
The Work Product Wrinkle in AI-Generated Documents
Separate from privilege, the work-product doctrine protects materials prepared in anticipation of litigation, particularly an attorney’s mental impressions. Prosecutors in the Heppner matter described the AI-generated files as the defendant’s own drafts, not a lawyer’s analysis, diminishing any claim to heightened protection. Even where work product applies, factual materials can sometimes be discovered upon a showing of substantial need—another reason why litigants should not assume AI-assisted notes are immune from disclosure.

Rakoff’s warning about a potential witness-advocate conflict is significant. If AI-crafted documents attribute statements or decisions to defense counsel, attorneys may become fact witnesses about their origin, review, or use—an ethical thicket under rules that generally prohibit lawyers from serving as both advocate and witness at trial.
Implications For Legal Tech And Practice
Law firm appetite for generative AI is growing, but so are confidentiality concerns. A recent LexisNexis survey of legal professionals found strong expectations that AI will transform research and drafting, while a majority flagged client confidentiality as the top risk. The American Bar Association’s 2024 Formal Opinion 512 underscores this tension, advising lawyers to ensure competence, supervise vendors, and safeguard confidential information when using AI tools.
Vendors have moved to address the risk. Enterprise versions of leading systems, including offerings from Anthropic, OpenAI, Microsoft, and others, provide options to disable data retention, limit training on user inputs, and furnish contractual confidentiality commitments. But defaults and capabilities vary widely between consumer and enterprise tiers. For in-house teams and law firms, the difference between a public chatbot window and a governed enterprise instance can be the difference between preservation of confidentiality and a privilege waiver.
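To make the tiering concrete, here is a minimal Python sketch of a per-request retention control, using the `store` flag on the OpenAI SDK's chat completions call as one example. Treat it as illustrative only: a request flag governs whether the output is saved to the provider's dashboard, while binding no-retention and no-training commitments come from enterprise contracts and tenant settings, not code.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative only: store=False asks the service not to save this completion
# in the provider's dashboard. It is NOT a zero-data-retention guarantee and
# does not, by itself, make the exchange confidential or privileged.
response = client.chat.completions.create(
    model="gpt-4o",
    store=False,
    messages=[
        {
            "role": "user",
            "content": "Summarize the difference between privilege and work product.",
        }
    ],
)
print(response.choices[0].message.content)
```

The same point holds for any vendor: what is retained, and who can see it, is determined by the contract and the tenant configuration rather than by request parameters.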
Regulators also increasingly seek messaging and chatbot records in investigations, paralleling the now-routine discovery of Slack, Teams, and text communications. If AI chats inform corporate decisions or legal strategy, those logs may be discoverable unless protected—and Rakoff’s comments suggest courts may be skeptical when that protection relies on off-the-shelf consumer tools.
What Lawyers And Clients Should Do Now About AI
- Treat public chatbot prompts as disclosures to a third party unless you have enterprise-grade agreements that clearly bar retention, training, and third-party access.
- Channel AI use through vetted platforms with signed confidentiality and data processing terms, logging controls, and jurisdictional safeguards; document those controls in engagement letters and litigation holds.
- Avoid placing client confidences or legal theories into consumer AI prompts; where necessary, anonymize and abstract (a minimal sketch follows this list), and have attorneys review outputs before they are shared or memorialized.
- Update AI governance policies, train teams on privilege and work-product risks, and designate who can use which tools for which tasks. Consider Kovel-style structures when outside specialists or managed AI services are needed to facilitate legal advice.
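As one hedged illustration of the anonymize-and-abstract step above, the Python sketch below scrubs obvious identifiers before a prompt leaves the firm. The patterns and names are hypothetical; a production workflow would pair a vetted redaction library with attorney review rather than a handful of ad hoc regexes.

```python
import re

# Hypothetical patterns for illustration; real deployments should rely on a
# vetted PII/redaction tool plus attorney review, not a few regexes.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # U.S. Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\bAcme Holdings\b"), "[CLIENT]"),           # a hypothetical client name
]

def scrub(prompt: str) -> str:
    """Replace obvious identifiers before a prompt is sent to any external tool."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft talking points for Acme Holdings; reach the GC at jdoe@example.com, SSN 123-45-6789."
    print(scrub(raw))
    # -> Draft talking points for [CLIENT]; reach the GC at [EMAIL], SSN [SSN].
```

A scrubber like this reduces, but does not eliminate, the disclosure risk the ruling highlights; context alone can still identify a client, which is why attorney review remains the backstop.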
The ruling also spotlights a broader policy debate. Industry leaders, including OpenAI’s Sam Altman, have floated the idea of privilege-like protections for highly personal AI interactions. Until legislatures or higher courts draw those lines, however, this case offers a pragmatic takeaway: if an AI tool is not clearly part of the confidential attorney-client relationship, assume a court may treat its outputs as admissible evidence.
