
An SDNY federal judge in the Hepper case ruled that chats with publicly available generative AI tools are not covered by attorney‑client or work‑product privilege. The decision emphasizes that the lack of confidentiality in open‑access platforms makes such communications discoverable. Law firms using tools like ChatGPT for client matters now face heightened exposure to disclosure and potential sanctions. The ruling serves as a warning to adopt secure, enterprise‑grade AI solutions for privileged work.
The rapid integration of generative AI into legal workflows has created a false sense of security around data privacy. In the recent Hepper decision, a federal judge in the United States District Court for the Southern District of New York ruled that conversations with publicly accessible AI platforms are not shielded by attorney‑client or work‑product privilege. The ruling hinges on the lack of confidentiality inherent in open‑access services that store prompts on third‑party servers. As a result, any sensitive information entered into tools such as ChatGPT or Claude becomes discoverable in litigation.
This ruling reshapes how law firms approach AI‑assisted research and drafting. Without privilege protection, client communications sent to a public model can be subpoenaed, exposing strategy, confidential facts, and even privileged opinions. The decision also signals that courts will scrutinize the technical architecture of AI services, distinguishing between private, on‑premise deployments and cloud‑based offerings. Firms that continue to rely on free or low‑cost AI interfaces risk inadvertent disclosure, heightened exposure to sanctions, and potential malpractice claims; failure to adapt could jeopardize client confidentiality itself.
Practitioners should immediately audit their AI usage policies, restricting confidential matters to vetted, enterprise‑grade solutions that guarantee end‑to‑end encryption and data isolation. Training programs must emphasize that the mere presence of a disclaimer does not create privilege. Moreover, counsel should document the rationale for any AI assistance, preserving a clear chain of custody for future discovery challenges. As regulators contemplate AI‑specific guidance, proactive compliance will become a competitive advantage, protecting client trust while allowing firms to reap the efficiency benefits of generative technology.