When Agents Act: The Rule 26(f) Disclosure Threshold for Agentic AI in eDiscovery
Key Takeaways
- Morgan ruling makes AI tool disclosure mandatory under protective orders
- Agentic eDiscovery tools can act without a human ‘run’ button
- Rule 26(f) disclosure hinges on whether AI decisions shape production
- Vendor contracts must bar training, limit sharing, and allow data deletion
- Sovereign‑AI architectures are becoming essential for cross‑border compliance
Pulse Analysis
The March 30, 2026 opinion in Morgan v. V2X, Inc. marks the first federal decision to tie protective‑order language directly to generative‑AI use. Judge Maritza Dominguez Braswell concluded that AI‑generated work remains protected work product and that any party uploading confidential material to an AI platform must ensure the provider cannot train on that data, cannot share it except to fulfill the service, and must delete it on demand. This framework gives litigants a clear baseline for drafting protective orders and signals that courts will scrutinize AI‑driven data flows as part of discovery safeguards.
Simultaneously, eDiscovery vendors such as Exterro and Relativity are deploying "agentic" AI that autonomously performs timeline reconstruction, privilege triage, and memo drafting. Unlike traditional predictive coding, these agents make substantive decisions that shape the document set a human reviewer ultimately sees. Under Rule 26(f), parties must now consider whether an agent’s decision‑making materially influences what is produced, because such influence can make the agent’s methodology itself a proper subject of disclosure at the meet‑and‑confer. The Sedona Conference’s ongoing AI‑governance projects and its principle of methodological transparency are being tested as the industry grapples with the black‑box nature of autonomous agents.
For legal operations, the practical response is threefold: verify that vendor contracts meet the Morgan‑style safeguards, ensure the AI system logs a reconstructable audit trail, and be prepared to disclose the specific agents used during the meet‑and‑confer. Data‑sovereignty concerns add another layer, as cross‑border data transfers can clash with the CLOUD Act and GDPR, making sovereign‑AI architectures—customer‑managed keys, EU‑hosted providers—critical. Firms that proactively embed these controls will avoid surprise disclosures, stay compliant with emerging AI ethics guidance, and preserve the defensibility of their discovery strategy.
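What a "reconstructable audit trail" might look like in practice can be sketched in a few lines: an append‑only, hash‑chained log in which each autonomous agent decision records who acted, on which document, and why, so the sequence can be verified and replayed during a disclosure dispute. This is a minimal illustration, not any vendor's implementation; the `log_agent_action` helper and its field names are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_agent_action(log, agent, action, doc_id, rationale):
    """Append a hash-chained entry so agent decisions can be reconstructed later.

    Hypothetical sketch: field names and structure are illustrative only.
    """
    # Each entry commits to the previous one, making tampering detectable.
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,          # which autonomous agent acted
        "action": action,        # e.g. "privilege_triage"
        "doc_id": doc_id,        # document the decision affected
        "rationale": rationale,  # agent's stated basis for the decision
        "prev_hash": prev_hash,  # links entries into a tamper-evident chain
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

# Usage: record two decisions and confirm the chain links up.
audit_log = []
log_agent_action(audit_log, "triage-agent", "privilege_triage",
                 "DOC-0001", "attorney-client header detected")
log_agent_action(audit_log, "triage-agent", "withhold",
                 "DOC-0001", "confirmed privileged")
assert audit_log[1]["prev_hash"] == audit_log[0]["entry_hash"]
```

The hash chain matters because an audit trail a party may have to defend at the meet‑and‑confer is only as credible as its resistance to after‑the‑fact edits.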