Perplexity AI Sued in Class Action over Alleged Data Sharing with Meta, Google

Pulse · Apr 5, 2026

Why It Matters

The Perplexity case spotlights a growing tension between rapid AI adoption and the need for robust data‑privacy safeguards. As enterprises embed generative‑AI tools into workflows, the risk that user‑generated content is silently harvested by dominant platforms like Meta and Google could erode trust and trigger compliance breaches. For CIOs, the lawsuit serves as a cautionary tale that vendor‑risk assessments must evolve beyond traditional security checklists to include granular scrutiny of data‑sharing architectures, especially for services that promise “incognito” or “private” modes.

Beyond immediate legal exposure, the dispute could shape industry standards for AI transparency. If courts or regulators deem the alleged practices unlawful, AI providers may be compelled to redesign data‑collection pipelines, publish clearer privacy disclosures, and offer verifiable audit logs. Such outcomes would empower CIOs to make more informed decisions, reduce the likelihood of inadvertent data leakage, and align AI deployments with corporate governance frameworks.

Key Takeaways

  • Utah federal court receives a proposed class‑action alleging Perplexity AI shared user transcripts with Meta and Google
  • Complaint claims Incognito mode fails to block hidden tracking software
  • Perplexity’s chief communications officer Jesse Dwyer says the company has not been served and cannot verify the claims
  • The suit follows a recent injunction that halted Perplexity’s Comet tool for unauthorized web scraping
  • CIOs may need to reassess AI vendor contracts, enforce stricter data‑flow controls and audit rights

Pulse Analysis

The Perplexity lawsuit arrives at a moment when enterprises are racing to embed generative AI into core processes, often without fully understanding the data‑pipeline implications. Historically, vendor‑risk management has focused on perimeter security and compliance certifications; however, AI services introduce a new vector where user‑generated prompts become raw data that can be monetized or repurposed by third parties. This case could accelerate a shift toward contractual clauses that explicitly forbid the sharing of conversational data with unrelated entities, mirroring the data‑processing agreements that have become standard in cloud contracts.

From a market perspective, the allegations could dent Perplexity’s growth trajectory. The startup has positioned itself as a privacy‑conscious alternative to larger players such as OpenAI’s ChatGPT, leveraging its Incognito promise to attract enterprise customers wary of data leakage. If the claims gain traction, competitors may double down on transparent data‑handling practices, potentially reshaping the competitive landscape. Meanwhile, larger platforms such as Meta and Google may face scrutiny over their data‑ingestion practices, prompting regulators to issue broader guidance on AI‑driven data sharing.

For CIOs, the immediate takeaway is to embed AI risk assessments into existing governance frameworks. This includes mapping data flows from AI interfaces, deploying DLP solutions that can flag outbound payloads, and demanding real‑time audit logs from vendors. As the legal environment around AI privacy tightens, organizations that proactively enforce these controls will be better positioned to avoid costly compliance breaches and maintain stakeholder confidence.
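The outbound‑payload flagging mentioned above can be sketched in a few lines. This is a minimal, illustrative example, not a production DLP ruleset: the pattern names, regexes, and policy are assumptions chosen for demonstration, and a real deployment would sit at an egress proxy with a far richer detection engine.

```python
import re

# Hypothetical DLP-style check: scan an outbound AI-prompt payload for
# sensitive patterns before it leaves the corporate network.
# The patterns below are illustrative assumptions, not a vetted ruleset.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN format
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),    # common key prefix
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
}

def flag_outbound_payload(payload: str) -> list[str]:
    """Return the names of any sensitive patterns found in the payload."""
    return [name for name, rx in SENSITIVE_PATTERNS.items()
            if rx.search(payload)]

# Example: a prompt that accidentally includes an employee email address.
hits = flag_outbound_payload("Summarize feedback from jane.doe@example.com")
print(hits)  # ['email']
```

A payload that trips any pattern could be blocked, redacted, or logged for the audit trail, depending on policy; the point is that prompts bound for third‑party AI services are inspectable traffic like any other egress.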
