Perplexity AI Accused of Embedding ‘Undetectable’ Trackers for Secretly Routing Sensitive User Data to Meta and Google

Mint – Technology (India), Apr 1, 2026

Why It Matters

The case spotlights privacy vulnerabilities in AI‑powered search tools and could trigger stricter regulatory enforcement, eroding user trust across the emerging AI market.

Key Takeaways

  • Perplexity allegedly routes data to Meta, Google
  • Trackers claimed to work even in incognito mode
  • Lawsuit cites California privacy law violations
  • Perplexity says it has not been served with the lawsuit
  • Concurrent Amazon lawsuit raises broader security concerns

Pulse Analysis

The rapid expansion of AI‑driven search tools has introduced a new frontier for data collection. Unlike conventional search engines that rely on query logs, generative chat interfaces often retain entire conversation histories to improve relevance. This depth of insight is attractive to advertisers but also raises red flags when the data is shared with third‑party platforms such as Meta and Google. Users increasingly assume that “Incognito” or private modes shield their inputs, yet embedded tracking scripts can bypass browser safeguards, creating a hidden pipeline for personal and financial information.

California’s privacy statutes, especially the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) that amended it, give consumers the right to know about and opt out of data sharing. A class‑action filing alleging undisclosed transmission of sensitive details directly challenges compliance with these laws and mirrors a broader wave of privacy lawsuits targeting AI firms. If the allegations prove true, Perplexity could face statutory damages, mandatory remediation, and heightened scrutiny from state regulators, setting a precedent that may compel other AI startups to audit their telemetry and consent mechanisms.

The fallout extends beyond legal exposure; user trust is a critical moat for emerging AI services. Negative publicity can accelerate user migration to competitors that emphasize privacy‑by‑design, such as open‑source models hosted on secure infrastructure. In response, companies may adopt stricter data‑handling policies, transparent disclosures, and independent audits to reassure regulators and customers. Investors will likely weigh privacy risk alongside technical performance when evaluating AI ventures, and policymakers may consider new federal guidelines to close gaps exposed by cases like Perplexity’s.
