If You Want Into Anthropic's Claude Club, You May Have to Show ID

The Register — Networks · Apr 16, 2026

Why It Matters

The move underscores AI providers’ push toward regulatory compliance, but it also raises privacy and adoption risks that could influence market dynamics.

Key Takeaways

  • Anthropic adds optional ID checks for select Claude features.
  • Verification partner Persona previously faced controversy over Discord age‑check plan.
  • Anthropic says data won’t train models and is retained per contract.
  • Verification prompts may appear at any time, and some users have threatened to cancel subscriptions.
  • Subprocessors such as AWS, Google, and OpenAI handle verification data.

Pulse Analysis

Anthropic’s decision to embed identity verification into Claude reflects a broader shift among AI providers toward stricter compliance and platform‑integrity safeguards. By partnering with Persona Identities, the company can confirm a user’s age or legal status before granting access to high‑risk capabilities such as code generation or personal‑advice modules. The move aligns with emerging U.S. state regulations that push age‑checks deeper into digital services, and it mirrors similar steps taken by OpenAI and other large language‑model vendors. For enterprises, the added gatekeeping promises reduced liability, but it also introduces a new friction point for developers and end‑users.

The privacy implications, however, have sparked immediate backlash. Persona’s earlier involvement in Discord’s aborted age‑verification pilot left lingering data‑handling concerns, especially after a researcher exposed its front end running on a government server. Anthropic attempts to assuage fears by stating that identity data will never be used to train models and that retention periods are contractually limited. Yet the verification flow still routes selfies and document scans through a network of subprocessors, including AWS, Google, OpenAI, Stripe, and Twilio, raising questions about secondary exposure and auditability. Users on Reddit have already threatened to cancel their subscriptions.

From a market perspective, the verification requirement could become a differentiator. Competitors that maintain a frictionless onboarding experience may attract privacy‑sensitive customers, while Anthropic bets that the security benefits outweigh potential churn. The practice also signals to regulators that AI firms are taking proactive steps, potentially softening future legislative pressure. Investors will watch adoption metrics closely; a spike in verification prompts could correlate with slower usage growth, whereas a seamless implementation might reinforce trust and drive enterprise contracts. Ultimately, the balance between safety and convenience will shape the next wave of AI product strategy.
