Contributor: Investigate the AI Campaigns Flooding Public Agencies with Fake Comments

Los Angeles Times – Climate & Environment
Apr 1, 2026

Why It Matters

The manipulation undermines democratic rulemaking and enables fossil‑fuel interests to sidestep health‑protective regulations, threatening public health and climate goals. It also exposes a regulatory blind spot that could be exploited across jurisdictions unless addressed promptly.

Key Takeaways

  • AI platform CiviClick generated more than 20,000 fake comments
  • Real residents' identities used without consent
  • Fake opposition helped defeat clean‑air rules
  • Similar AI scheme exposed via Speak4 in Bay Area
  • Lawmakers consider SB 1159 to curb identity theft

Pulse Analysis

The rise of artificial‑intelligence tools for mass‑generated commentary is reshaping how interest groups influence policy. Platforms like CiviClick can synthesize thousands of plausible‑looking submissions in minutes, allowing well‑funded lobbyists to masquerade as grassroots voices. This capability erodes the credibility of public‑input mechanisms that agencies rely on to gauge community sentiment, creating a feedback loop where regulators may defer to what appears to be overwhelming opposition, even when it is fabricated.

In California, the fallout has been stark. The South Coast Air Quality Management District received more than 20,000 AI‑crafted comments opposing two clean‑air rules, many bearing the names of unsuspecting residents. The comments helped dilute the rules, which were projected to save thousands of lives and prevent asthma cases. A parallel investigation in the Bay Area revealed Speak4, another AI service, was used by the Common Sense Coalition—a front group linked to major oil firms—to submit forged messages echoing fossil‑fuel talking points. These coordinated campaigns illustrate how AI can amplify industry influence far beyond traditional lobbying.

The implications extend beyond environmental policy. By weaponizing identity theft and automated messaging, these tactics threaten the very foundation of participatory governance. Lawmakers are responding with proposals like Senate Bill 1159, aimed at criminalizing the unauthorized use of personal information in regulatory comment processes. Robust enforcement, transparent disclosure requirements for AI‑generated content, and stronger cybersecurity safeguards will be essential to preserve the legitimacy of public decision‑making and ensure that genuine citizen voices are heard.
