37,000 Fake AI Comments Mysteriously Oppose Washington State’s Effort To Tax The Rich

Techdirt
Mar 10, 2026

Why It Matters

Artificial‑intelligence‑driven comment spam can mislead legislators, eroding trust in public consultation and skewing policy outcomes. It highlights an urgent need for robust verification mechanisms in government rulemaking processes.

Key Takeaways

  • 37,000 AI‑generated comments flooded Washington tax hearing
  • Bots duplicated names 50‑100 times each night
  • Fake opposition inflates perceived public resistance
  • AI lowers cost of large‑scale comment manipulation
  • Regulators lack tools to verify comment authenticity

Pulse Analysis

The manipulation of public comment portals is not new; telecom firms once flooded FCC net‑neutrality proceedings with comments attributed to fake, and even deceased, individuals. Those campaigns relied on manual effort and limited automation, yet they still managed to create the illusion of mass support or opposition. Over the past decade, watchdogs have documented similar tactics across environmental, financial, and health regulations, exposing a systemic vulnerability in how agencies collect stakeholder input.

What changes with the rise of generative AI is the speed and scale at which false voices can be produced. In Washington’s millionaire‑tax hearing, software generated tens of thousands of duplicate sign‑ins within minutes, a task that previously would have required a coordinated human effort. The AI‑driven bots replicated names, timestamps, and submission patterns, making detection difficult for officials who lack real‑time verification tools. This case illustrates how inexpensive, off‑the‑shelf AI models can be weaponized to manufacture consensus, undermining the legitimacy of the comment process.
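The pattern described above, the same names duplicated 50 to 100 times in a batch, is in principle straightforward to screen for once sign‑ins are aggregated. A minimal sketch of such a check (the function name, data shape, and threshold are illustrative assumptions, not any agency's actual tooling):

```python
from collections import Counter

def flag_duplicate_submitters(submissions, threshold=50):
    """Return submitter names appearing at least `threshold` times.

    `submissions` is a list of dicts with a "name" key; the threshold
    mirrors the 50-100 duplicate sign-ins described in the article.
    Names are normalized (trimmed, lowercased) before counting.
    """
    counts = Counter(s["name"].strip().lower() for s in submissions)
    return {name: n for name, n in counts.items() if n >= threshold}

# Hypothetical batch: one name repeated 60 times among genuine entries.
batch = [{"name": "Jane Doe"}] * 60 + [{"name": "Real Person"}]
flagged = flag_duplicate_submitters(batch)
```

A frequency check like this only catches the crudest duplication; AI‑generated comments with varied names and wording would require richer signals, such as timestamp clustering or stylometric analysis, which is precisely the tooling the article notes regulators currently lack.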

The broader implication for policymakers is clear: without reliable authentication, public consultation risks becoming theater rather than genuine dialogue. Agencies may need to adopt multi‑factor verification, stronger CAPTCHA challenges, or identity checks to filter out synthetic submissions. Transparency reports that flag suspicious activity can also help restore confidence. As governments grapple with increasingly sophisticated digital manipulation, robust safeguards will be essential to preserve democratic decision‑making and ensure that tax, environmental, and technology policies reflect authentic public sentiment.
