Lawmakers Are Using AI to Write Laws. What Could Go Wrong?

Transformer · Apr 9, 2026

Key Takeaways

  • Vulcan’s AI platform mandated across Virginia agencies to cut regulations by one‑third
  • FiscalNote’s PolicyNote claims clients in all three federal branches
  • Lawmakers cite speed and research aid, but stress human review
  • Experts warn AI‑generated bills may embed bias, lack policy judgment
  • AI‑drafted public comments could overwhelm agency review processes

Pulse Analysis

The adoption of generative AI for lawmaking reflects a broader trend of automating knowledge‑intensive tasks. Early experiments, such as Congressman Ted Lieu’s 2023 ChatGPT‑written resolution, have given way to commercial platforms that aggregate statutes, case law, and regulatory guidance into a single interface. Vendors like Vulcan Technologies and FiscalNote market these tools as "regulatory operating systems" that can draft bill language, suggest citations, and even forecast legislative outcomes. State leaders, notably Virginia’s governor, see AI as a lever to streamline dense regulatory codes, promising cost savings and faster policy cycles.

Proponents argue that AI can level the playing field for legislators who lack legal expertise. By summarizing lengthy bills, generating plain‑language explanations, and surfacing comparable statutes from other jurisdictions, tools help officials like Vermont Rep. Monique Priestley craft more accessible proposals. The speed of draft generation also frees staff to focus on substantive debate rather than rote wording. In a landscape where many legislative offices operate with limited resources, AI promises a productivity boost comparable to the impact of spell‑checkers in the 1990s.

However, the technology’s limitations raise red flags for policymakers and scholars. Large language models predict text based on patterns, not policy intent, which can result in drafts that echo existing law without offering innovative solutions. Biases in training data may skew language toward entrenched interests, and the opacity of model reasoning hampers accountability: an AI cannot be cross‑examined like a human drafter. Moreover, a flood of AI‑generated public comments threatens to drown out genuine stakeholder input, straining agency review capacities. To harness AI’s benefits while safeguarding democratic legitimacy, a hybrid workflow that mandates expert legal review and transparent model documentation is essential.
