LLMs and SCOTUS

Bet On It
Mar 25, 2026

Key Takeaways

  • The Supreme Court has historically favored broad economic regulation
  • Free speech protections arguably extend to AI-generated text
  • Regulating LLMs may conflict with the First Amendment
  • Legal scholars argue most AI regulation would be unconstitutional
  • A future SCOTUS decision could determine the AI industry's fate

Summary

The post argues that Supreme Court precedent treats AI output, specifically text generated by large language models (LLMs), as protected speech, limiting the government's ability to regulate the AI industry. It traces the Court's shift from the "clear and present danger" test to the "imminent lawless action" standard of Brandenburg v. Ohio, and notes that Miller v. California narrowed the category of unprotected obscenity, further expanding First Amendment protection. Because LLM output is expression, the author contends, most regulatory attempts would be unconstitutional. While acknowledging uncertainty, the piece predicts that when AI reaches the Supreme Court, the justices may still side with government interests, even though most legal scholars expect otherwise.

Pulse Analysis

The Supreme Court's jurisprudence has long granted the government expansive authority over economic activity, as illustrated by the 1942 Wickard v. Filburn decision, which treated even a farmer's wheat grown for personal use as subject to the interstate commerce power. Yet the same Court has steadily broadened First Amendment safeguards, moving from the "clear and present danger" doctrine to the stricter "imminent lawless action" test of Brandenburg v. Ohio and narrowing the category of unprotected obscenity in Miller v. California. This dual trajectory creates a tension for emerging technologies: the state may regulate markets broadly, but it must also respect speech rights that, on the post's argument, encompass AI-generated language.

Large language models (LLMs) produce text that is often indistinguishable from human expression, which on this view places their output within the realm of protected speech. Legal scholars such as Volokh, Lemley, and Henderson argue that most attempts to curb LLMs, whether through content-moderation mandates, licensing schemes, or liability rules, would likely be struck down as violations of the First Amendment. The core of their argument is that the Supreme Court has consistently required a clear, imminent threat before permitting speech restrictions, a threshold that most AI concerns, such as misinformation or bias, do not meet.

The stakes for the AI industry are immense. If courts hold that LLM output is protected speech, regulators will need to craft narrowly tailored, threat-based measures, potentially slowing the rollout of safety standards and raising compliance costs. Conversely, a decision favoring governmental authority could usher in sweeping oversight, reshaping business models and prompting a wave of litigation. Stakeholders, from venture capitalists to tech firms, should monitor pending litigation and legislative proposals, since the eventual Supreme Court ruling will set the legal foundation for AI governance in the United States.
