Zachary Catanzaro argues that judges consulting ChatGPT for statutory meaning face a fundamental flaw, not merely a reliability issue. Large language models predict token sequences without true semantic comprehension, making computational legal interpretation a category error. He links this flaw to originalist jurisprudence, which already treats meaning as historical usage, and warns that AI‑generated text will contaminate future corpora. The resulting erosion of nuanced, marginal claims threatens the protections of life and liberty.
The legal profession has seen a rapid uptake of large language models, with several judges now turning to ChatGPT for quick statutory explanations. Most commentary frames this as a reliability problem, treating AI outputs as useful so long as they are verified. Zachary Catanzaro's paper flips the script, arguing that the core issue is not accuracy but a fundamental category error: LLMs manipulate symbols without grasping meaning. Because they generate text by predicting statistically likely token sequences, they cannot perform genuine semantic interpretation, and computational legal analysis is therefore flawed in kind, not merely in degree. Relying on such tools risks conflating probabilistic output with legal authority.
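To make that mechanism concrete, consider a minimal sketch of what "predicting token sequences" amounts to. This is a toy bigram model in Python, invented here for illustration and not drawn from Catanzaro's paper; its point is that generation consults only co-occurrence counts, never the meaning of "court" or "statute."

```python
import random
from collections import Counter, defaultdict

# Toy corpus: the model only ever observes which tokens follow which,
# never what any token means.
corpus = ("the court shall construe the statute "
          "the court shall apply the rule").split()

# Count bigram frequencies.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample a continuation in proportion to observed frequency."""
    candidates = bigrams[prev]
    if not candidates:                      # dead end: fall back to any token
        return random.choice(corpus)
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# "Interpretation" here is nothing but statistical continuation.
token, output = "the", ["the"]
for _ in range(8):
    token = next_token(token)
    output.append(token)
print(" ".join(output))   # e.g. "the court shall apply the rule the court shall"
```

A production LLM replaces bigram counts with a neural network over vastly longer contexts, but the generative step is the same in kind: sample the next token from a learned frequency distribution.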
Catanzaro links this flaw to originalist jurisprudence, which already treats meaning as an empirical recovery of historical usage. The progression from dictionaries to corpus databases and now to generative models simply carries originalism's empirical commitments to their logical extreme. The danger compounds as AI-generated content floods the corpora on which future models will train: statistical generators reproduce majority usage, so rare or idiosyncratic usages are progressively smoothed away. That degradation falls hardest on nuanced, marginal claims, precisely those that safeguard life and liberty, eroding the very protections that originalism purports to preserve. The feedback loop threatens the doctrinal stability that courts rely upon.
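The feedback loop can also be made mechanical. The toy simulation below (with invented frequencies, not data from the paper) treats each model generation as trained solely on a finite sample of the previous generation's output. Rare tokens, the statistical analogue of marginal claims, are the most likely to draw zero samples, and once gone they never return.

```python
import random
from collections import Counter

# Hypothetical word frequencies in a legal corpus; "habeas" stands in for a
# rare, marginal usage. These numbers are invented for illustration only.
counts = Counter({"contract": 160, "property": 30, "tax": 8, "habeas": 2})

for generation in range(1, 11):
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    # Each generation "trains" only on a finite sample of the previous
    # generation's output, mimicking a corpus flooded with model text.
    counts = Counter(random.choices(tokens, weights=weights, k=200))
    print(f"gen {generation}: {dict(counts)}")

# Extinction is absorbing: once a rare token draws zero samples, no later
# generation can regenerate it, so marginal usages are the first to vanish.
```

Run the loop a few times and the common tokens persist while the rare one tends to drift toward zero; the asymmetry, not any particular run, is the point.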
The paper’s warning points to the need for clear policy governing AI’s role in judicial interpretation. Courts should treat LLM outputs as heuristic aids, never as authoritative sources, and keep a human interpreter firmly in the loop. Legal scholars, for their part, must develop interpretive tools that embed semantic reasoning rather than rely on statistical pattern matching. With clear standards and sustained interdisciplinary research, the legal system can capture AI’s efficiency while safeguarding the substantive rights that hinge on deep, contextual understanding. Legislators, too, should consider statutory amendments that define AI’s permissible scope.