“Hallucinations” By West and Lexis AI?  A Cautionary Study and Cautions About the Study

beSpacific, Apr 15, 2026

Key Takeaways

  • U.S. v. Farris flagged errors in a brief prepared with Westlaw CoCounsel
  • Study found ~33% of Westlaw responses hallucinated
  • Lexis+ AI also generated misleading legal citations
  • Verification of AI output is now an ethical mandate
  • Older AI versions exhibit higher hallucination rates

Pulse Analysis

The legal sector has rapidly embraced generative AI tools such as Westlaw CoCounsel and Lexis+ AI to accelerate research, draft memoranda, and streamline case preparation. Proponents tout speed and cost savings, but the technology’s underlying language models can fabricate citations, statutes, or factual details, a phenomenon known as “hallucination.” As firms integrate these systems into billable work, the line between draft assistance and final advice blurs, raising questions about the reliability of AI‑generated content.

A 2024 academic investigation, peer‑reviewed and published in 2025, systematically evaluated the output of the two market‑leading platforms. Researchers submitted hundreds of real‑world legal queries and measured factual accuracy, discovering that roughly one‑third of Westlaw’s responses contained fabricated or inaccurate information, while Lexis+ produced a comparable, though slightly lower, error rate. The Sixth Circuit’s recent U.S. v. Farris ruling illustrated the stakes: a brief prepared with CoCounsel contained material errors that the court deemed significant enough to note in its opinion, signaling that courts are paying attention to AI‑driven mistakes.

For law firms, the findings translate into a clear operational imperative: AI tools must be treated as research assistants, not autonomous advisors. Implementing a verification workflow—cross‑checking citations, statutes, and factual statements against primary sources—protects against malpractice claims and upholds ethical standards. Moreover, firms should monitor vendor updates, as newer model versions may reduce hallucination rates, but the risk never disappears entirely. By embedding rigorous review processes, the legal industry can reap AI’s efficiency benefits while safeguarding the integrity of its work product.
