Independent web journal on law, technology, KM, and legal research, published by Sabrina I. Pacifici since 1996.

A growing body of academic research shows that AI hallucinations in legal research are both common and systematic, with general-purpose models such as GPT-4 fabricating or mischaracterizing legal authority in more than half of direct legal queries. Specialized, retrieval-augmented tools such as Lexis+ AI and Westlaw AI-Assisted Research reduce hallucination rates to the high teens, but they still err on complex or jurisdiction-specific issues. Six persistent patterns drive error rates across tools and studies: model maturity, sycophancy, jurisdictional gaps, knowledge cutoffs, task complexity, and the confidence paradox. Understanding these patterns is essential for safe AI adoption in law firms.

The article outlines how legal professionals can harness generative AI by treating prompts like legal questions, emphasizing that vague inputs produce useless outputs. It introduces the 7 Ps Framework (persona, product, prompt, purpose, prime, privacy, and polish) as a systematic method for crafting...