
Fourth Circuit Publicly Admonishes Lawyer for "Citations to Nonexistent Judicial Opinions"
Key Takeaways
- Fourth Circuit publicly admonishes lawyer for phantom case citations.
- Violations include Local Rule 46(g)(1)(c) and Rule 8.4(d).
- Errors involve three nonexistent opinions across multiple briefs.
- Court warns that AI‑generated “hallucinations” are unacceptable.
- Discipline signals heightened scrutiny of legal AI tools.
Summary
The Fourth Circuit issued a public admonishment to attorney Eric Nwaubani after his appellate briefs cited three nonexistent judicial opinions, a mistake the court linked to possible generative AI use. The panel found his conduct violated Local Rule 46(g)(1)(c) and the broader Rule 8.4(d) prohibition on conduct that interferes with the administration of justice. Nwaubani offered inconsistent explanations, denying AI involvement while attributing the errors to mis‑citing real cases. The decision underscores the judiciary’s growing vigilance over AI‑generated legal research errors.
Pulse Analysis
The Fourth Circuit’s public admonishment of Eric Nwaubani marks a watershed moment in the legal profession’s encounter with generative artificial intelligence. While the court stopped short of confirming AI use, the presence of three phantom citations—“Nationwide Mutual Insurance Co. v. Jackson,” a fictitious CFTC case, and a nonexistent Seventh Circuit decision—exposes how AI‑driven research can produce convincing yet false authority. This incident illustrates the technology’s double‑edged nature: it can accelerate drafting but also generate “hallucinations” that jeopardize the integrity of judicial submissions.
From an ethical standpoint, the decision reinforces Rule 8.4(d) and Local Rule 46(g)(1)(c) as powerful deterrents against careless or deceptive citation practices. Attorneys are now compelled to treat AI outputs as preliminary data, subject to rigorous human verification before filing. Law firms investing in AI tools must develop robust oversight protocols, including cross‑checking citations against primary sources and documenting research methodologies. Failure to do so not only risks disciplinary sanctions but also erodes client confidence and the court’s trust in counsel.
Looking ahead, the case foreshadows a broader regulatory push as courts nationwide grapple with AI’s role in litigation. Expect formal guidelines, mandatory disclosures of AI assistance, and perhaps even dedicated training on AI‑related ethical pitfalls. Legal tech vendors will likely respond by enhancing fact‑checking features and integrating citation‑validation APIs. For practitioners, the message is clear: embracing AI requires a disciplined, transparent workflow that safeguards the accuracy and credibility of every brief submitted to the bench.