Learned Hand’s Shlomo Klapper on Why Courts Are the Next Frontier for Legal AI
Why It Matters
Deploying AI as a neutral research assistant could relieve overloaded courts and broaden access to justice, yet its adoption depends on overcoming judicial skepticism and ensuring unbiased, transparent outputs.
Key Takeaways
- Learned Hand partners with LA Superior Court to pilot AI
- AI reasoning engine acts as judicial “sous‑chef,” not decision‑maker
- Jevons paradox predicts cheaper legal services will increase case volume
- Judges are skeptical; only 24% support AI for adjudication
- Learned Hand’s tool flags gaps and doubts its own outputs to preserve neutrality
Summary
On Law Next, Shlomo Klapper, CEO of Learned Hand, explains his startup’s mission to build a reasoning engine for courts. The company just announced a pilot partnership with the Los Angeles County Superior Court, the nation’s largest trial court, to test AI assistance across the full case lifecycle.
Klapper describes the platform as a “judicial sous‑chef,” a neutral AI clerk that aggregates and structures case law, filings, and evidence so judges can focus on decision‑making rather than drudgery. He ties the need for such tools to the Jevons paradox: as AI drives down legal costs, the volume of filings will surge, overwhelming courts that lack dedicated clerks.
The interview highlights a stark divide in attitudes: while 90% of attorneys are comfortable using generative AI, only about 24% of judges endorse it. Klapper stresses that the system is designed to “doubt its own output,” flagging gaps and refusing to make substantive rulings, thereby preserving judicial legitimacy.
If courts adopt this technology, it could dramatically increase throughput and expand access to justice, but success hinges on earning judges’ trust, addressing bias, and navigating emerging regulatory frameworks.