Data Science Is Quickly Shifting What's Best and Practicable: What Litigators and Judges Interpreting Rule 23 Should Know

Legal Tech Monitor · Apr 10, 2026

Key Takeaways

  • AI-driven sampling can reduce notice costs while maintaining statistical validity (a worked example follows this list)
  • Courts increasingly require transparency of algorithms used in class certification
  • Improper data handling may trigger discovery disputes and appellate reversal
  • Practitioners should adopt documented validation protocols for predictive models
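
The first takeaway is, at bottom, arithmetic: the margin of error of a sampled estimate shrinks with the square root of the sample size, so a modest random sample of class records can bound a reachability estimate tightly. The sketch below is a minimal illustration; the 30% reachable-by-email rate and the sample sizes are assumptions, not figures from any case.

```python
# A minimal sketch of the statistical-validity point above: a hypothetical
# random sample of class-member records is used to estimate the share
# reachable by a given notice channel, with a 95% margin of error.
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% normal-approximation margin of error for a sampled proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

p_hat = 0.30  # assumed sampled share of members reachable by email
for n in (400, 1_600, 6_400):
    moe = margin_of_error(p_hat, n)
    print(f"n={n:>5}: {p_hat:.0%} ± {moe:.1%}")
# Quadrupling the sample roughly halves the margin of error, so sampling
# can keep notice costs down without sacrificing a defensible estimate.
```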

Pulse Analysis

Federal Rule 23(c)(2)(B) requires “the best notice that is practicable under the circumstances,” a standard that has long hinged on the court’s judgment of how effectively a class can be identified, notified, and compensated. In the past decade, the explosion of big-data platforms, machine-learning classifiers, and natural-language processing has given litigators unprecedented tools to model class demographics, predict response rates, and optimize notice distribution. These analytical advances enable more precise targeting of potential class members, potentially lowering costs and increasing the likelihood that notice reaches its intended audience. As a result, judges are now asked to evaluate sophisticated statistical evidence rather than rely solely on traditional sampling methods.
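
To make the “predict response rates” step concrete, the following is a minimal sketch in Python. The dataset, feature names, and logistic-regression choice are all illustrative assumptions; they stand in for whatever historical campaign data and model a litigant might actually have.

```python
# A minimal sketch assuming a hypothetical dataset of past notice campaigns
# with per-recipient features and a binary "responded" label. Nothing here
# is Rule 23 doctrine; it only illustrates the kind of response-rate model
# the text describes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical features: [age_bracket, is_email_channel, engagement_score]
X = np.column_stack([
    rng.integers(1, 6, n),   # age bracket 1-5
    rng.integers(0, 2, n),   # 1 = email, 0 = postal
    rng.random(n),           # prior engagement score in [0, 1]
])
# Synthetic labels standing in for observed responses
p = 1 / (1 + np.exp(-(-2.0 + 0.8 * X[:, 1] + 2.0 * X[:, 2])))
y = rng.random(n) < p

model = LogisticRegression().fit(X, y)

# Score prospective class members and direct notice spend toward those
# most likely to respond -- the "precise targeting" described above.
prospects = np.array([[3, 1, 0.9], [2, 0, 0.1]])
print(model.predict_proba(prospects)[:, 1])  # estimated response rates
```

The point is not the particular model but the output: per-member response probabilities that can justify how a notice budget is allocated across channels and recipients.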

However, the integration of advanced analytics brings new procedural pitfalls. Courts have begun demanding full disclosure of the algorithms, data sources, and validation techniques behind any model offered to show that a proposed notice plan is the best practicable. Failure to provide a transparent methodology can trigger extensive discovery battles, expert challenges, and even appellate reversal of a certified class. Moreover, bias in the training data or overfitting can produce misleading predictions, undermining the fairness of the notice process and exposing counsel to ethical scrutiny under the ABA Model Rules.

To navigate this evolving landscape, practitioners should adopt a disciplined validation framework: split data into training and hold‑out sets, conduct sensitivity analyses, and document error rates alongside confidence intervals. Engaging independent data scientists to peer‑review models can bolster credibility and satisfy judicial expectations for methodological rigor. Looking ahead, the courts are likely to codify standards for algorithmic transparency, making robust data‑science practices not just advantageous but essential for successful class actions under Rule 23.
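
Below is a minimal sketch of that validation framework, again on synthetic data. The split ratio, the normal-approximation confidence interval, and the three-seed sensitivity loop are illustrative choices, not a court-approved protocol.

```python
# Sketch of the documented validation workflow described above: train on
# one split, report hold-out error with a 95% confidence interval, then
# rerun under different splits as a simple sensitivity analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical response data: three synthetic features, binary label.
rng = np.random.default_rng(0)
X = rng.random((5_000, 3))
y = rng.random(5_000) < 1 / (1 + np.exp(-(3.0 * X[:, 2] - 1.5)))

def holdout_error(X, y, seed):
    """Hold-out error rate with a 95% normal-approximation CI."""
    X_tr, X_ho, y_tr, y_ho = train_test_split(
        X, y, test_size=0.25, random_state=seed)
    model = LogisticRegression().fit(X_tr, y_tr)
    err = float(np.mean(model.predict(X_ho) != y_ho))
    half = 1.96 * np.sqrt(err * (1 - err) / len(y_ho))
    return err, max(err - half, 0.0), min(err + half, 1.0)

# Sensitivity analysis: confirm the documented error rate is stable
# across random partitions, not an artifact of one lucky split.
for seed in (1, 2, 3):
    err, lo, hi = holdout_error(X, y, seed)
    print(f"seed={seed}: error={err:.3f}, 95% CI=[{lo:.3f}, {hi:.3f}]")
```

These printed error rates and intervals are exactly the kind of artifact counsel can document and produce when a court probes the model’s methodology.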
