New Jersey to Use AI to Score Standardized Writing Tests

GovTech — Education (K-12), Mar 9, 2026

Why It Matters

AI‑driven scoring promises cost efficiencies and faster results, but errors could affect high‑stakes outcomes for millions of students, prompting scrutiny from educators and policymakers.

Key Takeaways

  • AI will grade most essay responses on NJ adaptive tests
  • Human reviewers will check flagged or borderline AI scores
  • 25% of responses expected to be hand‑scored
  • Teachers' union warns of potential AI grading errors
  • Other states report AI scoring inaccuracies and rescoring costs

Pulse Analysis

The adoption of artificial‑intelligence scoring in New Jersey’s new adaptive assessments reflects a growing trend among states to leverage technology for efficiency gains. By training the algorithm on human‑rated practice essays, the Department of Education aims to deliver consistent, rapid scores while reserving human oversight for atypical or borderline responses. This hybrid model seeks to balance the speed of automated grading with the reliability of expert review, a compromise shaped by past experiences where fully automated systems produced notable discrepancies.

Stakeholders are closely watching the implementation because the stakes are high: the tests affect promotion, graduation eligibility, and school accountability metrics for roughly 1.3 million public‑school students. Teachers’ unions and advocacy groups have voiced concerns that AI may misinterpret unconventional writing styles or penalize students for nuances a machine cannot capture. The policy’s design—routing about a quarter of responses to human scorers—mirrors industry best practices that aim to keep error rates low while still reaping financial benefits from reduced labor costs.
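The routing design described above can be sketched in code. The snippet below is a minimal, hypothetical illustration of a hybrid scoring pipeline: an automated scorer assigns each essay a rubric score plus a confidence value, and responses that are low-confidence or sit at a consequential score boundary are flagged for human review. All names, thresholds, and data structures here are assumptions for illustration only; they do not describe Cambium Assessment's actual system.

```python
# Hypothetical sketch of confidence-based routing in a hybrid
# AI/human essay-scoring pipeline. Thresholds and field names are
# illustrative assumptions, not the vendor's real implementation.
from dataclasses import dataclass


@dataclass
class ScoredResponse:
    essay_id: str
    ai_score: int      # e.g., a rubric score from 1 to 6
    confidence: float  # model's self-reported confidence, 0.0-1.0


def needs_human_review(r: ScoredResponse,
                       min_confidence: float = 0.85,
                       borderline_scores: frozenset = frozenset({3, 4})) -> bool:
    """Flag a response for hand-scoring if the model is unsure,
    or if the score falls on a high-stakes boundary."""
    return r.confidence < min_confidence or r.ai_score in borderline_scores


responses = [
    ScoredResponse("e1", ai_score=5, confidence=0.95),  # confident, clear score
    ScoredResponse("e2", ai_score=4, confidence=0.97),  # borderline score
    ScoredResponse("e3", ai_score=2, confidence=0.60),  # low confidence
]

flagged = [r.essay_id for r in responses if needs_human_review(r)]
print(flagged)  # → ['e2', 'e3']
```

Under this kind of scheme, the fraction of responses routed to humans is a tunable policy knob; the roughly 25% hand-scoring rate the article cites would emerge from wherever the state and vendor set the confidence and borderline thresholds.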

New Jersey’s contract with Cambium Assessment, valued at $58.7 million, also underscores the commercial dimension of AI in education. While the state emphasizes that the system does not use generative AI, critics point to similar controversies in Massachusetts and Texas, where rescoring revealed significant score adjustments after human review. As other jurisdictions evaluate AI‑based scoring, New Jersey’s rollout will serve as a real‑world case study on how effectively AI can augment, rather than replace, human judgment in high‑stakes educational assessment.
