An AI Model Trained on Prison Phone Calls Now Looks for Planned Crimes in Those Calls

MIT Technology Review · Dec 1, 2025

Why It Matters

The deployment merges advanced AI with correctional monitoring, potentially boosting public safety while intensifying debates over inmate privacy and the cost burden on families.

Key Takeaways

  • AI scans inmate calls for crime planning.
  • FCC allows fees to fund surveillance tools.
  • Civil rights groups warn of invasive monitoring.
  • Pilot aims to disrupt trafficking, gang activity.
  • Inmates lack consent for data usage.

Pulse Analysis

The correctional industry has long relied on blanket recording of inmate communications to deter contraband and coordinate investigations. Securus Technologies, the dominant telecom provider in U.S. detention facilities, leveraged its massive archive to train a proprietary large language model capable of detecting linguistic cues that indicate premeditated wrongdoing. By automating the initial review, the system promises to surface threats faster than human analysts, addressing chronic staffing shortages and the growing volume of digital correspondence.

Regulatory momentum has shifted in favor of such surveillance tools. After a 2024 FCC order capped the fees that could be passed to incarcerated callers, the agency recently voted to raise those caps and explicitly allow carriers to charge for security‑related services, including AI‑driven monitoring. This policy reversal, championed by FCC Chairman Brendan Carr, provides a revenue stream that could subsidize the costly development and deployment of predictive analytics in prisons, while also sparking litigation from sheriffs’ associations and state attorneys general concerned about budget impacts.

Nevertheless, the technology raises profound civil‑liberties questions. Inmates are notified that calls are recorded, but they are not told that their conversations are used to train predictive AI, a practice critics say amounts to coerced consent. Advocacy groups and the ACLU warn that unchecked algorithmic surveillance could erode attorney‑client privilege and amplify racial disparities in the criminal‑justice system. As courts grapple with the balance between security and privacy, the industry faces a pivotal moment: whether to embed transparent oversight into AI tools or risk a backlash that could reshape correctional communication policies nationwide.

