AI Voice Fraud Is Exploiting Contact Centers

TechRadar, Nov 19, 2025

Companies Mentioned

Gartner

Why It Matters

The rise of AI voice fraud threatens the security of billions of high‑value transactions handled through contact centers, forcing firms to overhaul outdated authentication methods or risk significant financial and reputational damage.

Summary

AI-generated voice cloning has moved from experimental demos to full production, with roughly one in three U.S. consumers reporting synthetic‑voice fraud in Q4 2024 and many incurring losses. Fraudsters now combine breached personal data with low‑cost text‑to‑speech and automated dialing to bypass legacy contact‑center defenses such as knowledge‑based authentication and basic voice‑matching, exploiting the fact that phone service remains a preferred channel for high‑value transactions. Most call centers still rely on first‑generation verification tools that lack liveness detection or synthetic‑speech analysis, making them vulnerable to large‑scale, AI‑driven attacks. Experts recommend a layered, adaptive authentication model that integrates real‑time synthetic‑voice detection, device and network analytics, and step‑up verification to contain the threat while preserving the convenience of voice interactions.
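The layered, adaptive model described above can be sketched as a simple risk-scoring gate. This is a minimal illustration, not any vendor's actual system: the signal names, weights, and thresholds are all hypothetical, standing in for real synthetic-voice detectors and device/network analytics feeds.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    """Risk signals gathered during a live call (hypothetical names)."""
    synthetic_voice_score: float  # 0.0-1.0 from a deepfake/liveness detector
    device_risk: float            # 0.0-1.0 from device and network analytics
    transaction_value: float      # dollar value of the requested action

def authenticate(signals: CallSignals) -> str:
    """Return 'allow', 'step_up', or 'block' by layering risk signals.

    Weights and thresholds are illustrative only.
    """
    risk = 0.6 * signals.synthetic_voice_score + 0.4 * signals.device_risk
    if risk >= 0.8:
        return "block"      # strong evidence of a synthetic voice
    if risk >= 0.4 or signals.transaction_value > 10_000:
        return "step_up"    # escalate, e.g. to an out-of-band verification
    return "allow"          # low risk: keep the call frictionless
```

The key design point matches the experts' recommendation: no single signal decides the outcome, and high-value transactions trigger step-up verification even when voice signals look clean, so convenience is preserved only for low-risk calls.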
