AI

NVIDIA Releases PersonaPlex-7B-v1: A Real-Time Speech-to-Speech Model Designed for Natural and Full-Duplex Conversations

MarkTechPost • January 18, 2026

Companies Mentioned

NVIDIA (NVDA)


Why It Matters

By collapsing the traditional ASR‑LLM‑TTS pipeline, PersonaPlex cuts latency and supports natural, interruptible conversations, a critical step for next‑generation voice assistants and enterprise contact centers.

Key Takeaways

  • 7B-parameter full-duplex speech model replaces the ASR‑LLM‑TTS pipeline
  • Dual‑stream transformer enables simultaneous listening and speaking
  • Hybrid voice and text prompts control persona and role
  • Trained on 1,200+ hours of real calls plus synthetic data
  • Achieves a 0.95 interruption takeover rate with sub‑second latency

Pulse Analysis

The voice‑assistant market has long been constrained by a three‑stage cascade—speech‑to‑text, language generation, and text‑to‑speech—that introduces latency and cannot handle overlapping dialogue. As consumers demand more fluid, human‑like interactions, developers are seeking architectures that can process and generate audio in real time. PersonaPlex‑7B‑v1 addresses this gap by unifying the entire pipeline into a single transformer, allowing continuous audio streams to be encoded and decoded on the fly, which dramatically reduces response times and supports natural interruptions.

Technically, PersonaPlex builds on the Moshi architecture and its Helium language‑model backbone, employing a Mimi encoder‑decoder pair that converts waveforms into discrete tokens at 24 kHz. The dual‑stream design shares model state between user and agent channels, enabling the system to listen while speaking and adapt instantly to user barge‑ins. Persona control is achieved through a hybrid prompting scheme: a voice prompt defines timbre and prosody, while a text prompt and system prompt encode role, organization, and business constraints. Training blends over 1,200 hours of real Fisher calls with millions of synthetic dialogues generated by large language models, ensuring both conversational naturalness and task adherence.
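The dual‑stream behavior described above can be caricatured in a few lines of code. Everything here — the class names, the frame loop, the one‑token‑per‑frame assumption — is an illustrative sketch, not PersonaPlex's actual interface; the point is only that the agent checks the user channel on every frame, even mid‑utterance:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DuplexAgent:
    """Toy stand-in for a full-duplex agent: it emits one reply token
    per audio frame, but monitors the user channel on every frame and
    yields the floor the moment a barge-in is detected."""
    reply_tokens: List[str]
    spoken: List[str] = field(default_factory=list)
    interrupted_at: Optional[int] = None

    def step(self, frame_idx: int, user_is_speaking: bool) -> Optional[str]:
        # Full duplex: the user stream is checked even while the agent
        # is talking, which is what makes barge-in handling possible.
        if user_is_speaking:
            self.interrupted_at = frame_idx
            return None
        if frame_idx < len(self.reply_tokens):
            tok = self.reply_tokens[frame_idx]
            self.spoken.append(tok)
            return tok
        return None

def run_dialogue(agent: DuplexAgent, user_activity: List[bool]) -> DuplexAgent:
    """Advance both channels frame by frame, stopping at a barge-in."""
    for i, user_is_speaking in enumerate(user_activity):
        agent.step(i, user_is_speaking)
        if agent.interrupted_at is not None:
            break
    return agent
```

Running `DuplexAgent(["Your", "order", "ships", "Friday"])` against user activity `[False, False, True, False]` leaves the agent interrupted at frame 2, having spoken only the first two tokens — the cascaded ASR‑LLM‑TTS pipeline, by contrast, cannot hear the user until its own synthesis finishes.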

The performance gains are evident on FullDuplexBench and the newly introduced ServiceDuplexBench, where PersonaPlex records takeover rates above 0.90 and latency under 0.25 seconds, surpassing many closed‑source competitors. With its open‑source code and MIT‑licensed model weights, the solution is poised to accelerate adoption in customer‑service bots, virtual agents, and real‑time translation services. Enterprises can now deploy voice assistants that handle interruptions and back‑channel cues without the latency penalties of traditional pipelines, unlocking richer, more engaging user experiences.
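For readers curious how a figure like the 0.95 takeover rate might be computed, one plausible scoring function is sketched below. The event format and the latency threshold are assumptions for illustration, not the actual definition used by FullDuplexBench or ServiceDuplexBench:

```python
from typing import List, Optional, Tuple

def takeover_rate(barge_ins: List[Tuple[float, Optional[float]]],
                  max_latency_s: float = 0.5) -> float:
    """Fraction of user barge-ins where the agent yielded the floor
    within `max_latency_s` seconds.

    barge_ins: (barge_in_time_s, agent_stop_time_s) pairs;
    agent_stop_time_s is None if the agent never stopped speaking.
    """
    if not barge_ins:
        return 0.0
    successes = sum(
        1 for start, stop in barge_ins
        if stop is not None and stop - start <= max_latency_s
    )
    return successes / len(barge_ins)
```

For example, with two fast yields, one slow yield, and one missed interruption — `[(1.0, 1.2), (3.0, 3.9), (5.0, None), (7.0, 7.3)]` — the function scores 0.5, since only two of the four barge‑ins are handled within the threshold.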


Read Original Article