AI

NVIDIA’s AI Finally Solved Walking In Games

Two Minute Papers • December 21, 2025

Why It Matters

It removes the need for costly hand‑crafted animation pipelines and produces high‑fidelity, physically grounded pedestrian simulations, which can improve both game realism and the quality of autonomous‑vehicle safety testing.

Summary

The video spotlights a breakthrough from NVIDIA that replaces traditional capsule‑based NPC movement with fully physically simulated humanoids. By coupling a diffusion‑based path planner called TRACE with a joint‑level motion controller dubbed PACER, the researchers enable agents to generate and follow realistic walking trajectories in real time, eliminating the classic “moon‑walking” foot‑slip bugs that plague many games.
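
The division of labor described above, where a high‑level planner proposes a path and a low‑level controller tracks it joint by joint, can be sketched roughly as follows. This is an illustrative stand‑in, not the paper’s code: `plan_trajectory` replaces the diffusion planner with straight‑line waypoints, and `track_waypoint` replaces the learned policy with a fake mapping from desired heading to joint targets.

```python
import numpy as np

def plan_trajectory(start, goal, num_waypoints=10):
    """Stand-in for the high-level planner: straight-line waypoints.
    The real planner is a trajectory diffusion model."""
    ts = np.linspace(0.0, 1.0, num_waypoints)[:, None]
    return start + ts * (goal - start)  # shape: (num_waypoints, 2)

def track_waypoint(root_pos, waypoint, num_joints=20):
    """Stand-in for the low-level controller: returns one target per
    joint (the summary mentions roughly 20 motor-driven joints).
    A real controller is a policy network conditioned on full state."""
    heading = waypoint - root_pos
    heading = heading / (np.linalg.norm(heading) + 1e-8)
    # Fake joint targets derived only from the desired heading.
    return np.tanh(np.outer(np.ones(num_joints), heading)).mean(axis=1)

start, goal = np.array([0.0, 0.0]), np.array([5.0, 3.0])
path = plan_trajectory(start, goal)        # planner output
targets = track_waypoint(start, path[1])   # controller output
print(path.shape, targets.shape)
```

The key design point is the interface: the planner only reasons about 2D routes, while the controller owns the physics of staying upright while following them.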

Key technical insights include the use of roughly 20 motor‑driven joints per character, a diffusion model that denoises noisy path predictions into smooth, anticipatory routes, and an adversarial reinforcement‑learning loop where a discriminator judges the naturalness of each step. Over three days, more than 2,000 parallel humanoids performed billions of attempts, learning to balance, swing arms, and adapt to stairs, slopes, and uneven terrain without any handcrafted animation clips.
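
The adversarial loop can be illustrated with a toy version of the discriminator‑derived reward (the general pattern popularized by adversarial motion priors). Everything here is an assumption for illustration: the discriminator is a random linear model and the transition features are random vectors, whereas the real system uses a trained network scoring simulated transitions against reference motion data.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 1)) * 0.1  # toy discriminator weights

def discriminator(transition):
    """Toy linear discriminator over a (state, next_state) feature
    vector; returns the probability the transition looks 'real'."""
    logit = transition @ W
    return 1.0 / (1.0 + np.exp(-logit))

def style_reward(transition):
    """Naturalness reward: -log(1 - D), large when the discriminator
    believes the step came from real reference motion."""
    d = discriminator(transition)
    return -np.log(np.clip(1.0 - d, 1e-6, 1.0))

# One fake transition: concatenated state and next-state features.
transition = rng.normal(size=(1, 64))
r = style_reward(transition)
print(r.shape)
```

In training, this reward is added to the task reward (e.g., following the planned path), so the thousands of parallel humanoids are pushed to walk both correctly and naturally at the same time.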

The demo is peppered with vivid examples: agents shouting “holy crap, help me!” when a foot slips, crowds that organically weave around obstacles instead of following rigid “if neighbor is close, turn left” rules, and the ability to prompt the diffusion model to make groups walk side‑by‑side. The system even handles diverse body types—short, tall, plump—without extra tuning, and it can generate messy pedestrian behavior useful for testing autonomous‑vehicle algorithms.
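
Prompting the planner to produce group behavior like walking side by side works by steering the sampler at test time. A minimal sketch, assuming a stubbed denoiser: at each denoising step, the two agents’ trajectories are nudged down the gradient of a hand‑written “keep a fixed lateral gap” cost. The denoiser, step sizes, and cost are all illustrative assumptions, not the paper’s guidance function.

```python
import numpy as np

def denoise_step(traj, noise_scale):
    """Stub for a learned denoiser: simply shrinks the sample.
    A real model predicts the clean trajectory from a noisy one."""
    return traj * (1.0 - noise_scale)

def side_by_side_cost_grad(trajs, desired_gap=1.0):
    """Gradient of 0.5 * (gap - desired_gap)^2 w.r.t. each agent's
    lateral (y) coordinate, where gap = y_agent1 - y_agent0."""
    gap = trajs[1, :, 1] - trajs[0, :, 1]
    err = gap - desired_gap
    grad = np.zeros_like(trajs)
    grad[0, :, 1] = -err
    grad[1, :, 1] = err
    return grad

rng = np.random.default_rng(1)
trajs = rng.normal(size=(2, 10, 2))  # 2 agents, 10 steps, (x, y)
for _ in range(20):
    trajs = denoise_step(trajs, 0.05)
    trajs -= 0.2 * side_by_side_cost_grad(trajs)  # test-time guidance
gap = np.abs(trajs[1, :, 1] - trajs[0, :, 1]).mean()
print(round(gap, 2))
```

The appeal of this scheme is that the diffusion model never needs retraining: new crowd behaviors come from swapping in a different guidance cost at sampling time.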

Implications are twofold. For game developers, the technology promises a dramatic reduction in animation labor while delivering more lifelike crowds that react naturally to complex geometry. For the broader AI and automotive sectors, the open‑source framework provides a scalable way to populate virtual cities with realistic, physics‑grounded pedestrians, improving the fidelity of simulation‑based safety testing for self‑driving cars.

Original Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambda.ai/papers
Using DeepSeek on Lambda:
https://lambda.ai/inference-models/deepseek-r1
📝 The paper is available here:
https://research.nvidia.com/labs/toronto-ai/trace-pace/
📝 My paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD
Or this is the orig. Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Benji Rabhan, B Shang, Christian Ahlin, Fred R, Gordon Child, Juan Benet, Michael Tedder, Owen Skarpness, Richard Sundvall, Steef, Taras Bobrovytsky, Tybie Fitzhugh, Ueli Gallizzi
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers
My research: https://cg.tuwien.ac.at/~zsolnai/
X/Twitter: https://twitter.com/twominutepapers
Thumbnail design: Felícia Zsolnai-Fehér - http://felicia.hu
#nvidia
