How Human Motion Is Fueling the Robot Revolution — by Teaching Robots Like Atlas to Move in Lifelike Ways

TechRadar · Jan 15, 2026

Why It Matters

The ability to teach robots via motion capture and large‑scale simulation shortens development cycles, making humanoid automation viable for manufacturing, logistics, and consumer applications.

By Matt Evans

VR, motion capture, and a lot of AI

When I sat down at my desk to cover the first day of CES 2026, I expected to see a bunch of awkward‑looking, pincer‑armed machines with black screens in lieu of faces. I’d seen the early Boston Dynamics videos of those clumsy robots being stress‑tested (read: bullied), and expected more of the same.

Instead, I was surprised by a video of Boston Dynamics’ Atlas robot that seemed almost human. Atlas began to run, then slowed to a halt, lifted a leg and assumed a martial‑arts‑like stance, spun its claws (terrifying) while gesturing to an inert robot on its left, and squatted down to pick up an imaginary object in a way that looked very natural—despite twisting its ball‑socket legs and torso in ways that would be impossible for a human to replicate.

I was fascinated. As someone who regularly looks at the intersection of human performance and technology, that little run felt like a landmark moment. The Atlas walking demonstration at CES, captured by our own Editor‑at‑Large Lance Ulanoff, still looked a little gawky: a shuffling, half‑squat walk with small steps in case it fell over, rather than a confident stride.

However, watching the rest of the demonstration both thrilled and unnerved me. Atlas was so lifelike while it was moving, and yet the moment the robot came to a halt it became unnaturally still. The contrast between this statue‑like assemblage of metal and plastic and a living, breathing person was jarring.

How on earth did Boston Dynamics get its Atlas robot to copy a movement to this degree of detail?


Motion capture and VR

The CBS news show 60 Minutes aired a segment on Boston Dynamics and Atlas that included a visit to the Hyundai factory where the robot is being tested, and showed how new movements are taught to the robot.

My use of “taught” rather than “programmed” is deliberate: the development team uses AI and repetition to encourage the robot to learn a movement, rather than follow a pattern precisely. The robot’s body is very different from a human body, despite its similar size and proportions, and it must find the most efficient pathway on its own. Boston Dynamics dubs this method kinematics.

Boston Dynamics records patterns of human motion by having a human model perform the movement. The model wears either an Xsens motion‑tracking bodysuit for full‑body movements, or a virtual‑reality headset and hand‑held controllers for more dexterity‑based tasks such as tying a knot. This turns physical motion into digital data, which can be retargeted—mapped from a human body onto a robot body to account for the robot’s different proportions and joints.
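To make that mapping concrete, here's a minimal sketch of what joint‑level retargeting can look like. Everything in it — the joint names, the limit values, the clamping approach — is an illustrative assumption of mine, not Boston Dynamics' actual pipeline:

```python
import numpy as np

# Hypothetical joint limits for a robot whose ranges don't match a human's.
# All names and values here are illustrative assumptions.
ROBOT_JOINT_LIMITS = {
    "hip_pitch":   (-2.6, 2.6),  # radians; wider than a human hip
    "knee_pitch":  (0.0, 2.4),
    "elbow_pitch": (0.0, 2.7),
}

def retarget(human_angles: dict[str, float],
             joint_map: dict[str, str]) -> dict[str, float]:
    """Map one frame of captured human joint angles onto robot joints.

    human_angles: human joint name -> angle (radians) from the mocap suit.
    joint_map:    human joint name -> robot joint name.
    Angles outside the robot's range are clamped to its limits.
    """
    robot_angles = {}
    for human_joint, robot_joint in joint_map.items():
        angle = human_angles[human_joint]
        lo, hi = ROBOT_JOINT_LIMITS[robot_joint]
        robot_angles[robot_joint] = float(np.clip(angle, lo, hi))
    return robot_angles

# Example: one frame of captured motion.
frame = {"hip": 0.4, "knee": 1.1, "elbow": 3.0}
mapping = {"hip": "hip_pitch", "knee": "knee_pitch", "elbow": "elbow_pitch"}
print(retarget(frame, mapping))  # the elbow angle gets clamped to 2.7
```

Real retargeting also has to reconcile differing limb lengths and joint axes; clamping to the robot's joint limits is just the simplest piece of the problem.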

Once captured, the robot learns how to perform the movement in a simulation environment. In the 60 Minutes video you can see thousands of virtual robots performing a basic jumping jack; some fall, stumble, or perform the move incorrectly, but many get it right. Each simulated movement creates more data.
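The footage only shows this at a high level, so the sketch below stands in with a generic random‑search training loop: each round, many simulated robots try slightly different variants of the movement, and the best‑scoring variant is kept. The simulator and reward function here are toy stand‑ins, not the studio's actual reinforcement‑learning setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(policy: np.ndarray) -> float:
    """Stand-in for a physics simulator: score one attempt at the movement.
    A real pipeline would run the robot model through the motion and score
    it against the retargeted human reference trajectory."""
    target = np.linspace(0.0, 1.0, policy.size)  # fake reference motion
    return -float(np.mean((policy - target) ** 2))

def train(n_robots: int = 1000, n_rounds: int = 50, dim: int = 16) -> np.ndarray:
    """Run many simulated robots per round; keep the variant that scores best."""
    best = np.zeros(dim)
    for _ in range(n_rounds):
        # Thousands of virtual robots each try a slightly different variant;
        # some "fall" (score badly), but every attempt is more data.
        candidates = best + 0.1 * rng.standard_normal((n_robots, dim))
        rewards = np.array([simulate(c) for c in candidates])
        best = candidates[rewards.argmax()]
    return best

policy = train()
print("final reward:", simulate(policy))
```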

When the robot’s collective “hive mind” has figured out the best way of doing the movement, this data is rolled out to a whole fleet of robots, allowing the robotics team to train new movements at scale with massive efficiency. In this way, an entire army of robots can be trained to perform a new movement pattern in an afternoon—such as operating a new production line—though more complex patterns (e.g., moving in sync to rise up against their former masters) remain out of reach.

Xsens describes the pipeline in simple terms:

Capture human motion → Retarget to the robot → Train at scale in simulation → Deploy to hardware → Repeat.
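Read as pseudocode, that loop might be organized like the stubs below. Every function name here is hypothetical — it just makes the "repeat" explicit:

```python
def capture_human_motion():
    """Record a movement with a mocap suit or VR controllers (stub)."""
    return "frames"

def retarget_to_robot(frames):
    """Map the human motion onto the robot's joints and proportions (stub)."""
    return "reference_motion"

def train_in_simulation(motion):
    """Refine the motion across many simulated robots (stub)."""
    return "policy"

def deploy_to_hardware(policy):
    """Push the learned behavior out to the physical fleet (stub)."""
    print("deployed:", policy)

for _ in range(3):  # "Repeat": each cycle feeds the next capture session
    frames = capture_human_motion()
    motion = retarget_to_robot(frames)
    policy = train_in_simulation(motion)
    deploy_to_hardware(policy)
```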

Although robots can contort themselves in ways no human could, they aren't yet anywhere close to moving as efficiently, or with as much complexity, as humans. They are certainly getting there, though, and it looks like we're only a few short years from seeing genuinely humanoid robots in homes, workplaces, and many other arenas.
