Together these changes significantly lower the barriers to large‑scale robot learning and evaluation, speeding experiment throughput, improving dataset manageability, and broadening hardware and benchmark interoperability for researchers and builders.
Published: October 24, 2025
Update on GitHub: https://github.com/huggingface/blog/blob/main/lerobot-release-v040.md
Authors
Steven Palma, Michel Aractingi, Pepijn Kooijmans, Caroline Pascal, Jade Choghari, Francesco Capuano, Adil Zouitine, Martino Russi, Thomas Wolf
We’re thrilled to announce a series of significant advancements across LeRobot, designed to make open‑source robot learning more powerful, scalable, and user‑friendly than ever before! From revamped datasets to versatile editing tools, new simulation environments, and a groundbreaking plugin system for hardware, LeRobot is continuously evolving to meet the demands of cutting‑edge embodied AI.
LeRobot v0.4.0 delivers a major upgrade for open‑source robotics, introducing scalable Datasets v3.0, powerful new VLA models like PI0.5 and GR00T N1.5, and a new plugin system for easier hardware integration. The release also adds support for LIBERO and Meta‑World simulations, simplified multi‑GPU training, and a new Hugging Face Robot Learning Course.
LeRobot v0.4.0: Supercharging OSS Robotics Learning
TL;DR
Table of Contents
Datasets: Ready for the Next Wave of Large‑Scale Robot Learning
What’s New in Datasets v3.0?
New Feature: Dataset Editing Tools!
Simulation Environments: Expanding Your Training Grounds
LIBERO Support
Meta‑World Integration
Codebase: Powerful Tools For Everyone
The New Pipeline for Data Processing
Multi‑GPU Training Made Easy
Policies: Unleashing Open‑World Generalization
PI0 and PI0.5
GR00T N1.5
Robots: A New Era of Hardware Integration with the Plugin System
Key Benefits
Reachy 2 Integration
Phone Integration
The Hugging Face Robot Learning Course
Deep Dive: The Modern Robot Learning Tutorial
Final thoughts from the team
We’ve completely overhauled our dataset infrastructure with LeRobotDataset v3.0, featuring a new chunked episode format and streaming capabilities. This is a game‑changer for handling massive datasets like OXE (Open X‑Embodiment) and Droid, bringing unparalleled efficiency and scalability.
Chunked Episodes for Massive Scale: Our new format supports datasets at the OXE‑level (> 400 GB), enabling unprecedented scalability.
Efficient Video Storage + Streaming: Enjoy faster loading times and seamless streaming of video data.
Unified Parquet Metadata: Say goodbye to scattered JSONs! All episode metadata is now stored in unified, structured Parquet files for easier management and access.
Faster Loading & Better Performance: Experience significantly reduced dataset initialization times and more efficient memory usage.
We’ve also provided a conversion script to easily migrate your existing v2.1 datasets to the new v3.0 format, ensuring a smooth transition. Read more about it in our previous blog post: https://huggingface.co/blog/lerobot-datasets-v3. Open‑source robotics keeps leveling up!
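To make the chunked idea concrete, here is a minimal, self‑contained sketch of how a global frame index can be resolved to an (episode, local frame) pair from cumulative episode lengths. This is illustrative plain Python, not the actual LeRobotDataset v3.0 API; all names are hypothetical.

```python
import bisect

# Hypothetical sketch: resolve a global frame index to
# (episode_index, frame_within_episode) using cumulative episode lengths,
# the way a chunked layout avoids touching per-episode metadata files.
episode_lengths = [50, 30, 80]  # frames per episode

# Cumulative end offsets: [50, 80, 160]
cum = []
total = 0
for n in episode_lengths:
    total += n
    cum.append(total)

def locate(frame_idx: int) -> tuple[int, int]:
    """Return (episode_index, frame_within_episode) for a global index."""
    ep = bisect.bisect_right(cum, frame_idx)
    start = cum[ep - 1] if ep > 0 else 0
    return ep, frame_idx - start

print(locate(0))   # episode 0, frame 0
print(locate(75))  # episode 1, frame 25
```

A single sorted lookup like this is what keeps initialization fast even when episode counts grow into the hundreds of thousands.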
Working with LeRobot datasets just got a whole lot easier! We’ve introduced a powerful set of utilities for flexible dataset editing.
With our new lerobot-edit-dataset CLI, you can now:
Delete specific episodes from existing datasets.
Split datasets by fractions or episode indices.
Add or remove features with ease.
Merge multiple datasets into one unified set.
# Merge multiple datasets into a single dataset.
lerobot-edit-dataset \
--repo_id lerobot/pusht_merged \
--operation.type merge \
--operation.repo_ids "['lerobot/pusht_train', 'lerobot/pusht_val']"
# Delete episodes and save to a new dataset (preserves original dataset)
lerobot-edit-dataset \
--repo_id lerobot/pusht \
--new_repo_id lerobot/pusht_after_deletion \
--operation.type delete_episodes \
--operation.episode_indices "[0, 2, 5]"
These tools streamline your workflow, allowing you to curate and optimize your robot datasets like never before. Check out the docs: https://huggingface.co/docs/lerobot/using_dataset_tools for more details!
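Under the hood, a merge has to keep episode indices unique across the combined dataset. The sketch below shows that re‑indexing step in plain Python; it is a conceptual illustration, not the actual lerobot-edit-dataset implementation, and the record fields are hypothetical.

```python
# Illustrative sketch: merging two datasets' episode records means
# offsetting the second dataset's episode indices so they stay unique.
def merge_episode_records(a: list[dict], b: list[dict]) -> list[dict]:
    offset = len(a)
    merged = list(a)
    for ep in b:
        ep = dict(ep)  # copy so the input records are untouched
        ep["episode_index"] += offset
        merged.append(ep)
    return merged

train = [{"episode_index": 0, "length": 50}, {"episode_index": 1, "length": 30}]
val = [{"episode_index": 0, "length": 40}]
merged = merge_episode_records(train, val)
print([ep["episode_index"] for ep in merged])  # [0, 1, 2]
```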
We’re continuously expanding LeRobot’s simulation capabilities to provide richer and more diverse training environments for your robotic policies.
LeRobot now officially supports LIBERO, one of the largest open benchmarks for Vision‑Language‑Action (VLA) policies, boasting over 130 tasks! This is a huge step toward building the go‑to evaluation hub for VLAs, enabling easy integration and a unified setup for evaluating any VLA policy.
Check out the LIBERO dataset: https://huggingface.co/datasets/HuggingFaceVLA/libero and our docs: https://huggingface.co/docs/lerobot/en/libero to get started!
We’ve integrated Meta‑World, a premier benchmark for testing multi‑task and generalization abilities in robotic manipulation, featuring over 50 diverse manipulation tasks. This integration, along with our standardized use of gymnasium ≥ 1.0.0 and mujoco ≥ 3.0.0, ensures deterministic seeding and a robust simulation foundation.
Train your policies with the Meta‑World dataset: https://huggingface.co/datasets/lerobot/metaworld_mt50 today!
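Deterministic seeding means the same seed always reproduces the same episode initializations, which is what makes benchmark results comparable across runs. The toy sketch below illustrates the property with a plain RNG standing in for the simulator; in gymnasium ≥ 1.0.0 the equivalent entry point is env.reset(seed=...).

```python
import random

# Illustrative only: a seeded RNG stands in for the simulator's
# episode-initialization logic. Same seed in, same initial state out.
def sample_initial_state(seed: int) -> list[float]:
    rng = random.Random(seed)
    return [round(rng.uniform(-1.0, 1.0), 6) for _ in range(3)]

# Reproducible: two resets with the same seed give identical states.
assert sample_initial_state(42) == sample_initial_state(42)
print(len(sample_initial_state(42)))  # 3
```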
We’re making robot control more flexible and accessible, enabling new possibilities for data collection and model training.
Getting data from a robot to a model (and back!) is tricky. Raw sensor data, joint positions, and language instructions don’t match what AI models expect. Models need normalized, batched tensors on the right device, while your robot hardware needs specific action commands.
We’re excited to introduce Processors: a new, modular pipeline that acts as a universal translator for your data. Think of it as an assembly line where each ProcessorStep handles one specific job—like normalizing, tokenizing text, or moving data to the GPU.
You can chain these steps together into a powerful pipeline to perfectly manage your data flow. We’ve even created two distinct types to make life easier:
PolicyProcessorPipeline: Built for models. It expertly handles batched tensors for high‑performance training and inference.
RobotProcessorPipeline: Built for hardware. It processes individual data points (like a single observation or action) for real‑time robot control.
# Get environment state
obs = robot.get_observation()
# Rename, Batch, Normalize, Tokenize, Move Device ...
obs_processed = preprocess(obs)
# Run inference
action = model.select_action(obs_processed)
# Unnormalize, Move Device ...
action_processed = postprocess(action)
# Execute action
robot.send_action(action_processed)
This system makes it simple to connect any policy to any robot, ensuring your data is always in the perfect format at every step of the way. Learn more about it in our Introduction to Processors documentation: https://huggingface.co/docs/lerobot/introduction_processors.
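The assembly‑line idea above can be sketched in a few lines of plain Python: a pipeline is an ordered list of steps, each transforming the data and passing it on. The real ProcessorStep and pipeline classes have richer interfaces; the step names and toy normalization here are purely illustrative.

```python
from typing import Callable

# Conceptual sketch of a processor pipeline: each step is a function
# dict -> dict, and the pipeline applies them in order.
class Pipeline:
    def __init__(self, steps: list[Callable[[dict], dict]]):
        self.steps = steps

    def __call__(self, data: dict) -> dict:
        for step in self.steps:
            data = step(data)
        return data

def rename_keys(data: dict) -> dict:
    # Prefix raw keys into an "observation.*" namespace.
    return {f"observation.{k}": v for k, v in data.items()}

def normalize(data: dict) -> dict:
    # Toy min-max normalization of scalars in [0, 100] into [0, 1].
    return {k: v / 100.0 for k, v in data.items()}

preprocess = Pipeline([rename_keys, normalize])
out = preprocess({"joint_pos": 50.0})
print(out)  # {'observation.joint_pos': 0.5}
```

Swapping, reordering, or inserting steps (say, a tokenizer or a device transfer) changes the pipeline without touching the model or the robot code — that composability is the point of the design.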
Training large robot policies just got a lot faster! We’ve integrated Accelerate directly into our training pipeline, making it incredibly simple to scale your experiments across multiple GPUs with just one command:
accelerate launch \
--multi_gpu \
--num_processes=$NUM_GPUS \
$(which lerobot-train) \
--dataset.repo_