Autonomy · AI · Robotics

Differentiable Weights-Varying Nonlinear MPC via Gradient-Based Policy Learning (IEEE RA-L)

TUM AVS • February 10, 2026

Why It Matters

Enabling rapid, data‑efficient online MPC weight adaptation reduces manual tuning and boosts performance across dynamic robotic and autonomous‑vehicle applications.

Key Takeaways

  • Differentiable MPC now supports dynamically varying weight sets.
  • A lightweight neural policy adjusts MPC weights using environment observations.
  • Training achieves a 38× speedup and uses 27× fewer samples than RL.
  • The adaptive controller cuts path‑tracking error by up to 50% versus static weights.
  • Zero‑shot transfer succeeds; two laps suffice to fine‑tune on a new track.

Summary

The paper presents the first differentiable Model Predictive Control (MPC) framework that can vary its cost‑function weights online for constrained nonlinear systems, leveraging gradient‑based policy learning.
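
To make the core idea concrete, here is a minimal sketch (not the paper's code) of a quadratic MPC stage cost whose weights are per‑step inputs rather than hand‑tuned constants. The 1‑D position/velocity state, the weight names, and the example values are illustrative assumptions:

```python
import numpy as np

def stage_cost(x, u, x_ref, w):
    """Quadratic MPC stage cost with adjustable weights.

    w = (w_pos, w_vel, w_input) is supplied at every control step,
    e.g. by a learned policy, instead of being a fixed tuned constant.
    """
    w_pos, w_vel, w_input = w
    pos_err, vel_err = x - x_ref
    return w_pos * pos_err**2 + w_vel * vel_err**2 + w_input * u**2

# On a straight, prioritize velocity tracking; in a tight corner, shift
# weight onto position error -- same cost structure, different weights.
x, u, x_ref = np.array([0.1, 0.5]), 0.2, np.array([0.0, 0.0])
straight = stage_cost(x, u, x_ref, (1.0, 10.0, 0.1))
corner = stage_cost(x, u, x_ref, (10.0, 1.0, 0.1))
```

A static‑weight MPC would commit to one such trade‑off for the whole run; the weight‑varying formulation lets the trade‑off track the situation.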

A lightweight neural network receives real‑time observations—such as reference trajectory curvature and velocity—and outputs MPC weight adjustments at each control step. By back‑propagating a user‑defined loss through a differentiable MPC solver, the authors obtain an end‑to‑end gradient that trains the policy in milliseconds, achieving 38‑times faster convergence and using 27‑times fewer samples than conventional weight‑varying reinforcement learning.
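
The training loop described above can be sketched end to end. The following toy (written with JAX for automatic differentiation; the function names, the one‑step closed‑form "solver", the network sizes, and the 1‑D system are assumptions, not the authors' implementation) shows a small policy network producing positive MPC weights, a differentiable solve, and a task loss whose gradient flows through both:

```python
import jax
import jax.numpy as jnp

def solve_mpc(x, ref, w):
    # Toy one-step MPC: min_u w[0]*(x + u - ref)**2 + w[1]*u**2, solved in
    # closed form so the "solver" is trivially differentiable in w.
    return w[0] * (ref - x) / (w[0] + w[1])

def policy(params, obs):
    # One hidden layer; softplus keeps both cost weights strictly positive.
    h = jnp.tanh(params["W1"] @ obs + params["b1"])
    return jax.nn.softplus(params["W2"] @ h + params["b2"]) + 1e-3

def loss(params, x, ref):
    obs = jnp.array([x, ref])   # observation fed to the policy
    w = policy(params, obs)     # per-step MPC weight adjustment
    u = solve_mpc(x, ref, w)    # differentiable MPC layer
    return (x + u - ref) ** 2   # user-defined task loss

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params = {
    "W1": 0.1 * jax.random.normal(k1, (8, 2)), "b1": jnp.zeros(8),
    "W2": 0.1 * jax.random.normal(k2, (2, 8)), "b2": jnp.zeros(2),
}

# End-to-end gradient: task loss -> MPC solution -> weights -> policy params.
grad_fn = jax.jit(jax.grad(loss))
for _ in range(200):  # plain gradient descent, lr = 0.1
    g = grad_fn(params, 0.0, 1.0)
    params = jax.tree_util.tree_map(lambda p, gp: p - 0.1 * gp, params, g)

final = loss(params, 0.0, 1.0)
```

The real framework differentiates through a constrained nonlinear MPC solver over a full horizon rather than a scalar closed‑form solve, but the gradient path (loss → solver → weights → policy) is the same.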

Experiments on a high‑fidelity simulation of the full‑scale Dallara AV24 race car demonstrate the approach's effectiveness. The adaptive controller reduces lateral and velocity deviation, cutting path‑tracking error by up to 50% compared with static‑weight MPC, and matches or exceeds all benchmark algorithms. Moreover, a policy trained on the Monster track transferred zero‑shot to the unseen Laguna Seca circuit, requiring only two laps of online fine‑tuning to reach the performance of a track‑specific controller.

These results suggest that fast, sample‑efficient online weight adaptation can eliminate the labor‑intensive tuning traditionally required for MPC, opening the door to more responsive autonomous systems in racing, robotics, and any domain where dynamics shift rapidly.

Original Description

How do you make autonomous vehicles drive faster and smarter?
This paper introduces Differentiable Weights-Varying MPC (Diff-WMPC) — a gradient-based learning framework that dynamically tunes MPC cost weights using a neural policy. The result: rapid, sample-efficient training and real-time adaptation in autonomous racing scenarios.
Perfect for researchers in:
• Robot control
• Optimal control & MPC
• Autonomous vehicles
• Learning-based control systems
#MPC #AutonomousVehicles #ControlSystems #MachineLearning