NDSS 2025 – TrajDeleter: Enabling Trajectory Forgetting In Offline Reinforcement Learning Agents
Why It Matters
As offline RL expands into sensitive domains like healthcare and energy, the ability to quickly erase the influence of erroneous or privacy‑sensitive data becomes crucial for compliance and safety. TrajDeleter offers a practical, efficient solution that mitigates risks without the costly overhead of retraining from scratch, making responsible AI deployment more feasible.
Summary
The episode discusses TrajDeleter, a novel method for trajectory unlearning in offline reinforcement learning (RL) agents, presented by researchers from the University of Virginia and the Chinese Academy of Sciences. TrajDeleter trains agents to degrade their performance on states from specific, unwanted trajectories while preserving overall competence on the remaining data, and introduces TrajAuditor to verify that forgetting actually succeeded. Experiments across six offline RL algorithms and three tasks show the approach removes about 94.8% of targeted trajectories using only 1.5% of the time required for full retraining, while the unlearned agents still perform well when interacting with the environment.
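The core idea described above, degrading performance on the forget set while preserving it on the retain set, can be sketched as a two-term objective on a toy linear value model. Everything here is a hypothetical illustration, not the paper's actual algorithm or loss: the function names (`mse_loss`, `unlearn_step`), the linear model, and the weighting `lam` are all assumptions made for the sketch.

```python
import numpy as np

def mse_loss(w, X, y):
    """Mean-squared error of a linear value predictor X @ w against targets y."""
    r = X @ w - y
    return float(np.mean(r ** 2))

def unlearn_step(w, X_forget, y_forget, X_retain, y_retain, lr=0.05, lam=0.1):
    """One step of a hypothetical two-term unlearning update:
    descend on the retain-set loss, ascend (weighted by lam) on the forget-set loss."""
    def grad(X, y):
        return 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * (grad(X_retain, y_retain) - lam * grad(X_forget, y_forget))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_keep = np.array([1.0, -2.0, 0.5])    # behavior behind trajectories to keep
w_bad = np.array([-1.0, 2.0, -0.5])    # behavior behind trajectories to erase
X_forget, X_retain = X[:40], X[40:]
y_forget = X_forget @ w_bad
y_retain = X_retain @ w_keep

# "Trained" agent: least-squares fit on ALL data, unwanted trajectories included.
X_all = np.vstack([X_forget, X_retain])
y_all = np.concatenate([y_forget, y_retain])
w, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)

before_f = mse_loss(w, X_forget, y_forget)
before_r = mse_loss(w, X_retain, y_retain)
for _ in range(300):
    w = unlearn_step(w, X_forget, y_forget, X_retain, y_retain)
after_f = mse_loss(w, X_forget, y_forget)
after_r = mse_loss(w, X_retain, y_retain)

# Forget-set error grows while retain-set error shrinks.
print(after_f > before_f and after_r < before_r)
```

The small forget weight `lam` matters: ascending too aggressively on the forget set would also destroy performance on the retained data, which mirrors the balance the method must strike between forgetting and preserved competence.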