
Teaching AI How to Forget
Big Data · AI

The Data Exchange

January 15, 2026 · 43 min

Why It Matters

As data privacy laws tighten and AI accountability becomes a priority, the ability to reliably delete data from models is becoming a legal and ethical necessity. Understanding machine unlearning equips businesses to protect user rights, avoid regulatory penalties, and maintain trust while still leveraging AI’s benefits.

Key Takeaways

  • AI models can't forget once trained, creating risk.
  • Unlearning removes unwanted data and behaviors from model weights.
  • Traditional guardrails and fine‑tuning are external, often bypassable.
  • Hirundo's neurosurgery approach targets internal representations precisely.
  • Behavioral and data unlearning cut bias, jailbreaks, and PII.

Pulse Analysis

Enterprises are eager to embed AI into mission‑critical workflows, yet a fundamental flaw stalls adoption: once a model learns, it cannot easily forget. This permanence fuels bias, hallucinations, prompt‑injection vulnerabilities, and accidental exposure of personally identifiable information (PII). Companies therefore face regulatory pressure and costly risk mitigation, while existing safeguards—guardrails, context engineering, and fine‑tuning—operate only on the model’s inputs and outputs, leaving the underlying knowledge untouched. The result is a fragile trust layer that can be bypassed, limiting ROI on AI investments.

Hirundo tackles the problem from the inside out with what Ben Luria calls "neurosurgery" on neural networks. Their platform first maps where specific concepts, behaviors, or data points reside within a model’s weight space, then surgically removes or reshapes those representations without retraining from scratch. This dual‑track approach distinguishes behavioral unlearning—mitigating traits like bias or jailbreak susceptibility—from data unlearning, which erases sensitive content such as PII or copyrighted material. By operating at the weight level, Hirundo avoids the latency overhead of runtime guardrails and delivers a clean copy of the model that retains core performance while shedding unwanted knowledge.
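The episode keeps the mechanics at a high level, and Hirundo's exact technique is proprietary. As a rough illustration of what weight‑level unlearning can look like, the sketch below uses a common recipe from the machine‑unlearning literature: gradient ascent on a "forget" set balanced against ordinary training on a "retain" set, so the targeted knowledge is suppressed in the weights themselves rather than filtered at runtime. The `unlearn` function, the `alpha` weighting, and the data loaders are illustrative assumptions, not Hirundo's implementation.

```python
# Minimal sketch of weight-level unlearning (NOT Hirundo's proprietary method):
# gradient *ascent* on a "forget" set erases targeted knowledge from the weights,
# while ordinary descent on a "retain" set preserves core capability.
from itertools import cycle

import torch
import torch.nn.functional as F


def unlearn(model, forget_loader, retain_loader, steps=200, lr=1e-5, alpha=0.5):
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    forget_batches, retain_batches = cycle(forget_loader), cycle(retain_loader)
    model.train()
    for _ in range(steps):
        fx, fy = next(forget_batches)
        rx, ry = next(retain_batches)
        # Negative loss on forget examples pushes their traces out of the weights...
        forget_loss = -F.cross_entropy(model(fx), fy)
        # ...while standard loss on retain examples anchors general performance.
        retain_loss = F.cross_entropy(model(rx), ry)
        loss = alpha * forget_loss + (1.0 - alpha) * retain_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    # A "clean copy": same architecture, targeted knowledge suppressed.
    return model
```

In this framing, data unlearning would build the forget set from the specific records to be erased (for example, PII flagged for deletion), while behavioral unlearning would build it from prompts and completions that exhibit the unwanted trait, such as successful jailbreak responses.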

The impact is measurable: internal benchmarks show up to 85% reduction in vulnerability exploitation and comparable drops in bias, while data unlearning can eliminate up to 99% of targeted PII. For enterprises, this translates into lower compliance costs, faster deployment cycles, and a more trustworthy AI stack. For model developers, it adds a post‑training alignment tool that complements reinforcement learning and safety fine‑tuning. As AI regulation tightens and the demand for reliable, enterprise‑grade models grows, unlearning technology positions itself as a critical layer in the next generation of responsible AI.
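The episode does not spell out how those figures are computed. One hedged way to estimate PII removal is to probe the model with prompts that previously elicited the targeted records and compare leak rates before and after unlearning; the `pii_leak_rate` helper, the `generate` callable, and the probe/secret lists below are hypothetical, not the benchmark cited in the episode.

```python
# Hypothetical evaluation sketch: count how often erased strings still surface
# when the model is probed with prompts that used to elicit them.
def pii_leak_rate(generate, probes, secrets):
    """generate: callable(prompt) -> text; probes: prompts that used to elicit PII;
    secrets: the strings that are supposed to have been unlearned."""
    leaks = sum(
        1 for prompt in probes
        if any(secret in generate(prompt) for secret in secrets)
    )
    return leaks / len(probes)

# Comparing pii_leak_rate(base_generate, probes, secrets) with
# pii_leak_rate(unlearned_generate, probes, secrets) estimates the reduction
# (e.g., a drop from 0.40 to 0.004 would correspond to roughly 99% removal).
```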

Episode Description

In this episode, Ben Lorica speaks with Ben Luria, CEO and co-founder of Hirundo, about the emerging necessity of machine unlearning for enterprise AI.

Subscribe to the Gradient Flow Newsletter 📩 https://gradientflow.substack.com/

Subscribe: Apple · Spotify · Overcast · Pocket Casts · AntennaPod · Podcast Addict · Amazon · RSS.

Detailed show notes, with links to many references, can be found on The Data Exchange website.

