Two DOGE 28-Year-Olds Terminated $100 Million in Grants Using ChatGPT

Narativ with Zev Shalev, Mar 17, 2026

Why It Matters

This case illustrates the dangers of delegating critical public‑policy decisions to untrained individuals and opaque AI tools, exposing vulnerable communities to systemic discrimination. It serves as a warning that without proper oversight, rapid tech‑driven reforms can undermine democratic processes and erode trust in government institutions.

Key Takeaways

  • Two 28‑year‑olds eliminated roughly $100 million in NEH grants using ChatGPT.
  • They fired 65% of agency staff within weeks.
  • Decisions were coordinated over secret Signal communications, leaving no official records.
  • ChatGPT screened grants against a DEI keyword list that omitted white and heterosexual identifiers.
  • Only private lawsuits exposed the actions; there was no congressional or DOJ probe.

Pulse Analysis

In early 2025 two 28‑year‑old operatives hired under the controversial DOGE initiative entered the National Endowment for the Humanities and, within weeks, terminated roughly $100 million in grant funding. Leveraging ChatGPT to evaluate proposals, they applied a keyword‑based DEI filter that prioritized LGBTQ, racial minority, and immigrant themes while omitting white or heterosexual identifiers. All coordination occurred on the encrypted messaging app Signal, ensuring that no official records survived the rapid purge that also saw 65% of agency staff dismissed.
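The mechanics of such a keyword‑based screen are worth making concrete. The following is a minimal, hypothetical sketch in Python; the keyword list, grant titles, and function names here are invented for illustration, since the actual filter and prompts used at the NEH have not been made public:

```python
# Hypothetical illustration of a naive keyword screen deciding grant outcomes.
# The keyword list and grant titles below are invented; the real NEH filter
# has not been disclosed publicly.

DEI_KEYWORDS = {"diversity", "equity", "inclusion", "lgbtq", "immigrant", "racial"}

def flag_for_termination(abstract: str) -> bool:
    """Return True if any keyword appears in the grant abstract (case-insensitive)."""
    text = abstract.lower()
    return any(keyword in text for keyword in DEI_KEYWORDS)

grants = [
    "Oral histories of immigrant communities in the Midwest",
    "Digitizing colonial-era shipping ledgers",
]
flagged = [g for g in grants if flag_for_termination(g)]
# Only the first grant is flagged; the keyword list alone decides the outcome.
```

Even this toy version shows the core problem: the outcome turns entirely on which words the list's authors chose to include, with no human review of what a project actually does.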

The episode highlights profound risks when untrained personnel deploy generative AI for policy decisions. By feeding a biased checklist into ChatGPT, the operatives allowed the model’s inherent data‑driven prejudices to dictate which cultural projects received support, effectively marginalizing historically underfunded groups and erasing critical scholarship such as the Colfax Massacre documentation. The lack of oversight, combined with secret communications, created an environment where discriminatory outcomes could flourish unchecked, raising urgent questions about AI governance, transparency, and the ethical use of automated tools in public‑sector funding.

Although DOGE’s stated mission was to slash the federal deficit by $2 trillion, the swift grant cuts and staff reductions failed to produce measurable fiscal savings, and the deficit ultimately rose. With no congressional hearings or Department of Justice inquiries, only a series of private lawsuits have illuminated the misconduct. The case underscores the necessity for robust accountability frameworks, clear statutory guidance for AI deployment, and independent oversight to prevent similar abuses in future government reform efforts.

Episode Description

THE DOGE DEPOSITIONS | A Narativ Live Pop-Up Documentary

