
Turns Out The DOGE Bros Who Killed Humanities Grants Are Kinda Sensitive About It
Why It Matters
The case exposes how unchecked AI tools can reshape public‑funded research, raising legal, ethical, and policy questions for the humanities sector and beyond.
Key Takeaways
- ChatGPT used to label grants as DEI, triggering cancellations
- 1,477 NEH grants terminated, only 42 retained
- Depositions reveal operatives lacked clear DEI definition
- Government sought court order to remove deposition videos
- Lack of due process raises legal and ethical concerns
Pulse Analysis
The NEH’s rapid grant purge illustrates a growing tension between technology and public policy. By feeding terse grant descriptions into ChatGPT, two inexperienced officials generated binary DEI verdicts that guided the cancellation of hundreds of projects, from Indigenous language archives to Black newspaper digitization. Relying on a language model in this way sidestepped traditional peer review and merit-based assessment, effectively turning a keyword filter into a tool for aligning funding with a partisan agenda. The episode underscores the need for transparent AI governance frameworks when federal agencies adopt automated decision-making tools.
Beyond the procedural shortcuts, the depositions reveal a deeper cultural clash. Fox and Cavanaugh could not define DEI without deferring to a vague executive order, yet they applied the label inconsistently, branding documentaries on anti-Black history as non-beneficial while praising gender-focused Holocaust narratives. Their keyword list omitted neutral terms like “white” or “heterosexual,” suggesting an ideologically driven filter rather than an objective metric. This selective application threatens the credibility of grantmaking institutions and may deter scholars whose work falls outside narrow political definitions.
The government’s swift request to seal the deposition videos adds another layer of controversy. Citing harassment and death threats, officials sought a court order to suppress public scrutiny, raising First Amendment concerns and highlighting the opacity of internal decision‑making. The legal battle sets a precedent for how agencies handle discovery in politically charged lawsuits. As AI tools become more embedded in public administration, policymakers must balance efficiency with accountability, ensuring that funding decisions remain evidence‑based, equitable, and subject to robust oversight.