AI Needs Accountability. We Can’t Rely on Companies and Governments Alone.

Just Security
Mar 27, 2026

Key Takeaways

  • Anthropic removed its safety pledge despite an earlier public stance.
  • The U.S. military used Anthropic models in Venezuela and Iran operations.
  • Government bans risk overreach and often infringe free expression rights.
  • Meta’s Oversight Board shows independent governance possible, but limited.
  • Sustainable, diversified funding essential for effective AI accountability.

Pulse Analysis

The rapid deployment of advanced AI models has outpaced the mechanisms designed to keep them safe. Anthropic’s quiet removal of a core safety commitment, coupled with its technology’s use in covert military missions, underscores how corporate self‑policing can collapse under competitive pressure. Meanwhile, government actions—ranging from blanket platform bans in Gabon to questionable classifications of AI suppliers as "supply chain risks"—often reflect political expediency rather than nuanced risk management. This dual failure leaves a vacuum where public harms can grow unchecked, prompting calls for a neutral, third‑party oversight structure that answers to societal interests rather than profit or political agendas.

Meta’s Oversight Board offers the most concrete example of such independent governance. Though its members were initially appointed by the company, the board now selects its own new members and issues binding decisions that Meta must publicly address. The board’s recent reversal of a takedown of a Taiwanese anti‑scam video demonstrates how external review can correct corporate overreach. Its effectiveness, however, is limited by funding dependence on Meta, uneven geographic representation, and recommendations on broader industry practice that remain non‑binding. Strengthening this model would require diversified, industry‑wide public‑interest funds; broader professional diversity, including engineers and public‑health experts; and legal authority to enforce its rulings across multiple platforms.

A resilient accountability ecosystem would layer independent oversight boards, civil‑society watchdogs, academic research labs, and user councils, each with clear mandates and transparent reporting. Funding could mirror universal service funds, ensuring no single corporation dictates the agenda. Embedding oversight outcomes into ESG disclosures would give investors material insight into societal risk, reducing share‑price volatility and incentivizing responsible behavior. By institutionalizing a public‑interest layer, the AI sector can align innovation with democratic values, safeguard human rights, and restore confidence among regulators, investors, and the broader public.
