"All Lawful Use": Much More Than You Wanted To Know

"All Lawful Use": Much More Than You Wanted To Know

Astral Codex Ten
Astral Codex TenMar 1, 2026

Key Takeaways

  • The Department of War (DoW) labeled Anthropic a supply‑chain risk over its AI‑use restrictions.
  • OpenAI signed a vague “all lawful use” contract with the DoW.
  • Current US surveillance laws contain loopholes that AI can exploit at scale.
  • Autonomous‑weapons policy lacks concrete human‑oversight requirements.
  • Industry observers fear weak safeguards could enable mass domestic monitoring.

Pulse Analysis

The Department of War’s recent move to label Anthropic a supply‑chain risk underscores growing friction between defense agencies and AI developers. Anthropic’s refusal to enable mass surveillance or autonomous weaponry pushed the DoW toward OpenAI, which offered a contract framed around “all lawful use.” While the language sounds reassuring, it provides little concrete protection, leaving the extent of permissible applications open to interpretation. This dynamic illustrates how government procurement pressure can push AI firms into gray‑area agreements that may compromise their ethical commitments and public trust.

Legal analysts point out that the United States’ surveillance framework is riddled with loopholes that AI can exploit at scale. Existing statutes permit the incidental collection of vast citizen data, and the government can legally query that data in a targeted manner. By deploying large‑language models, the DoW could automate the analysis of these troves, creating detailed “loyalty” scores without new legislation. The phrase “all lawful use” therefore does not guarantee protection against mass domestic surveillance; it merely aligns with current, often ambiguous, legal standards that can be reinterpreted as technology evolves.

Autonomous weapons present a parallel challenge. Department of Defense Directive 3000.09 calls for “appropriate levels of human judgment” over the use of force but offers no precise definition, granting the DoW latitude to expand AI‑driven lethality. OpenAI’s contract does not lock in specific oversight mechanisms, meaning future policy shifts could permit fully autonomous systems powered by cloud‑based models. For the AI industry, this signals a need for stronger contractual clauses, transparent safety stacks, and perhaps legislative action to ensure that rapid AI integration into defense does not outpace ethical and legal safeguards.

"All Lawful Use": Much More Than You Wanted To Know
