Anthropic’s ban forces developers to abandon a popular open‑source agent framework or pay substantially higher API fees, accelerating migration toward competing AI platforms and highlighting the risks of proprietary token lock‑ins.
Anthropic announced today that it is permanently blocking OpenClaw and any other third-party agents that authenticate with its Claude Code tokens. The move effectively kills the popular open-source framework that let developers plug Claude models such as Opus 4.5 and newer into autonomous agent loops, turning Anthropic's subscription offering into a closed garden.

The company cited runaway costs as the primary driver: a single Claude Max subscriber could generate thousands of dollars in compute by running unbounded agentic loops, while Anthropic's flat-rate pricing model left it exposed to massive token consumption. After silently blocking the traffic on Jan 9 and formally amending its terms of service on Feb 17, Anthropic now classifies such usage as a violation and permits subscription tokens only for its own Claude Code tools.

The ban has already caused widespread disruption. Peter, the creator of OpenClaw, recently joined OpenAI, and users report workflows collapsing overnight. OpenAI, by contrast, explicitly permits its subscription tokens to be used for external API calls, prompting many developers to migrate to its ecosystem. Community sentiment is sharply divided: power users feel betrayed, while others see an opportunity to diversify across OpenAI, Google, or open-source models.

For the broader AI developer community, Anthropic's decision underscores the fragility of building on proprietary token ecosystems. Developers must now weigh the cost of staying inside Anthropic's walled garden against the flexibility, and potentially higher per-token fees, of alternative providers. The episode may accelerate a shift toward more open, interchangeable model stacks, reshaping how AI agents are built and deployed.