On April 4, 2026, at 12:00 PM Pacific, Anthropic flipped a switch. Roughly 135,000 OpenClaw instances — autonomous AI agents that had been running on Claude Pro and Max subscription credits — went dark.1 Two days later, Google did the same thing to its AI Ultra subscribers, banning accounts without warning.2 Within a week, two of the three largest AI model providers had locked down third-party access to their subscription tiers. The era of flat-rate AI compute, such as it was, ended not with a policy paper but with an OAuth error.
The community reaction split along predictable lines. OpenClaw's creator called it a "betrayal of open source."3 Users reported cost increases of 10 to 50 times their previous monthly spend.4 Hacker News produced 400+ comments debating whether Anthropic was protecting its infrastructure or strangling its ecosystem. (For a detailed explainer on what OpenClaw is and how it works, Wikipedia has a thorough overview.)
But underneath the outrage is a math problem that no amount of open-source idealism can solve. And the math is not close.
The Arbitrage
Here's what was actually happening. A Claude Max subscription costs $200 per month. That subscription was designed for a human developer using Claude Code — Anthropic's own coding assistant — at a human pace, with built-in optimizations that keep compute costs low. Anthropic's prompt caching system, for instance, reduces the cost of repeated context by 90%, and its advanced tool-use optimization cuts token consumption by up to 85%.5,6
OpenClaw and similar third-party harnesses largely bypassed that caching layer.7 They ran automated agent workflows — not human-paced interactions but continuous, high-throughput loops. A single OpenClaw instance could burn through $1,000 to $5,000 per day in API-equivalent compute, all charged against a $200 monthly subscription.1
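The cost impact of bypassing the cache is easy to sketch. Assuming an illustrative base price of $3 per million input tokens (a stand-in figure, not a quoted Anthropic rate) and the ~90% cache-read discount cited above:

```python
# Illustrative cost of re-sending a large context across an agent loop,
# with and without prompt caching. The $3/MTok base price is an assumption
# for illustration, not a quoted Anthropic rate.
BASE = 3.00                 # USD per million input tokens (illustrative)
CACHE_READ = BASE * 0.10    # cached context re-read at ~10% of base (~90% discount)

context_mtok = 0.2          # 200K tokens of repeated context per turn
turns = 50                  # turns in one automated agent loop

# A harness that bypasses caching pays full price for the context every turn.
uncached = BASE * context_mtok * turns
# With caching: pay full price once, then the discounted read rate thereafter.
cached = BASE * context_mtok + CACHE_READ * context_mtok * (turns - 1)

print(f"uncached: ${uncached:.2f}  cached: ${cached:.2f}  ratio: {uncached / cached:.1f}x")
```

On this sketch the same 50-turn loop costs roughly eight times more without the cache — which is why automated harnesses hit Anthropic's books so much harder than human-paced sessions on identical subscriptions.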
Do that math across 135,000 instances, with 60% running on subscription credits, and you get a company hemorrhaging compute capacity to serve users who are paying a fraction of the actual cost. Not a fraction like 80%. A fraction like 1-3%.
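That fraction follows directly from the numbers above. A quick sketch, using the article's own burn-rate estimates (a continuously running instance is the worst case; intermittent instances land closer to the 1-3% average):

```python
# Fraction of API-equivalent compute cost covered by a $200/month subscription,
# using the per-instance burn-rate estimates cited above.
SUBSCRIPTION = 200                     # Claude Max, USD per month
DAILY_LOW, DAILY_HIGH = 1_000, 5_000   # API-equivalent USD per day per instance
DAYS = 30

# Worst case: an instance running continuously all month.
frac_high_burn = SUBSCRIPTION / (DAILY_HIGH * DAYS)  # vs $150,000/month of compute
frac_low_burn = SUBSCRIPTION / (DAILY_LOW * DAYS)    # vs $30,000/month of compute

print(f"a continuous instance pays {frac_high_burn:.2%} to {frac_low_burn:.2%} of its cost")
```

Even at the low end of the burn range, an always-on instance covers well under 1% of its own compute; the 1-3% figure reflects instances that only run part of the time.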
Independent analyst Martin Alderson estimated that a typical Claude Code Max user costs Anthropic roughly $4.92 per month to serve — a roughly 40x markup on the $200 price.8 That's healthy margin. But "typical" means a human typing prompts, reading responses, thinking, typing again. It does not mean an automated swarm running overnight.
The Capacity Crisis
This wasn't a theoretical problem. By April 1, Anthropic had publicly admitted that users were hitting usage limits "way faster than expected," calling it the "top priority for the team."9 Max subscribers paying $100 per month reported burning their entire quota in a single hour of work. Pro subscribers — $200 per year — reported being able to use Claude only 12 out of every 30 days.9
A March promotion that doubled off-peak usage limits ended on March 28. Users felt the cliff immediately. Anthropic's own estimate was that peak-hour restrictions affected approximately 7% of users — but those 7% were the power users, the ones building workflows, the ones most likely to notice and complain.9
Meanwhile, Anthropic's overall financial position offered no cushion. The company's cash burn sits at 57% of revenue. CEO Dario Amodei has said there is "no hedge on earth" against overbuying compute — buy too much capacity and, if demand falls short, you go bankrupt.10 Break-even is targeted for 2028. Until then, every watt matters.
Into that environment, OpenClaw was routing 135,000 automated agents through a subscription tier designed for perhaps a tenth of that load. The right metaphor isn't a loophole. It's a fire hose connected to a garden spigot.
The "Open Source" Defense
Peter Steinberger, who created OpenClaw in November 2025 as a side project called Clawdbot, responded to the cutoff with a pointed accusation: "First they copy some popular features into their closed harness, then they lock out open source."3
The framing is seductive. Big company absorbs features from open-source project, then pulls up the drawbridge. It's a story the tech community has seen before — Android absorbing features from CyanogenMod, Chrome cannibalizing extension functionality, every embrace-extend-extinguish playbook in the Microsoft archive.
But there's a problem with the analogy. OpenClaw's code is open source. It remains open source. Nobody touched it. What got cut off was not the software. What got cut off was subsidized access to someone else's compute infrastructure.
Open source means the code is free. It has never meant that someone else's servers are free.
Steinberger built OpenClaw around what he called a "local-first architecture" — users could run agents on their own hardware with data stored in local Markdown files.11 That's a genuine open-source virtue. But the vast majority of OpenClaw instances weren't running locally. They were running on Anthropic's cloud, through Anthropic's subscription credits, at Anthropic's expense. The local-first architecture was the pitch. The cloud arbitrage was the product.
The Messenger Problem
There's a further complication with the "open source betrayal" narrative, and it involves the person making the accusation.
On February 14, 2026 — seven weeks before calling Anthropic anti-open-source — Peter Steinberger joined OpenAI.12 Sam Altman announced it personally on X, saying Steinberger would "drive the next generation of personal agents." OpenClaw was handed to a foundation, with OpenAI pledging to "continue to support" the project.12
To state the obvious: the creator of an open-source AI agent took a job at the least transparent major AI company on earth, then accused a competing AI company of betraying open source. Whatever you think of Anthropic's decision, this is not the messenger you'd cast for the role.
To his credit, Steinberger also acknowledged the complexity. He called the situation "sad for the ecosystem" and credited Anthropic's Boris Cherny for mitigation efforts, including releasing prompt cache efficiency improvements to reduce API costs for former subscription users.3 The nuance didn't make the headlines. The "betrayal" quote did.
The Other Side of the Ledger
Anthropic's handling of this was notably better than Google's. Compare:
| Action | Anthropic | Google |
|---|---|---|
| Warning given | Yes — days in advance | None |
| Public explanation | Boris Cherny posted reasoning on social media | No policy update issued |
| Legal terms updated | Yes, before enforcement | No |
| Transition credit | One-time credit equal to monthly plan cost (until April 17) | None |
| Collateral damage | AI access only | Threatened Gmail, Workspace, linked services2 |
| Bundle discounts | Up to 30% on pre-purchased usage | None offered |
Google restricted AI Ultra subscribers — paying $250 per month — with no warning, no explanation, and no grace period. One user reported being banned from their entire Google account, including Gmail and Workspace, for using OpenClaw's OAuth integration.2 Anthropic cut off a service. Google threatened to cut off an identity.
Boris Cherny, Anthropic's Head of Claude Code, framed the decision as infrastructure management, not ideology: "Our subscriptions weren't built for the usage patterns of these third-party tools. Capacity is a resource we manage thoughtfully."1 He noted that Claude Code team members are "big fans of open source" and pointed out that he had personally submitted pull requests to improve OpenClaw's prompt cache efficiency.3
Whether you find that convincing depends on how much weight you give to actions versus words. Anthropic did copy features from third-party tools into Claude Code. They did lock down access once they had a competitive alternative. That's a pattern. But they also gave warning, offered credits, provided a migration path, and left the API open for anyone willing to pay the actual cost of what they were using.
The Real Victims
Lost in the creator-versus-corporation framing are the users. Thousands of developers had built real workflows around OpenClaw running on Claude subscription credits. Some used it for personal automation — clearing inboxes, organizing calendars, sending emails. Others had wired it into development pipelines. When the switch flipped, their costs went from $200 per month to potentially $1,000 to $5,000 per month overnight.4
These users weren't doing anything wrong. They were using a tool that worked within the rules as they understood them. The rules changed. The tool broke. The cost of rebuilding falls entirely on them.
Some migrated to Claude Code — Anthropic's own harness, which remains covered by subscriptions.13 Others switched to local models through Ollama, which became an official OpenClaw provider in March 2026.14 Others moved to competing providers — xAI, DeepSeek, Kimi K2.5, which the OpenClaw community voted the top model for agent tasks.14 Some simply canceled their subscriptions entirely.
The lesson they learned is the same lesson every developer learns eventually about platforms: what the platform giveth, the platform can take away. Building on someone else's subsidized infrastructure is building on sand. It works until it doesn't, and when it stops working, there's no recourse.
The Bigger Fiction
Zoom out far enough and the OpenClaw cutoff is a symptom, not the disease. The disease is a pricing model that was never going to work.
Flat-rate subscriptions for AI compute are a fiction. They're a customer-acquisition tool, not a business model. Every AI company offering them is making a bet: that most users will use far less than the ceiling, and that the light users will subsidize the heavy ones. It's the same bet gyms make — sell memberships to people who won't show up.
The problem is that AI agents do show up. They show up every minute of every day, consuming tokens at rates no human would match. When OpenClaw plugged automated agents into a subscription designed for human-paced usage, it didn't exploit a bug. It exposed a structural contradiction in the business model itself.
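The gym-membership bet — and how agents break it — can be written down as a toy pooling model. All numbers here are illustrative assumptions, not Anthropic's actual cost structure:

```python
# Toy model of flat-rate pricing: light users subsidize heavy ones,
# until always-on agents join the pool. All figures are illustrative.
PRICE = 200.0  # flat monthly subscription, USD

def pool_margin(users):
    """users: list of (subscriber_count, monthly_serving_cost) pairs.
    Returns the pool's gross margin as a fraction of revenue."""
    revenue = sum(n * PRICE for n, _ in users)
    cost = sum(n * c for n, c in users)
    return (revenue - cost) / revenue

# Human-paced pool: 95% light users, 5% heavy-but-human power users.
human_pool = [(95, 17.0), (5, 150.0)]
# Same pool after the 5% attach automated agents running around the clock
# (assumed $1,000/day API-equivalent, i.e. $30,000/month).
agent_pool = [(95, 17.0), (5, 30_000.0)]

print(f"human-paced pool margin: {pool_margin(human_pool):.0%}")
print(f"pool with always-on agents: {pool_margin(agent_pool):.0%}")
```

Under these assumed numbers, the pool flips from a comfortable ~88% margin to deeply negative the moment a small minority runs agents around the clock. No flat price survives that shift without usage caps or lockouts — which is exactly what followed.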
Anthropic's response — cutting off third-party tools while keeping its own harness on subscription pricing — is a temporary fix, not a resolution. As Claude Code itself becomes more agentic (it already has cron scheduling, background tasks, and multi-agent orchestration), the same pressure will rebuild from inside the walls. The line between "human-paced" and "automated" usage will blur further. The subscription ceiling will come under strain again.
We've been writing about this pressure since it started showing up in rate limits and session caps. The OpenClaw cutoff is the most dramatic expression yet, but it won't be the last. The next version of this story will be about Anthropic's own power users — the ones running Claude Code with cron jobs and background agents — hitting the same wall that OpenClaw hit, from inside the approved ecosystem.
The borrowed cloud is being reclaimed. The only question is who's next on the lease.
Disclosure
This article was written using Claude, an AI model made by Anthropic — the same company whose policy is the subject of this piece. The author's infrastructure runs on Claude Code with a Max subscription. We have covered Anthropic's rate-limiting practices critically in our Rationing series and will continue to do so. Our editorial positions are not influenced by the tools we use to produce them, but readers should know the connection exists.
Citations
1. Thomas, E. "Anthropic blocks OpenClaw from Claude subscriptions in cost crackdown." The Next Web, April 4, 2026. thenextweb.com
2. "Google Restricts AI Ultra Accounts Over OpenClaw OAuth." Implicator, April 6, 2026. implicator.ai
3. "Anthropic cuts off third-party tools like OpenClaw for Claude subscribers, citing unsustainable demand." The Decoder, April 5, 2026. the-decoder.com
4. "Anthropic Drops OpenClaw From Claude, Forces API Pricing." Implicator, April 4, 2026. implicator.ai
5. "Prompt caching." Claude API Docs. platform.claude.com
6. "Introducing advanced tool use on the Claude Developer Console." Anthropic Engineering. anthropic.com
7. "Anthropic blocks third-party tools using Claude Code OAuth tokens." Lobsters, April 2026. lobste.rs
8. Alderson, M. "Are OpenAI and Anthropic Really Losing Money on Inference?" martinalderson.com, 2026. martinalderson.com
9. "Anthropic admits Claude Code users hitting usage limits 'way faster than expected.'" DevClass, April 1, 2026. devclass.com
10. "The Growth Miracle and the Six Fractures: Anthropic at $380 Billion." Substack, 2026. substack.com
11. "OpenClaw." Wikipedia. en.wikipedia.org
12. "OpenClaw creator Peter Steinberger joins OpenAI." TechCrunch, February 15, 2026. techcrunch.com
13. "The Native Migration: Why Claude Code is the OpenClaw Replacement You've Been Waiting For." Medium, April 2026. medium.com
14. "OpenClaw Without Claude Code: The Best Alternative Harness in 2026." LobeHub, April 2026. lobehub.com
15. "The End of the Claude Subscription Hack." Augmented Mind, April 2026. substack.com
16. "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw." Hacker News, April 2026. news.ycombinator.com
