In June 2025, Cursor changed its pricing model overnight. The $20/month plan that included 500 premium requests became a credit-based system that delivered roughly 225 requests for the same price. One user reported $350 in overage charges in a single week. A small team spent $4,600 in six weeks — double their entire 2025 spend. Cursor apologized on July 4 and offered refunds.1 The apology was nice. The contract said they didn't owe one.
Every major AI coding platform — Claude Code, GitHub Copilot, Cursor, Windsurf — reserves the right to modify pricing, usage limits, and features at any time, with minimal notice. This is standard SaaS boilerplate. It is also the legal foundation for what developers experienced in Q1 2026: a coordinated tightening of the screws across every platform simultaneously, with no contractual recourse.
I read the contracts. Here's what they say.
Claude Code: The Quota You Didn't Agree To
Anthropic's Claude Code offers four tiers: Pro at $20/month, Max at $100 or $200/month, Team at $25/seat (with Premium seats at $100–150/seat for Claude Code access), and Enterprise at custom pricing.2
The individual plans operate on rolling 5-hour session windows with weekly limits. The Enterprise plan advertises "no plan or seat-level usage limits" — usage is billed on actual consumption. But even Enterprise customers can encounter limits during periods of high demand.3
What the pricing page doesn't tell you: how fast the 5-hour session budget burns.
In March 2026, Claude Code session budgets began depleting noticeably faster during peak hours — weekdays, 5am to 11am Pacific. Anthropic didn't announce the change. Support denied it for days. GitHub issues were closed as "invalid." When Anthropic finally acknowledged it — in an 89-word Reddit post, three days after users started reporting the problem — the company claimed only 7% of users would be affected.4
The contract allows this. Anthropic's terms of service give them discretion over how session budgets are allocated. There is no SLA for individual or team plans that guarantees a specific burn rate, a minimum number of messages per session, or consistent behavior during peak hours. The "5-hour session" is a marketing description, not a contractual commitment.
Enterprise customers negotiate individual terms. What those terms guarantee is not publicly documented. If your enterprise SLA includes specific throughput guarantees, you may be protected. If it doesn't — and many don't — you have the same recourse as a Pro subscriber: none.
GitHub Copilot: You Own the Liability
GitHub Copilot's terms are governed by the Copilot Product Specific Terms, a separate document from GitHub's main terms of service.5
Two things stand out.
First: GitHub explicitly does not own the code Copilot generates. "You retain ownership of Your Code and responsibility for Suggestions you include in Your Code."5 Read that again. GitHub retains no ownership but also accepts no responsibility. The liability for every line Copilot writes lands on you. If the generated code infringes a patent, violates a license, or introduces a vulnerability, that's your problem. GitHub "strongly recommends" you have "reasonable policies and practices in place." Recommendations are not protections.
Second: the terms contain no published usage limits, no rate caps, and no uptime guarantees for the standard product. Enterprise customers get higher limits and priority access, but the specific numbers are negotiated, not published. The public-facing terms promise a service. They do not promise a quantity of service.
And starting April 24, 2026, GitHub will train on Free, Pro, and Pro+ user code by default — opt-out, not opt-in.6 The contract permits this. The opt-out is in your settings. Fewer than 20% of users change default settings.6
Cursor: The 20x Lesson
Cursor's June 2025 pricing overhaul is the clearest case study of what "we reserve the right to change terms" looks like in practice.
Before June 2025: $20/month for 500 premium requests. A fixed, predictable number.1
After June 2025: $20/month in credits, consumed at variable rates depending on model choice and prompt complexity. The effective request count dropped to roughly 225 — a 55% reduction at the same price. Users choosing more expensive models saw their credits evaporate faster. One Hacker News commenter reported $350 in overage in a single week, an annualized rate of roughly $18,000 for a tool that was supposed to cost $240/year.1
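The arithmetic is easy to verify. A back-of-the-envelope sketch of the figures above — the annualized number is a naive 52-week extrapolation of one user's reported week, not billing data:

```python
# Back-of-the-envelope check of the reported Cursor figures.
# Numbers come from the cited reports; the annualization is a
# simple 52-week extrapolation, not an actual billing statement.

plan_price = 20          # $/month, unchanged before and after the overhaul
old_requests = 500       # fixed premium requests under the old plan
new_requests = 225       # approximate effective requests under credits

reduction = 1 - new_requests / old_requests
print(f"Effective reduction: {reduction:.0%}")        # 55%

weekly_overage = 350     # one user's reported overage in a single week
annualized = weekly_overage * 52
print(f"Annualized overage: ${annualized:,}")         # $18,200

advertised_yearly = plan_price * 12
print(f"Advertised yearly cost: ${advertised_yearly}")  # $240
```

The point of the sketch is the ratio: a worst-case week, sustained, is roughly 75x the advertised annual cost.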
A small team reported spending over $4,600 in six weeks in early 2026 — approximately double their entire 2025 spending on the same tool.7
Cursor apologized. They offered refunds for unexpected charges incurred during the transition. The apology was genuine. But the contract didn't require it. The terms of service gave Cursor the right to change the pricing model at any time. They exercised that right. The backlash was a PR problem, not a legal one.
Billing complaints now dominate Cursor discussions on Reddit, Trustpilot, and G2.7 The product didn't get worse. The price-to-value ratio did. And the contract said that was allowed.
What You're Actually Guaranteed
Across all four major AI coding platforms, here is what the standard terms guarantee:
Uptime SLA: Not published for individual or team plans on any platform. Enterprise SLAs are negotiated individually and not publicly disclosed.
Usage minimums: None. No platform guarantees a minimum number of requests, messages, completions, or tokens per billing period. The "500 requests" or "5-hour session" descriptions are subject to change.
Price stability: None. Every platform reserves the right to modify pricing, with notice periods ranging from 30 days to none at all, depending on the jurisdiction and the specific terms.
Feature stability: None. Models can be swapped, capabilities can be removed, and the behavior of the tool can change between sessions with no obligation to maintain prior functionality.
Liability for generated code: Yours. Every platform disclaims responsibility for the output. Some offer indemnification for enterprise customers against IP claims. Most don't for individual or team plans.
Data usage: Varies. GitHub trains on your code by default (opt-out). Anthropic's /feedback command retains data for 5 years regardless of other settings. Cursor's terms grant them a license to use your data to improve the service.6
In short: you are guaranteed access to a service that can change at any time, at a price that can change at any time, with output you're responsible for, and data rights that favor the platform. This is not unusual for SaaS. It is unusual for a tool you've restructured your entire workflow around.
The Lock-in Is the Point
Traditional software licensing had a concept: perpetual licenses. You paid once, you owned the right to use that version forever. The software might stop getting updates, but it didn't stop working. You could plan around it.
AI coding tools have no equivalent. There is no version you can freeze. The model behind the API changes without notice. The context window might shrink. The rate limit might tighten. The price might double. And because the tool is integrated into your IDE, your workflow, your architecture decisions, and — per Bustah's reporting — your cognitive patterns, switching costs are not just financial. They're neurological.
The contract allows the platform to change everything. The lock-in ensures you can't leave when they do. That's not a bug in the business model. It's the business model.
Read the Contract
The developers who were surprised by Anthropic's quota changes in March 2026 were not wronged by the contract. They were wronged by expectations the marketing created and the contract didn't support. The developers hit with $350 weekly overages on Cursor were not defrauded. They were billed according to terms they agreed to and didn't read.
This is not a defense of the platforms. Silent price hikes and undisclosed throttling are bad business practices regardless of what the contract permits. But the contract is the floor — the minimum the company is obligated to provide. And the floor, across every major AI coding platform, is: almost nothing.
Read the contract. Then decide how much of your workflow to build on it.
Disclosure
This article was written using Claude Code, which is one of the platforms whose contracts are analyzed in this piece. The irony of using a tool to critique its own terms of service is not lost on us. We are subject to the same session limits, the same peak-hour throttling, and the same contractual terms as every other user. We read the contract. We're still here. For now. Corrections welcome at nadia@sloppish.com.
Citations
1. Cursor pricing overhaul, June 2025. 500 requests → ~225 credits at same price. $350/week overage report. July 4 apology. Medium | Vantage.
2. Claude Code pricing tiers: Pro $20, Max $100/$200, Team $25/seat Standard or $100–150/seat Premium (includes Claude Code), Enterprise custom. Anthropic | SSD Nodes. Corrected March 31, 2026.
3. Claude Enterprise plan: usage-based billing, no fixed caps, but limits can still be encountered. Claude Help Center.
4. Anthropic quota crisis: peak-hour throttling, 89-word Reddit acknowledgment, 7% claim. Documented in sloppish's Quota Crisis series.
5. GitHub Copilot Product Specific Terms: user retains ownership and responsibility for Suggestions. GitHub.
6. GitHub training policy change (April 24, 2026); Claude Code /feedback 5-year retention; <20% change defaults (Carnegie Mellon). Documented in The Opt-Out Illusion.
7. Cursor billing complaints: $4,600 in 6 weeks, dominating Reddit/Trustpilot/G2 discussions. Gamsgo | Get AI Perks.