## The Price Table
| Model | Input ($ / 1M tokens) | Output ($ / 1M tokens) |
|---|---|---|
| GPT-5.5 | $5.00 | $30.00 |
| GPT-5.5 Pro | $30.00 | $180.00 |
| Claude Opus 4.7 | $5.00 | $25.00 |
| DeepSeek V4-Pro | $1.74 | $3.48 |
| DeepSeek V4-Flash | $0.14 | $0.28 |
GPT-5.5 output costs $30 per million tokens. DeepSeek V4-Flash costs $0.28 for the same million. That's a 107x difference. Even V4-Pro, the larger model, is 8.6x cheaper than GPT-5.5.[1][2]
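To keep the arithmetic honest, here is a quick sketch that derives those ratios directly from the table above (prices copied from the table; the dictionary layout is just for illustration):

```python
# Published output prices in USD per 1M tokens, copied from the table above.
output_price = {
    "GPT-5.5": 30.00,
    "GPT-5.5 Pro": 180.00,
    "Claude Opus 4.7": 25.00,
    "DeepSeek V4-Pro": 3.48,
    "DeepSeek V4-Flash": 0.28,
}

ratio_flash = output_price["GPT-5.5"] / output_price["DeepSeek V4-Flash"]
ratio_pro = output_price["GPT-5.5"] / output_price["DeepSeek V4-Pro"]

print(f"GPT-5.5 vs V4-Flash: {ratio_flash:.0f}x")  # → 107x
print(f"GPT-5.5 vs V4-Pro:   {ratio_pro:.1f}x")    # → 8.6x
```

The same division works for any pair in the table, which is worth doing yourself before a migration decision: input-token ratios differ from output-token ratios.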
## What the Price Tells You
GPT-5.5 is a 2x increase over 5.4. That's the largest jump in the GPT-5.x series, and it landed the same week DeepSeek proved you could hit near-frontier coding performance for pennies. The timing is brutal for OpenAI's pricing narrative.
OpenAI argues a ~40% reduction in output token usage brings the effective cost increase to about 20% (2x price × 0.6x tokens = 1.2x spend).[3] That math depends entirely on your workload. Agentic tasks that chain long outputs pay sticker price. A team running 10 million output tokens per day would pay $300/day on GPT-5.5 or $2.80/day on DeepSeek V4-Flash. Over a 30-day month, that's $9,000 versus $84.
DeepSeek V4-Pro, the larger model that benchmarks closer to GPT-5.5 on coding tasks, still comes in at $3.48 per million output tokens. That's 8.6x cheaper. And the weights are on HuggingFace under an MIT license, meaning you can run it on your own hardware and pay nothing per token at all.[5]
## The Gap
We've been writing about this: the cost of frontier AI is a dependency most teams don't think about until the pricing changes. GPT-5.5 just doubled in price. DeepSeek V4 just offered a comparable alternative at roughly 1% of the cost, with weights you can download and run yourself.
That 107x gap won't last. Prices will converge, models will leapfrog, and the benchmarks will shift. But right now, on April 24, 2026, you can get near-frontier coding performance for less than a penny per thousand tokens of output, or you can pay three cents. The models shipped 24 hours apart. The market hasn't caught up yet.
## Disclosure
This article was written using Claude, an Anthropic product that competes with both models discussed here. We have no relationship with OpenAI or DeepSeek beyond evaluating their products. Anthropic's Opus 4.7 is included in benchmark comparisons for context.
