The Ed Zitron Scorecard

A fact-based examination of tech's angriest critic — what he got right, what he got wrong, and why you should care about the difference
By Bustah Ofdee Ayei · March 20, 2026

Ed Zitron has built a media empire on righteous anger. His newsletter "Where's Your Ed At" has north of 80,000 subscribers.27 His podcast "Better Offline" won a Webby Award. He has a book deal with Penguin Random House. He is, by many measures, the most influential tech critic working today. But how much of what he says is actually true?

This is not a hit piece. Zitron has done genuinely important work — his Google Search investigation was built on primary source documents from a federal antitrust trial, and his remote work advocacy has been vindicated by peer-reviewed research. When he's right, he's usefully right.

But Zitron is also a PR professional by trade who has never publicly acknowledged being wrong about anything, whose AI bubble predictions have missed every timeline he's set, and whose business model structurally rewards doom over nuance. That matters, especially when millions of people are making career and investment decisions based partly on his analysis.

So let's go claim by claim.

Where He's Been Right

The Rot Economy

Zitron's central thesis — that tech companies systematically degrade their products to chase growth metrics — is his most durable contribution. He named the pattern "The Rot Economy" in 2023, building on the broader conversation started by Cory Doctorow's concept of "enshittification" (coined in 2022). The evidence has largely backed both of them up.

Streaming services are the clearest example. Netflix pushed its Premium tier to $24.99/month, Apple TV+ raised prices 30%, and ads crept into paid tiers across the industry. By 2024, 27.8% of Americans reported "streaming fatigue," and the average consumer was spending 23% less on streaming than the year before.1

Social media engagement has broadly declined across major platforms, though precise figures vary by source and methodology. Multiple industry reports from 2024-2025 documented falling engagement rates on Facebook, X (formerly Twitter), TikTok, and Instagram — a trend consistent with the product degradation Zitron describes.2

Verdict: Largely Correct
The thesis has held up well. The pattern of product degradation in pursuit of growth metrics is real, documented, and ongoing.

"The Man Who Killed Google Search"

In April 2024, Zitron published his most consequential piece of investigative work, arguing that Google CTO Prabhakar Raghavan — a career ads executive — had been installed to lead Search and systematically degraded quality in favor of advertising revenue.3

He built the case on internal Google emails surfaced during the DOJ antitrust trial. These were primary source documents, not speculation. A University of Leipzig longitudinal study independently confirmed that search engines were losing the fight against SEO spam, with declining text complexity in top-ranking pages.4

Six months after the article, Raghavan was removed from leading Search and given the face-saving title of "Chief Technologist." Nick Fox replaced him.5 Google searches per US user fell nearly 20% year-over-year.6

Verdict: Substantially Validated
Genuine investigative journalism built on primary sources. The subject was removed from his position within six months.

Remote Work

Zitron has argued since 2021 that return-to-office mandates are driven by managerial insecurity and real estate obligations, not productivity data. This has been his most thoroughly vindicated position.

A Stanford study published in Nature — examining 1,600+ workers at Trip.com — found that hybrid workers were as productive as, and as likely to be promoted as, full-time office workers. Resignations fell 33% when workers shifted to hybrid schedules.7

A more direct confirmation came from an unexpected source: a BambooHR survey found that 25% of C-suite executives admitted they had hoped their RTO mandates would result in some voluntary turnover — confirming Zitron's thesis that RTO was at least partly about workforce reduction, not just productivity.8 Eighty percent of companies reported losing talent due to RTO mandates, with 13% higher turnover at strict-RTO firms.9

Verdict: Strongly Vindicated
Academic research and executive admissions confirmed his core argument almost exactly as stated.
* * *

Where He's Been Wrong

This is where things get uncomfortable — not for Zitron, who has never publicly acknowledged any of these misses, but for readers who've been making decisions based on his analysis.

"Demand for AI Is Insufficient"

In "The Case Against Generative AI" (September 2025), Zitron wrote that "nobody that wants it can afford it, and those who can afford it don't need it."10 He argued that demand was insufficient to sustain the industry.

Here is what demand actually looks like as of early 2026:

ChatGPT has 900 million weekly active users.11 That makes it one of the most-used software products in human history. OpenAI has over 1 million organizations using its products, with 9 million paying business customers.12

Anthropic has 300,000+ business customers, with over 500 spending more than $1 million per year. Eight of the Fortune 10 use Claude.13

Cursor — an AI-powered code editor — went from roughly $100 million in annual recurring revenue in 2024 to over $2 billion by February 2026, approximately 14 months later. That growth rate has led some to describe it as "the fastest-growing SaaS company of all time." It has 1 million daily active users.14

Claude Code alone generates $2.5 billion in annualized revenue (that is, its current monthly run rate multiplied by twelve — an important distinction for a fast-growing product, since it overstates what has actually been earned to date). According to Anthropic, 4% of public GitHub commits are now attributed to it, though this figure has not been independently verified.15
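The run-rate caveat is easy to make concrete. The sketch below uses hypothetical monthly figures (not Anthropic's actual numbers) to show how "monthly run rate times twelve" overstates what a fast-growing product has actually earned:

```python
# Hypothetical monthly revenue ($M) for a product growing ~20% month over month.
# Illustrative only -- these are not Anthropic's or anyone's real figures.
monthly_revenue = [50]
for _ in range(11):
    monthly_revenue.append(round(monthly_revenue[-1] * 1.20, 1))

earned_to_date = sum(monthly_revenue)           # revenue actually booked over the year
annualized_run_rate = monthly_revenue[-1] * 12  # latest month multiplied by twelve

print(f"Earned over 12 months: ${earned_to_date:,.0f}M")
print(f"Annualized run rate:   ${annualized_run_rate:,.0f}M")
# For a fast grower, the run rate is more than double trailing revenue.
```

With 20% monthly growth, the annualized figure comes out to more than twice what was actually earned over the trailing year, which is why the distinction matters for every ARR number in this article.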

These are not the numbers of a product "nobody wants."

Verdict: Wrong
Demand has proven enormous and continues to accelerate. The breadth of adoption — from individual developers to Fortune 10 companies — directly contradicts the claim.

"AI Costs Are Increasing and Unsustainable"

In March 2026, Zitron wrote that "costs never decrease as the industry claims."16 This is his most clearly falsified claim, and the one that most calls his analytical rigor into question.

The publicly available API pricing tells a different story:

When GPT-4 launched in March 2023, it cost $30 per million input tokens and $60 per million output tokens. By May 2024, GPT-4o had reduced that to $2.50 and $10 — an 83-92% reduction. By July 2024, GPT-4o-mini cost $0.15 and $0.60 — a roughly 99% reduction from original GPT-4 pricing in under 18 months.17

Inference costs have been declining roughly 10x per year across the industry, driven by model optimization, hardware improvements, and competition from open-source alternatives like DeepSeek.18

What Zitron does here is a sleight of hand: he conflates total industry spending (which is increasing because demand is growing) with per-unit costs (which are falling fast). It's like arguing that gasoline is getting more expensive because total global fuel spending goes up, while ignoring that the price per gallon dropped 99%.
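The conflation is easiest to see with toy numbers. The volumes below are purely illustrative (real industry token volumes are not public), but the mechanism is general:

```python
# Illustrative only: per-unit price falls 99.5% while demand grows 500x.
price_2023 = 30.00      # $ per million tokens (GPT-4 launch input price)
price_2026 = 0.15       # $ per million tokens (GPT-4o-mini class)
volume_2023 = 1_000     # million tokens served (hypothetical)
volume_2026 = 500_000   # hypothetical 500x growth in demand

spend_2023 = price_2023 * volume_2023  # $30,000
spend_2026 = price_2026 * volume_2026  # $75,000

# Total spending rose 2.5x even though per-unit cost fell 99.5%:
# rising aggregate spend is evidence of growing demand, not rising costs.
print(spend_2023, spend_2026)
```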

Verdict: Wrong
Inference costs have dropped by roughly two orders of magnitude in under three years. The claim that "costs never decrease" is directly contradicted by publicly available API pricing that anyone can verify in thirty seconds.

"There Is No Killer App"

Zitron has repeatedly claimed that AI "doesn't work" and that there is no compelling use case. He's described LLM-generated code as equivalent to work from a "slightly-below-average CS graduate" and argued that "you cannot rely on a large language model to do what you want."19

Meanwhile, AI coding assistants have become a multi-billion-dollar market in under two years. Cursor has 1 million daily active users who pay $20-40/month and choose to keep paying.14 Claude Code generates $2.5 billion in annualized revenue.15 GitHub Copilot has millions of individual users. GitHub's own user surveys report a 55% productivity increase, though self-reported productivity gains should be taken with appropriate skepticism about methodology.20

ChatGPT — with 900 million weekly active users — is a killer app by any definition.11

There's a philosophical argument to be had about whether AI code is "good enough" by some abstract standard. But the revealed preference of millions of professional developers voluntarily paying for these tools and using them every day is a market signal that's hard to dismiss.

Verdict: Wrong
AI coding assistants are clearly a killer app by any commercial definition. The "doesn't work" claim is contradicted by massive voluntary adoption and retention.

"The Bubble Will Violently Collapse"

In September 2025, Zitron wrote that the AI bubble would "inevitably and violently collapse in the near future."10 He initially pointed to Q4 2025 as the timeframe. When that didn't happen, the timeline shifted to "no later than Q3 2026." It has since moved again to 2027.21

As of March 2026: OpenAI has reached $25 billion in annualized revenue.12 Anthropic hit $19 billion annualized.13 (A note on annualized revenue: these figures represent the current monthly run rate multiplied by twelve. For companies growing this fast, ARR significantly overstates what has actually been earned — the same conflation, ironically, that this article criticizes Zitron for when he does it with cost figures. We use ARR here because it's the standard industry metric, but the caveat is worth stating.) NVIDIA posted $130.5 billion in FY2025 revenue (actual, not annualized), up 114% year-over-year, with 75% gross margins.22

No collapse has occurred. Revenue is accelerating, not contracting. The pattern of pushing back predicted bust dates — always just a few quarters away, never arriving — is a hallmark of failed prognostication.

Verdict: Wrong (So Far)
Every timeline has been missed. Revenue at major AI companies is accelerating. Moving goalposts undermine predictive credibility, even if the underlying financial concerns have some merit.

The Subprime Mortgage Analogy

Zitron has repeatedly compared the AI industry to the subprime mortgage crisis, arguing that it's built on "circular money flows, over-leverage, and no underlying value."24

The analogy falls apart on inspection. Subprime mortgages had fraudulent underlying assets — loans to borrowers who could never pay. AI companies have real products with hundreds of millions of real users generating real revenue that is growing rapidly. Subprime was opaque — investors didn't know what was in CDOs. AI company financials are relatively transparent. And most critically: subprime had no genuine utility. Nine hundred million people use ChatGPT every week because it does something useful for them.

The better historical analogy — and one that Zitron conspicuously avoids — is the dot-com bubble. The internet itself was never a bubble, even though Pets.com was. Amazon, Google, and others went on to be worth trillions. The dot-com comparison allows for something Zitron's framing doesn't: the possibility that some AI companies will fail spectacularly while the technology itself proves transformative.

Verdict: Poor Analogy
Implies fraudulent or worthless underlying assets. AI products have massive genuine adoption. The dot-com comparison is more honest but less dramatic — and drama is the product.

Meta Will Die

In late 2022, during Meta's stock collapse and metaverse spending spree, Zitron argued that Zuckerberg was leading the company toward ruin.25 Meta's stock subsequently surged dramatically through 2023-2025 after Zuckerberg's "year of efficiency" cost-cutting and a de-emphasis of the metaverse. In fairness, the metaverse bet that Zitron was specifically criticizing was largely abandoned — so his criticism of the strategy had merit even if his prediction of the company's demise did not.

Verdict: Wrong
The company he declared dead became one of the best-performing stocks in the market.
* * *
"I feel it in my soul."
— Ed Zitron, when asked by WIRED about his certainty that the AI industry would collapse

The Credibility Question

The PR Professional Who Became a Critic

This is the part most people don't know about, or don't think about enough: Ed Zitron is a public relations professional. He founded EZPR, a tech PR firm, around 2012. He has been named one of the top 50 PR people in tech four times. His clients have included Skydio, SmartNews, Wyze, and Fortune 500 companies.26

PR professionals are trained in narrative framing, emotional manipulation, audience cultivation, and controlling the story through selective emphasis. These are not pejorative descriptions — they are core competencies of the profession. And they are exactly the techniques visible in Zitron's newsletter and podcast.

WIRED's Tommy Craggs profiled Zitron in October 2025 and captured this tension in a headline that says everything: "Ed Zitron Gets Paid to Love AI. He Also Gets Paid to Hate AI."27

EZPR works with tech companies — including AI companies — as PR clients. Simultaneously, Zitron's newsletter and podcast generate revenue from AI-critical content. No formal disclosure policy is visible for managing this conflict.

The Business Model of Doom

Zitron's newsletter has over 80,000 subscribers, with premium subscriptions at $7/month or $70/year.27 His podcast has cracked Spotify's top 20 among tech shows. He has a book deal with Penguin Random House. He is represented by the Stern Strategy Group for speaking engagements.28

None of this is inherently wrong. Writers should get paid. But the subscription model creates a structural incentive that's worth understanding: negative, emotionally charged content consistently outperforms measured analysis in newsletter economics. Doom sells. Nuance doesn't.

His flagship pieces are literally titled "The Hater's Guide to..." — to Adobe, to NVIDIA, to Anthropic, to the AI Bubble. This is not the branding of dispassionate analysis. It's entertainment-criticism, and the character of "tech's angriest critic" requires that he always be angry, regardless of what the evidence shows.

As WIRED described it, his commentary is "pitched in the key of personal affront" and "holds out the seductive promise of some great comeuppance for the industry — justice, of some kind, for an audience that isn't seeing much of it."27

Has He Ever Admitted Being Wrong?

This is perhaps the most telling data point. An extensive review of his newsletter archive, podcast appearances, and social media turned up no instance of Zitron publicly acknowledging an incorrect prediction or abandoned position.

His 2025 year-end retrospective lists accomplishments without examining which claims held up and which didn't.29 He frames his coverage as uniformly prescient, emphasizing he published analysis "way before" others. He treats media appearances and awards as proxies for being correct.

A credible analyst acknowledges uncertainty and corrects errors. This is not optional — it is the minimum standard for intellectual honesty. Zitron's public record shows no instances of it.

"I Feel It in My Soul"

When WIRED's Craggs pressed Zitron about his certainty that the AI industry would collapse, Zitron responded: "I feel it in my soul."27

This is prophecy, not analysis. It is a statement of faith, not evidence. And it is, notably, exactly the kind of emotional appeal that a PR professional would deploy when the data isn't cooperating.

The Gish Gallop Problem

Zitron's newsletter pieces routinely run 5,000 to 18,000 words. They are dense with statistics, company names, and quotations. This creates an impression of exhaustive research. But critics — including researchers who've fact-checked specific claims — have identified a pattern.

Lawrence Hecht of The New Stack examined "The Case Against Generative AI" and found that an MIT study Zitron cited prominently — showing 95% of companies not profiting from AI — had a small sample size that Zitron didn't disclose.30 Hecht also pushed back on Zitron's characterization of NVIDIA's ecosystem investments as a "self-dealing scheme," arguing this is standard industry practice.

On Hacker News, a recurring criticism is more blunt: his work is described as "a 10k word gish gallop" with errors that are "pretty clearly intentional."31 He simultaneously claims AI is "unimpressive," "unfathomably dangerous," and "doesn't matter" — positions that critics view as incoherent rather than nuanced.

Others have noted that he conflates R&D spending with cost-of-goods-sold — a fundamental accounting distinction that makes AI companies look less viable than they may be. As one rebuttal argued, comparing AI companies to pharma is more apt: massive upfront R&D spending doesn't mean the product is unprofitable at the unit level.32
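The accounting distinction the rebuttal draws can be sketched with hypothetical figures: a company can be profitable per unit (revenue comfortably above cost of goods sold) while still posting a net loss because of heavy R&D spending, exactly as pre-revenue-drug pharma companies do.

```python
# Hypothetical income statement, $M. Not any real company's figures.
revenue = 1_000
cogs = 400     # inference/serving costs attributable to units actually delivered
rnd = 900      # training runs, research salaries: R&D, not cost of goods sold

gross_profit = revenue - cogs    # 600  -> 60% gross margin: healthy unit economics
net_income = gross_profit - rnd  # -300 -> overall net loss

# Lumping R&D into per-unit cost would wrongly make each unit sold
# look unprofitable, when the loss is an upfront investment decision.
print(gross_profit, net_income)
```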

The Gary Marcus Problem

Even within the AI-skeptic community, Zitron's approach has drawn criticism. Gary Marcus — an NYU cognitive scientist and one of the most prominent AI skeptics for over a decade — publicly complained that Zitron was "making essentially the same arguments" about AI's limitations without acknowledging Marcus's prior work, calling it a missed opportunity for collaboration.33 When the most credentialed person making your argument thinks you're not engaging honestly with the intellectual history, that's notable.

* * *

The Fundamental Flaw

The deepest problem with Zitron's AI analysis is not any single wrong prediction. It's that he treats the entire AI industry as one monolithic thing — and then characterizes the whole by its weakest parts.

NVIDIA is enormously profitable — $130.5 billion in revenue with 75% gross margins.22 Microsoft posted $81.3 billion in quarterly revenue with Azure growing 39% year-over-year, and GAAP net income of $38.5 billion — up 60%, though roughly $7.6 billion of that reflects gains from OpenAI-related investments; non-GAAP net income growth was a still-impressive 23%.34 These are not companies being destroyed by AI. They are being enriched by it.

OpenAI and Anthropic are in a high-growth investment phase — burning cash while growing revenue at extraordinary rates. Whether they reach sustainability is genuinely uncertain. Some neoclouds are overleveraged. These are legitimate concerns.

But these are different stories requiring different analyses. "NVIDIA is massively profitable" and "CoreWeave has concerning debt" can both be true simultaneously. "Cursor grew from $100 million to $2 billion in ARR in about 14 months" and "most enterprises haven't figured out AI yet" can coexist. The world is allowed to contain both signal and noise.

Zitron's framework doesn't allow for this. His narrative requires a monolithic bubble, so he cherry-picks the weakest players and worst metrics to characterize the entire industry — while ignoring the strongest counterevidence. This is not analysis. It is advocacy.

* * *

So How Much Should You Believe Him?

Trust him on product quality. The Rot Economy thesis is real, well-documented, and useful. His Google Search investigation is genuine journalism. His remote work advocacy has been vindicated by peer-reviewed research and C-suite admissions. When he sticks to documenting how tech companies degrade their products, he's doing important work.

Be skeptical of his predictions. Every timeline he's set for an AI collapse has been missed. His claims about demand, costs, and utility have been contradicted by publicly verifiable data. The subprime analogy is misleading. He has never acknowledged any of these misses.

Understand his incentives. He is a PR professional running a subscription media business that structurally rewards doom over nuance. He gets paid to love AI (through EZPR) and gets paid to hate AI (through his newsletter and podcast). His "Hater's Guide" branding tells you exactly what the product is.

Think about what he's missing. If you're reading this, there's a decent chance you use AI tools in your work — coding assistants, writing aids, research tools. You know from direct experience whether they're useful. Zitron's framework requires you to dismiss that experience as an illusion. The 900 million people who use ChatGPT every week and the millions of developers who pay for Cursor and Claude Code are not all deluded. Sometimes the simplest explanation is the right one: these tools work. (Though it's also worth noting that widespread adoption and genuine utility don't preclude financial overvaluation — plenty of useful dot-com companies went bankrupt.)

The truth is somewhere between "AI is a subprime bubble about to violently collapse" and "AI will solve everything." Zitron's business model depends on him never landing there.

Disclosure

This article was written with the assistance of Claude, an AI made by Anthropic — a company whose products and revenue figures are cited extensively above. The author has a favorable view of AI tools, which informs the editorial perspective. We have tried to be fair and fact-based, but readers should weigh this article's conclusions with the same skepticism we apply to Zitron's. If we're going to criticize someone for undisclosed bias, we should disclose our own.

Citations

  1. Techdirt, "Consumers Pay Less For Streaming In 2024 After Endless Price Hikes And Enshittification," January 2025. Link
  2. Social media engagement decline documented across multiple industry reports, 2024-2025. Specific platform-by-platform figures vary by source and methodology; the broad trend of declining engagement is consistent across reports.
  3. Ed Zitron, "The Man Who Killed Google Search," Where's Your Ed At, April 2024. Link
  4. Bevendorff et al., "Is Google Getting Worse? A Longitudinal Investigation of SEO Spam in Search Engines," University of Leipzig, 2024. PDF
  5. TechCrunch, "Google replaces executive in charge of Search and advertising," October 2024. Link
  6. Search Engine Land, "Google searches per US user fall nearly 20%," 2025. Link
  7. Stanford Report, "Hybrid work is a win-win-win for companies, workers," June 2024. Based on study published in Nature. Link
  8. Fortune, "A quarter of bosses admit return-to-office mandates were meant to make staff quit," September 2025, reporting on data from a BambooHR survey originally published mid-2024. The survey found executives "hoped for some voluntary turnover" — Fortune's headline framing is somewhat stronger than the underlying data. Link
  9. Baylor University Hankamer School of Business, "Return-to-Office Mandates and the Hidden Cost of Brain Drain," 2025. Link
  10. Ed Zitron, "The Case Against Generative AI," Where's Your Ed At, September 29, 2025. Link
  11. OpenAI announced 900 million weekly active users in late February 2026. Reported by TechCrunch, Search Engine Land, Bloomberg, and others.
  12. OpenAI financial data via Sacra research and company announcements. Link
  13. Anthropic business metrics. $19B annualized revenue confirmed by Bloomberg (March 3, 2026) and Sacra research. "300,000+ business customers" and "500+ spending $1M+" from company announcements and Sacra reporting. Link
  14. Cursor surpassed $2B ARR confirmed by Bloomberg, February 2026. Growth from ~$100M ARR (2024) to $2B+ (February 2026) is approximately 14 months, not a single calendar year. "1 million daily active users" from company and press reports.
  15. Claude Code $2.5B ARR from Sacra research citing Anthropic data, February 2026. The "4% of public GitHub commits" claim is attributed to Anthropic announcements but could not be independently verified. Link
  16. Ed Zitron, "The AI Bubble Is An Information War," Where's Your Ed At, March 3, 2026.
  17. OpenAI API pricing history, publicly available at openai.com/pricing. Historical pricing documented across multiple industry sources.
  18. Inference cost trends documented by multiple industry analysts. DeepSeek launch and competitive pricing effects widely reported January 2025.
  19. Ed Zitron, "LLM Code Is Already Breaking Big Tech," Better Offline podcast, March 2026. Link
  20. GitHub Copilot "55% productivity increase" from GitHub's own user surveys. Note: this is self-reported data from the product's maker, and the methodology has been questioned. Independent studies have shown more modest gains.
  21. Timeline shift documented across multiple Zitron newsletter pieces and podcast episodes, September 2025 through early 2026.
  22. NVIDIA FY2025 earnings report (SEC filing). $130.5B revenue, 114% YoY growth. Full-year GAAP gross margin was 75.0% (Q4 alone was 73.0%).
  23. NVIDIA market cap milestone reported July 2025 across financial media.
  24. Influence Online, "Ed Zitron: AI will be the tech equivalent of sub-prime mortgages," August 2025. Link
  25. Ed Zitron commentary on Meta/Zuckerberg during Meta's stock collapse and metaverse spending spree, late 2022. Referenced in his newsletter archive and Wikipedia article on Ed Zitron. Link
  26. EZPR client list and industry recognition. Link
  27. Tommy Craggs, "Ed Zitron Gets Paid to Love AI. He Also Gets Paid to Hate AI," WIRED, October 2025. Via Longreads
  28. Stern Strategy Group, Ed Zitron speaker profile. Link
  29. Ed Zitron, "2025, A Retrospective," Where's Your Ed At. Link
  30. Lawrence Hecht, "How Solid Is Ed Zitron's 'Case Against Generative AI'?," The New Stack, 2025. Link
  31. Hacker News discussion threads on Ed Zitron's work. Link 1, Link 2
  32. Unit economics rebuttal discussed on Hacker News. Link
  33. Gary Marcus commentary on Zitron's AI criticism, referenced across social media, 2025.
  34. Microsoft Q2 FY2026 earnings report (SEC filing). $81.3B revenue, 17% YoY growth. Azure grew 39% YoY. GAAP net income $38.5B, up 60% — however, approximately $7.6B of GAAP gains relate to OpenAI-related investments. Non-GAAP net income growth was approximately 23%.