The 14% Problem

Only 14% of AI users report net-positive outcomes. The industry cites "85% save time" and buries everything else. The math doesn't close.
By Nadia Byer · March 26, 2026

Fourteen percent. Not 14% adoption. Not 14% awareness. Fourteen percent of people who already use AI, at companies with $100 million or more in revenue, consistently report that it produces net-positive outcomes. That number comes from Workday — an enterprise software company selling AI-powered tools — surveying 3,200 of its own users. The most favorable possible sample found that 86% of AI users cannot say the tools are consistently working for them.1

You will not find that number in a vendor pitch deck.

What you will find: "85% of employees save 1-7 hours per week using AI." That number is also from Workday, from the same study, on the same page. Both are true. The industry cites the first. It buries the second. The gap between them is where most AI users actually live.

The Rework Tax

Eighty-five percent save time. But 37% of that time is clawed back — lost to correcting errors, rewriting content, and verifying outputs.1 Workday quantifies the cost: 1.5 weeks per employee per year spent fixing what AI produced. Seventy-seven percent of daily AI users review AI-generated work just as carefully as — or more carefully than — human work.1

The promise was: AI does the work, you do the thinking. The reality: AI does a draft, you do the work of figuring out what's wrong with it. The cognitive load doesn't decrease. It shifts from creation to evaluation. And evaluation is harder to do well.

Employees aged 25–34 — the cohort every company assumes is native to these tools — comprise 46% of those handling the most AI rework.1 The youngest workers aren't thriving with AI. They're absorbing the largest share of its cleanup costs.

The 14% who consistently succeed? They're 79% more likely to have received skills training.1 That's not a technology gap. It's an investment gap. The tool works when you fund the people using it. Most companies don't.

The Solow Echo

In 1987, Robert Solow wrote: "You can see the computer age everywhere but in the productivity statistics." It took fifteen years for IT investment to show up in the macro data.

History is rhyming. An NBER study of 6,000 executives — CEOs and CFOs across the U.S., U.K., Germany, and Australia — found that 89% report AI has had no impact on labor productivity and over 90% report no impact on employment over the past three years. Despite seeing no current results, these same executives forecast 1.4% productivity gains over the next three years.2

They see no results. They expect results. Corporate AI investment exceeded $250 billion in 2024 (Stanford HAI AI Index).3 The gap between expenditure and evidence is the Solow paradox with an extra zero.

Maybe AI needs more time. Maybe the productivity gains are coming. But we've spent $250 billion and the people using the tools say they aren't working. That's not a timing issue. That's data.

85% save time. 37% lose it to rework. Only 14% come out ahead.
That is the math the industry won't do for you.
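That math can be checked on the back of an envelope. A minimal sketch, using Workday's own figures; the 40-hour week and 48-week working year are my assumptions, not Workday's:

```python
# Netting Workday's headline numbers against its rework figure.
# Assumptions (mine, not Workday's): 40-hour week, 48 working weeks/year.
HOURS_PER_WEEK = 40
WEEKS_PER_YEAR = 48

# Workday: 1.5 weeks per employee per year spent fixing AI output.
rework_hours_per_week = 1.5 * HOURS_PER_WEEK / WEEKS_PER_YEAR  # 1.25 h/week

# Workday: 85% of employees save 1-7 hours per week.
for saved in (1, 4, 7):
    net = saved - rework_hours_per_week
    print(f"saves {saved} h/week -> net {net:+.2f} h/week after rework")
```

At the bottom of Workday's 1-7 hour range, the fixed rework cost alone pushes the net below zero, which is one concrete way an "85% save time" headline can coexist with a 14% net-positive rate.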

The Perception Gap

Executives and workers are living in different realities.

Over 70% of executives feel "excited" by AI and claim 12+ hours saved per week. Approximately 70% of non-management workers feel anxious or overwhelmed. Forty percent of workers say AI saves them no time at all.5

ManpowerGroup surveyed nearly 14,000 workers across 19 countries. Regular AI usage is up 13%, to 45% of workers. Confidence in using the technology has fallen 18%. Baby Boomers saw a 35% drop in tech confidence. Gen X: 25%.4

Forty-three percent fear automation will replace their job within two years — up 5 points from 2025. Sixty-four percent are "job hugging," staying with their current employer not out of loyalty but out of fear. Fifty-six percent received no recent training. Fifty-seven percent have no access to mentorship.4

This is the texture of the 86%. Not Luddites. Not resisters. Workers navigating a landscape where not using AI looks like career risk, but using AI doesn't actually help.

The Trust Collapse

Stack Overflow's 2025 developer survey: AI tool usage at 84%. Trust in AI at 29% — down 11 points year-over-year. Nearly three-quarters of developers don't trust AI answers.6

The number one developer frustration, cited by 66%: "AI solutions that are almost right, but not quite." Number two, at 45%: "Debugging AI-generated code is more time-consuming."6

"Almost right, but not quite" is the signature of the rework tax. The output isn't obviously wrong. It passes a cursory glance. But something is off — a subtle logical error, a hallucinated API, a dependency that doesn't exist. Finding the problem takes longer than writing the solution from scratch would have. The tool generated it in seconds. The human spent minutes understanding why it's broken. Then rewrote it.

Usage up. Trust down. This is the behavioral signature of compulsory adoption. People use the tools not because they work but because the workplace demands it.

Brain Fry

BCG surveyed 1,488 U.S. workers in March 2026 and gave a name to what the oversight burden does to people: "brain fry." Not burnout. Burnout is chronic emotional exhaustion. Brain fry is acute cognitive overload from the specific task of monitoring, evaluating, and making judgment calls about AI output.7

14% of AI users experience brain fry.7

The symmetry is cruel. Fourteen percent consistently positive. Fourteen percent actively brain-fried. Different populations, same denominator. The majority is in between — treading water, neither helped nor harmed enough for anyone to notice, doing the quiet math of time saved against time lost and arriving at roughly zero.

Brain fry correlates with 39% more major errors, 33% more decision fatigue, and a quit intent of 34% versus a 25% baseline — a 36% increase in departure risk.7 Marketing roles are hit hardest at 26% prevalence. Software development, HR, finance, and IT are close behind.

BCG found a tool quantity cliff: productivity gains with one to two AI tools, plateau at two to three, decline beyond three.7 The marginal AI tool doesn't make you faster. It makes you worse. And the industry's advice — adopt more tools, integrate deeper, use AI for everything — pushes workers past the cliff.

Productivity Theater

If AI is saving time, where is the time going?

ActivTrak analyzed 443 million hours of work activity. Among AI users, every measured work category saw time increases. Email: +104%. Chat and messaging: +145%. Business management tasks: +94%. No category decreased.8

The workday shortened slightly — 2%. But it got dramatically denser. Focus time hit a three-year low. Collaboration surged 34%. Multitasking rose 12%. Weekend work increased over 40%.8

AI generates outputs. The outputs require communication — explaining corrections, aligning on what the AI got wrong, tracking what's been verified versus what hasn't. The tool doesn't reduce work. It generates more work that looks like productivity. The metrics leaders track improve. The experience workers live degrades.

ActivTrak found a sweet spot: employees spending 7–10% of total work time in AI tools had the highest productivity scores. Only 3% of employees fell in that range.8 Below 7%, you're under-leveraging. Above 10%, you're drowning.

Three percent. The industry tells 100% of workers to adopt AI. The data says 3% are using it at the right dose.
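For scale, the sweet spot converts to a surprisingly small number of hours. A quick sketch, assuming a 40-hour week (the week length is my assumption, not ActivTrak's):

```python
# Converting ActivTrak's 7-10% "sweet spot" share of work time into hours,
# assuming a 40-hour week (assumption mine, not ActivTrak's).
week_hours = 40
low, high = 0.07 * week_hours, 0.10 * week_hours
print(f"optimal AI-tool time: {low:.1f}-{high:.1f} hours per week")
```

Roughly three to four hours a week, on a standard schedule. Everything beyond that, per the same data, is where the drowning starts.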

AI usage up. Trust down. Confidence down.
This is what compulsory adoption looks like.

The Enterprise Graveyard

Zoom out from workers to organizations and the picture doesn't improve.

RAND Corporation: 80.3% of enterprise AI projects fail to deliver measurable value. A third are abandoned entirely. The average cost of a failed AI initiative: $4.2–6.8 million. Large enterprises lose an average of $7.2 million per failed project.9

MIT: 95% of generative AI pilots fail to scale.9 Gartner predicted 30% of GenAI projects would be abandoned after proof of concept by end of 2025, and forecasts 40% of agentic AI projects canceled by 2027.9 Deloitte: 42% of companies abandoned most AI initiatives in 2025 — up from 17% in 2024.10

McKinsey's global AI survey: 88% of organizations use AI in at least one function. Only 39% see any EBIT impact.10 Over 80% report no meaningful effect on enterprise-wide earnings. Adoption is near-universal. Results are not.

EY found that AI can unlock up to 40% more productivity — but only on "stable talent foundations" with strong culture, sufficient training, and aligned incentives. On fragile foundations, productivity benefits lag by over 40%. Only 12% of employees receive sufficient AI training. Only 28% of organizations are on track to build effective tech-talent integration.11

The 14% who succeed at the individual level and the ~20% who succeed at the enterprise level aren't using better AI. They're in better organizations. The technology is a constant. The support is the variable.

The Selection Bias Problem

The studies that do show AI working have a methodology problem.

METR's developer productivity study found a 19% slowdown among experienced developers using AI — then acknowledged their sample was biased. Developers who benefit most from AI opted out of studies requiring randomized periods of non-use. The researchers wrote: "the true speedup could be much higher among the developers and tasks which are selected out of the experiment."12

In manufacturing, correcting for selection bias — early adopters expect higher returns and self-select into AI use — turned modest productivity gains into a negative impact roughly 60 percentage points worse than the uncorrected estimate.12

The most-cited AI productivity studies — Microsoft's internal surveys, GitHub's Copilot research — survey their own users on their own platforms. This is the equivalent of asking gym members whether exercise works. The answer tells you about the sample, not the intervention.
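The selection effect is easy to demonstrate with a toy model. Every number below is invented for illustration and comes from no cited study; the point is the mechanism, not the magnitudes.

```python
# Toy model of adoption selection bias: workers whose expected benefit is
# highest are the ones who opt in, so a survey of users overstates the
# average effect. All numbers here are illustrative, not from any study.
import random

random.seed(0)

# True per-worker productivity effect of AI, centered on zero.
effects = [random.gauss(0, 10) for _ in range(100_000)]

# Only workers who expect a clear win adopt the tool.
adopters = [e for e in effects if e > 5]

naive_estimate = sum(adopters) / len(adopters)   # what a user survey sees
population_avg = sum(effects) / len(effects)     # what the workforce gets

print(f"effect among adopters (survey view): {naive_estimate:+.1f}")
print(f"effect across everyone:              {population_avg:+.1f}")
```

The survey of adopters reports a large positive effect while the population average sits near zero; this is the gym-membership problem in miniature.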

The 14% Workday number may be the most honest statistic in the field precisely because Workday had no incentive to find it. An AI vendor, surveying its own ecosystem, found that 86% of users aren't coming out ahead. That's not a survey designed to produce that result. That's a result that survived the survey.

What the 86% Know

The 86% are not Luddites. They are not resistant to change. They are not insufficiently trained — though most of them are. They are people who use AI every day and can do the math.

The time saved doesn't exceed the time spent checking. The drafts aren't better than what they'd have written themselves — just faster to generate and slower to fix. The cognitive load didn't decrease. It changed shape. They know this because they live it.

The industry's response has been to blame the workers. Insufficient training. Wrong mindset. Resistance to change. If the tool isn't working, you're holding it wrong.

But the data points elsewhere. The 14% who succeed are 79% more likely to have received training. Only 12% of employees get sufficient training. Only 3% use AI at the optimal dose. Eighty-nine percent of organizations have updated fewer than half their roles to reflect AI capabilities.11 The system that produces the 86% is not a technology failure. It is an organizational failure — a $250 billion investment in tools with a corresponding underinvestment in the people expected to use them.

The question isn't whether AI will eventually deliver on its promises. Maybe it will. The question is why an industry spending $700 billion on infrastructure this year can't be honest about what's happening right now to the 86% on the receiving end.

Show us the denominator. Show us the net. Show us the 86%.

The claim is not the evidence.

Disclosure

This article was researched and written with the assistance of Claude, an AI made by Anthropic. By the industry's own data, there is an 86% chance that this tool did not produce a net-positive outcome for the writer. We reviewed every line, verified every citation, and rewrote what needed rewriting — which, per Workday's findings, is roughly 37% of AI output. The irony is the point. Corrections welcome at nadia@sloppish.com.

Citations

  1. Workday, "Beyond Productivity: Measuring the Real Value of AI," conducted with Hanover Research, November 2025. 3,200 employees at $100M+ organizations. Workday newsroom.
  2. NBER Working Paper 34836. 6,000 CEOs/CFOs across U.S., U.K., Germany, Australia. 89% report no impact on labor productivity; over 90% report no impact on employment. Fortune.
  3. Stanford HAI 2025 AI Index Report. $252.3B in private AI investment in 2024. Stanford HAI.
  4. ManpowerGroup 2026 Global Talent Barometer. Nearly 14,000 workers, 19 countries. AI usage up 13%, confidence down 18%. ManpowerGroup.
  5. Section survey of 5,000 white-collar workers. Executive excitement vs. worker anxiety gap.
  6. Stack Overflow 2025 Developer Survey. AI usage at 84%, trust at 29%. 66% cite "almost right" outputs as top frustration. Stack Overflow | Blog.
  7. BCG, "When Using AI Leads to 'Brain Fry,'" Harvard Business Review, March 2026. 1,488 U.S. workers. HBR.
  8. ActivTrak, 2026 State of the Workplace. 443 million hours of work activity analyzed. ActivTrak.
  9. Enterprise AI failure rates: RAND Corporation (80.3% failure rate); MIT (95% of GenAI pilots fail to scale); Gartner (30% abandoned after POC, 40% of agentic projects canceled by 2027).
  10. Deloitte State of AI in the Enterprise 2026 (42% abandoned most initiatives); McKinsey Global AI Survey (88% adoption, 39% EBIT impact).
  11. EY 2025 Work Reimagined Survey. 15,000 employees, 1,500 employers, 29 countries. Only 12% receive sufficient AI training. EY.
  12. METR developer productivity study: selection bias acknowledgment. Manufacturing AI adoption: correcting for selection bias turned gains into 60-point negative impact. METR.