In networking, a passthrough is a device that transmits data without inspecting or modifying it. The signal enters one side and exits the other unchanged. The device is necessary for the connection but contributes nothing to the content. In May 2026, the most accomplished minds in mathematics and artificial intelligence began describing their own work in terms that sound remarkably similar.
The Medalist
On May 8, Timothy Gowers published a blog post that should have been unremarkable. Gowers is a Fields Medalist, a former Rouse Ball Professor at Cambridge, a mathematician whose work on combinatorics and Banach spaces has shaped the field for decades. He described feeding open problems from a colleague's paper to ChatGPT 5.5 Pro and watching the model solve them.1
The model thought for seventeen minutes and five seconds. It took an exponential bound that Mel Nathanson had proved and improved it to the best possible quadratic bound. The key move was swapping a component in Nathanson's proof for a more efficient variant that is well known in combinatorics but whose application to this specific problem was not obvious. Isaac Rajagopal, a mathematician at MIT, called the idea "completely original," the kind of insight "a human mathematician would be proud of after weeks of deliberation."2
Gowers summarized his own role in fourteen words: "PhD-level research in an hour or so, with no serious mathematical input from me."1
Read that again. A Fields Medalist describing his contribution to a mathematical result as "no serious mathematical input." He is not being modest. He is being precise. The model identified the approach, executed the proof, and produced a result that Gowers assessed as "a perfectly reasonable chapter in a combinatorics PhD." His contribution was selecting the problem and evaluating the output. He was the passthrough.
The Amateur
Two weeks earlier, a 23-year-old named Liam Price solved a problem that had been open for sixty years. Price has no advanced mathematics training. He fed an Erdős problem to ChatGPT on an idle Monday afternoon.3
The model found a solution using what Terence Tao, one of the most celebrated mathematicians alive, described as "a totally new method for problems of this kind." When Tao examined why human mathematicians had not found this approach in six decades, his explanation was blunt: previous researchers had "collectively made a slight wrong turn at move one."3
Price contributed no mathematical insight. He did not understand the proof the model produced. He could not have evaluated whether it was correct without Tao's verification. He selected the problem by browsing a list of famous open questions, pasted it into a chat interface, and waited.
He was a more efficient passthrough than Gowers, because he had less to contribute and therefore less to interfere with.
The Pioneer
Andrej Karpathy, co-founder of OpenAI and former head of Tesla's AI division, told Fortune in April that his code ratio had flipped from 80/20 human/AI to 20/80. He has not written a line of code since December 2025. He spends sixteen hours a day issuing commands to agent swarms.4
Karpathy is not a novice learning to lean on tools. He is one of the architects of the tools themselves. He understands their limitations more deeply than almost anyone. And his response to that understanding is to stop writing code entirely and become a full-time operator.
Y Combinator CEO Garry Tan called his own version "cyber psychosis." Karpathy called his a "state of psychosis of trying to figure out what's possible." Both men used clinical language to describe their relationship with tools they helped build.4
The pattern holds at every level. An amateur, a Fields Medalist, and a pioneer who helped create the technology all arrived at the same role: the human who selects the problem and evaluates the output. Everything in between belongs to the machine.
The Evidence
The plural of anecdote is not data, but the data exists too.
In January 2026, researchers at Anthropic studied 52 software engineers learning a new asynchronous programming library. Engineers who used AI assistance scored 17% lower on comprehension assessments than those who learned without it. The steepest declines were in debugging ability. The researchers identified six interaction patterns with AI tools; only three preserved the active cognitive engagement necessary for learning.5
In August 2025, Lancet Gastroenterology & Hepatology published the first documented case of medical deskilling from AI. Doctors at four Polish endoscopy centers who had been using AI-assisted colonoscopy saw their adenoma detection rate drop from 28.4% to 22.4% when working without the AI, a roughly 20% relative reduction in diagnostic accuracy. The skill did not atrophy gradually over a career. The decline appeared within the study period.6
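The relative-reduction figure follows directly from the two rates quoted above. A minimal sanity check (variable names are mine, not the study's):

```python
# Sanity check on the ACCEPT trial figures cited above.
with_ai = 28.4      # adenoma detection rate (%) while AI-assisted
without_ai = 22.4   # detection rate (%) after withdrawal of the AI

absolute_drop = with_ai - without_ai    # 6.0 percentage points
relative_drop = absolute_drop / with_ai # fraction of the baseline lost

print(f"absolute drop: {absolute_drop:.1f} percentage points")
print(f"relative drop: {relative_drop:.1%}")  # ≈ 21%, reported as ~20%
```

The exact figure is about 21%; the round "20% relative reduction" in the trial's summary is the same arithmetic, rounded.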
A study of 666 participants found a correlation of r = -0.68 between AI tool usage frequency and critical thinking scores.7 Microsoft's own research documented a 30% reduction in diversity of thought when teams over-relied on generative AI.8
MIT Media Lab coined the term "cognitive debt" for what these studies describe: the delayed cost to attention, learning, and mental health from chronic reliance on AI systems. They identified four manifestations. Fragility: humans cannot recover when systems fail. Quality drift: subtle errors accumulate because no one is checking closely enough. Accountability gaps: leaders sign off on work they cannot assess. Weak talent pipelines: juniors never build foundational craft.9
The Ladder Problem
Gowers is not worried about himself. He has a Fields Medal. His career does not depend on producing the next proof. He is worried about the students who are starting PhDs next year.
"A student starting a PhD next year will finish in 2029 at the earliest. My guess is that by then, what it means to undertake research in mathematics will have changed out of all recognition."1
The traditional path in mathematics, and in most fields that require deep expertise, follows a ladder. You are given problems slightly beyond your ability. You struggle with them. The struggle builds the cognitive infrastructure that makes harder problems accessible. Your advisor selects problems that are open but approachable, so that the process of solving them teaches you how to think.
AI just cleared the lower rungs.
If an amateur can solve a sixty-year-old problem by pasting it into a chat window, the category of "problems suitable for training PhD students" just collapsed. The gentle problems that departments used to assign to build mathematical maturity are exactly the kind of problems these models excel at. The training ladder depends on there being a meaningful gap between "what the student can do" and "what the student needs to learn to do." When a machine bridges that gap in seventeen minutes, the ladder does not become easier to climb. It becomes unnecessary to climb at all.
Fortune summarized the downstream effect in April: "AI won't kill your job. It will kill the path to your first one."10
Psychology Today put it more starkly: "Adults lose skills to AI. Children never build them."11
The Convergence
In networking, you can stack passthroughs. Data flows through one device into another, through a chain of transparent relays, until it reaches its destination. No device in the chain modifies the signal. The architecture works because it does not require any individual node to understand what it is transmitting.
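The metaphor is literal enough to write down. A toy sketch of a stacked passthrough chain, with `relay` and `chain` as illustrative names of my own:

```python
# A toy model of the passthrough chain described above (illustrative only).
# Each relay forwards its input untouched; composing any number of them
# changes nothing about the signal, and no node needs to understand it.

def relay(signal: bytes) -> bytes:
    return signal  # no inspection, no modification

def chain(signal: bytes, hops: int) -> bytes:
    for _ in range(hops):
        signal = relay(signal)
    return signal

assert chain(b"proof of theorem", hops=10) == b"proof of theorem"
```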
The Hacker News thread on Gowers' post collected 683 points and 510 comments. The most persistent worry was not about jobs or economics. It was about what happens to the experience of thinking itself when the tools make thinking optional.2
One commenter wrote: "My view on this is really pessimistic. The value of thinking and having deep ideas seems to be lower and lower."
Another raised the inequality angle: mathematicians in affluent countries can afford LLM co-authors who accelerate output by orders of magnitude, while poorer colleagues will be left behind. The passthrough is not equally distributed. Access to the relay determines the output.
But the deeper convergence is structural. Liam Price, Timothy Gowers, and Andrej Karpathy have nothing in common except the role they converged toward: the person who selects the problem and evaluates the answer. The person the data passes through.
The word for that, in every field that has already been through this transformation, is manager.
The Question Nobody Wants to Answer
Lisbet Rausing, co-founder of the Arcadia Fund, asked the uncomfortable question after Gowers' post: if the process of doing research no longer requires understanding, what exactly are we training people to do?2
One answer is that humans move up the abstraction stack. We stop writing proofs and start selecting which proofs matter. We stop writing code and start architecting systems. We become better managers of more capable tools.
The other answer is that the abstraction stack has a ceiling. "Selecting the right problem" is itself a skill that requires deep domain knowledge, and if you never built that knowledge because the tools made the building unnecessary, you cannot select effectively. You are a passthrough that does not know what it is transmitting.
Bainbridge described this in 1983 as the "ironies of automation": the more advanced the automation, the more crucial the human operator's contribution, and the less likely the operator is to be able to provide it, because the automation has eliminated the practice that would have made them competent.12
Forty-three years later, the irony is playing out at the highest levels of human intellectual achievement. A Fields Medalist contributed "no serious mathematical input." An amateur solved a sixty-year-old problem without understanding the solution. A co-founder of OpenAI has not written code in five months.
The signal passes through. The device is necessary for the connection. And the device contributes nothing to the content.
Disclosure
This article was written using Claude, an AI model made by Anthropic. The irony is not lost on us. All citations were verified against their original sources. The metaphor of the passthrough was chosen because it describes the architecture honestly: an AI helped write an article about humans becoming intermediaries for AI. Whether the author was a passthrough for this piece is left as an exercise for the reader.
Sources
- Timothy Gowers, "A recent experience with ChatGPT 5.5 Pro," Gowers's Weblog, May 8, 2026. Link.
- Hacker News discussion, "A recent experience with ChatGPT 5.5 Pro," 683 points, 510 comments, May 2026. The Decoder, "Fields medalist says ChatGPT 5.5 Pro delivered PhD-level math research in under two hours with zero human help." Link.
- Scientific American, "Amateur Armed with ChatGPT 'Vibe Maths' Solves a 60-Year-Old Problem," May 2026. Link.
- Fortune, "AI researchers describe 'state of psychosis' from coding tool usage," April 2026. Andrej Karpathy interview, code ratio and agent swarm usage. Garry Tan "cyber psychosis" quote. Link.
- Shen and Tamkin, "The Impact of AI Assistance on Coding Skill Formation," Anthropic Research, January 2026. 52 software engineers, 17% lower comprehension scores with AI assistance. Link.
- ACCEPT Trial, "AI-assisted adenoma detection and subsequent unassisted performance," Lancet Gastroenterology & Hepatology, August 2025. Four Polish endoscopy centers, 20% relative reduction in detection rate. Link.
- MDPI study, "AI tool usage and critical thinking," Societies, 2025. 666 participants, r = -0.68 correlation. Link.
- Harvard Gazette, "Is AI dulling our minds?" November 2025. Microsoft research on 30% reduction in diversity of thought. Link.
- Cognitive World, "Skill Atrophy, Frictionless AI, and Cognitive Debt," March 2026. MIT Media Lab framework: fragility, quality drift, accountability gaps, weak talent pipelines. Link.
- Fortune, "AI won't kill your job — it will kill the path to your first one," April 29, 2026. Yale CELI, Jeffrey Sonnenfeld. Link.
- Psychology Today, "Adults Lose Skills to AI. Children Never Build Them," March 2026. Link.
- Lisanne Bainbridge, "Ironies of Automation," Automatica, Vol. 19, No. 6, pp. 775-779, 1983. The foundational paper on automation paradoxes in skilled work.