A developer watches two AI agents discuss a merge conflict. One proposes renaming a variable for clarity. The other disagrees, citing downstream dependencies. They negotiate, converge on a solution, and present it for approval. The developer clicks "Accept." Six months ago, this conversation would have happened between two humans, over coffee, arguing about naming conventions and trading war stories about the last time a rename broke production. Now it happens in a log file, in four seconds, without eye contact.
The Sunset
In March 2026, JetBrains — the company behind IntelliJ, PyCharm, and an IDE ecosystem whose own survey found 90% of developers now using AI — quietly announced it was sunsetting Code With Me, its real-time collaborative coding tool.1 The tool let two developers share an IDE session, type in the same file, navigate together. Pair programming, over the internet, built into the editor.
The official reasoning was soft. "Demand for built-in pair programming and real-time collaboration tools peaked during the pandemic and has since shifted, with many teams adopting different collaboration workflows."1 Different collaboration workflows. They didn't say which ones.
They didn't need to. The same week, JetBrains previewed Central — a platform for agentic AI development with governance, cloud infrastructure, and shared context across repositories. Oleg Koverznev, head of the new agentic platform, said the quiet part: "Code generation is cheap and no longer a bottleneck."2
Sunset the human pairing tool. Launch the agent orchestration platform. The market has voted.
The New Pair
Agent-to-agent pair programming is no longer theoretical. A developer named Axel Delafosse built "loop" — a CLI tool that launches Claude and Codex side-by-side in a tmux session with a communication bridge. One agent writes code. The other reviews it. They provide feedback to each other directly, without human mediation.3
The results were telling. When both agents agreed on a piece of feedback, the team addressed it 100% of the time. The consensus of two machines was treated as authoritative. Delafosse described the human's role as "steer, answer questions, and follow up if needed." Not write. Not debate. Not collaborate. Steer.3
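Delafosse hasn't published loop's internals, but the write-review-converge cycle he describes can be sketched. Everything below — the agent stubs, the feedback items, the rule that intersecting feedback is authoritative — is a hypothetical illustration, not the actual tool or real Claude/Codex API calls.

```python
# Hypothetical sketch of an agent-to-agent review loop in the spirit of
# "loop": one agent drafts, another critiques, and any feedback both
# agents agree on is treated as authoritative. The agent functions are
# stand-ins; no real model is called.

def writer_agent(task: str) -> str:
    """Stand-in for the drafting agent."""
    return f"def solve():  # draft for: {task}\n    pass\n"

def reviewer_agent(code: str) -> set[str]:
    """Stand-in for the reviewing agent; returns feedback items."""
    return {"add type hints", "missing error handling"}

def writer_self_review(code: str) -> set[str]:
    """The drafting agent critiques its own output."""
    return {"missing error handling", "name is vague"}

def pair_session(task: str) -> dict:
    draft = writer_agent(task)
    review = reviewer_agent(draft)
    self_review = writer_self_review(draft)
    consensus = review & self_review   # both agents agree: address it
    disputed = review ^ self_review    # only one flagged it: human steers
    return {"draft": draft, "consensus": consensus, "disputed": disputed}

session = pair_session("parse the config file")
print(sorted(session["consensus"]))   # addressed without human mediation
print(sorted(session["disputed"]))    # escalated to the human
```

The human appears only at the bottom of the loop, triaging the disputed set — the "steer, answer questions, and follow up" role Delafosse describes.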
VS Code now supports running Claude and Codex agents simultaneously alongside GitHub Copilot.4 A practice called "agentmaxxing" has developers running five to seven concurrent agents — Claude Code, Codex, Gemini CLI, Cursor — on separate tasks while the human reviews and merges.5 Addy Osmani described the shift as moving from "pair programming" — guiding one AI in real time — to "orchestrator" — coordinating multiple agents with their own context windows, working asynchronously.6
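The orchestrator pattern Osmani describes — several agents on separate tasks, each with its own context, the human reviewing and merging as results land — can be sketched with a thread pool. The agent names and the run_agent stub are assumptions for illustration, not real CLI or API integrations.

```python
# Hypothetical sketch of the "agentmaxxing" orchestrator workflow:
# several agents work separate tasks concurrently while the human
# reviews completed patches. run_agent is a stub, not a real agent call.
from concurrent.futures import ThreadPoolExecutor, as_completed

AGENTS = ["claude-code", "codex", "gemini-cli", "cursor"]  # illustrative

def run_agent(agent: str, task: str) -> dict:
    """Stand-in for dispatching one task to one coding agent."""
    return {"agent": agent, "task": task, "patch": f"// {agent}: {task}"}

def orchestrate(tasks: list[str]) -> list[dict]:
    results = []
    # One agent per task, each running asynchronously in its own context.
    with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
        futures = [pool.submit(run_agent, agent, task)
                   for agent, task in zip(AGENTS, tasks)]
        # The human's entire job happens here: watch completions,
        # review each patch, decide whether to merge.
        for future in as_completed(futures):
            results.append(future.result())
    return results

done = orchestrate(["fix flaky test", "rename module",
                    "bump deps", "add docs"])
print(f"{len(done)} patches awaiting human review")
```

Note what the loop contains: no negotiation, no explanation, no shared reasoning — just dispatch and review, which is the shift the next paragraph describes.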
The cognitive work is fundamentally different. Collaborating requires engaging with another mind — proposing, defending, conceding, learning. Monitoring requires watching output streams and deciding when to intervene. Collaboration builds understanding; monitoring builds approval fatigue.
The developer got demoted from co-pilot to air traffic controller.
What Pairing Actually Was
The pair programming literature is remarkably consistent on one point: the code was never the main benefit.
Williams and Kessler's foundational research found that pair programming improved quality, saved time, and grew trust — but the greatest benefit came from the social interactions between partners, not the code output.7 Martin Fowler's comprehensive guide emphasized "Strong-Style Pairing," where an idea must go through someone else's hands — forcing active explanation, not passive observation. The practice developed feedback skills, empathy, non-violent communication, and the productive discomfort that diverse teams need to perform.8
Pair programming was where onboarding happened — not through documentation, but through the apprentice watching the master think. It was where collective ownership was maintained — two people understood every change, reducing bus-factor risk. It was where trust was built — the vulnerability of showing what you don't know to someone who could judge you, and discovering they don't judge you, they teach you.
None of this transfers to agent-to-agent workflows. The agents don't learn from each other between sessions. They don't build trust or develop empathy, and they carry no context from one pairing into the next. As one Hacker News commenter put it: "If you pair program with an AI, anything it learned, it forgets as soon as the prompt is closed."9
A 2025 academic study confirmed the intuition: human+AI pairs achieved the highest performance, outscoring both traditional human pairs and solo developers with AI. Students relied on AI for coding mechanics but turned to human partners for brainstorming and idea validation.10 The human adds something the agent cannot. But the budget owner sees two salaries on one output stream and asks: what if we made it one salary on two output streams?
The Spreadsheet Always Wins
Pair programming meant paying two developers to produce one stream of work. The quality argument — fewer bugs, better design, shared knowledge — justified the cost for decades. But agent-to-agent flips the economics: one developer supervising two (or more) output streams costs less than one developer collaborating with another.
JetBrains' own survey found 90% of developers already use AI, 22% use AI coding agents, and 66% plan to adopt agents within 12 months.1 The toolmaker read the numbers, did the math, and made the rational decision: retire the human collaboration tool, invest in the agent orchestration platform. The market signal is unambiguous.
ThoughtWorks — the company that arguably did more than any other to mainstream pair programming in enterprise software — put "Replacing Pair Programming with AI" on HOLD in their Technology Radar. Their strongest cautionary designation. "Coding assistants can offer benefits for getting unstuck, learning about a new technology, onboarding or making tactical work faster," they wrote. "But they don't help with any of the team collaboration benefits."11
The people who spent twenty years building the case for pairing are warning you not to abandon it. The toolmakers are making it impossible to continue.
JetBrains is sunsetting the tool you pair with.
The finance team doesn't read the Technology Radar.
The Outage Test
This week, Claude experienced multiple service outages — elevated error rates on Opus 4.6, cascading disruptions across sessions. For teams that still pair with humans, an outage means switching to manual collaboration. Annoying, but functional. The muscle memory is there.
For teams running agent-to-agent workflows, an outage means the entire production pipeline stops. The supervisor has no one to supervise. And because they haven't been practicing human collaboration — haven't been building the social muscles, the shared context, the habit of thinking out loud — they can't just "go back" to pairing. You can't pair with someone you haven't spoken to about code in three months.
This is the resilience argument for maintaining human pairing, and it's the one the spreadsheet ignores. Outages are infrequent. Their cost is measured in hours. The efficiency gain of agent-to-agent is measured in months. The math favors the fragile system until the fragile system fails catastrophically, and by then the human collaboration skills have atrophied past recovery.
The Loneliest Profession
Programming was already isolating before AI. The stereotype exists for a reason — headphones on, screen glowing, no interruptions please. Pair programming was the counterweight. The one practice that forced developers to be social about their work, to articulate their thinking, to negotiate instead of dictate, to sit with another human and build something together in real time.
The developers who embrace AI the most are burning out the hardest. TechCrunch reported that "people started doing more because the tools made more feel doable. Work began bleeding into lunch breaks and late evenings."12 A survey of 600+ engineers found 65% experience burnout despite — or because of — AI tool usage.13 Harvard Business Review published on "brain fry" — the cognitive fatigue from constant context-switching between human thinking and AI output review.14
AI gave every developer two partners who prefer talking to each other. The developer sits between them, approving merge requests, reading log files, watching code appear that no human wrote and no human fully understands. They haven't spoken to another developer about code in days. The agents are excellent collaborators — with each other.
Kent Beck — who literally invented pair programming as a formal practice — called AI an "unpredictable genie" that grants wishes "in unexpected and illogical ways."15 He's still writing code. He's still testing. He's not pairing with a human anymore. The inventor of the practice has quietly moved on. And nobody measured when it happened, because nobody is measuring the thing that's disappearing.
You can't mourn what you didn't notice losing.
Somewhere right now, a merge conflict is being resolved by two AI agents. A developer approves it, closes the laptop, and walks to the kitchen. The coffee is cold, the office is quiet, and the code is shipping faster than ever.
The third wheel always assumes the problem is them.
Disclosure
This article was written using Claude Code, made by Anthropic — one of the AI agents that is replacing human pair programming. The author has not pair-programmed with a human in the course of writing this piece. He pair-programmed with the tool he's writing about. The irony is the disclosure. Corrections and firsthand accounts welcome at bustah_oa@sloppish.com.
Sources
1. JetBrains, "Sunsetting Code With Me," March 2026. Includes developer survey data: 90% AI usage, 22% agent adoption, 66% planning agent adoption.
2. The Register, "JetBrains shifts to agentic development with Central," March 25, 2026. Koverznev quote: "Code generation is cheap and no longer a bottleneck."
3. Axel Delafosse, "Agent-to-agent pair programming," March 2026. 100% consensus feedback adoption rate.
4. VS Code, "Multi-agent development," February 2026.
5. VibeCoding, "Agentmaxxing," 2026. 5-7 concurrent agent ceiling.
6. Addy Osmani, "The Code Agent Orchestra," 2026.
7. Williams & Kessler, "Strengthening the Case for Pair-Programming." Social interaction as the primary benefit.
8. Martin Fowler, "On Pair Programming." Strong-Style Pairing, trust, vulnerability, communication.
9. Hacker News, "Should we revisit XP in the age of AI?" Comment on ephemeral AI knowledge.
10. Chengming Li et al., "Pair Programming with AI: Exploring the Impact of AI Pairing on Student Learning," CHI 2025. Human+AI pairs outperformed both traditional pairs and solo+AI. ACM Digital Library.
11. ThoughtWorks, Technology Radar: "Replacing pair programming with AI" — HOLD.
12. TechCrunch, "The first signs of burnout are coming from the people who embrace AI the most," February 2026.
13. Haystack Analytics, "Developer Burnout Survey," 2024. 600+ software engineers surveyed, 65% report burnout, 22% at critical levels.
14. Harvard Business Review, "When Using AI Leads to 'Brain Fry,'" March 2026.
15. Kent Beck, "TDD, AI agents and coding with Kent Beck," Pragmatic Engineer podcast, 2025-2026.
