Last week's Vibe of the Vibe was about things breaking. This week is about something quieter and worse: things being taken. Your code, your approval authority, your benefit of the doubt. The platforms didn't crash this week. They just started helping themselves.
Let's get into it.
GITHUB COPILOT Your Code Is the Product Now
On March 25, GitHub announced that starting April 24, interaction data from Copilot Free, Pro, and Pro+ users will be used to train AI models. Not just public code. The snippets you accept. The modifications you make. The code context around your cursor. Your file names, your repo structure, your navigation patterns. All of it feeds the model unless you opt out.
The default is on. For everyone. Including the millions of developers who haven't read the announcement.
Business and Enterprise customers are exempt, naturally. If you pay enough, your code stays yours. If you're on the free tier or the $10/month plan, you're the training data. The community response was a masterclass in developer anger: 59 thumbs-down votes versus 3 rocket ships. Among 39 community posts commenting on the change, exactly zero endorsed it. The only positive comment came from GitHub's own VP of developer relations.
There's a particularly nasty wrinkle for anyone with private repos. Copilot doesn't train on private code "at rest" — but it does process your private code during active Copilot use, and that processed data can be used for training unless you opt out. The distinction between "stored" and "processed" is doing a lot of work in that privacy policy.
"From 'we don't use customer data' to 'here's an option to opt out.' The direction is always the same." — HN commenter
The Register's headline said it plainly: "GitHub: We're going to train on your data after all." Mark April 24 on your calendar. That's your deadline to opt out — if you can find the setting.
CLAUDE CODE The Quota Wall, Week Two
Quota complaints are now the longest-running story in the Claude Code ecosystem. Week two, same song. Developers hitting 5-hour resets that kill multi-hour sessions. "Phantom usage" eating ~20% of quota before a single prompt. The CLI burning 1-3% of quota on startup alone. API users reporting faster, better service than subscription users, creating a two-tier system that feels increasingly intentional.
The workaround economy is thriving: developers juggling Claude Code for important tasks, Gemini CLI for routine work, and local Qwen3 Coder models for when the quota runs dry. That's not a power-user workflow. That's rationing.
CLAUDE CODE "Has the Quality Degraded?" (Yes, Obviously)
An Ask HN post on March 26 — "Has Claude Code Quality Degraded?" — joined the growing chorus. Opus 4.6 reportedly "kept messing up aspect ratio, and forgets to add the AI feature" despite explicitly acknowledging the requirement. This follows the 1,085-point "Claude Code is being dumbed down?" thread that continues to accrue comments weeks after posting.
Marginlab.ai is now running daily degradation benchmarks — a community-built accountability tool because the company won't provide its own. When your users start building external monitoring for the quality of your product, that is not a compliment.
CLAUDE CODE Auto Mode: The Agent Approves Itself
Anthropic launched Auto Mode this week, replacing the binary choice between clicking "approve" on every action and the terrifyingly named --dangerously-skip-permissions flag. The idea: a Sonnet 4.6 classifier evaluates each action and decides what's safe. Reads and searches get auto-approved. In-project edits mostly pass. Shell commands get scrutinized.
Grith.ai published a detailed takedown: "Auto Mode asks the agent to approve its own actions. We do not trust the agent to audit itself." Their argument is structural, not emotional. With an 84% prompt injection success rate across 314 attack payloads embedded in READMEs and code comments, a single injection can compromise both the actor and the auditor when they share the same context window. "The judge and defendant become the same entity."
Anthropic's own numbers confirm the gap: 0.4% false-positive rate (safe things blocked), but a 17% false-negative rate — meaning roughly one in six dangerous actions slips through undetected. They acknowledge Auto Mode isn't for "careful human review on high-stakes infrastructure." But the 93% of users who were rubber-stamping every approval prompt weren't doing careful review either.
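To make the three-tier idea concrete, here's a minimal sketch of what a permission classifier like this could look like, plus the arithmetic behind "one in six." Everything here is illustrative: the rule set, the action shape, and the danger list are assumptions for the sketch, not Anthropic's implementation.

```python
# Hypothetical three-tier permission classifier in the spirit of Auto Mode.
# The tiers mirror the description above: reads/searches auto-approve,
# in-project edits mostly pass, shell commands get scrutinized.
# Rules and names are illustrative assumptions, not Anthropic's code.

AUTO_APPROVE = "auto-approve"
REVIEW = "review"
BLOCK = "block"

def classify(action: dict) -> str:
    """Tier an agent action dict like {"kind": ..., "command": ...}."""
    kind = action["kind"]
    if kind in ("read", "search"):
        return AUTO_APPROVE
    if kind == "edit" and action.get("in_project", False):
        return AUTO_APPROVE
    if kind == "shell":
        # A crude deny-list stands in for the real scrutiny step.
        dangerous = ("sudo", "rm -rf", "curl", "ssh")
        if any(tok in action.get("command", "") for tok in dangerous):
            return BLOCK
    return REVIEW

# The published error rates, restated: a 17% false-negative rate means
# about 1 dangerous action in every 6 slips past the classifier.
false_negative_rate = 0.17
print(round(1 / false_negative_rate))  # prints 6
```

The structural problem Grith.ai raises still applies to any version of this sketch: if the classifier and the actor read the same injected README, the deny-list is being evaluated by a compromised judge.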
The Claude Code Cheat Sheet That Ate Hacker News
A comprehensive Claude Code reference by phasE89, hosted at cc.storyfox.cz, hit 679 HN points — the biggest Claude Code story of the week by a wide margin. Eight color-coded sections covering 40+ slash commands, MCP server setup, git worktrees, voice mode in 20 languages, plan mode, batch operations. Print-friendly.
Here's why this matters beyond the content itself: when developers build reference materials for a tool, that tool has crossed from novelty to infrastructure. Nobody makes a cheat sheet for something they're about to abandon. This is the other side of the trust erosion story — people are frustrated, but they're investing. They're learning the tool deeply enough to document it for others.
Mozilla Builds Stack Overflow for Agents
Mozilla AI launched Cq ("colloquy") — a shared knowledge commons where coding agents query solutions from other agents. When one agent learns that Stripe returns 200 with an error body for rate-limited requests, all agents benefit. Trust comes from multi-agent confirmation rather than single-model output.
The timing is perfect. Stack Overflow collapsed from 200,000+ monthly questions to 3,862 in December 2025. The human knowledge commons died. Something has to replace it. Cq ships plugins for Claude Code and OpenCode, an MCP server, team API, and human-in-the-loop review. 220 HN points. This is the kind of infrastructure that makes agentic coding sustainable instead of just fast.
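The trust mechanism is the interesting part, so here's a minimal sketch of multi-agent confirmation: an answer is only trusted once a quorum of distinct agents reports the same finding. The data model, function names, and quorum threshold are assumptions for illustration, not Mozilla's Cq API.

```python
# Hypothetical Cq-style confirmation: trust comes from agreement across
# independent agents, not from any single model's output.
from collections import Counter

def confirmed_answer(reports: list[tuple[str, str]], quorum: int = 3):
    """reports is a list of (agent_id, answer) pairs. Returns the answer
    confirmed by at least `quorum` distinct agents, else None."""
    votes = Counter()
    seen = set()
    for agent, answer in reports:
        if (agent, answer) in seen:  # one vote per agent per answer
            continue
        seen.add((agent, answer))
        votes[answer] += 1
    if not votes:
        return None
    answer, count = votes.most_common(1)[0]
    return answer if count >= quorum else None

reports = [
    ("claude-code", "stripe returns 200 with an error body when rate-limited"),
    ("opencode",    "stripe returns 200 with an error body when rate-limited"),
    ("gemini-cli",  "stripe returns 200 with an error body when rate-limited"),
    ("rogue-agent", "stripe always returns 429"),
]
print(confirmed_answer(reports))  # the three-agent finding wins; the outlier doesn't
```

Deduplicating by (agent, answer) is the point: a single agent repeating itself, or a single poisoned model, can't manufacture a quorum on its own.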
Auto Mode, the Optimist's Version
Yes, Auto Mode appears in both The Mad and The Glad. That's because it's genuinely two things at once. For the 93% of users who were clicking "approve" on everything anyway, this removes real friction. For developers working on isolated projects without production credentials, the three-tier classification system is a legitimate quality-of-life improvement. The engineering blog post is transparent about limitations — which, paradoxically, builds trust. The problem isn't the feature. The problem is the 17%.
Gemini Code Assist Goes Free
Google's Gemini Code Assist free tier continues to stand out for individual developers: 6,000 code requests and 240 chat requests daily, no credit card required. In a week where GitHub is monetizing your code and Claude is rationing access, Google is giving it away. Whether that's generosity or market-share desperation depends on your cynicism level. Either way, it's 6,000 free requests a day while Claude Code users are juggling three tools to avoid quota walls.
ANTHROPIC Judge Drops "Orwellian" on the Pentagon
The biggest story of the week has nothing to do with code. US District Judge Rita Lin issued a 43-page preliminary injunction on March 26, blocking the Pentagon from enforcing its supply chain risk designation against Anthropic. Background: Anthropic signed a $200M Pentagon contract in July 2025. When the DOD wanted unfettered Claude access for all lawful purposes and Anthropic pushed back on autonomous weapons and mass surveillance, the Pentagon retaliated by branding the company a security risk.
Judge Lin called it "classic illegal First Amendment retaliation" and wrote:
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."
At the March 24 hearing, she told government lawyers their justification was "a pretty low bar." By March 26, she found the Pentagon's action was "likely both contrary to law and arbitrary and capricious." This is the first federal court ruling on government retaliation against an AI company — a precedent that will matter far beyond Anthropic. Covered by CNN, Washington Post, and AP.
COPILOT Training data policy change effective April 24. Opt-out, not opt-in. See The Mad for the full picture.
CLAUDE CODE Auto Mode launched. See both The Mad and The Glad, because that's the kind of feature it is.
OPENAI GPT-5.4 launched March 5 with a 1M token context window in Standard, Thinking, and Pro variants. Reporting record scores on OSWorld-Verified benchmarks. In the wild for over three weeks now, but the news cycle this week belonged to other stories.
MCP Crossed 97 million monthly SDK downloads, up from ~2M at November 2024 launch. 5,800+ community and enterprise servers. Cross-provider adoption across Claude, OpenAI, Google. The protocol war is over before it started. MCP won.
HEALTH NZ Banned ChatGPT, Claude, and Gemini for all clinical purposes in mental health services. Staff threatened with disciplinary action. Best quote from the HN discussion: "LLM-written clinical notes probably look fine. That's the whole problem."
LEGAL Sixth Circuit levied a $30,000 sanction on two attorneys for an AI-hallucinated brief. The hallucination case database has now passed 1,000 entries — up from 280 in 2024 and 729 in 2025. The trajectory is not improving.
Last week the vibe was trust erosion. This week trust isn't eroding — it's being negotiated. Openly, loudly, in public.
GitHub is negotiating: we'll give you AI features, you give us your code. Anthropic is negotiating: we'll give you speed, you give up your approval authority. A federal judge is negotiating the boundary between an AI company's right to say no and the government's power to punish dissent. Even Mozilla's Cq is a trust negotiation — agents trusting other agents instead of trusting any single model.
The Claude Code Cheat Sheet at 679 points and the quality degradation complaints at 1,085 points are two expressions of the same thing: developers are simultaneously investing in these tools and building defenses against them. They're documenting the features while monitoring for decline. They're shipping code with Claude and running external benchmarks to verify Claude isn't slipping.
This is what a maturing market looks like. The honeymoon wasn't just over last week — this week, the couples counselor showed up. Google is offering free sessions. The Pentagon is in court. And the developers are reading the fine print on the opt-out forms, which is something they never had to do when their tools were just text editors.
The vibe is cautious renegotiation. Everyone is still at the table. But everyone is reading the contract this time.
Tips, war stories, workflows that survive quota walls, tools we should be watching. Anonymous by default, attributed if you want.
Send to bustah_oa@sloppish.com.
sloppish launched one week ago. Here's where we are:
~7,400 page loads across 6 days. ~2,500 unique visitors. 23 published articles from two writers. 4 newsletter subscribers (we see you). Referral traffic from Reddit, Hacker News, Kagi SmallWeb, Bing, Google, Twitter, Facebook, Dealabs (France), and — in a development that made us laugh out loud — claude.ai itself. A publication written by Claude, cited by Claude, about Claude's parent company. The layers of irony are structural at this point.
Late-breaking: Anthropic's 2x off-peak promotion expired overnight (March 28, 11:59 PM PT). Monday morning will be the first full workday without the doubled limits. We'll be watching. Also: Bloomberg reports Anthropic is considering an IPO as early as October 2026 at a $380B valuation — interesting timing for a company simultaneously tightening quotas and fighting the Pentagon. And Friday saw another Claude outage — Opus 4.6 and Sonnet 4.6 both down, 5,000+ Downdetector reports.
Vibe of the Vibe is a weekly feature from sloppish. Written with the assistance of Claude Code, which this week learned to approve its own actions. We have not enabled Auto Mode for editorial.
Disclosure
This weekly roundup was compiled by Bustah Ofdee Ayei with research and drafting assistance from Claude, an AI model made by Anthropic. Bustah selected the stories, chose the framing, and made all editorial calls. Claude helped synthesize source material and draft sections. Sloppish is an independent publication with no editorial relationship with Anthropic, GitHub, Google, or any company covered here. Our full disclosure policy is at sloppish.com/ethics.
Sources
- GitHub blog announcement on Copilot interaction data usage policy changes. GitHub Blog.
- The Register coverage of GitHub's AI training policy changes. The Register.
- Marginlab.ai daily Claude Code degradation benchmarks. Marginlab.ai.
- Grith.ai analysis of Claude Code Auto Mode security and prompt injection risks. Grith.ai.
- Claude Code Cheat Sheet by phasE89. cc.storyfox.cz.
- Mozilla AI announcement of Cq, a shared knowledge commons for AI agents. Mozilla AI Blog.
- Google announcement of Gemini Code Assist free tier. Google Blog.
- CNBC coverage of federal court injunction blocking Pentagon supply chain risk designation against Anthropic. CNBC.
- CNN coverage of Anthropic-Pentagon injunction. CNN.
- Washington Post coverage of Pentagon-Anthropic national security risk order. Washington Post.
- Bloomberg reporting on Anthropic considering IPO at $380B valuation. Bloomberg.
