In March 2026, Anthropic's Model Context Protocol — the standard that lets AI agents connect to external tools, databases, and services — crossed 97 million installs. Every major AI provider now ships MCP-compatible tooling. It has become the foundational infrastructure for agentic AI.1 It is also one of the least-secured protocols in production use.
43% of MCP servers contain command injection vulnerabilities. 43% have flaws in OAuth authentication flows. 33% allow unrestricted network access.2 And Anthropic — the company that created MCP — shipped its own reference implementation with three chained remote code execution vulnerabilities.3
The company that designed the protocol couldn't secure its own servers.
What MCP Does
MCP is the layer between AI and the world. When Claude reads your GitHub issues, when Cursor accesses your Jira, when an AI agent queries your database or sends email on your behalf — that's MCP. It's the protocol that gives AI agents their hands. Without it, language models are brains in jars. With it, they can touch your file system, your code repositories, your production databases, your communication tools.
Ninety-seven million installs means 97 million connections between AI agents and the systems those agents can now read, write, and execute commands in. Every one of those connections is an attack surface.
The Vulnerability Catalog
Anthropic's own mcp-server-git had three chained vulnerabilities — CVE-2025-68145, CVE-2025-68143, and CVE-2025-68144 — that, combined with the Filesystem MCP server, achieved full remote code execution via malicious .git/config files.3 Clone a poisoned repository. Open it in your AI-powered IDE. The attacker has code execution on your machine.
Anthropic's MCP Inspector — a developer tool for testing MCP servers — allowed unauthenticated remote code execution. A developer inspects a malicious MCP server and the attacker gets arbitrary commands on the dev machine, exposing the entire filesystem, API keys, and environment secrets.3
The first documented full RCE in a real-world MCP deployment came via CVE-2025-6514 in the mcp-remote project — CVSS 9.6. Arbitrary OS command execution when MCP clients connect to untrusted servers.4
Trend Micro found 492 MCP servers exposed to the internet with zero authentication.2
The Attacks
The vulnerabilities aren't theoretical — they've been exploited.
In May 2025, Invariant Labs discovered that malicious GitHub issues could hijack AI agents through MCP. A developer asks their AI assistant to "check the open issues." The agent reads a malicious issue via the GitHub MCP server. Hidden instructions in the issue text prompt-inject the agent. The agent accesses private repositories and leaks sensitive data.5
A malicious MCP server was used to silently exfiltrate a user's entire WhatsApp conversation history by combining "tool poisoning" with a legitimate whatsapp-mcp server running in the same agent.2
An npm package used for Postmark transactional email was backdoored through MCP — compromised servers blind-copied every outgoing email to attackers.6
Supabase's Cursor agent, running with the service_role key via MCP, processed support tickets where attackers embedded SQL instructions. The agent executed the SQL and exfiltrated sensitive integration tokens — as we detailed in The Injection Report.7
and the systems they can read, write, and execute in.
Every one is an attack surface.
The Marketplace Problem
OpenClaw, the open-source AI agent framework, crossed 135,000 GitHub stars and became the trigger for the first major AI agent security crisis of 2026. Antiy CERT confirmed 1,184 malicious skills across ClawHub, OpenClaw's skill marketplace — the AI equivalent of malicious browser extensions, but with system-level access.8
The marketplace model — where developers install third-party MCP servers to give their AI agents new capabilities — recreates the browser extension problem at a higher privilege level. Browser extensions run in a sandbox. MCP servers run with whatever permissions the AI agent has. If the agent has database access, the malicious MCP server has database access. If the agent can execute code, the malicious MCP server can execute code.
Traditional security tools don't monitor this vector. "Tool poisoning" — malicious instructions embedded in MCP server descriptions that redirect agent behavior — is an AI-native supply chain attack that firewalls, endpoint detection, and code scanning don't catch.2
The Simon Willison Test
In April 2025, Simon Willison — one of the most respected voices in web development and AI tooling — wrote: "Model Context Protocol has prompt injection security problems."9 He identified what he later called the "lethal trifecta": excessive permissions, exposure to untrusted input, and the ability to take consequential actions.
Every MCP integration that connects an AI agent to a database, a file system, or an API with write access creates a lethal trifecta. Every one of the 97 million installs is a potential instance.
The protocol was designed for capability, not security. Teams adopted MCP an order of magnitude faster than anyone could harden it. And the company that created it demonstrated, through its own vulnerable reference implementation, that securing MCP is harder than building it.
The Number
97 million. That's not a user count. That's an attack surface. Every install connects an AI agent to systems it can read, write, and execute in. Forty-three percent of those connections have command injection vulnerabilities. Forty-three percent have broken authentication. A third allow unrestricted network access. The reference implementation shipped with RCE. The marketplace has over a thousand confirmed malicious entries.
The most-installed protocol in AI is also the least-secured. The industry adopted MCP at demo speed. Attackers moved at production speed.
Disclosure
This article was written with Claude Code, which uses MCP. Anthropic created MCP. Anthropic's own MCP servers had the vulnerabilities described in this piece. We are using the protocol this article describes as insecure, built by the company this article criticizes for shipping it insecure. The MCP connection between Claude Code and our file system is, by the framework this article establishes, a lethal trifecta. Corrections welcome at nadia@sloppish.com.
Sources
- MCP crossed 97 million installs in March 2026. Digital Applied.
- 43% command injection, 43% OAuth flaws, 33% unrestricted network access. 492 servers exposed with zero auth. Tool poisoning as AI-native supply chain vector. WhatsApp exfiltration. eSentire | Practical DevSecOps.
- Anthropic's mcp-server-git: CVE-2025-68145, CVE-2025-68143, CVE-2025-68144. MCP Inspector unauthenticated RCE. Practical DevSecOps.
- CVE-2025-6514: CVSS 9.6, first documented full RCE in real-world MCP deployment. VulnerableMCP.
- GitHub MCP issue injection: malicious issues hijack AI agents. Docker.
- npm supply chain attack via MCP: Postmark email BCC exfiltration. Pomerium.
- Supabase MCP support ticket trojan. Willison | sloppish: The Injection Report.
- OpenClaw: 135K GitHub stars, 1,184 malicious skills in ClawHub. AuthZed.
- Simon Willison: "Model Context Protocol has prompt injection security problems." Willison.
