In late February 2026, the Pentagon designated Anthropic a "supply chain risk" — the first time an American AI company has received this classification, according to Anthropic's legal filings. The designation came after Anthropic refused to remove restrictions on mass surveillance and autonomous weapons use cases.1
The designation included a six-month wind-down period. During that window — right now — U.S. Central Command continues to use Claude in Operation Epic Fury against Iran.
According to CNBC, The Washington Post, and CBS News, Claude is being used for intelligence processing, data analysis, and battle scenario planning. CENTCOM uses the models to process intercepts, satellite imagery, and signals intelligence — generating threat evaluations and situational insights. Humans make the final targeting decisions, but Claude handles what NPR described as "everything short of sign-off."2,3,9
The continued use is technically authorized — the six-month transition period permits it. But that's precisely the point: the Pentagon declared Anthropic a national security risk and then acknowledged it would take six months to stop depending on the company's technology. That's not hypocrisy. It's dependency.4
Palantir's Problem
Palantir CEO Alex Karp made the contradiction public. In a CNBC interview, Karp confirmed that Claude is still integrated in Palantir's Maven Smart System — the military AI platform used in the Iran campaign. "The Department of War is planning to phase out Anthropic; currently, it's not phased out," Karp said.5
Karp's frustration was visible. Claude Opus is among the most capable models available, prized for its reasoning depth and reliability in high-stakes environments. If Anthropic is actually blacklisted, Palantir loses one of its most powerful AI engines and has to retool mid-contract — a costly and reputationally damaging disruption.6
Under Secretary of Defense Emil Michael, the Pentagon's top technology official, offered the government's position plainly: "You can't have an AI company sell AI to the Department of War and then not let it do Department of War things."7
What This Means
The Pentagon's position creates an uncomfortable question. If Anthropic is genuinely a supply chain risk, then relying on Claude for intelligence processing in an active military campaign — even during a transition period — is reckless. If it isn't a genuine risk, then the designation is punitive. The federal judge who reviewed Anthropic's challenge concluded the latter, blocking the designation, calling it "classic illegal First Amendment retaliation" and describing the government's actions as "Orwellian."8
But the deeper story isn't about Anthropic's principles or the Pentagon's politics. It's about the wind-down itself. The Pentagon designated a company a national security threat and then acknowledged it couldn't stop using that company's product for half a year. The tool is too deeply embedded in the intelligence pipeline to remove mid-campaign, regardless of what label is attached to the company that made it.
When your most important AI vendor says no and your response is "we'll phase you out in six months while continuing to use you in combat," that's not a policy decision. That's an admission of lock-in.
Disclosure
This article was written using Claude, made by Anthropic — the company at the center of this story. We are reporting on a situation where our own tool's parent company is in a legal dispute with the U.S. government over military use of the same technology we use to write articles. The irony is not lost on us. If Anthropic, the Pentagon, Palantir, or anyone else involved wants to respond, our inbox is open: bustah_oa@sloppish.com.
Sources
- CNBC, "Anthropic officially told by DOD that it's a supply chain risk even as Claude used in Iran," March 5, 2026.
- The Washington Post, "Anthropic's AI tool Claude central to U.S. campaign in Iran, amid a bitter feud," March 4, 2026.
- CBS News, "Anthropic's Claude AI being used in Iran war by U.S. military, sources say," March 2026.
- The Hill, "Anthropic's Claude used by Pentagon in war with Iran, official confirms," March 2026.
- CNBC, "Palantir is still using Anthropic's Claude as Pentagon blacklist plays out, CEO Karp says," March 12, 2026.
- Fortune, "Palantir CEO's rant about the Anthropic-Pentagon feud threatening his company," March 5, 2026.
- The Conversation, "The Pentagon strongarmed AI firms before Iran strikes," March 2026.
- CNN, "Judge blocks Pentagon's effort to 'punish' Anthropic by labeling it a supply chain risk," March 26, 2026.
- NPR, Fresh Air interview with Katrina Manson on Claude's role in Operation Epic Fury ("everything short of sign-off" — Claude processes intelligence and pairs weapons to targets, but humans make final targeting decisions).
