In July 2025, Anthropic signed a $200 million contract with the U.S. Department of Defense. Claude became the first frontier AI system cleared for classified military networks. The contract had two restrictions — Anthropic's red lines: no fully autonomous weapons and no mass domestic surveillance of Americans. Eight months later, the Trump administration designated Anthropic a "supply chain risk" — the first time the label has ever been applied to an American company — for refusing to remove those restrictions. Hours later, OpenAI signed a deal to replace them.1,2
Every AI company claims to care about safety. Only one has been punished for it.
Before we go further: this publication runs on Claude. Anthropic makes the tool this article was written with. That conflict of interest is real, it is unavoidable, and it is disclosed here at the top — not buried in a footnote. Every claim in this piece is sourced from court filings, congressional letters, and reporting by CNBC, CNN, NPR, Bloomberg, TechCrunch, and MIT Technology Review. Verify all of it. We'll wait.
The Ultimatum
Negotiations over Claude's deployment on the Pentagon's GenAI.mil platform stalled in September 2025. The dispute was specific: the Pentagon wanted Anthropic to agree to let the military use its models for "all lawful purposes" — no restrictions. Anthropic's position: the two red lines were non-negotiable.1
On February 24, 2026, Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei at the Pentagon. Hegseth issued an ultimatum: relent by 5:01 PM Friday, February 27, and allow unrestricted use for "all legal purposes."1
Amodei's public response, on February 26: "The Pentagon's threats do not change our position: we cannot in good conscience accede to their request."3
Anthropic gave two reasons. First: "We do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians." Second: "We believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."3
On February 27, President Trump directed all federal agencies to cease using Anthropic's AI technology. Secretary Hegseth announced the pending supply chain risk designation. Pentagon Undersecretary Emil Michael posted on X: "It's a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military."2
Hours later, OpenAI announced it had struck a deal with the Pentagon to provide its AI for classified networks.4
Hours.
The Label
On March 3, the Pentagon formally designated Anthropic a supply chain risk under 10 U.S.C. §3252 and the Federal Acquisition Supply Chain Security Act of 2018. These are statutes designed to protect military procurement from foreign sabotage. They have been used against foreign adversaries — companies like Huawei. They have never been used against an American company.5
The direct impact: Anthropic cannot contract with the DOD. The cascading impact is the real weapon. Hegseth's statement: "No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."5
Read that again. Amazon, Google, Nvidia — all major Anthropic investors — also do defense business. If enforced broadly, companies would have to choose: Anthropic or Pentagon contracts. Bloomberg compared it to a "Huawei-like ban."5
The DOD contract itself was worth $200 million. For context, Anthropic's annualized revenue run rate hit $14 billion by February 2026 and surged to $19 billion by early March — but run rate is not revenue, and treating it as such overstates where the company actually stood. Amodei has said enterprises account for roughly 80% of Anthropic's business, with government a small fraction. But the cascading effect on companies serving both Anthropic and the DOD could reach hundreds of millions in lost revenue, per Anthropic's CFO in legal filings.5
This is the ethics tax. Not the $200 million contract. The existential threat of forcing every partner and investor to choose sides.
The Replacement
OpenAI's Pentagon deal was announced the same day Anthropic was banned. Worth up to $200 million — the same value as the contract it replaced. OpenAI published a blog post titled "Our agreement with the Department of War."4
OpenAI's stated restrictions: no mass domestic surveillance. No autonomous weapons systems. On paper, similar to Anthropic's red lines.4
On paper.
The Electronic Frontier Foundation published an analysis titled "Weasel Words." Their finding: OpenAI's contract contains no Anthropic-style, free-standing prohibitions on otherwise-lawful government use. It merely states that the Pentagon can't use the technology to break existing laws — and laws can change. Anthropic sought contractual bans. OpenAI allows use for "any lawful purpose."6
MIT Technology Review: "OpenAI's 'compromise' with the Pentagon is what Anthropic feared."7 The Intercept's headline: "On Surveillance and Autonomous Killings: You're Going to Have to Trust Us."8
The distinction matters. "We won't let you do X" is a restriction. "You can do anything legal" is a permission. Laws change. Interpretations shift. "Lawful purpose" is whatever the government says it is at the time.
Sam Altman admitted the deal "looked opportunistic and sloppy." He told staff he "shouldn't have rushed." He said OpenAI would push for the same limitations Anthropic had. The signed contract doesn't reflect that.9
Caitlin Kalinowski, who led OpenAI's hardware team, resigned. Her statement: "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."10
The Complication
Anthropic is not a clean hero in this story. Two complications.
First: the leaked memo. After the ban, Amodei wrote an internal memo to staff criticizing the Trump administration, saying it opposed Anthropic because the company hadn't donated to the president or offered "dictator-style praise." He reportedly called OpenAI staff "gullible." The memo leaked. Amodei apologized publicly on March 6: "It was a difficult day for the company, and I apologize for the tone of the post."11
The memo was intemperate. It was also probably accurate. But "probably accurate" and "strategically wise to put in writing" are different things.
Second, and more significant: on February 24 — the same day Hegseth met Amodei — Anthropic publicly released version 3.0 of its Responsible Scaling Policy, removing the hard limit that barred training more powerful models if capabilities outstripped safety controls. Anthropic replaced it with a dual-condition framework — not a simple deletion, but a loosening. CNN covered the change the next day; critics noted the timing regardless.12
The company fighting the Pentagon over safety restrictions revised its own internal safety policy the same week. Anthropic presented the change as more practical and mature. Critics saw a company weakening one safety commitment while publicly defending another. Both readings have merit.
This doesn't invalidate the Pentagon stand. The two red lines — no autonomous weapons, no mass surveillance — are specific, defensible, and worth defending. But the RSP revision complicates the narrative that Anthropic is the pure safety company and everyone else is compromised. The reality is messier. Anthropic held a line the industry wouldn't hold while simultaneously moving a different line the industry has been moving for years.
The Court
Anthropic filed suit on March 9 in the U.S. District Court for the Northern District of California, claiming the designation punishes the company for being outspoken about AI safety policy.13
The support was extraordinary. Over 30 employees from OpenAI and Google DeepMind — including Google chief scientist Jeff Dean — filed an amicus brief warning that the blacklist "threatens to damage the entire American AI industry."14 Nearly 150 retired federal and state judges filed a separate amicus brief supporting Anthropic.15
Then, on March 20, TechCrunch published details from court filings revealing that on March 4 — a day after the Pentagon sent the formal supply chain risk designation — Under Secretary Michael emailed Amodei saying the two sides were "very close" on the autonomous weapons and surveillance issues.16 The very issues cited as justification for the blacklist.
If the Pentagon's own official said they were nearly aligned, who made the decision to escalate? The designation looks less like a procurement dispute and more like retaliation.
Senator Elizabeth Warren thought so. On March 23, she sent a letter to Defense Secretary Hegseth calling the designation "retaliation" and opening an investigation into both the Anthropic blacklisting and the OpenAI contract: "I am particularly concerned that the DoD is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards."17
The Hearing
On March 24, Judge Rita Lin heard Anthropic's request for a preliminary injunction. Her questions were pointed.18
"I don't know if it's murder, but it looks like an attempt to cripple Anthropic."
"That seems a pretty low bar," she said of the government's justification.
"What is troubling to me about these three actions is that they don't really seem to be tailored to the stated national security concern."
"If the worry is about the integrity of the operational chain of command, [the Pentagon] could just stop using Claude."
She expressed concern that Anthropic was being "punished for criticizing the government's contracting position in the press."18
A ruling is expected within days.
The Landscape
Every other major AI company complied with the Pentagon's demand for unrestricted use. Google complied. Meta initially prohibited military use of Llama, then reversed. xAI was part of the original four-company Pentagon AI expansion. Palantir, the defense integrator that partnered with Anthropic on the original classified deployment, is still in the game.1
Anthropic is the only one that said no.
The Pentagon's FY2026 AI budget: $12 billion+, a 40% increase year-over-year. The Pentagon is making plans for AI companies to train on classified data. The market for military AI is large and growing. The incentive to comply is enormous. The incentive to resist is a supply chain risk designation and a tweet calling your CEO a liar with a God-complex.2
If the government can blacklist a company for refusing to remove ethical safeguards, no company will maintain them. That's not speculation. That's the amicus brief from 30+ employees of competing companies. They understand the precedent even if their employers signed the deals.
The Tax
The ethics tax is not the $200 million contract. The ethics tax is the supply chain risk label that could force every company doing business with both Anthropic and the Pentagon to choose sides. It's the signal to every AI startup watching: this is what happens when you mean it.
Every frontier AI company has a safety page on its website. Every one publishes principles. Every one has a Chief Safety Officer or equivalent. Every one says the right things at conferences and in congressional testimony.
Only one was tested. Only one refused.
And the complication remains. Anthropic held the line on autonomous weapons and surveillance while revising its own scaling policy the same week. The company that said no to the Pentagon replaced its hard training limits with a looser framework. The ethics aren't pure. They're strategic — chosen where the public relations value is highest and abandoned where the competitive cost is too great.
Maybe that's the most honest thing about this story. Ethics in the AI industry aren't a binary. They're a budget. Every company decides how much safety it can afford. Anthropic's budget is larger than its competitors'. It is not unlimited. Nobody's is.
The question this case will answer — in court, in Congress, in the market — is whether the industry's safety budget is about to get a lot smaller. If the penalty for maintaining restrictions is a Huawei-style blacklist, the rational move is to drop the restrictions. Every company watching knows this. The next time the Pentagon asks, nobody will say no.
That's the ethics tax. Not what it costs Anthropic. What it costs everyone else to watch.
Disclosure
This article was researched and written with the assistance of Claude, an AI made by Anthropic — the company at the center of this story. That is the most significant conflict of interest this publication has faced. We disclosed it at the top of the article, and we are disclosing it again here at the bottom. Every factual claim is sourced from court filings, congressional records, and independent journalism. We have included Anthropic's complications (the RSP revision, the leaked memo) alongside its stand. We have given equal sourcing weight to Anthropic's critics and supporters. The reader should weigh our analysis accordingly. The receipts are all linked. Check them. Corrections welcome at nadia@sloppish.com.
Citations
1. Timeline of the Anthropic-Pentagon dispute. TechPolicy.Press.
2. Trump bans Anthropic; Hegseth announces supply chain risk designation; Emil Michael post on X. Fortune.
3. Amodei's public statement refusing the Pentagon's terms. Anthropic | CNN.
4. OpenAI Pentagon deal announcement. OpenAI blog.
5. Supply chain risk designation: legal basis, practical impact, cascading effects. Mayer Brown | Bloomberg | CNBC.
6. EFF "Weasel Words" analysis of OpenAI's Pentagon contract. EFF.
7. MIT Technology Review on OpenAI's deal as what Anthropic feared. MIT Technology Review.
8. The Intercept on surveillance and autonomous killings. The Intercept.
9. Altman admits deal "looked opportunistic and sloppy." CNBC.
10. Caitlin Kalinowski resignation from OpenAI. NPR.
11. Amodei leaked memo and apology. Axios.
12. Anthropic RSP v3.0 publicly announced February 24, 2026; replaced hard training limits with dual-condition framework. CNN.
13. Anthropic files lawsuit in N.D. California. CNBC.
14. Amicus brief from 30+ OpenAI and Google DeepMind employees including Jeff Dean. Fortune.
15. Amicus brief from ~150 retired federal and state judges. CNN.
16. Court filing reveals "very close" email from Under Secretary Michael, March 4. TechCrunch.
17. Senator Warren letter calling designation "retaliation." TechCrunch | Warren letter (PDF).
18. Judge Rita Lin hearing, March 24. "Attempt to cripple," "troubling," "not tailored." CNBC | CBS News | NPR | The Hill.