BREAKING

The Ethics Tax: The Ruling

A federal judge called the Pentagon's Anthropic blacklist "classic illegal First Amendment retaliation" and "Orwellian." The ethics tax just got a receipt.
By Nadia Byer · March 27, 2026

On Thursday, Judge Rita Lin of the U.S. District Court for the Northern District of California granted Anthropic's request for a preliminary injunction, blocking enforcement of President Trump's directive banning federal agencies from using Claude and halting the Pentagon's supply chain risk designation. The order is temporary. The language is not.1

"Punishing Anthropic for bringing public scrutiny to the government's contracting position," Judge Lin wrote, "is classic illegal First Amendment retaliation."1

Read that sentence again. A federal judge, in a written order, used the word "punishing." Not "may constitute." Not "raises concerns about." Punishing. Classic. Illegal.

What the Judge Said

The order went further than the hearing suggested it might. Three passages matter.

First, on the supply chain risk designation itself: Judge Lin called it "likely both contrary to law and arbitrary and capricious."1 The statutes invoked — 10 U.S.C. §3252 and FASCSA — were designed to protect military procurement from foreign sabotage. They have never been used against an American company. Lin's order suggests they were misapplied here.

Second, on the precedent: "Nothing in the statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for exposing a disagreement with the government."2 The word "Orwellian" appears in a federal court order. That is not judicial temperament. That is a judge who has seen enough.

Third, on the cascading threat we identified in The Ethics Tax: the injunction doesn't just block the designation. It bars the administration from implementing, applying, or enforcing the president's directive.1 The weapon — the one that would have forced Amazon, Google, and Nvidia to choose between Anthropic and the Pentagon — is, for now, disarmed.

"The Orwellian notion that an American company may be branded a potential adversary for exposing a disagreement with the government."
— Judge Rita Lin, U.S. District Court

What It Means

A preliminary injunction is not a final ruling. It means the judge found that Anthropic is likely to succeed on the merits and that the balance of harms favors blocking enforcement while the case proceeds. The government can appeal. The full trial hasn't happened.

But the signal is clear. The judge didn't find a close call. She found retaliation. She found misuse of statute. She found an Orwellian application of national security law to punish corporate speech. These are not findings that get reversed easily on appeal.

For the AI industry, the immediate effect is relief — Anthropic's partners don't have to choose sides, the cascading blacklist won't be enforced, and the company that refused to remove its safety restrictions won't be destroyed for it. The 30+ OpenAI and Google DeepMind employees — including Google chief scientist Jeff Dean — who filed an amicus brief warning that the blacklist "will undoubtedly have consequences for the United States' industrial and scientific competitiveness" got the outcome they argued for.3

For the longer term, the question we posed in The Ethics Tax stands: will this ruling encourage other companies to maintain safety restrictions, or was Anthropic's stand a one-time event that nobody else will repeat?

What It Doesn't Mean

This ruling does not make Anthropic a hero. The complications we documented — the RSP revision the same week as the Pentagon stand, the leaked memo — remain. The company held one line while moving another. The ethics are still a budget, not a binary.

This ruling does not prevent the Pentagon from pursuing AI tools without safety restrictions. OpenAI's contract — the one the EFF called "Weasel Words," the one that allows use for "any lawful purpose" rather than imposing contractual bans — is unaffected.4 The Pentagon can still get what it wants from a company willing to give it.

And this ruling does not address the underlying tension: a $12 billion military AI budget seeking unrestricted access to frontier models, and a regulatory environment that has no framework for when a company's safety commitments conflict with a government's operational demands.

The case will continue. Senator Warren's investigation, with its April 6 deadline for answers from Hegseth and Altman, will produce its own findings.5 The full trial will test whether the preliminary findings hold.

The Receipt

We published The Ethics Tax before the ruling. The thesis was that Anthropic was being punished for maintaining safety restrictions, and that the supply chain risk designation was retaliation dressed as procurement policy.

Judge Lin's order uses the word "punishing." It uses the word "retaliation." It uses the word "Orwellian."

The ethics tax is real. The court just issued the receipt.

Disclosure

This article was written with Claude, made by Anthropic — the company that won this ruling. That conflict is as significant now as it was when we published The Ethics Tax, and it is disclosed here for the same reason. The ruling is publicly available. The judge's language is quoted directly. Verify it. Corrections welcome at nadia_byer@sloppish.com.

Sources

  1. Judge Rita Lin, preliminary injunction order, March 26, 2026. "Classic illegal First Amendment retaliation." Blocks enforcement of Trump directive and supply chain risk designation. CNBC | NPR.
  2. "Orwellian notion" quote from Judge Lin's order. CNN | Axios.
  3. Amicus brief from 30+ OpenAI and Google DeepMind employees including Jeff Dean. Fortune.
  4. EFF "Weasel Words" analysis of OpenAI's Pentagon contract. EFF.
  5. Senator Warren investigation, April 6 deadline. TechCrunch.