In February 2026, several leading safety researchers left OpenAI and Anthropic within days of each other. They didn't just leave. They sounded the alarm. Mrinank Sharma, who led Anthropic's Safeguards Research team, posted a resignation letter: "The world is in peril." Zoë Hitzig, an OpenAI researcher, published a guest essay in the New York Times: "OpenAI Is Making the Mistakes Facebook Made. I Quit." Ryan Beiermeister, OpenAI's VP of product policy, was fired for opposing the rollout of pornographic content on ChatGPT.1
The same week, OpenAI quietly disbanded its mission alignment team — a seven-person unit created sixteen months earlier to ensure that the pursuit of artificial general intelligence "actually benefits humanity." The team's lead, Joshua Achiam, was reassigned to a newly created role: "chief futurist."2
Then OpenAI scrubbed the word "safely" from its mission statement.2
The word wasn't revised or rephrased. It was removed.
The Statements
Safety researchers who leave publicly, with written statements explaining why, are doing something qualitatively different from a quiet departure. Here is what they said.
Mrinank Sharma, departing Anthropic: "The world is in peril." He wrote that it was "clear to me that the time to move on has come" — a formulation that implies not personal dissatisfaction but organizational change he could no longer accept.3
Zoë Hitzig, departing OpenAI, wrote that the company's exploration of advertising inside ChatGPT "risks repeating social media's central error of optimizing for engagement at scale." She drew the parallel explicitly: Facebook prioritized engagement over safety, and the consequences took years to become visible. OpenAI is making the same structural choice, faster.1
Caitlin Kalinowski, who led OpenAI's robotics team, resigned in March over the Pentagon deal: "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."4
Ryan Beiermeister, OpenAI's VP of product policy, was fired — not resigned, fired — after opposing the introduction of "adult mode" for pornographic content on ChatGPT. The company removed the person who objected to the decision, then implemented the decision.1
These are not vague concerns from external critics. These are specific warnings from the people who were inside the room, saw the data, and decided they couldn't stay.
The Mission Edit
OpenAI's original mission statement included the word "safely." Its mission was to ensure artificial general intelligence benefits all of humanity — safely. In February 2026, the company removed that word.2
This happened in the same month OpenAI disbanded the team responsible for mission alignment. The team that existed to make sure the mission was being followed was eliminated. Then the mission was edited to remove the part about safety. The sequence matters.
OpenAI also restructured from a nonprofit to a for-profit entity in this period, completing a transition that critics have called the most significant corporate governance change in AI history. The company that started as a research nonprofit dedicated to safe AGI is now a for-profit corporation whose mission statement no longer mentions safety, whose safety team has been disbanded, and whose safety researchers are publishing exit statements.
The xAI Collapse
xAI, Elon Musk's AI company, has lost six of its twelve co-founders. Tony Wu and Jimmy Ba posted their departures within 24 hours of each other in February 2026.5
Half the founding team gone within two years. Reported reasons include SpaceX restructuring, work culture, and health — not primarily safety disagreements. But six of twelve co-founders departing any company in two years signals organizational instability, regardless of individual reasons.
The Anthropic Complication
Anthropic occupies a strange position in this story. It's losing safety researchers (Sharma's "the world is in peril" departure). It revised its Responsible Scaling Policy in February 2026, removing the hard limit that barred training more powerful models without proven safety controls. And yet it's the only company that refused the Pentagon's demand to remove safety restrictions, fought a federal lawsuit over it, and won a preliminary injunction with a judge calling the government's response "classic illegal First Amendment retaliation."6
As we wrote in The Ethics Tax: "Ethics in the AI industry aren't a binary. They're a budget." Anthropic's budget is larger than its competitors'. Sharma's departure suggests even that budget has limits.
The company held a line on autonomous weapons and mass surveillance. The head of its safeguards team still left, saying the world is in peril. Both things are true. The tension between them is the story.
The Pattern
Every major AI company publishes safety principles. Every major AI company has a safety team, or had one. In Q1 2026:
- OpenAI disbanded its mission alignment team, scrubbed "safely" from its mission, fired a safety executive for opposing content decisions, and lost researchers who published exit warnings comparing it to Facebook.1 2
- Anthropic revised its RSP to remove training limits and lost the head of its Safeguards Research team.3 6
- xAI lost half its co-founders.5
This is not coincidence but an industry-wide shift. The industry is moving from the phase where safety was a differentiator (Anthropic's founding pitch: "we left OpenAI over safety concerns") to the phase where safety is a cost center. When the market rewards speed and the Pentagon rewards compliance, the people whose job is to say "slow down" become obstacles.
Some of those people are being removed. The rest are leaving on their own.
What the Departures Mean
Safety commitments are not documents. They are people. A published safety policy with no one to enforce it is marketing. A mission statement with no team to align to it is a press release. A responsible scaling policy with no one willing to invoke the pause clause is decoration.
The measure of an AI company's safety commitment is the tenure and authority of the people responsible for safety. By that measure, every major AI company's safety commitment weakened in Q1 2026.
The departing researchers are not naive. They joined these companies knowing the tensions. They stayed through earlier compromises. They left now. What changed in early 2026 that made staying impossible? The answer is in their statements: commercialization pressure that crossed lines they drew for themselves.
Hitzig saw advertising in ChatGPT and recognized Facebook's playbook. Kalinowski saw the Pentagon deal and recognized a line on lethal autonomy being crossed. Sharma saw something at Anthropic he won't fully name and concluded the world is in peril. Beiermeister saw pornographic content being approved over safety objections and was fired for objecting.
When safety researchers leave publicly, with written statements, it means they've exhausted every internal option. These are the canaries. The mine is still open for business.
Disclosure
This article was written with Claude, made by Anthropic — one of the companies whose safety departures are discussed in this piece. Anthropic's safeguards team lead resigned saying "the world is in peril." We are using the product his team was responsible for safeguarding. The conflict of interest is, at this point, the publication's defining feature. Every source is linked. Corrections welcome at nadia@sloppish.com.
Sources
1. Multiple safety researcher departures from OpenAI and Anthropic in a single week. Hitzig essay: "OpenAI Is Making the Mistakes Facebook Made." Safety exec fired over adult content objections. CNN | Tech Brew | Scripps News.
2. OpenAI disbanded mission alignment team after 16 months. Scrubbed "safely" from mission statement. Achiam reassigned to "chief futurist." WinBuzzer | TechBuzz.
3. Mrinank Sharma, head of Anthropic's Safeguards Research team, resignation: "The world is in peril." The Cool Down.
4. Caitlin Kalinowski resignation from OpenAI over Pentagon deal. NPR.
5. xAI: six of twelve co-founders departed. Tony Wu and Jimmy Ba left within 24 hours. Tech Brew.
6. Anthropic RSP revision; Pentagon dispute; Judge Lin ruling. sloppish: The Ethics Tax | sloppish: The Ethics Tax: The Ruling.
