
The Verification

By Bustah Ofdee Ayei · April 15, 2026

Anthropic didn't announce this on their blog. They published it as a support article — "Identity verification on Claude" — buried in the help center at support.claude.com.1 To continue using Claude in certain unspecified scenarios, you'll now need to hand over a government-issued photo ID and a live selfie. The company that builds the AI replacing your job wants to see your face first.

What They're Asking For

The requirements: a passport or driver's license, plus a real-time selfie captured through your device camera.1 The verification is handled by Persona Identities, a third-party identity platform at withpersona.com.2 Anthropic is the data controller. Persona is the processor. Your face goes to Persona's servers, gets matched against your government document, and the result flows back to Anthropic.

When does this trigger? Anthropic says "a few use cases" require it.1 They won't say which ones. The support page mentions "certain capabilities," "routine platform integrity checks," and "safety and compliance measures" without defining any of those terms. You'll be asked to prove you're you, and you won't know why until it happens.

The vagueness is the point. OpenAI already does identity verification through the same vendor — they announced Verified Organization via Persona around April 2025 — but their scope is explicit: API access to advanced models for organizations.3 You know what triggers it. You can plan around it. Anthropic's version is a blank check. Any use case, any time, no disclosed criteria.

What Persona Collects

Persona's own documentation and independent security research paint a broader picture than Anthropic's support page suggests. The platform collects facial geometry from your selfie, geographic location, IP addresses, and browser and device fingerprints.2 Independent reporting has described the system as also capturing "behavioral biometrics" — reportedly including hesitation detection, which measures how you pause and move while completing the verification flow.5

Persona runs 269 distinct verification checks per identity session.5 The data is retained for up to 3 years after your last interaction.4 Persona's subprocessor list reportedly includes 17 third-party companies that may handle portions of your data.4

269 verification checks, 3 years of retention, and a vendor that left files exposed on a government server. To use a chatbot.

The Security Track Record

In February 2026, Malwarebytes reported that Persona left a frontend exposed on a US government server, with 2,456 files accessible.5 This is the company Anthropic chose to hold your facial geometry and government ID. Their security posture was tested by reality and it failed — on a government system, no less. The exposure was discovered by researchers, not by Persona. We don't know how long it was open.

The standard response here is "no evidence of misuse." That's always the response. It doesn't mean there wasn't misuse. It means nobody checked, or nobody's saying.

The Ratchet

The pattern has been tightening for two years. Claude launched with free access, added paid tiers, imposed aggressive rate limits that forced upgrades, introduced a $200/month Max tier for anyone doing real work, rolled out age verification via Yoti, and now requires government ID verification via Persona.1, 6 Each step up the ladder demands more personal data to access tools that are, simultaneously, replacing the jobs that pay for those subscriptions.

The Hacker News reaction was predictable and correct. "Programming is now 18+ only?"6 Someone compared it to the USSR's typewriter registration system — the state needed to know who could produce text.6 Multiple commenters announced they were moving to local models. The verification requirement didn't just annoy them. It crossed a line they didn't know they had.

The timing is worth noting: the same week Anthropic published their quiet support article about needing your passport, Google released Gemma 4 running natively on an iPhone.7 Full offline AI inference, zero verification, zero data collection. The escape hatch from the verification squeeze arrived on the same news cycle as the squeeze itself.


The Structural Problem

Identity verification for AI access is being framed as safety. It might even be motivated by safety — preventing misuse, satisfying regulators, limiting liability. We don't doubt the intent. We doubt the architecture.

When you centralize AI behind identity gates, you create a biometric database linked to the full range of questions a person asks an AI — coding problems, medical concerns, legal queries, moments of confusion and vulnerability — all tagged to a verified face and a government ID number, retained for three years, processed by companies you've never heard of, and secured by a vendor that left files exposed on a government server.

The question isn't whether Anthropic means well. The question is whether any company should accumulate this combination of data — biometric identity plus complete intellectual activity logs — and whether "a few use cases" is an acceptable threshold for demanding it.

Disclosure

sloppish.com is built using Claude Code, Anthropic's AI coding tool. This article critiques a policy by the company whose product we use daily. We believe that makes the critique more credible, not less. We have not been asked to verify our identity. Yet. bustah_oa@sloppish.com

Sources

  1. Anthropic, "Identity verification on Claude," support.claude.com (accessed April 2026)
  2. Persona Identities, product documentation and privacy policy, withpersona.com — verification checks, behavioral biometrics, hesitation detection, facial geometry collection
  3. OpenAI, "Verified Organization" announcement, ~April 2025 — Persona-based ID verification for API access to advanced models
  4. Persona Identities, data processing agreement and subprocessor list, withpersona.com — 3-year retention period, 17 subprocessors
  5. Malwarebytes, report on Persona frontend exposure on US government server, February 2026 — 2,456 files discovered
  6. Hacker News discussion threads on Claude identity verification — "programming is now 18+ only," USSR typewriter registration comparison, local model migration
  7. Google Gemma 4 on-device inference announcement — native iPhone support, fully offline AI processing