Anthropic Surprises by Requiring Identity Document to Use Claude — and the Reaction Was Not Positive
Anthropic, the company behind the Claude artificial intelligence assistant, quietly announced a new identity verification policy for its users this week. An official support page, published on April 14, states that the company has selected Persona Identities as its verification partner, the same know-your-customer (KYC) infrastructure used by financial services firms, and that verification requires a physical passport, driver's license, or national identity card. Photocopies, digital IDs, and student credentials are not accepted, and a live selfie may also be requested.
The measure, however, is not universal. Anthropic says it has rolled out the verification mechanism only for certain Claude use cases, to prevent abuse, enforce usage policies, and comply with legal obligations. The company states that it does not use verification data to train models, and that the information is stored on Persona's servers rather than on Anthropic's systems. Even so, the lack of transparency about which features trigger the requirement drew immediate criticism from users and analysts.
The timing of the decision is striking. Just two months earlier, millions of users had migrated from ChatGPT to Claude after OpenAI signed a contract to deploy AI on classified Pentagon networks, a contract that Anthropic publicly declined, citing concerns about mass surveillance and autonomous weapons. That stance brought Anthropic a record number of daily sign-ups and a 60% increase in free users since January. Asking that same user base for a government-issued document now reads as a strategic contradiction.
The choice of Persona as a partner has drawn criticism of its own. An investigation by Mashable revealed that personal data from LinkedIn users verified through Persona can be shared with up to 17 companies. Moreover, in October 2025, a Discord breach exposed approximately 70,000 images of government identity documents submitted for age verification, a process that also involved Persona. Anthropic did not immediately respond to press requests for additional details.
The episode adds to an already turbulent week for the company. Around April 12, multiple users reported being incorrectly flagged as minors by Claude, resulting in account suspensions — a process that used Yoti, not Persona. One Pro plan user complained: "The Anthropic team saw all my conversations and blocked me," adding that they were attempting to appeal the decision. The combination of both incidents has ignited a broader debate about how far AI platforms can — and should — go in controlling user identity.
The trend is concerning regardless of stated intentions. An open letter signed by more than 400 scientists and researchers warns that identity verification systems expand the collection of sensitive data, including biometrics, behavioral signals, and contextual information, introducing risks of misuse by providers, third-party access, and data breaches. The debate over the balance between security and privacy on AI platforms is far from over — and Anthropic has just placed itself at the center of it.
Sources:
https://support.claude.com/en/articles/14328960-identity-verification-on-claude
https://tech.yahoo.com/ai/claude/articles/switched-claude-over-surveillance-fears-203738793.html
https://tech.yahoo.com/ai/claude/articles/anthropic-may-now-demand-government-000000288.html
