Anthropic has begun rolling out identity verification requirements for users of its Claude platform, signaling a stronger push toward safety, compliance, and misuse prevention as AI systems become more powerful. The new process will apply selectively, with users prompted to verify their identity when accessing certain features or during routine integrity checks.
The verification system is powered by Persona, a third-party provider specializing in digital identity checks. Users are required to submit a government-issued photo ID and, in some cases, complete a live selfie capture using a phone or webcam. The process typically takes a few minutes and is designed to confirm identity without collecting unnecessary data.
Anthropic says the verification rollout is tied to broader efforts to enforce its usage policies and comply with legal obligations, particularly as advanced AI capabilities raise concerns about misuse. The company emphasized that verification data is used solely for identity confirmation and not for training AI models or other secondary purposes.
Accepted identification includes passports, driver’s licenses, and national ID cards, provided they are physical, valid, and clearly legible. The system explicitly rejects digital IDs, photocopies, or non-government credentials such as student cards or employee badges. Failed verification attempts can result from poor image quality, expired documents, or technical issues, though users are allowed multiple retries.
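The acceptance rules described above can be illustrated with a short sketch. This is purely hypothetical: the function name, field names, and rejection reasons are illustrative, not Anthropic's or Persona's actual validation logic.

```python
from datetime import date

# Document types the article lists as accepted; student cards,
# employee badges, and other non-government credentials are rejected.
ACCEPTED_TYPES = {"passport", "drivers_license", "national_id"}

def precheck_document(doc_type: str, expiry: date, is_physical: bool) -> list[str]:
    """Return the reasons a document would be rejected (empty list if it passes)."""
    problems = []
    if doc_type not in ACCEPTED_TYPES:
        problems.append("not an accepted government-issued ID")
    if not is_physical:
        problems.append("digital IDs and photocopies are rejected")
    if expiry < date.today():
        problems.append("document is expired")
    return problems

# Example: an expired student card fails on two counts,
# while a valid physical passport passes.
print(precheck_document("student_card", date(2023, 1, 1), is_physical=True))
print(precheck_document("passport", date(2100, 1, 1), is_physical=True))
```

A pre-check like this would only catch structural problems before submission; image-quality failures, the other rejection cause the article mentions, can only be detected server-side during the actual verification attempt.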
From a data handling perspective, Anthropic acts as the data controller, with Persona processing the information on its behalf. Importantly, identity documents and selfies are stored on Persona's systems rather than Anthropic's infrastructure. The company says all data is encrypted in transit and at rest, and Persona is contractually restricted from using the data for anything beyond verification and fraud prevention.
Anthropic also clarified that identity data will not be shared with third parties for marketing or advertising. Access is limited to verification and compliance workflows, with exceptions only in cases where legal obligations require disclosure.
Accounts may still face suspension or bans after verification for violations such as repeated misuse, operating from unsupported regions, or other breaches of the terms of service. Users who believe an enforcement action was taken in error can submit an appeal for review.
Why This Matters
Identity verification marks a shift toward stricter governance in AI platforms. As models gain more advanced capabilities, companies face increasing pressure to prevent harmful use cases, particularly in areas like cybersecurity, fraud, and misinformation.
For businesses and developers, this introduces an additional compliance step that may affect onboarding and user experience. However, it could also improve trust in AI systems by reducing anonymous misuse and enforcing accountability.
For users, the tradeoff is clear: access to more powerful features may require sharing sensitive identity information, even if safeguards are in place.
Context
Anthropic’s move aligns with a broader industry trend toward tighter controls on AI access. Competitors like OpenAI have also explored verification, tiered access, and usage restrictions for advanced AI tools.
The rollout comes alongside Anthropic’s increasing focus on safety frameworks, including recent efforts to limit high-risk capabilities and introduce safeguards in newer models. As regulators worldwide examine AI risks more closely, identity verification may become a standard requirement across leading platforms.