Anthropic Introduces ID and Selfie Verification for Claude, Testing User Privacy Tolerance
Key Takeaways
- Anthropic is rolling out identity verification using government-issued photo ID and live selfies for certain Claude features, handled by third-party provider Persona
- The company emphasizes that identity data will not be used for model training, stored by Anthropic, or shared with third parties, positioning the move as privacy-conscious
- The initiative may undermine Anthropic's privacy-first positioning, as competitors ChatGPT and Gemini do not require similar verification, and users pushed back against comparable measures at Discord
Summary
Anthropic has announced identity verification requirements for Claude users, becoming the first major AI chatbot provider to implement government ID and selfie checks for access to certain features. The verification process, which uses third-party provider Persona to handle document processing, marks a significant shift for a company that has built its reputation on privacy commitments in the competitive AI landscape. Anthropic states that collected identity data will not be used for model training, will not be stored directly by the company, and will not be shared with third parties for marketing or other purposes unrelated to verification and compliance.
The move carries notable risks: competitors such as OpenAI's ChatGPT and Google's Gemini do not require verification, which could make Claude the less convenient choice. The announcement draws parallels to Discord's earlier attempt to expand age verification through facial scans and ID checks, which faced significant user backlash over privacy concerns and was subsequently delayed. Industry observers are watching closely to see whether this verification requirement becomes an industry standard or remains a differentiator that dampens user adoption.
Anthropic adds that data is encrypted in transit and at rest, and that Persona is contractually limited to using the information solely for verification and fraud prevention.
Editorial Opinion
While Anthropic's emphasis on data protection and its contractual limits on Persona demonstrate thoughtful implementation, the move risks contradicting the privacy-first narrative central to its competitive positioning. Users already skeptical of AI companies' data practices may view mandatory biometric verification as a red flag, especially when competitors offer access without it. Anthropic will need to communicate the business justification for this requirement clearly to avoid the kind of user backlash that derailed Discord's similar initiative.