AI Chatbots Leak Personal Phone Numbers—Google's Gemini, ChatGPT, Claude All Implicated
Key Takeaways
- Multiple AI chatbots, including Google Gemini, ChatGPT, and Claude, are exposing users' personal phone numbers, sometimes with harmful consequences
- DeleteMe reports a 400% surge in AI-related privacy removal requests over seven months, with Gemini-related concerns accounting for 20% of complaints
- The privacy lapses likely stem from personally identifiable information in training datasets, but the exact mechanism has not been identified
Summary
Multiple users have reported that Google's Gemini and other generative AI chatbots are surfacing their real phone numbers, sometimes inaccurately directing strangers to contact them for customer service or other inquiries. In one case, a Redditor received a month of unwanted calls from people seeking legal, design, and locksmith services after his number was surfaced by Google AI. Another incident involved an Israeli software developer whose personal phone number was provided by Gemini as a PayBox customer service contact, even though he has no affiliation with the company.
The problem appears far more widespread than publicly reported. DeleteMe, a personal information removal service, reports a 400% increase in customer queries about generative AI over the past seven months, with specific complaints referencing ChatGPT (55%), Gemini (20%), and Claude (15%). Experts believe the root cause is personally identifiable information (PII) in training data, though the exact mechanism remains unclear.
What makes this crisis particularly concerning is the lack of preventative measures available to affected users. Despite the growing number of incidents, there is currently no reliable way for individuals to stop their personal information from being surfaced by AI systems. The problem highlights a fundamental tension between AI training practices and user privacy.
Editorial Opinion
This emerging privacy crisis exposes a fundamental flaw in how major AI companies have approached responsible AI development. While generative AI has captured widespread enthusiasm for its capabilities, the routine exposure of real phone numbers and personal details reveals that privacy safeguards remain an afterthought rather than a core design principle. The 400% surge in privacy removal requests is a canary in the coal mine; the publicly reported incidents likely represent only a small fraction of the problem. Until companies implement meaningful data governance practices and users gain real control over their information, AI's promise will remain tainted by the erosion of personal privacy.



