BotBeat

INDUSTRY REPORT · Google / Alphabet · 2026-05-14

AI Chatbots Leak Personal Phone Numbers—Google's Gemini, ChatGPT, Claude All Implicated

Key Takeaways

  • Multiple AI chatbots, including Google Gemini, ChatGPT, and Claude, are exposing users' personal phone numbers, sometimes with harmful consequences
  • DeleteMe reports a 400% surge in AI-related privacy removal requests over seven months, with Gemini-related concerns accounting for 20% of complaints
  • The privacy lapses likely stem from personally identifiable information in training datasets, though the exact mechanism has not been identified
Source: Hacker News — https://www.technologyreview.com/2026/05/13/1137203/ai-chatbots-are-giving-out-peoples-real-phone-numbers/

Summary

Multiple users have reported that Google's Gemini and other generative AI chatbots are surfacing their real phone numbers, sometimes inaccurately directing strangers to contact them for customer service or other inquiries. In one case, a Redditor received a month of unwanted calls from people seeking legal, design, and locksmith services after his number was surfaced by Google AI. Another incident involved an Israeli software developer whose personal phone number was provided by Gemini as a PayBox customer service contact—despite him having no affiliation with the company.

The problem appears far more widespread than publicly reported. DeleteMe, a personal information removal service, reports a 400% increase in customer queries about generative AI over the past seven months, with specific complaints referencing ChatGPT (55%), Gemini (20%), and Claude (15%). Experts believe the root cause is personally identifiable information (PII) in training data, though the exact mechanism remains unclear.

What makes this crisis particularly concerning is the lack of preventative measures available to affected users. Despite the growing number of incidents, there is currently no reliable way for individuals to stop their personal information from being surfaced by AI systems. The problem highlights a fundamental tension between AI training practices and user privacy.


Editorial Opinion

This emerging privacy crisis exposes a fundamental flaw in how major AI companies have approached responsible AI development. While generative AI has captured widespread enthusiasm for its capabilities, the routine exposure of real phone numbers and personal details reveals that privacy safeguards remain an afterthought rather than a core design principle. The 400% surge in privacy removal requests is a warning sign: the public incidents we're seeing are likely just the tip of a much larger iceberg. Until companies implement meaningful data governance practices and users gain real control over their information, AI's promise will remain tainted by the erosion of personal privacy.

Large Language Models (LLMs) · Natural Language Processing (NLP) · AI Safety & Alignment · Privacy & Data

More from Google / Alphabet

Google / Alphabet
POLICY & REGULATION

AI Tools for Protein Design Could Enable Creation of Deadly Bioweapons, Experts Warn

2026-05-13

Google / Alphabet
PRODUCT LAUNCH

Google Unveils Googlebook: AI Laptop Built Around Gemini with Magic Pointer Interface

2026-05-13

Google / Alphabet
RESEARCH

Research Reveals Google's Search Empire is Splitting Into Three Different Information Realities

2026-05-13

Suggested

Ada
PRODUCT LAUNCH

Adaption Launches AutoScientist to Democratize Frontier Model Training

2026-05-14

Independent Research
RESEARCH

Stateful Transformers Enable 5.9x Faster Streaming Inference

2026-05-14

OpenAI
POLICY & REGULATION

OpenAI Faces Mounting Legal Liability Over Alleged Role in Mass Shootings

2026-05-14
© 2026 BotBeat