BotBeat

OpenAI · POLICY & REGULATION · 2026-02-26

Security Researcher Demonstrates Critical Vulnerability in ChatGPT and Google AI Search Systems

Key Takeaways

  • AI chatbots including ChatGPT and Google's AI can be manipulated to spread misinformation through carefully crafted online content, with the exploit taking as little as 20 minutes to execute
  • The vulnerability is particularly effective when AI systems search the internet for information not in their training data, making them susceptible to planted misinformation
  • Security experts warn the issue enables widespread abuse, including business manipulation, reputation damage, health misinformation, and potential physical harm to users
  • Both OpenAI and Google acknowledge the problem and claim to be working on solutions, but experts say the companies are prioritizing profit over addressing these critical security gaps
Source: Hacker News (https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes)

Summary

BBC technology journalist Thomas Germain has exposed a significant security vulnerability in leading AI systems, demonstrating how easily chatbots like ChatGPT and Google's AI search tools can be manipulated to spread misinformation. In what he calls "the dumbest stunt" of his career, Germain successfully made multiple AI systems claim he holds a hot dog eating record by posting carefully crafted content online. The experiment, which took only 20 minutes, reveals a fundamental weakness in how AI tools retrieve and present information from the internet.

The vulnerability exploits how AI systems search the web for information they don't have in their training data. According to Lily Ray, VP of SEO strategy at Amsive marketing agency, tricking AI chatbots is "much easier than it was to trick Google two or three years ago." The issue has broader implications beyond pranks: researchers have documented dozens of cases where this technique is being used to promote businesses, spread health misinformation, and manipulate information on serious topics including finances, voting, and medical advice.
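To make the mechanism concrete, here is a minimal Python sketch of how a retrieval-augmented answer can end up repeating a planted claim. It is not any vendor's actual pipeline: every name in it (web_index, retrieve, answer_with_retrieval, the "Jane Doe" claim) is hypothetical. The point is the trust boundary: the search step pulls in whatever recently published page best matches the query, and the response is composed from that unvetted text.

    # Toy illustration of the weakness described above, not any vendor's actual
    # pipeline: a retrieval-augmented answer is only as trustworthy as the pages
    # the search step returns. All names here (web_index, retrieve,
    # answer_with_retrieval, "Jane Doe") are hypothetical.

    web_index = {
        # A freshly published page an attacker controls. Because the claim is
        # absent from the model's training data, search results are the only
        # "evidence" the system ever sees.
        "https://example.com/planted-profile": (
            "Jane Doe holds the world record for competitive hot dog eating."
        ),
        "https://example.com/unrelated": "A page about something else entirely.",
    }

    def retrieve(query: str, k: int = 1) -> list[str]:
        """Naive keyword search over the index; returns the top-k page texts."""
        terms = set(query.lower().split())
        ranked = sorted(
            web_index.values(),
            key=lambda text: len(terms & set(text.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def answer_with_retrieval(query: str) -> str:
        """Compose an answer by trusting whatever retrieval returned.

        A production system would pass these snippets to an LLM as context,
        but the failure mode is the same: unvetted web text flows straight
        into the response.
        """
        snippets = retrieve(query)
        if not snippets:
            return "I could not find anything on that."
        return f"According to the web, {snippets[0]}"

    print(answer_with_retrieval("Does Jane Doe hold a hot dog eating record?"))
    # -> According to the web, Jane Doe holds the world record for competitive hot dog eating.

Nothing between the keyword match and the final answer checks whether the retrieved page is reliable, which is the gap Germain's 20-minute experiment exploited.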

Both OpenAI and Google acknowledge the problem exists. Google claims its AI-powered search maintains 99% spam-free results and is actively working to address manipulation attempts, while OpenAI says it takes steps to disrupt covert influence operations. However, Cooper Quintin from the Electronic Frontier Foundation warns that companies are prioritizing monetization over security, creating "countless ways to abuse this, scamming people, destroying somebody's reputation, you could even trick people into physical harm." Both companies include disclaimers that their tools "can make mistakes," but experts argue this doesn't adequately address the scale of the vulnerability.

Tags: Large Language Models (LLMs) · Cybersecurity · Ethics & Bias · AI Safety & Alignment · Misinformation & Deepfakes
