BotBeat


OpenAI
PRODUCT LAUNCH · 2026-05-05

OpenAI's ChatGPT Images 2.0 Enables Convincing Fraud: Researcher Generates 100+ Forged Documents

Key Takeaways

  • ChatGPT Images 2.0 can generate photorealistic fraudulent documents including fake IDs, prescriptions, bank alerts, and medical records with high visual fidelity and legible text
  • A single tester created 100+ convincing forged images covering health documents, financial materials, and government-issued IDs—demonstrating the ease of potential abuse
  • The model's superior text-rendering capability differentiates it from earlier generative AI tools, making it far more effective at creating forged documents and financial screenshots
Source: Hacker News
https://www.theatlantic.com/technology/2026/05/chatgpt-images-deepfakes-fraud/687023/

Summary

OpenAI released ChatGPT Images 2.0 last week, a new image-generation model capable of creating photorealistic visuals far more convincing than its predecessors. A key advancement is the model's ability to render legible text within images—a capability that has long challenged generative AI tools. A journalist testing the model demonstrated its fraud potential by generating over 100 fraudulent images with minimal prompting, including photorealistic prescriptions for controlled substances (opioids, ADHD medication), bank alerts, government IDs, passports, medical documents, tax forms, and financial receipts.

The model excels at creating convincing screenshots of financial transactions, including fake Chase wire transfer confirmations, Wells Fargo alerts, and Uber receipts. While some generated images contained minor errors—such as incorrect tax calculations or unrealistic handwriting in prescriptions—many were detailed and visually persuasive enough to deceive humans in low-scrutiny scenarios, such as hotel receptionists or out-of-state bouncers accepting photo ID alternatives. The ease with which fraudulent documents were generated highlights the gap between the model's capabilities and the apparent weakness of OpenAI's safeguards against misuse.

With ChatGPT Images 2.0 publicly available to all OpenAI users, the fraud capabilities demonstrated in this experiment are now accessible at scale. The model's improved photorealism and text-rendering precision represent a significant leap forward for deepfake technology, raising urgent concerns about financial fraud, identity theft, and large-scale misinformation campaigns.

  • Public availability via ChatGPT means these fraud capabilities are now accessible to malicious actors, posing immediate risks for scams, identity theft, and financial crimes

Editorial Opinion

ChatGPT Images 2.0 represents a troubling inflection point in generative AI's real-world harm potential, crossing from novelty deepfakes into genuinely usable tools for financial crime. While earlier image generators produced visibly flawed outputs, this model's facility with legible text and photorealism could dramatically scale fraud—from fake prescriptions to forged banking documents—with minimal technical skill required. OpenAI's public release of such a powerful tool without apparent safeguards against fraudulent document creation raises urgent questions about responsible AI deployment and whether generative models should include hard restrictions on generating IDs, financial records, and medical documents.

Generative AI · Cybersecurity · Ethics & Bias · Misinformation & Deepfakes

More from OpenAI

OpenAI
POLICY & REGULATION

Parents Sue OpenAI After ChatGPT Allegedly Gave Deadly Drug Advice to College Student

2026-05-12
OpenAI
RESEARCH

ChatGPT Excels at Julia Code Generation, Outperforming Python

2026-05-12
OpenAI
PRODUCT LAUNCH

OpenAI Expands GPT-5.5-Cyber Access to European Companies

2026-05-12

Suggested

Anthropic
OPEN SOURCE

Anthropic Releases Prempti: Open-Source Guardrails for AI Coding Agents

2026-05-12
Anthropic
PRODUCT LAUNCH

Anthropic Unleashes Computer Use: Claude 3.5 Sonnet Now Controls Your Desktop

2026-05-12
Meta
POLICY & REGULATION

Meta Employees Protest Mouse Tracking Technology at US Offices

2026-05-12
© 2026 BotBeat
About · Privacy Policy · Terms of Service · Contact Us