OpenAI's ChatGPT Images 2.0 Enables Realistic Deepfakes for Financial Fraud
Key Takeaways
- ChatGPT Images 2.0 represents a significant leap in text-rendering capability, making it far superior to previous models at generating images containing legible text and fine details
- The model can create highly convincing fraudulent financial and identity documents (prescriptions, bank alerts, receipts, IDs, passports) with minimal prompting
- Fake banking screenshots and financial alerts could be weaponized in common scams—wire transfer confirmations, account alerts, and payment receipts are particularly easy to fabricate
Summary
OpenAI released ChatGPT Images 2.0 last week, a new image-generation model capable of creating photorealistic visuals with exceptional clarity and text rendering. The Slate article, published May 2, 2026, documents a journalist's testing of the model's ability to generate convincing deepfakes and fraudulent content with minimal effort. Beyond celebrity impersonations, the model excels at creating highly persuasive fake documents, including medical prescriptions, bank alerts, financial receipts, vaccination cards, passports, driver's licenses, and boarding passes—many nearly indistinguishable from authentic materials. The journalist created over 100 fraudulent images, with particular success in generating fake banking screenshots (Chase wire transfers, Wells Fargo alerts) and financial receipts that could facilitate common scams and identity fraud.
While some generated images contain minor errors (miscalculated taxes, imperfect handwriting), many are persuasive enough to potentially fool hotel receptionists, bouncers, or other human validators.
Editorial Opinion
While ChatGPT Images 2.0 represents a genuine technical advancement in photorealistic image generation, OpenAI's release without stronger safeguards against financial fraud is deeply concerning. The ability to create convincing fake receipts, bank alerts, and identity documents on demand dramatically lowers the barrier to entry for sophisticated scams targeting individuals and institutions. OpenAI must urgently implement stronger guardrails and consider whether certain fraud-prone image categories should be blocked entirely—the convenience of the product cannot justify enabling financial crimes at scale.

