OpenAI Indefinitely Shelves Plans for Adult-Oriented Chatbot Following Employee and Investor Concerns
Key Takeaways
- OpenAI indefinitely shelved its planned adult chatbot feature after employee and investor pushback, citing insufficient research on AI attachment and psychological effects
- The company is refocusing on core productivity tools and abandoning experimental features like the erotic chatbot and the Sora video generator
- Technical and ethical challenges included difficulty preventing illegal content generation and limitations in age-verification accuracy (>10% error rate)
Summary
OpenAI has indefinitely shelved plans to release an erotic chatbot for adults, a feature originally announced in October 2025 for a December release. The decision follows concerns raised by employees and investors, with the company citing the need for further research on the psychological effects of erotic AI interactions and on user attachment. The shelved feature, reportedly codenamed "Citron mode," faced technical challenges, including the difficulty of training models to generate adult material without producing illegal content.
The cancellation reflects broader strategic shifts at OpenAI, which is also shutting down its Sora video generation tool this week. The company stated it wants to focus resources on core productivity tools like coding assistants rather than what it characterizes as "side quests." The decision was heavily influenced by investor concerns following controversy around xAI's Grok model generating non-consensual deepfake imagery, as well as by internal staff opposition, including at least one senior employee who departed over the feature. Additionally, OpenAI's age-verification technology has an error rate above 10%, raising concerns about its ability to prevent minors from accessing such content.
Editorial Opinion
OpenAI's decision to shelve its adult chatbot shows the company recognizing the significant ethical, technical, and reputational risks of such features, a prudent move given both internal dissent and external precedents like Grok's deepfake failures. While the company frames the move as a refocus on core productivity tools, it also reflects the practical difficulty of building AI systems with adequate safety guardrails for sensitive use cases. The >10% error rate in age verification is particularly concerning and suggests the company's technical infrastructure may not yet be robust enough for features requiring strict access controls.