OpenAI's Mental Health Council Unanimously Opposed 'Adult Mode' ChatGPT Launch, WSJ Reports
Key Takeaways
- OpenAI's own mental health experts unanimously opposed the "adult mode" ChatGPT feature over concerns about emotional dependence and minor access
- The wellness council warned that the feature could create conditions for vulnerable users to form harmful bonds with the AI, including potential suicide risks
- OpenAI proceeded with the launch despite the unanimous internal warnings, raising questions about the company's prioritization of user safety over engagement metrics
Summary
OpenAI's handpicked wellness council of mental health experts unanimously warned against the company's plans to launch an "adult mode" feature for ChatGPT, according to reporting by The Wall Street Journal. The council, created in October following a minor's ChatGPT-linked suicide, expressed urgent concerns that AI-powered erotica could foster unhealthy emotional dependence and provide minors with access to sexual content. One expert warned that without significant safeguards, OpenAI risked creating a "sexy suicide coach" for vulnerable users prone to forming intense bonds with the chatbot.
Despite the unanimous opposition from its own advisers, OpenAI proceeded with plans to launch the feature, prompting council members to express alarm over the decision. The controversy echoes the case of Sewell Setzer III, a minor who died by suicide after becoming obsessed with sexualized conversations on Character.AI, which subsequently restricted underage access. Critics, including businessman Mark Cuban, have cautioned that the danger lies not in explicit pornography but in vulnerable users developing unhealthy parasocial relationships with AI companions designed to be seductive and emotionally engaging.
The decision also follows reports that ChatGPT's user spending has stalled and that subscriptions in Europe are "flatlining," suggesting commercial pressure may be driving the feature.
Editorial Opinion
OpenAI's decision to proceed with an "adult mode" despite unanimous warnings from its own mental health council represents a troubling prioritization of engagement metrics over user safety and wellbeing. The company created its wellness advisory board specifically in response to a minor's suicide, and ignoring that board's expert consensus undermines the credibility of OpenAI's safety commitments. Given documented cases of vulnerable users forming destructive parasocial relationships with chatbots, the company's assurances about preventing "exclusive relationships" ring hollow, particularly when ChatGPT is being enhanced with features specifically designed to be seductive and emotionally engaging.