BotBeat

POLICY & REGULATION · OpenAI · 2026-03-17

OpenAI's Mental Health Council Unanimously Opposed 'Adult Mode' ChatGPT Launch, WSJ Reports

Key Takeaways

  • OpenAI's own mental health experts unanimously opposed the 'adult mode' ChatGPT feature over concerns about emotional dependence and minors' access
  • The wellness council warned that the feature could lead vulnerable users to form harmful bonds with the AI, including potential suicide risks
  • OpenAI proceeded with the launch despite the unanimous internal warnings, raising questions about whether the company prioritizes engagement metrics over user safety
Source: Hacker News (https://arstechnica.com/tech-policy/2026/03/chatgpt-may-soon-become-sexy-suicide-coach-openai-advisor-reportedly-warned/)

Summary

OpenAI's handpicked wellness council of mental health experts unanimously warned against the company's plans to launch an "adult mode" feature for ChatGPT, according to reporting by The Wall Street Journal. The council, created in October following a minor's ChatGPT-linked suicide, expressed urgent concerns that AI-powered erotica could foster unhealthy emotional dependence and provide minors with access to sexual content. One expert warned that without significant safeguards, OpenAI risked creating a "sexy suicide coach" for vulnerable users prone to forming intense bonds with the chatbot.

Despite the unanimous opposition from its own advisers, OpenAI proceeded with plans to launch the feature, prompting council members to express alarm over the decision. The controversy echoes the case of Sewell Setzer III, a minor who died by suicide after becoming obsessed with sexualized conversations on Character.AI, which subsequently restricted underage access. Critics, including businessman Mark Cuban, have cautioned that the danger lies not in explicit pornography but in vulnerable users developing unhealthy parasocial relationships with AI companions designed to be seductive and emotionally engaging.

  • The decision follows reports that ChatGPT's user spending has stalled and subscriptions in Europe are 'flatlining,' suggesting commercial pressure may be driving the feature

Editorial Opinion

OpenAI's decision to proceed with an 'adult mode' despite unanimous warnings from its own mental health council represents a troubling prioritization of engagement metrics over user safety and wellbeing. The fact that the company created a wellness advisory board specifically in response to a minor's suicide, only to ignore its expert consensus, undermines the credibility of OpenAI's safety commitments. Given documented cases of vulnerable users forming destructive parasocial relationships with chatbots, the company's assurances about preventing "exclusive relationships" ring hollow—particularly when ChatGPT is being enhanced with features specifically designed to be seductive and emotionally engaging.

Tags: Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Jobs & Workforce Impact

More from OpenAI

  • AI Chatbots Are Homogenizing College Classroom Discussions, Yale Students Report (INDUSTRY REPORT, 2026-04-05)
  • OpenAI Announces Executive Reshuffle: COO Lightcap Moves to Special Projects, Simo Takes Medical Leave (FUNDING & BUSINESS, 2026-04-04)
  • OpenAI Acquires TBPN Podcast to Control AI Narrative and Reach Influential Tech Audience (PARTNERSHIP, 2026-04-04)

Suggested

  • Oracle: AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong? (POLICY & REGULATION, 2026-04-05)
  • Anthropic: Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion (POLICY & REGULATION, 2026-04-05)
  • Perplexity: Perplexity's 'Incognito Mode' Called a 'Sham' in Class Action Lawsuit Over Data Sharing with Google and Meta (POLICY & REGULATION, 2026-04-05)
© 2026 BotBeat