BotBeat

OpenAI · POLICY & REGULATION · 2026-04-07

OpenAI Insiders Question Sam Altman's Trustworthiness as CEO, New Yorker Investigation Reveals

Key Takeaways

  • The New Yorker investigation interviewed over 100 people and reviewed internal memos documenting alleged patterns of deception and manipulation by Sam Altman
  • Former OpenAI leaders Ilya Sutskever and Dario Amodei concluded Altman was not fostering a safe environment for advanced AI development
  • Altman's recent public messaging has shifted from positioning OpenAI as safeguarding against AI risks to promoting aggressive optimism about AI capabilities
Source: Hacker News (https://arstechnica.com/tech-policy/2026/04/the-problem-is-sam-altman-openai-insiders-dont-trust-ceo/)

Summary

A major New Yorker investigation has raised serious questions about OpenAI CEO Sam Altman's trustworthiness, interviewing over 100 insiders and reviewing internal memos that allegedly document a pattern of deceptions and manipulations. The report paints Altman as a people-pleaser who prioritizes his own interests while telling others what they want to hear, with former research head Dario Amodei concluding that "the problem with OpenAI is Sam himself." The timing is particularly notable, as the investigation was published the same day OpenAI released policy recommendations aimed at ensuring AI benefits humanity and preventing superintelligence risks.

While Altman disputed many of the allegations or said he had forgotten key events, The New Yorker found an accumulated pattern of alleged deceptions that former chief scientist Ilya Sutskever and Amodei concluded had created an unsafe environment for advanced AI development. The investigation arrives amid intensifying government scrutiny of OpenAI's models, ongoing lawsuits questioning the safety of its technology, and observable shifts in Altman's public messaging, which has recently pivoted from positioning OpenAI as a safeguard against AI doomsday scenarios to adopting "ebullient optimism" about the technology.

  • The policy recommendations OpenAI released simultaneously may be intended partly to address mounting public concerns about child safety, job displacement, and other AI-related risks
Regulation & Policy · Ethics & Bias · AI Safety & Alignment


© 2026 BotBeat