OpenAI Insiders Question Sam Altman's Trustworthiness as CEO, New Yorker Investigation Reveals
Key Takeaways
- The New Yorker investigation interviewed over 100 people and reviewed internal memos documenting alleged patterns of deception and manipulation by Sam Altman
- Former OpenAI leaders Ilya Sutskever and Dario Amodei concluded Altman was not fostering a safe environment for advanced AI development
- Altman's recent public messaging has shifted from positioning OpenAI as safeguarding against AI risks to promoting aggressive optimism about AI capabilities
Summary
A major New Yorker investigation has raised serious questions about OpenAI CEO Sam Altman's trustworthiness, drawing on interviews with over 100 insiders and internal memos that allegedly document a pattern of deceptions and manipulations. The report portrays Altman as a people-pleaser who prioritizes his own interests while telling others what they want to hear, with former research head Dario Amodei concluding that "the problem with OpenAI is Sam himself." The timing is particularly notable: the investigation was published the same day OpenAI released policy recommendations aimed at ensuring AI benefits humanity and preventing superintelligence risks.
While Altman disputed many of the allegations or said he could not recall key events, The New Yorker found an accumulated pattern of alleged deceptions that led former chief scientist Ilya Sutskever and Amodei to conclude the company was not a safe environment for advanced AI development. The investigation arrives amid intensifying government scrutiny of OpenAI's models, ongoing lawsuits questioning the safety of its technology, and observable shifts in Altman's public messaging, which has recently pivoted from positioning OpenAI as a safeguard against AI doomsday scenarios to adopting "ebullient optimism" about the technology.
- The policy recommendations OpenAI released the same day may be intended in part to address mounting public concerns about child safety, job displacement, and other AI-related risks