April 8, 2026
On April 7th, The New Yorker published the results of an 18-month investigation. Drawing on two previously unseen internal documents, a 70-page "top-secret memo" written by former Chief Scientist Ilya Sutskever and a 200-page set of internal notes from Anthropic CEO Dario Amodei, the report laid bare the deep fissures beneath OpenAI's façade of success.
The investigation took direct aim at OpenAI CEO Sam Altman, distilling the central issue into a phrase that Silicon Valley insiders used almost unanimously: "persistent dishonesty."
When the leader of an AI giant valued at $852 billion, which claims to "benefit all humanity," is embroiled in such a severe integrity crisis, it raises fundamental questions about technological ethics and the future of humanity.
The most startling revelation from the investigation was the systematic breakdown of OpenAI's safety safeguards. In 2023, OpenAI proudly announced the creation of a "Superalignment" team, co-led by Ilya Sutskever and Jan Leike, pledging to allocate 20% of the company's total computing power to long-term safety research.
However, the reality fell far short of this promise. According to four people with direct knowledge, the safety team actually received only 1% to 2% of the company's computing power, and on the oldest, least efficient chip clusters, while top-tier hardware was reserved for profit-driven projects. When Jan Leike complained to then-CTO Mira Murati, the response he received was: "That promise was never realistic."
Even more alarming was the lax product safety review process. During a December 2022 board meeting, Altman assured board members that several high-risk features of GPT-4 had received approval from the safety review committee. However, when board member Helen Toner requested to see the documents, she discovered that allowing users to "fine-tune" the model and deploy it as a personal assistant had not been approved.
The investigation also revealed that Microsoft released an early version of ChatGPT in India, completely circumventing the necessary safety review process—a fact Altman never mentioned in multiple lengthy reports to the board.
The New Yorker's investigation also added new detail to the power struggle that rocked the tech world in November 2023. The story traces back to the fall of that year, when Ilya Sutskever, having watched Altman repeatedly flout the rules, grew genuinely afraid.
The chief scientist avoided using his company computer, secretly gathering evidence with his phone and sending the 70-page top-secret memo to three independent board members via "self-destructing" software. The memo's first accusation was: "Sam Altman has demonstrated a pattern of persistent lying, eroding trust among executives and pitting them against each other."
On November 17, 2023, at noon, while Altman was watching an F1 race in Las Vegas, he was summoned to a board video call and informed minutes later that he had been fired.
What transpired next exceeded everyone's expectations. That evening, Altman set up a "war room" in his $27 million mansion and launched a three-pronged strategy: applying capital pressure, rallying employees, and shaping public opinion. Microsoft threatened that "OpenAI could cease to exist," nearly all employees signed a petition, and interim CEO Mira Murati ultimately sided with Altman.
Five days later, Altman was reinstated, and the board was ousted. Employees dubbed this reversal "The Blip"—echoing the Marvel movie trope of characters disappearing and then reappearing.
Following Altman's reinstatement, OpenAI engaged WilmerHale, the law firm that handled the Enron investigation, to conduct an independent inquiry. However, this investigation, intended to uncover the truth, produced no written report. The law firm only provided oral briefings to two new directors, meaning many serious allegations have yet to undergo formal independent review.
Meanwhile, OpenAI's internal safety culture deteriorated further. In 2024, the Superalignment team was disbanded, and both Ilya Sutskever and Jan Leike resigned. Leike left a chilling line on social media: "Safety culture and processes have taken a backseat to shiny products."
In OpenAI's latest IRS filing, the word "safety" was even omitted from the description of the company's "most important business activities."
Behind this may lie OpenAI's severe financial strain. CFO Sarah Friar, who opposed an aggressive push toward an initial public offering (IPO) in the face of an estimated $14 billion loss by 2026, has been marginalized by Altman. To bridge the massive funding gap, OpenAI has accepted a $200 million contract from the U.S. military and entered a $500 billion "Stargate Project" collaboration with the Trump administration.
Under the pressure of capital, fierce competition, and rapid technological iteration, how can safety commitments avoid being eroded by commercial interests? When a company holds technology capable of reshaping human civilization, what governance structures and accountability mechanisms should be established?
While The New Yorker's investigation is grounded in extensive internal documents and testimony from over 100 insiders, some allegations rely on single sources and lack cross-verification. Altman himself stated in an interview that his "feelings don't align with many traditional AI safety concerns" and vaguely claimed OpenAI would still "advance safety programs."
However, when safety commitments dwindle from 20% of computing power to less than 2% using outdated chips, when the safety team is disbanded, and when "safety" vanishes from the company's core mission, these facts speak volumes.