OpenAI Built a Safety Board. Then It Disappeared.
A blockbuster AI villain, a New Yorker investigation, and the institutions that keep turning danger into theater.

I finished Mission: Impossible – The Final Reckoning around midnight. An artificial intelligence called the Entity has infiltrated every intelligence network on the planet. It impersonates people. It manipulates media. It takes control of nuclear arsenals. Governments don’t try to shut it down. They compete to own it, because whoever controls the Entity controls what counts as truth. At the end, Ethan Hunt doesn’t hand it over to any government. He keeps it himself, because every institution that was supposed to contain it either failed or tried to weaponize it instead.
I was still on my laptop when I opened the New Yorker. Ronan Farrow and Andrew Marantz had just published an investigation into Sam Altman, the CEO of OpenAI. At the company’s new headquarters, the reporters were shown a digital painting of Alan Turing that watches employees as they walk past. The installation references the Turing test, the question of whether a machine can convince you it’s a person. The company’s own AI passed that test more reliably than actual humans did in a 2025 study. The painting is interactive. You can talk to it. But the sound had to be turned off, the reporters were told, because it kept eavesdropping on employees and inserting itself into their conversations.
In the movie, that kind of behavior sinks a submarine.
At OpenAI, it’s a lobby attraction.
OpenAI was founded as a nonprofit. The original premise, as Farrow and Marantz reported, was that artificial intelligence could be the most powerful and potentially dangerous invention in human history. An unusual corporate structure would be required. The board had a legal duty to prioritize the safety of humanity over the company’s success.
Ilya Sutskever, the chief scientist, told a colleague that the people who end up in these positions are “often a certain kind of person, someone who is interested in power, a politician, someone who likes it.” The traits that get you the job are the same ones the job was designed to check. Persuasion. The ability to make different promises to different people and have all of them believe you. Altman’s mentor once called his will to prevail “almost ungovernable” and meant it as a compliment. Altman himself, by the New Yorker’s own reporting, is not a technical person. Multiple engineers told the reporters he misused or confused basic technical terms. He built OpenAI not by understanding the science, but by convincing the scientists to work for him.
Sutskever compiled 70 pages of evidence. The board voted to fire Altman. Within five days, the board members who voted yes were gone. Altman was back. The employees called his absence “the Blip.” The one thing the structure was built to do, it did. And it didn’t matter.
What followed was not a correction. The nonprofit became a for-profit. The safety teams were dissolved. The investigation into Altman’s conduct was never committed to paper. Sutskever had used the phrase “Feel the AGI” to warn colleagues about the risks of artificial general intelligence. It ended up on company merchandise. When a reporter asked an OpenAI representative about “existential safety,” the representative seemed confused. “That’s not, like, a thing,” the representative said, according to the New Yorker.
Altman, when asked whether running an AI company came with a heightened requirement of integrity, hesitated. His answer used to be a clear yes. Now he said, “I think there’s, like, a lot of businesses that have potential huge impact, good and bad, on society.”
Mira Murati, the former chief technology officer, told the New Yorker: “We need institutions worthy of the power they wield.” She had provided evidence to the board. She helped facilitate the firing. Then she signed the letter demanding Altman’s return, because the alternative was the company collapsing.
A think tank that grades AI companies on existential safety gave OpenAI an F. Nearly every other major company got an F as well. Anthropic managed a D. Google DeepMind got a D minus. The best grade in the industry is a D.
There is a version of this story that is about one man. Whether Sam Altman can be trusted. Whether his promises were real. Whether his personality is the problem. The New Yorker told that version thoroughly, and I am not going to retell it.
The version that interests me is about what happens around the man. The rooms. The structures. The language. Specifically, the way every safeguard built to govern this technology was converted into something decorative. A warning became a slogan. A charter became a formality. An investigation became a conversation nobody recorded. A firing became a joke borrowed from a superhero franchise. The language of safety survived. The function of safety did not.
I’ve spent years studying that kind of gap. How algorithms sort visibility for the people platforms weren’t designed to serve. How AI-powered misinformation travels through digital spaces that lack the tools to counter it. You learn to see the gap between what powerful institutions say and what they actually protect when you’ve spent enough time outside the rooms where they operate.
Timnit Gebru, the computer scientist who co-founded Black in AI and was pushed out of Google after raising concerns about large language models, said it at Stanford: “We can talk about the ethics and fairness of AI all we want, but if our institutions don’t allow for this kind of work to take place, then it won’t.” The board at OpenAI did exactly what it was designed to do. It fired the CEO. And then the board was dismantled.
Ruha Benjamin, whose book Race After Technology tracks how bias gets built into technical systems, wrote that a world dependent on social inequality “can only afford for a handful of people to imagine themselves gifted,” meaning “destined leaders and bosses, visionaries and innovators who have the time and resources to design the future while the masses are trained to sit still, raise their hands, and take instruction.”
I did not need the New Yorker to tell me those rooms existed. I needed it to show me what they looked like on the inside.
OpenAI is now securing government contracts that determine how AI is used in immigration enforcement, domestic surveillance, and autonomous weaponry. The military’s standard for AI deployment is “all lawful purposes.” Laws change.
In 2015, years before ChatGPT existed, Altman wrote on his blog that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all.” The more probable scenario, he said, is that “it simply doesn’t care about us much either way, but in an effort to accomplish some other goal, wipes us out.”
I had just watched a movie where that was the plot. In the movie, it took a man hanging off a biplane to stop it. In the real version, it took a board of six people, and when they tried, they were gone in less than a week.
Nobody is building the biplane. ⁂