
A group of former OpenAI employees, legal experts, AI researchers, and nonprofit leaders has come together to sound the alarm on OpenAI’s shift away from its nonprofit mission. In a public letter addressed to the Attorneys General of California and Delaware, they call the move what it is: a potential unraveling of the very safeguards that once made OpenAI a mission-first organization.
Let’s rewind for a second.
OpenAI was never meant to be your average tech company. Its founding docs are clear: “Ensure artificial general intelligence benefits all of humanity.” Not investors. Not executives. Humanity.
The proposed restructuring would flip this ethos on its head. By converting its for-profit subsidiary into a Delaware public benefit corporation (PBC), OpenAI risks putting shareholders ahead of the mission — a mission that includes preventing worst-case scenarios such as misuse, societal disruption, and extinction-level risks from AGI.
The criticism is sharp and detailed:
→ The new structure could sideline nonprofit oversight in favor of shareholder interests
→ Profit caps for investors may be lifted
→ Board independence becomes shaky
→ AGI development could shift into the hands of those prioritizing profit over safety
→ Key commitments like the “stop-and-assist” clause may fall by the wayside
It’s a sharp departure from what Sam Altman, Elon Musk, and others originally promised — a governance model designed not for speed or competition, but for restraint, transparency, and the global good.
The letter doesn’t just voice concern — it calls for intervention. It urges the regulators to halt the restructuring and restore the principles that once made OpenAI a beacon of mission-first AI.
In an industry moving at breakneck speed, this raises a sobering question: Is scaling AI worth it if we lose sight of why we built it in the first place?