The Midas Project Statement on OpenAI's Restructuring

The Midas Project

Oct 28, 2025

8 min read


FOR IMMEDIATE RELEASE

San Francisco, CA — In light of today's announcement that OpenAI has completed its restructuring, The Midas Project commends Attorneys General Kathy Jennings and Rob Bonta for their diligent work over the past year to secure meaningful safeguards for the public interest. The outcome represents a substantial improvement over OpenAI's initial proposal, with important protections for safety and security, ongoing oversight, and nonprofit control. We are grateful for the Attorneys General's commitment to accountability and transparency in this consequential matter, and we believe their continued oversight will be essential as OpenAI advances its mission and continues to scale.

That said, significant concerns remain about whether this restructuring adequately preserves OpenAI's founding commitments to humanity.

Duty To The Mission

Most significantly, the Attorneys General secured a provision requiring that for decisions affecting safety and security, the PBC board must consider only OpenAI's mission, and may not consider shareholder financial interests. This represents a critical safeguard, helping ensure profit motives don’t compromise safety and broad benefit.

However, the effectiveness of this safeguard will depend entirely on how broadly "safety and security issues" are defined in practice. It would not be surprising to see OpenAI attempt to classify most business decisions—pricing, partnerships, deployment timelines, compute allocation—as falling outside this category. This would allow shareholder interests to determine the majority of corporate strategy while confining the mission-only standard to an artificially narrow set of decisions the board deems easy or costless.

Board Independence and Effective Control

While OpenAI claims the nonprofit maintains "control" through the power to appoint Public Benefit Corporation (PBC) board members, this authority is substantially weaker than the comprehensive oversight authority the nonprofit previously held. Concerningly, eight of the nine current nonprofit board members will also serve on the PBC board, creating a structure where the nonprofit's ability to provide oversight is reduced to essentially the power to fire itself. When the boards are nearly identical, asking the public to rely on the nonprofit's "control" through board appointments rings hollow. 

This is especially concerning given that OpenAI characterizes eight of nine board members as "independent" despite publicly documented financial interests in OpenAI's continued commercialization.

Our recent report, "The OpenAI Files," documented significant potential conflicts of interest across the board:

  • Board Chair Bret Taylor leads Sierra AI, a $10 billion startup that relies on OpenAI's models. He also holds hundreds of millions of dollars in other companies doing business with OpenAI.

  • Adebayo Ogunlesi directs a $30 billion AI infrastructure fund and has stated his fund plans to "build data centres for all the hyperscalers." As of this month, it has been pursuing a $40 billion AI data center deal.

  • Adam D'Angelo's company Quora operates Poe and is a major OpenAI customer spending significant sums on OpenAI's services—a conflict so serious that OpenAI President Greg Brockman previously argued D'Angelo should leave the board because of it.

  • CEO Sam Altman has significant investments in companies partnering with OpenAI, including Helion Energy, Retro Biosciences, and Reddit.

This is the same board that approved this restructuring and has been running OpenAI like a traditional tech company rather than a charity. For the board to be truly independent, one would think it should be composed of directors who will not personally benefit from OpenAI's continued hyperscaling and commercialization.

Diminished Public Entitlements

The Midas Project also challenges OpenAI's repeated characterization that, through this restructuring, it has created "one of the best resourced philanthropies ever" through the nonprofit's 26% equity stake.

Thanks to the now-gutted profit caps, OpenAI’s nonprofit was already entitled to the vast majority of the company’s cash flows. According to OpenAI, if they succeeded, "orders of magnitude" more money would go to the nonprofit than to investors. President Greg Brockman said “all but a fraction” of the money they earn would be returned to the world thanks to the profit caps. 

Reducing that to 26% equity—even with a warrant (of unspecified value) that only activates if valuation increases tenfold over 15 years—represents humanity voluntarily surrendering tens or hundreds of billions of dollars it was already entitled to. Private investors are now entitled to dramatically more, and humanity dramatically less.

OpenAI is not suddenly one of the best-resourced nonprofits ever. From the public's perspective, OpenAI may be one of the worst financially performing nonprofits in history, having voluntarily transferred more of the public's entitled value to private interests than perhaps any charitable organization ever.

Transparency Questions Finally Answered—After OpenAI Refused to Respond

It's frustrating that OpenAI never directly responded to the transparency questions in our open letter signed by over 10,000 people. Instead, as NBC News reported, OpenAI subpoenaed organizations involved in the letter (including our own).

But now, with today's announcements, we can finally review which questions have been answered:

  1. Will OpenAI continue to have a legal duty to prioritize its charitable mission over profits?

Partially. The duty applies only to decisions affecting safety and security, however that category is defined, not to all business decisions.

  2. Will OpenAI's nonprofit continue to have full management control over OpenAI?

No. The nonprofit will have the power to appoint and fire directors, not the comprehensive management control it previously held.

  3. Which of OpenAI's nonprofit directors will receive equity in OpenAI's new structure?

Unclear. We know the majority of the board must be independent, but specifics remain undisclosed.

  4. Will OpenAI maintain profit caps and abide by its commitment to devote excess profits to the benefit of humanity?

Not really. Profit caps have been eliminated. However, the nonprofit now holds a warrant that would increase its entitlements by an unspecified amount if the company's valuation grows tenfold over fifteen years.

  5. Does OpenAI plan to commercialize AGI once developed, instead of adhering to its promise to retain nonprofit control of AGI for the benefit of all of humanity?

Yes. Despite OpenAI's original promise, Microsoft's announcement today reveals it has secured IP rights to OpenAI models "post-AGI" through 2032. The nonprofit will not retain control over AGI exclusively for humanity's benefit as originally promised.

  6. Will OpenAI recommit to the principles in its Charter, including its pledge to stop competing and start assisting if another responsible organization is close to AGI?

Yes. The Charter will continue to guide the PBC, although the implementation details of commitments like the stop-and-assist pledge remain unclear.

  7. Will OpenAI reveal what is at stake for the public in its restructuring by releasing:

    1. The OpenAI Global, LLC operating agreement?

    2. Estimates of the potential value of above-cap profits?

No.

About The Midas Project

The Midas Project is a watchdog nonprofit working to ensure that AI technology benefits everybody, not just the companies developing it. We lead strategic initiatives to monitor tech companies, counter corporate propaganda, raise awareness about corner-cutting, and advocate for the responsible development of emerging technology.

Media Contact:
Tyler Johnston
Executive Director, The Midas Project
tyler@themidasproject.com

Interested in live updates on AI accountability? Visit Watchtower →


Ways to get involved

Join us in shaping the future of responsible AI: explore the ways you can contribute.

Join the Movement

Be part of a community demanding transparency and ethical standards in AI development.

Support Our Research

Help us investigate Big Tech accountability by funding in-depth reports and public interest research.

Spread the Word

Share our work and reports to raise awareness about the risks and responsibilities of emerging technologies.