PROJECTS

Holding the Hyperscalers Accountable

Our investigations expose corner-cutting, push for accountability, and mobilize the public to ensure AI serves the common good.

<a href="https://www.freepik.com/free-photo/millennial-asia-businessmen-businesswomen-meeting-brainstorming-ideas-about-new-paperwork-project-colleagues-working-together-planning-success-strategy-enjoy-teamwork-small-modern-night-office_7685820.htm#fromView=search&page=1&position=15&uuid=e2c57d0a-7581-4d4e-881c-e87210163ba1&query=projects">Image by tirachardz on Freepik</a>

Model Republic

Coming Soon

www.modelrepublic.org

A new publication from The Midas Project.


Open Letter to OpenAI

Aug 2025

www.openai-transparency.org

More than 100 prominent AI experts, former OpenAI team members, public figures, and civil society groups signed an open letter calling for greater transparency from OpenAI.


The OpenAI Files

June 2025

www.openaifiles.org

The OpenAI Files is the most comprehensive collection to date of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.


Seoul Tracker

Feb 2025

www.seoul-tracker.org

At the 2024 AI Safety Summit in Seoul, South Korea, sixteen leading tech organizations pledged to implement "red line" risk evaluation policies for frontier AI models. The deadline has now arrived, but not everyone has lived up to their commitment. This tracker assesses progress across the five key components.

Join our Movement

Encourage tech companies to prioritize safety, transparency, and public interest in AI development.
