ABOUT US

Ensuring AI Serves the Public Good

We’re an independent nonprofit working to ensure AI benefits everyone, not just the companies developing it.

Founded in 2024, The Midas Project conducts investigations, policy analysis, and public engagement to promote responsible AI development.

We work to increase transparency, prevent corporate negligence, and make sure emerging technologies serve the public’s best interest.

Silhouettes in a corporate meeting – The Midas Project cover image.

Featured projects

Explore some of our most impactful investigations into AI accountability, corporate transparency, and public interest technology.

Our Values

(1)

We are pro-technology

AI has the potential to make our world so much better. When developed and deployed carefully, it will create new economic opportunities, revolutionize education, cure diseases, and help solve the world's most pressing problems. We want to live in a world with responsible AI technology that benefits everyone.

(2)

We are pro-democracy

One of our greatest fears is that, without intervention, the benefits created by AI won’t be distributed equally. We want to see a world where critical decisions about the future of humanity aren’t made by a handful of corporate stakeholders with massive financial conflicts of interest. Equality and democracy should be at the center of the conversation surrounding AI.

(3)

We are grounded in the present

We believe the best way to positively influence our future is to engage with real-world problems, debates, and opportunities surrounding AI today. That means monitoring the individuals and companies that are developing and deploying new AI technology to ensure that they are acting responsibly and with everyone’s best interest in mind.

(4)

We are oriented toward the future

AI technology is improving at full speed, and we expect it to change the world in unprecedented ways. Beyond economic and social transformation, the potential for machines to reach or exceed human-level intelligence, or even to have their own morally-relevant internal experiences, is not science fiction to us. It’s a real possibility. The time to prepare for these possibilities is now.

Ways to get involved

Join us in shaping the future of responsible AI: explore the ways you can contribute.

Join the Movement

Be part of a community demanding transparency and ethical standards in AI development.

Support Our Research

Help us investigate Big Tech accountability by funding in-depth reports and public interest research.

Spread the Word

Share our work and reports to raise awareness about the risks and responsibilities of emerging technologies.

Frequently Asked Questions

What people often ask about our mission, work, and how to get involved.

What does The Midas Project do?

We engage in a combination of research, outreach, and public advocacy to ensure that AI companies meet public expectations, and live up to their past promises, on responsible AI development and deployment. The most important component of our work is identifying and disseminating industry best practices for AI development. We review technical literature, regulatory guidance, and case studies to distill concrete measures—such as frontier-model risk assessments, red-teaming requirements, audit regimes, and whistleblower protections—and advocate for the most important voluntary steps companies can take today to act responsibly. We also monitor whether companies follow their stated policies and industry norms. When evidence shows backtracking or inadequate controls, we document these gaps and publicly press for corrective action, mobilizing employees, customers, and civil-society allies until the company adopts the necessary safeguards. Finally, we publicize our research, releasing concise scorecards, incident analyses, and memos so that regulators, investors, and the wider public can see how individual developers perform on safety and responsibility.

Why is it called “The Midas Project”?

AI experts including Nick Bostrom and Stuart Russell have compared the development of advanced AI to the myth of King Midas. According to the legend, King Midas asked the god Dionysus to make it so that whatever he touched instantly turned into gold. At first, he was thrilled with his new power. But the King soon discovered that he couldn’t touch food, water, or even his family without instantly turning them to metal. In other words, the sudden attainment of incredible power, with insufficiently well-specified goals and safeguards, led to a terrible tragedy. Much like King Midas, tech companies are now eagerly pursuing incredible wealth and power by developing artificial intelligence, a technology that will change our world forever. But how will we know that it is designed in alignment with our collective human values? If we misspecify even a single goal or safeguard for these systems, how will we prevent them from causing an incredible catastrophe? In the words of Stuart Russell, “If you continue on the current path, the better AI gets, the worse things get for us. For any given incorrectly stated objective, the better a system achieves that objective, the worse it is.”

Who is behind The Midas Project?

The Midas Project is a nonprofit initiative founded in early 2024. To this day, the majority of our campaign participants are unpaid volunteers who contribute in their free time. We are a tax-exempt 501(c)(3) organization that relies on donations from the public to continue our work.

Are you against AI?

No. One of our central values is a pro-technology attitude. Technological progress has improved lives for millions of people around the globe (after all, without it, we wouldn’t have penicillin, air conditioning, or the internet). Artificial intelligence is already being used by millions to help improve medicine, education, and overall living standards. We believe this progress should continue, and we hope AI will be a positive force in the world. However, we are also realists — and skeptical realists at that. We believe advanced AI systems may be a “dual-use” technology that can be used for harm as well. To avert social inequality, concentration of power, or AI-driven catastrophes, everybody needs a voice at the table when decisions about development and deployment are made. Currently, the vast majority of these decisions about the future of AI are made in shadowy corporate boardrooms with little oversight or accountability. That’s why The Midas Project is committed to raising awareness about the risks of AI and ensuring that global citizens have a chance to make their voices heard.

How can I get involved?

If you’d like to get involved, consider signing up for our newsletter, joining as an official volunteer, or making a charitable donation today.

How can I contact you?

You can email us at info@themidasproject.com, or reach out via the form on our contact page.
