• Sign the Cognition petition

    The Midas Project has written a petition to Cognition, asking them to adopt an industry standard risk-evaluation policy. You can sign our petition in two places: directly on our website, or on Change.org. (Don’t worry about signing in both places; we will combine the signature totals and cross-check to remove duplicates before…

  • Tweet at Cognition

    Despite outreach from The Midas Project, Cognition has failed to provide any information about when they will release a comprehensive risk-evaluation policy, or whether they will at all. Perhaps they think they can get away with this because they’re still relatively under the radar. It’s time for the public to demand…

  • Email Cognition

    Despite outreach from The Midas Project, Cognition has failed to provide any information about when they will release a comprehensive risk-evaluation policy, or whether they will at all. Perhaps they think they can get away with this because they’re still relatively under the radar. It’s time for the public to demand…

  • Why are AI employees demanding a “right to warn” the public?

    This week, another warning flag was raised concerning the rapid progress of advanced artificial intelligence technology. This time, it took the form of an open letter authored by current and former employees at some of the world’s top AI labs — and cosigned by leading experts including two of the three “godfathers of AI.” This…

  • How financial interests took over OpenAI

    How did an idealistic nonprofit, hoping to ensure advanced AI “benefits humanity as a whole,” turn into an $82 billion mega-corporation cutting corners and rushing to scale commercial products? In 2015, OpenAI announced its existence to the world through a post on its blog. In it, they write: “OpenAI is a non-profit artificial intelligence research…

  • OpenAI’s Model Spec Feedback

    On May 8th, OpenAI released its Model Spec, a sort of charter that describes how its AI models should behave, how they should prioritize between different values, and how they should adhere to rules for safety and legality. They’ve opened a form to proactively solicit feedback from the public on this document. The Midas Project is glad to see…

  • How AI Chatbots Work: What We Know, and What We Don’t

    AI chatbots (such as Anthropic’s Claude and OpenAI’s ChatGPT) have already transformed our world. On the surface, they appear remarkably capable of friendly, natural conversations. But below the surface lie sophisticated artificial intelligence systems driving their abilities, and a great deal of uncertainty about exactly how they work and what they are capable of. In this…

  • Are AI companies using copyrighted data?

    The current era of training large AI models rests on three fundamental ingredients: advanced algorithms, advanced computer chips, and a lot of data. This last ingredient has become a sticking point for AI companies in recent years, including OpenAI, Anthropic, Meta, and Google. These companies have essentially hoovered up the entire internet in their fight to…

  • How can you distinguish human-made content from AI-generated content?

    AI-generated content is everywhere on the web. Videos, images, art, writing, and recently even music created by AI have become increasingly prevalent. How will we continue to distinguish real from fake? The short answer is that it’s hard, and there’s good news and bad news. The good news is that certain common features or deficiencies…

  • Sign Open Letter on Deepfakes

    “Disrupting the Deepfake Supply Chain” is an open letter raising awareness of the need to criminalize, and prevent the production of, non-consensual and misleading deepfakes. These false images and videos are incredibly harmful and threaten to degrade social trust, damage democracies, and exacerbate sexual harassment faced by women in particular. To read the full open…
