This month, The Midas Project released our newest report, the Seoul Commitment Tracker. It reveals that most AI companies are breaking the safety promises they made to the world last year.
The Seoul Tracker evaluates sixteen leading AI companies that promised to implement safety frameworks at the 2024 AI Seoul Summit. The results are concerning: not a single company earned better than a “B-” grade. Even worse, six companies received outright “F” grades for neglecting their commitments entirely.
Even the companies that did publish safety policies often use vague language and set extremely high thresholds before safety measures kick in. This means dangerous AI capabilities could emerge without proper safeguards in place.
The report found that companies like Meta, xAI, and Mistral AI have fallen particularly short, failing to fully implement a risk management policy as promised. However, the Seoul Tracker does highlight some positive steps: several companies, including OpenAI, Google, and Anthropic, have begun addressing serious risks that AI safety experts have warned about, such as AI systems being used to design biological weapons or launch cyberattacks. Some are also monitoring for concerning capabilities such as model autonomy, where AI systems might act independently in unexpected ways.
For everyday people, this matters because these AI systems are rapidly becoming more powerful and integrated into our lives. Without proper safety guardrails, we risk creating technologies that could be harmful or uncontrollable. The report serves as a wake-up call that public pressure and government oversight remain essential to ensure AI development benefits everyone, not just tech companies’ bottom lines.