xAI misses a second, self-imposed deadline to implement a Frontier Safety Policy

May 13, 2025

On the same day that The Midas Project released the Seoul Tracker in February 2025, xAI released a "draft" of their frontier safety policy, titled the Risk Management Framework.

Unfortunately, xAI's draft did not fulfill the commitments as described. First, it was explicitly a tentative plan, with "DRAFT" watermarked across every page. Second, it applied only to unspecified future systems "not yet in development," and thus presumably did not cover Grok 3, even though that model proved to be a "frontier system" as described in the Seoul commitments. Third, it failed to "articulate how risk mitigations will be identified and implemented," a core component of those commitments.

The upside, or so it appeared, was that a full release would follow the draft. Having missed the February 10th deadline to meet the standards of the Seoul commitments, xAI set itself a new deadline. Their policy reads: "We plan to release an updated version of this policy within three months."

xAI's draft Risk Management Framework, released February 10th, 2025

May 10th has now come and gone, and the three months have elapsed, yet xAI has said nothing about whether or when they plan to implement a full frontier safety policy meeting the standards of the Seoul commitments.

This marks two missed deadlines. In 2024, they promised the governments of the United Kingdom and South Korea that they would have a complete frontier safety policy in place by February 10th; they produced only a patchy draft. Now, they've missed their own three-month deadline to release an improved policy.

A Pattern of Behavior

This isn't the first time Elon Musk's xAI has demonstrated a casual approach to AI safety concerns. Despite Musk's frequent public statements about the potential dangers of AI, his own company appears unwilling to commit to concrete, verifiable safety measures.

A recent study by SaferAI rated xAI as having among the weakest risk management practices in the industry. Their researchers noted that xAI "received the lowest possible score because they have barely published anything about risk management."

The Path Forward

As The Midas Project continues to monitor compliance with the Seoul commitments, we call on xAI to:

  1. Immediately publish an updated Risk Management Framework as promised

  2. Strengthen the framework significantly beyond the draft version, particularly in the areas of risk mitigation and halting procedures

  3. Commit to transparency about their risk evaluation processes and results

  4. Engage meaningfully with third-party experts and government stakeholders

Until these steps are taken, we must question whether xAI's participation in the Seoul Summit was merely a public relations exercise rather than a genuine commitment to responsible AI development.