
Watchtower
Watchtower tracks changes to corporate and government AI safety policies, both announced and unannounced. Click any entry for details.
xAI
Violation
Major
Unannounced
At the 2024 AI Seoul Summit, xAI committed to publicly report security risks of its products. In February 2025, xAI published a draft risk management framework, but it applied only to "unspecified future AI models not currently in development" and failed to articulate how xAI would identify and implement risk mitigations. xAI promised to release a finalized version within three months, by May 10, 2025, but missed the deadline without acknowledgment. When the company released Grok 4 in July, the model should have been covered by the finalized framework, yet xAI published no safety report alongside the launch. While xAI claimed it conducted internal evaluations, it provided no details; AI safety researcher Samuel Marks called the lack of reporting "reckless" and a break from "industry best practices followed by other major AI labs."
