-
OpenAI
Adjusted the language on the o1 system card webpage, changing “o1” to “o1-preview.”
-
Microsoft
Removed Vice Chair and President Brad Smith’s byline from Microsoft’s 2023 White House commitment to advance safe and secure artificial intelligence.
-
Anthropic
Between December 16 and December 18, Anthropic changed the “last updated” date on their Responsible Disclosure Policy, with no apparent substantive changes to the text of the policy.
-
Scaling, Reasoning, and Unknown Unknowns
The past decade of progress in artificial intelligence has primarily been driven by scaling model training. That is to say, making AI models larger, training them for longer, and exposing them to more data yields oddly predictable improvements in model performance. In recent years, there’s been much debate about whether scaling is “hitting a wall.”…
-
Cognition
Changed its terms of service concerning the use of user data. The change was not announced or otherwise reported.
-
Cohere
On November 21, Cohere released a complete rewrite of their usage policies.
-
OpenAI
Released a white paper detailing how they approach external red teaming.
-
United States
An independent congressional commission recommended that the US launch “a Manhattan Project-like program to race to AGI.”
-
Anthropic
Released a new page providing details about how they are complying with multiple voluntary safety and security frameworks.
-
AI Developers Are Bungling Their Dress Rehearsal
Here’s a sobering fact: artificial intelligence today is the worst it will ever be. From here on out, AI will only become more and more capable — and dangerous. All the top AI developers know this. OpenAI’s charter explicitly says its goal is to develop “highly autonomous systems that outperform humans at most economically valuable…