Category: Articles
-
AI Developers Are Bungling Their Dress Rehearsal
Here’s a sobering fact: artificial intelligence today is the worst it will ever be. From here on out, AI will only become more and more capable — and dangerous. All the top AI developers know this. OpenAI’s charter explicitly says its goal is to develop “highly autonomous systems that outperform humans at most economically valuable…
-
How deepfakes threaten democracy — and what you can do to help
On July 26, Elon Musk uploaded a video to the social media platform X (formerly known as Twitter), narrated by what appeared to be Vice President Kamala Harris. Soon, however, it became clear that it wasn’t her. In the video, her sound-alike claims that she doesn’t “know the first thing about running the country” and…
-
What we learned about OpenAI during this week’s Senate hearing
The US government is carefully watching AI companies. Every month, more and more DC insiders are waking up to the incredible amount of AI progress that may await us in the coming years, and how serious the implications of this will be. On Wednesday, September 18, the US Senate Subcommittee on Privacy, Technology, and the…
-
Campaign Progress Update: Cognition AI releases new “Acceptable Usage Policy”
Over 500 people have signed our petition against Cognition since we launched it this summer. In that time, we’ve been calling the company out for having never once publicly discussed safety and responsible usage. But yesterday, that changed. Cognition released an acceptable usage policy that details what obligations users have to them in using their…
-
Following the trendlines: The pace of AI progress
If there’s one thing to know about the current state of AI development, it’s this: Things are moving faster than anyone anticipated. For a long time, there was uncertainty about whether the set of methods known as machine learning would ever be able to achieve human-level general intelligence, let alone surpass it. In the late…
-
Incentive gradients and The Midas Project’s theory of change
Why start an industry watchdog organization calling out irresponsible AI developers? Companies move along incentive gradients. Imagine this as a 3D landscape with peaks and valleys, downward slopes and upward climbs. Companies move along this landscape. They want to follow the path of least resistance. They’re constantly moving in the easiest, cheapest direction, just as…
-
Which tech companies are taking AI risk seriously?
Tech companies are locked in an all-out race to develop and deploy advanced AI systems. There’s a lot of money to be made, and indeed, plenty of opportunities to improve the world. But there are also serious risks — and racing to move as quickly as possible can make detecting and averting these risks a…
-
Magic.dev has finally released a risk evaluation policy. How does it measure up?
Big news: the AI coding startup Magic.dev has released a new risk evaluation policy this week. Referred to as their “AGI Readiness Policy” and developed in collaboration with the nonprofit METR, this announcement follows in the footsteps of Responsible Scaling Policies (RSPs) released by companies like Anthropic, OpenAI, and Google DeepMind. So how does it…
-
Why has Cognition fallen behind the industry standard for AI Safety?
Amid fierce debates surrounding AI safety, it’s easy to forget that most disagreements concern what particular risks we face and how to address them. Very few people will try to argue that there are no risks or that serious caution isn’t warranted. In light of this, there is an emerging consensus among policy experts (and…
-
Why are AI employees demanding a “right to warn” the public?
This week, another warning flag was raised concerning the rapid progress of advanced artificial intelligence technology. This time, it took the form of an open letter authored by current and former employees at some of the world’s top AI labs — and cosigned by leading experts including two of the three “godfathers of AI.” This…