How deepfakes threaten democracy — and what you can do to help
On July 26, Elon Musk uploaded a video to the social media platform X (formerly known as Twitter), narrated by what appeared to be Vice President Kamala Harris. Soon, however, it became clear that it wasn’t her. In the video, her sound-alike claims that she doesn’t “know the first thing about running the country” and…
What we learned about OpenAI during this week’s Senate hearing
The US government is watching AI companies closely. Every month, more DC insiders are waking up to the remarkable pace of AI progress that may await us in the coming years, and to how serious its implications will be. On Wednesday, September 18, the US Senate Subcommittee on Privacy, Technology, and the…
Campaign Progress Update: Cognition AI releases new “Acceptable Usage Policy”
Over 500 people have signed our petition against Cognition since we launched it this summer. In that time, we’ve been calling the company out for never once having publicly discussed safety and responsible usage. But yesterday, that changed. Cognition released an acceptable usage policy that details what obligations users have to them in using their…
Following the trendlines: The pace of AI progress
If there’s one thing to know about the current state of AI development, it’s this: Things are moving faster than anyone anticipated. For a long time, there was uncertainty about whether the set of methods known as machine learning would ever be able to achieve human-level general intelligence, let alone surpass it. In the late…
Join “Red Teaming In Public”
“Red Teaming in Public” is a project originally started by Nathan Labenz and Pablo Eder in June 2024. The goal is to catalyze a shift toward higher standards for AI developers. Labenz shared the following details in the project’s announcement on X: For context, we are pro-technology “AI Scouts” who believe in the immense potential…
Incentive gradients and The Midas Project’s theory of change
Why start an industry watchdog organization calling out irresponsible AI developers? Companies move along incentive gradients. Imagine this as a 3D landscape with peaks and valleys, downward slopes and upward climbs. Companies move along this landscape. They want to follow the path of least resistance. They’re constantly moving in the easiest, cheapest direction, just as…
Which tech companies are taking AI risk seriously?
Tech companies are locked in an all-out race to develop and deploy advanced AI systems. There’s a lot of money to be made, and indeed, plenty of opportunities to improve the world. But there are also serious risks — and racing to move as quickly as possible can make detecting and averting these risks a…
Magic.dev has finally released a risk evaluation policy. How does it measure up?
Big news: the AI coding startup Magic.dev released a new risk evaluation policy this week. Referred to as their “AGI Readiness Policy” and developed in collaboration with the nonprofit METR, this announcement follows in the footsteps of Responsible Scaling Policies (RSPs) released by companies like Anthropic, OpenAI, and Google DeepMind. So how does it…
Why has Cognition fallen behind the industry standard for AI Safety?
Amid fierce debates surrounding AI safety, it’s easy to forget that most disagreements concern which particular risks we face and how to address them. Very few people will try to argue that there are no risks or that serious caution isn’t warranted. In light of this, there is an emerging consensus among policy experts (and…
Sign the Cognition petition
The Midas Project has written a petition to Cognition, asking the company to adopt an industry-standard risk evaluation policy. There are two places you can sign our petition: directly on our website, and on Change.org. (Don’t worry about signing on both — we will be combining the signature totals and cross-checking to remove duplicates before…