Campaign Progress Update: Cognition AI releases new “Acceptable Usage Policy”

Sep 5, 2024

2 min read


Over 500 people have signed our petition against Cognition since we launched it this summer. In that time, we’ve been calling the company out for having never once publicly discussed safety and responsible usage.

But yesterday, that changed. Cognition released an acceptable usage policy detailing users' obligations when using its products.

We’ve been clear that one of the main risks posed by products like Devin is that they could be abused to generate and spread harmful and/or abusive material, and to enable cyberattacks, automated phishing schemes, and spyware.

This new policy is a meaningful step forward. Not only has the team at Cognition now explicitly said that these malicious uses are prohibited, but they’ve also offered some breadcrumbs about how they will help enforce it, including “monitoring usage” and opening up a reporting email for security vulnerabilities and violations of their acceptable use policy.

Still, this document mostly puts the onus of responsibility on users. Since launching our campaign, we’ve been clear: Cognition also has a responsibility to ensure that it isn’t releasing highly capable dual-use products without conducting risk evaluations and implementing safeguards. So far, we have no indication that they plan to adopt a policy like this.

There are also some concerning signs that they treated their new acceptable use policy as an afterthought: it was released with multiple broken or inactive links, as well as typos and grammatical errors (including misspelling the name of their own product!).

Our campaign has clearly been pushing them in the right direction. But there’s a lot more to be done. If you believe Cognition should release a public risk evaluation policy, sign our petition or express interest in volunteering on our campaign.

Interested in live updates on AI accountability? Visit Watchtower →
