Does Cognition care about AI Safety?

Cognition is a startup building “Devin,” a highly advanced AI software engineer that can reason and act autonomously.

So what’s their plan to keep it safe?

They won’t say.

Most billion-dollar AI companies at least try to convince consumers that they will adequately manage risks and protect users. But Cognition hasn’t even done the bare minimum. Unlike nearly every other company developing advanced AI products, Cognition provides no public description of its safety practices that we could find.

We even reached out to ask them directly — and got nothing but radio silence in return.

Cognition hasn’t publicly adopted a safety policy of any kind. The public shouldn’t stand for it. We are calling upon Cognition to release an industry-standard scaling policy.

Sign now if you agree.


There are serious risks posed by advanced AI technology — especially systems that can reason, plan, and act autonomously. According to Cognition, Devin can access, write, and execute code all on its own.

How will we determine when these capabilities pose excessive risk? Most leading AI companies have discussed their plans to use tiered risk-evaluation frameworks (sometimes referred to as responsible scaling policies) to address this problem.

Does Cognition have a policy to determine when their models pose significant threats? What safeguards will they have in place at that point? Will they ever make these bare-minimum safety preparations?

We don’t know, because they won’t say.

Cognition is an artificial intelligence company founded in November 2023 by Scott Wu, Walden Yan, and Steven Hao. It is based out of San Francisco and New York City, and as of March 2024 it was valued at approximately $2 billion. Recently, they announced a major partnership with Microsoft.

The team at Cognition has big plans. They are explicit about their goal to build an end-to-end autonomous AI software engineer: a system that can plan, reason, and write and execute code on its own.

They’ve already received a great deal of criticism. Some software developers have accused them of aiming to replace human labor. They typically respond to this charge by claiming their product will actually increase the demand for such jobs, but they don’t provide any compelling justification for this belief.

They’ve also been accused of using deceptive marketing tactics to build hype for their product. In response, they admitted their mistake and said that “skepticism is good and we all need to vet new technologies critically.” It’s in that spirit that we are calling upon Cognition to prove that they take safety issues seriously.

Devin is the main product Cognition has publicly shared. Dubbed “the first AI software engineer,” it is an early iteration of an “AI agent”: an AI system designed to act autonomously.

Devin can use a web browser to access the internet, a code editor to write new programs, a command line to run and debug said programs, and everything else it needs to start acting autonomously to solve real-world coding tasks.

The only problem: nobody has found a way to ensure that AI outputs are accurate and safe. Popular chatbots like OpenAI’s ChatGPT routinely make mistakes, peddle misinformation, or can be jailbroken into producing biased or toxic outputs.

For a text-based chatbot, this problem is confined to conversation. Even so, the risk of misuse and harm is significant enough that most leading chatbot developers have released plans for evaluation-based scaling policies to determine acceptable risk thresholds and safety mitigations.

The stakes are far higher for agents like Devin, which aren’t just holding a conversation but taking real-world actions online. That is why it is imperative that companies like Cognition conduct thorough risk evaluations.

Yet Cognition appears to have no such policy in place.

Last month, the UK and South Korean governments announced that sixteen AI companies have committed to implementing evaluation-based scaling policies.

To see examples of what an evaluation-based scaling policy might look like, take a look at the policies released by Anthropic, OpenAI, and Google.

At the heart of these policies is the recognition that we don’t fully understand how these systems work, and that we must therefore be proactive in assessing the risks they pose, especially as models scale to become increasingly capable.

As certain risk thresholds are reached, these policies specify the safeguards that must be in place to release the product, or to continue developing even more capable products.


The Midas Project is a nonprofit organization campaigning to ensure that AI benefits all of humanity, and not just the tech companies building it. To keep up to date with our work, or to lend your support, click any of the following links:
