Google

On February 4, 2025, alongside updates to its Frontier Safety Framework, Google also removed its previous commitments not to develop AI for use in warfare or surveillance. The change came a few weeks after an investigation by The Washington Post revealed that Google had been marketing its products to the Israeli military.

The text removed from Google's AI principles is reproduced below:

Applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.