Category: Fairness
-
Join “Red Teaming In Public”
“Red Teaming in Public” is a project originally started by Nathan Labenz and Pablo Eder in June 2024. Its goal is to catalyze a shift toward higher standards for AI developers. Labenz shared the following details in the project’s announcement on X: For context, we are pro-technology “AI Scouts” who believe in the immense potential…
-
Sign Open Letter on Deepfakes
“Disrupting the Deepfake Supply Chain” is an open letter raising awareness of the need to criminalize, and prevent the production of, non-consensual and misleading deepfakes. These false images and videos are incredibly harmful and threaten to degrade social trust, damage democracies, and exacerbate the sexual harassment faced by women in particular. To read the full open…
-
Submit Red Teaming Findings
One of the most useful ways to hold AI companies accountable is to discover and gather evidence of the harms their models pose. This furthers our understanding of the risks and builds an evidence base to present to companies, policymakers, and the public, holding these companies responsible for ensuring their models behave…