Tag: Investigation
Join “Red Teaming in Public”
“Red Teaming in Public” is a project started by Nathan Labenz and Pablo Eder in June 2024. Its goal is to catalyze a shift toward higher standards for AI developers. Labenz shared the following details in the project’s announcement on X: For context, we are pro-technology “AI Scouts” who believe in the immense potential…
Submit Red Teaming Findings
One of the most useful ways to hold AI companies accountable is to discover and document the harms their models pose. Doing so deepens our understanding of the risks and builds a body of evidence to present to companies, policymakers, and the public, so that these companies are held accountable for ensuring their models behave…