About 200 people work on safety at OpenAI. | The Verge
Kolter laid out OpenAI’s different safety groups: the safety systems team, which works on guardrails and evaluations; the preparedness team, which manages OpenAI’s preparedness framework; the alignment team, which helps train models in ways that “align with human values”; the model policy team, which develops the model spec; and other teams focused on investigations. Speaking about the controversial dissolution of OpenAI’s superalignment and AGI readiness teams, he said some of that research is now being done by other teams.