OpenAI safety teams
Our teams span a wide range of technical efforts tackling AI safety challenges at OpenAI. The Safety Systems team stays closest to deployment risk, the Superalignment team focuses on aligning superintelligence, and the Preparedness team focuses on safety assessments for frontier models.
Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk (January 11, 2023)
Best practices for deploying language models (June 2, 2022)
Lessons learned on language model safety and misuse (March 3, 2022)
Why responsible AI development needs cooperation on safety (July 10, 2019)