We believe in AI’s potential to make life better for everyone, which means making it safe for everyone.
Teach
We start by teaching our AI right from wrong, filtering harmful content and responding with empathy.
Test
We conduct internal evaluations and work with experts to test real-world scenarios, enhancing our safeguards.
Share
We use real-world feedback to make our AI safer and more helpful.
Safety doesn’t stop
Building safe AI isn’t one and done. Every day is a chance to make things better. And every step helps us anticipate, evaluate, and prevent risk.
Leading the way in safety
We collaborate with industry leaders and policymakers on the issues that matter most.
Child Safety
Creating industry-wide standards to protect children.
Private Information
Protecting people’s privacy.
Deepfakes
Improving transparency in AI-generated content.
Bias
Rigorously evaluating content to avoid reinforcing biases or stereotypes.
Elections
Partnering with governments to combat disinformation globally.
Conversations with OpenAI researchers
Get inside OpenAI with our series that breaks down a range of safety topics and more.