Safety at every step

We believe in AI’s potential to make life better for everyone, which means making it safe for everyone.

Teach

We start by teaching our AI right from wrong, filtering harmful content and responding with empathy.

Test

We conduct internal evaluations and work with experts to test real-world scenarios, enhancing our safeguards.

Share

We use real-world feedback to help make our AI safer and more helpful.

Safety doesn’t stop

Building safe AI isn’t one and done. Every day is a chance to make things better. And every step helps anticipate, evaluate, and prevent risk.

Leading the way in safety

We collaborate with industry leaders and policymakers on the issues that matter most.


Child Safety

Creating industry-wide standards to protect children.


Private Information

Protecting people’s privacy.


Deep Fakes

Improving transparency in AI content.


Bias

Rigorously evaluating content to avoid reinforcing biases or stereotypes.


Elections

Partnering with governments to combat disinformation globally.

Conversations with OpenAI researchers

Go inside OpenAI with our series breaking down a range of topics in safety and beyond.

Latest news on safety

Go deeper on safety

This report outlines the safety work carried out prior to releasing OpenAI o1-preview and o1-mini, including external red teaming and frontier risk evaluations according to our Preparedness Framework.
This system card takes a detailed look at speech-to-speech capabilities while also evaluating text and image capabilities.
A system card on the safety challenges in GPT-4 and the interventions we implemented to mitigate potential harms.
This system card dives deeper into the evaluations, preparation, and mitigation work done for image inputs.
This system card details how we prepared DALL·E 3 for deployment, focusing on risk evaluation, red teaming, and mitigation.
A document outlining OpenAI’s processes to track, evaluate, and protect against catastrophic risks from powerful models.
This new committee is responsible for making recommendations on critical safety and security decisions for all OpenAI projects.
A collection of resources about our safety practices across development, deployment, and the use of our models.