OpenAI

Safety

Safety at every step

We believe in AI’s potential to make life better for everyone, which means making it safe for everyone.

Teach

We start by teaching our AI right from wrong, filtering harmful content and responding with empathy.

Test

We conduct internal evaluations and work with experts to test real-world scenarios, enhancing our safeguards.

Share

We use real-world feedback to help make our AI safer and more helpful.

Safety doesn’t stop

Building safe AI isn’t a one-time effort. Every day is a chance to make things better, and every step helps us anticipate, evaluate, and prevent risk.

How we think about safety and alignment

Leading the way in safety

We collaborate with industry leaders and policymakers on the issues that matter most.


Child safety

Creating industry-wide standards to protect children.


Private information

Protecting people’s privacy.


Deepfakes

Improving transparency in AI content.


Bias

Rigorously evaluating content to avoid reinforcing biases or stereotypes.


Elections

Partnering with governments to combat disinformation globally.

Conversations with OpenAI researchers

Go inside OpenAI with our series of conversations that break down safety and related topics.