OpenAI invests in security as we believe it is foundational to our mission. We safeguard computing efforts that advance artificial general intelligence and continuously prepare for emerging security threats.
The OpenAI API has been evaluated by a third-party security auditor and is SOC 2 Type 2 compliant.
The OpenAI API undergoes annual third-party penetration testing, which helps identify security weaknesses before malicious actors can exploit them.
Reporting security issues
We are committed to protecting people’s privacy.
Our goal is to build helpful AI models
We want our AI models to learn about the world—not private individuals. We use training information to help our AI models, like ChatGPT, learn about language and how to understand and respond to it.
We do not actively seek out personal information to train our models, and we do not use public information on the internet to build profiles about people, advertise to or target them, or to sell user data.
Our models generate new words each time they are asked a question. They don’t store information in a database for recalling later or “copy and paste” training information when responding to questions.
We work to:
- Reduce the amount of personal information in our training datasets
- Train models to reject requests for personal information of private individuals
- Minimize the possibility that our models might generate responses that include the personal information of private individuals
Ways to manage data
One of the most useful features of AI models is that they can improve over time. We continuously improve our models through research breakthroughs and exposure to real-world problems and data.
We understand that users may not want their data used to improve our models, so we provide ways for them to manage their data:
- In ChatGPT, users can turn off chat history, letting them choose which conversations can be used to train our models
- We do not train on API customer data by default
- We offer an opt-out form for users who do not want their data used to improve our models