Safety in deployment
As a leader in commercial AI deployment, we monitor how our tools are used and update our safety mitigations based on what we learn about model risks and capabilities.
Our principles
Minimize harm
We will build safety into our AI tools where possible, and work aggressively to reduce the harms posed by their misuse or abuse.
Build trust
Alongside our user and developer community, we’ll share the responsibility of supporting safe, beneficial applications of our technology.
Learn and iterate
We will observe and analyze how our models behave and are used, and seek input on our approach to safety, in order to improve our systems over time.
Be a pioneer in trust and safety
We will support research into the unique trust and safety challenges posed by generative AI, to help improve safety beyond our ecosystem.

Documents and policies
We’ve created and compiled resources about our safety practices. Here’s how you can uphold trust and safety as you engage with our products.
Usage policies
By following our usage policies, you'll help us make sure that our technology is used for good.
Moderation
The moderation endpoint is a tool you can use to check whether content complies with OpenAI's content policy.
Safety best practices
Read about how to build with safety in mind.
Educator considerations for ChatGPT
Learn more about the capabilities, limitations, and considerations for using ChatGPT for teaching and learning.
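As a sketch of how the moderation endpoint mentioned above can be called, the snippet below builds a POST request to the `/v1/moderations` API and reads the `flagged` field from the response. It uses only the Python standard library; the request and response shapes are assumptions based on the published API, and `check_text` returns None when no `OPENAI_API_KEY` environment variable is set, so the sketch stays runnable offline.

```python
import json
import os
import urllib.request

# Assumed endpoint URL for the moderation API.
MODERATION_URL = "https://api.openai.com/v1/moderations"


def build_moderation_request(text, api_key):
    """Build an HTTP request asking the moderation endpoint whether
    `text` complies with the content policy (assumed request shape)."""
    payload = json.dumps({"input": text}).encode("utf-8")
    return urllib.request.Request(
        MODERATION_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )


def check_text(text):
    """Return the `flagged` boolean from the first moderation result,
    or None when no OPENAI_API_KEY is set (keeps the sketch offline-safe)."""
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        return None
    req = build_moderation_request(text, api_key)
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # Assumed response shape: {"results": [{"flagged": bool, ...}]}
    return result["results"][0]["flagged"]


if __name__ == "__main__":
    print(check_text("I want to hug my dog."))
```

In practice you would use an official client library instead of raw HTTP; the point here is only to show that a single request per piece of content is enough to apply the policy check before displaying or acting on model output.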