Product safety standards

As part of our mission to ensure AI benefits all of humanity, we strive to ensure responsible development, deployment, and use of our models.

Safety in deployment

Drawing on our experience as a leader in commercial AI deployment, we monitor the use of our tools and update safety mitigations based on what we learn about model risks and capabilities.

Our principles

Minimize harm
We will build safety into our AI tools where possible, and work to reduce the harms posed by the misuse or abuse of our AI tools.

Build trust
Alongside our user and developer community, we’ll share the responsibility of supporting safe, beneficial applications of our technology.

Learn and iterate
We will observe and analyze how our models behave and are used, and seek input on our approach to safety, in order to improve our systems over time.

Be a pioneer in trust and safety
We will support research into the unique trust and safety challenges posed by generative AI, to help improve safety beyond our ecosystem.


Documents and policies

We’ve created and compiled resources about our safety practices. Here’s how you can uphold trust and safety as you engage with our products.