Developing safe & responsible AI

Artificial general intelligence has the potential to benefit nearly every aspect of our lives—so it must be developed and deployed responsibly.

AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.

Mira Murati, Chief Technology Officer at OpenAI

A focus on safety

AI technology comes with tremendous benefits, along with serious risk of misuse. Our Charter guides every aspect of our work to ensure that we prioritize the development of safe and beneficial AI.

OpenAI safety teams

Our teams span a wide spectrum of technical efforts tackling AI safety challenges at OpenAI. The Safety Systems team stays closest to deployment risk, our Superalignment team focuses on aligning superintelligence, and our Preparedness team focuses on safety assessments for frontier models.

Sharing our expertise

We collaborate with industry leaders and policymakers to ensure that AI systems are developed in a trustworthy manner.

  • Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk

    January 11, 2023
  • Best practices for deploying language models

    June 2, 2022
  • Lessons learned on language model safety and misuse

    March 3, 2022
  • Why responsible AI development needs cooperation on safety

    July 10, 2019

This technology will profoundly transform how we live. There is still time to guide its trajectory, limit abuse, and secure the most broadly beneficial outcomes.

Anna Makanju, Head of Public Policy at OpenAI

Safety in practice

We develop risk-mitigation tools and best practices for responsible use, and we monitor our platforms for misuse.

  • New AI classifier for indicating AI-written text

    January 31, 2023
  • New and improved content moderation tooling

    August 10, 2022
  • DALL·E 2 pre-training mitigations

    June 28, 2022