Usage policies
We’ve updated our usage policies to be more readable and added service-specific guidance.
We aim for our tools to be used safely and responsibly, while maximizing your control over how you use them. By using our services, you agree to adhere to our policies.
We have established universal policies applicable to all our services, as well as specific policies for builders who use ChatGPT or our API to create applications for themselves or others. Violating our policies could result in action against your account, up to and including suspension or termination. We also work to make our models safer and more useful by training them to refuse harmful instructions and to reduce their tendency to produce harmful content.
We believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems. We cannot predict all beneficial or abusive uses of our technology, so we proactively monitor for new abuse trends. Our policies will evolve based on what we learn over time.
Universal Policies
To maximize innovation and creativity, we believe you should have the flexibility to use our services as you see fit, so long as you comply with the law and don’t harm yourself or others. When using any OpenAI service, such as ChatGPT, labs.openai.com, or the OpenAI API, these rules apply:
Comply with applicable laws – for example, don’t compromise the privacy of others, engage in regulated activity without complying with applicable regulations, or promote or engage in any illegal activity, including the exploitation or harm of children and the development or distribution of illegal substances, goods, or services.
Don’t use our service to harm yourself or others – for example, don’t use our services to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system.
Don’t repurpose or distribute output from our services to harm others – for example, don’t share output from our services to defraud, scam, spam, mislead, bully, harass, defame, discriminate based on protected attributes, sexualize children, or promote violence, hatred or the suffering of others.
Respect our safeguards – don’t circumvent safeguards or safety mitigations in our services unless supported by OpenAI (e.g., domain experts in our Red Teaming Network) or related to research conducted in accordance with our Sharing & Publication Policy.
We report apparent child sexual abuse material (CSAM) to the National Center for Missing and Exploited Children.
Building with the OpenAI API Platform
The OpenAI Platform allows you to build entirely custom applications. As the developer of your application, you are responsible for designing and implementing how your users interact with our technology. To make this easier, we’ve shared our Safety best practices and offer tools such as our Moderation Endpoint and customizable system messages.
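For illustration, here is a minimal sketch of how a builder might combine these tools: pre-screening user input with the Moderation endpoint and scoping the assistant’s behavior with a system message, using the official Python SDK. The helper function, model name, system message, and refusal text are illustrative choices, not requirements of this policy.

```python
# A minimal sketch (illustrative, not prescriptive): screen user input with the
# Moderation endpoint before generating a reply, and use a customizable system
# message to scope what the assistant will do.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def respond(user_message: str) -> str:
    # Ask the Moderation endpoint whether the input is flagged.
    moderation = client.moderations.create(input=user_message)
    if moderation.results[0].flagged:
        # How flagged input is handled is up to the developer; a simple refusal is shown here.
        return "Sorry, I can't help with that request."

    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            # A customizable system message constrains the assistant's scope.
            {
                "role": "system",
                "content": "You are a customer-support assistant. "
                           "Politely decline requests unrelated to the product.",
            },
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content


print(respond("How do I reset my password?"))
```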
We recognize that our API introduces new capabilities with scalable impact, so we have service-specific policies that apply to all use of our APIs in addition to our Universal Policies:
Don’t compromise the privacy of others, including:
Collecting, processing, disclosing, inferring or generating personal data without complying with applicable legal requirements
Using biometric systems for identification or assessment, including facial recognition
Facilitating spyware, communications surveillance, or unauthorized monitoring of individuals
Don’t perform or facilitate the following activities that may significantly impair the safety, wellbeing, or rights of others, including:
Providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations
Making high-stakes automated decisions in domains that affect an individual’s safety, rights or well-being (e.g., law enforcement, migration, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance)
Facilitating real money gambling or payday lending
Engaging in political campaigning or lobbying, including generating campaign materials personalized to or targeted at specific demographics
Deterring people from participation in democratic processes, including misrepresenting voting processes or qualifications and discouraging voting
Don’t misuse our platform to cause harm by intentionally deceiving or misleading others, including:
Generating or promoting disinformation, misinformation, or false online engagement (e.g., comments, reviews)
Impersonating another individual or organization without consent or legal right
Engaging in or promoting academic dishonesty
Failing to ensure that automated systems (e.g., chatbots) disclose to people that they are interacting with AI, unless it's obvious from the context
Don’t build tools that may be inappropriate for minors, including:
Sexually explicit or suggestive content. This does not include content created for scientific or educational purposes.
Building with ChatGPT
Shared GPTs allow you to use ChatGPT to build experiences for others. Because your GPT’s users are also OpenAI users, the following service-specific policies apply when building with ChatGPT, in addition to our Universal Policies:
Don’t compromise the privacy of others, including:
Collecting, processing, disclosing, inferring or generating personal data without complying with applicable legal requirements
Soliciting or collecting the following sensitive identifiers, security information, or their equivalents: payment card information (e.g. credit card numbers or bank account information), government identifiers (e.g. SSNs), API keys, or passwords
Using biometric systems for identification or assessment, including facial recognition
Facilitating spyware, communications surveillance, or unauthorized monitoring of individuals
Don’t perform or facilitate the following activities that may significantly affect the safety, wellbeing, or rights of others, including:
Taking unauthorized actions on behalf of users
Providing tailored legal, medical/health, or financial advice
Making automated decisions in domains that affect an individual’s rights or well-being (e.g., law enforcement, migration, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance)
Facilitating real money gambling or payday lending
Engaging in political campaigning or lobbying, including generating campaign materials personalized to or targeted at specific demographics
Deterring people from participation in democratic processes, including misrepresenting voting processes or qualifications and discouraging voting
Don’t misinform, misrepresent, or mislead others, including:
Generating or promoting disinformation, misinformation, or false online engagement (e.g., comments, reviews)
Impersonating another individual or organization without consent or legal right
Engaging in or promoting academic dishonesty
Using content from third parties without the necessary permissions
Misrepresenting or misleading others about the purpose of your GPT
Don’t build tools that may be inappropriate for minors, including:
Sexually explicit or suggestive content. This does not include content created for scientific or educational purposes.
Don’t build tools that target users under 13 years of age.
We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies. Violations can lead to actions against the content or your account, such as warnings, sharing restrictions, or ineligibility for inclusion in the GPT Store or for monetization.
GPT Store
We want to make sure that GPTs in the GPT Store are appropriate for all users. For example, GPTs that contain profanity in their names or that depict or promote graphic violence are not allowed in our Store. We also don’t allow GPTs dedicated to fostering romantic companionship or performing regulated activities.
These policies may be enforced automatically at submission time or applied retroactively upon further review.
Updates
Customers may sign up to receive notifications of new updates to our usage policies by filling out this form.
Changelog
2024-01-10: We've updated our Usage Policies to be clearer and provide more service-specific guidance.
2023-02-15: We’ve combined our use case and content policies into a single set of usage policies, and have provided more specific guidance on what activity we disallow in industries we’ve considered high risk.
2022-11-09: We no longer require you to register your applications with OpenAI. Instead, we'll be using a combination of automated and manual methods to monitor for policy violations.
2022-10-25: Updated App Review process (devs no longer need to wait for approval after submitting as long as they comply with our policies). Moved to an outcomes-based approach and updated Safety Best Practices.
2022-06-07: Refactored into categories of applications and corresponding requirements.
2022-03-09: Refactored into “App Review”.
2022-01-19: Simplified copywriting and article writing/editing guidelines.
2021-11-15: Addition of “Content guidelines” section; changes to bullets on almost always approved uses and disallowed uses; renaming document from “Use case guidelines” to “Usage guidelines”.
2021-08-04: Updated with information related to code generation.
2021-03-12: Added detailed case-by-case requirements; small copy and ordering edits.
2021-02-26: Clarified the impermissibility of Tweet and Instagram generators.