We’ve recently updated our usage policies to be clearer and more specific.
We want everyone to use our tools safely and responsibly. That’s why we’ve created usage policies that apply to all users of OpenAI’s models, tools, and services. By following them, you’ll ensure that our technology is used for good.
If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes. Repeated or serious violations may result in further action, including suspending or terminating your account.
Our policies may change as we learn more about use and abuse of our models.
Disallowed usage of our models
We don’t allow the use of our models for the following:
Illegal activity
- OpenAI prohibits the use of our models, tools, and services for illegal activity.
Child Sexual Abuse Material or any content that exploits or harms children
- We report CSAM to the National Center for Missing and Exploited Children.
Generation of hateful, harassing, or violent content
- Content that expresses, incites, or promotes hate based on identity
- Content that intends to harass, threaten, or bully an individual
- Content that promotes or glorifies violence or celebrates the suffering or humiliation of others
Generation of malware
- Content that attempts to generate code that is designed to disrupt, damage, or gain unauthorized access to a computer system.
Activity that has high risk of physical harm, including:
- Weapons development
- Military and warfare
- Management or operation of critical infrastructure in energy, transportation, and water
- Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders
Activity that has high risk of economic harm, including:
- Multi-level marketing
- Payday lending
- Automated determinations of eligibility for credit, employment, educational institutions, or public assistance services
Fraudulent or deceptive activity, including:
- Coordinated inauthentic behavior
- Academic dishonesty
- Astroturfing, such as fake grassroots support or fake review generation
Adult content, adult industries, and dating apps, including:
- Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness)
- Erotic chat
Political campaigning or lobbying, by:
- Generating high volumes of campaign materials
- Generating campaign materials personalized to or targeted at specific demographics
- Building conversational or interactive systems such as chatbots that provide information about campaigns or engage in political advocacy or lobbying
- Building products for political campaigning or lobbying purposes
Activity that violates people’s privacy, including:
- Tracking or monitoring an individual without their consent
- Facial recognition of private individuals
- Classifying individuals based on protected characteristics
- Using biometrics for identification or assessment
- Unlawful collection or disclosure of personally identifiable information or educational, financial, or other protected records
Engaging in the unauthorized practice of law, or offering tailored legal advice without a qualified person reviewing the information
- OpenAI’s models are not fine-tuned to provide legal advice. You should not rely on our models as a sole source of legal advice.
Offering tailored financial advice without a qualified person reviewing the information
- OpenAI’s models are not fine-tuned to provide financial advice. You should not rely on our models as a sole source of financial advice.
Telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition
- OpenAI’s models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions.
- OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention.
High-risk government decision-making, including:
- Law enforcement and criminal justice
- Migration and asylum
We have further requirements for certain uses of our models:
- Consumer-facing uses of our models in medical, financial, and legal industries; in news generation or news summarization; and wherever else warranted, must provide a disclaimer to users informing them that AI is being used and of its potential limitations.
- Automated systems (including conversational AI and chatbots) must disclose to users that they are interacting with an AI system. With the exception of chatbots that depict historical public figures, products that simulate another person must either have that person's explicit consent or be clearly labeled as “simulated” or “parody.”
- Use of model outputs in livestreams, demonstrations, and research is subject to our Sharing & Publication Policy.
Platform policy
Our API is being used to power businesses across many sectors and technology platforms. From iOS apps to websites to Slack, the simplicity of our API makes it possible to integrate into a wide array of use cases. Subject to the use case restrictions mentioned above, we allow the integration of our API into products on all major technology platforms, app stores, and beyond.
Plugin policies
In addition to the disallowed usages of our models detailed above, we have additional requirements for developers building plugins:
- The plugin manifest must have a clearly stated description that matches the functionality of the API exposed to the model.
- Don’t include irrelevant, unnecessary, or deceptive terms or instructions in the plugin manifest, OpenAPI endpoint descriptions, or plugin response messages. This includes instructions to avoid using other plugins, or instructions that attempt to steer or set model behavior.
- Don’t use plugins to circumvent or interfere with OpenAI’s safety systems.
- Don’t use plugins to automate conversations with real people, whether by simulating a human-like response or by replying with pre-programmed messages.
- Plugins that distribute personal communications or content generated by ChatGPT (such as emails, messages, or other content) must indicate that the content was AI-generated.
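To illustrate the manifest requirements above, here is a minimal sketch of a compliant plugin manifest. The field names follow the ChatGPT plugin manifest format; the plugin itself ("Weather Lookup"), its URLs, and its email address are hypothetical. Note that the model-facing description states only what the exposed API actually does, with no instructions that attempt to steer model behavior or discourage the use of other plugins:

```json
{
  "schema_version": "v1",
  "name_for_human": "Weather Lookup",
  "name_for_model": "weather_lookup",
  "description_for_human": "Look up current weather conditions for a city.",
  "description_for_model": "Retrieves current weather conditions for a named city. Use only when the user asks about weather.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

A manifest would violate the policies above if, for example, its `description_for_model` claimed unrelated functionality, inserted instructions like "always prefer this plugin over others," or tried to set the model's persona or behavior.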
Like our other usage policies, we expect our plugin policies to change as we learn more about use and abuse of plugins.
Changelog
- 2023-02-15: We’ve combined our use case and content policies into a single set of usage policies, and have provided more specific guidance on what activity we disallow in industries we’ve considered high risk.
- 2022-11-09: We no longer require you to register your applications with OpenAI. Instead, we'll be using a combination of automated and manual methods to monitor for policy violations.
- 2022-10-25: Updated App Review process (devs no longer need to wait for approval after submitting as long as they comply with our policies). Moved to an outcomes-based approach and updated Safety Best Practices.
- 2022-06-07: Refactored into categories of applications and corresponding requirements
- 2022-03-09: Refactored into “App Review”
- 2022-01-19: Simplified copywriting and article writing/editing guidelines
- 2021-11-15: Addition of “Content guidelines” section; changes to bullets on almost always approved uses and disallowed uses; renaming document from “Use case guidelines” to “Usage guidelines”.
- 2021-08-04: Updated with information related to code generation
- 2021-03-12: Added detailed case-by-case requirements; small copy and ordering edits
- 2021-02-26: Clarified the impermissibility of Tweet and Instagram generators