API Terms & Policies

Sharing & Publication Policy

Updated August 10, 2021

Contents

  1. Social media policy
  2. Livestreaming and demonstrations policy
  3. End-users policy
  4. Fictional content co-authored with the OpenAI API policy
  5. Research policy

Social media policy

To mitigate the possible risks of combining AI-generated content with social media, we have set the following policy on permitted uses.

We have updated these social media policies to be slightly more permissive than they were previously; however, we reserve the right to update them again as we gather further input and assess their efficacy:

Occasional one-off postings of prompts / completions to social media are permissible so long as the output is attributed to your name or company, and you use judgment and discretion in what to share.

  • You may share more than one prompt / completion as part of these one-off postings (for instance, interesting results from a few different prompts).
  • Rule-of-thumb: if subscribers/followers are tuning in specifically for that content, or come to expect content at an approximate cadence, the posting is ongoing and not one-off.
  • Please review our Community Guidelines before getting started. You are interacting with the raw model, which means we do not filter out biased or negative responses. (You can also read more about implementing our free content filter.)
  • We kindly ask that you refrain from sharing outputs that may offend others; if you would like to ensure the OpenAI team is aware of a particular completion, you may email us or use the reporting tools within Playground.

More frequent, ongoing posting of prompts / completions / derivative content from the API to social media is also permitted, provided it meets all of the following criteria. If your content fails to meet any of these criteria, you must be approved through a Pre-launch Review before publishing. The criteria are as follows:

  • Outputs are attributed to your name or company.
  • Outputs are ‘static’, i.e., involve no end-user interaction with the API. (Examples of static outputs: a screenshot of a Playground prompt; a screen recording of a Playground output; a Playground output copied into a Medium article.)
  • Content is clearly indicated as being AI-generated in a way no user could reasonably miss or misunderstand (for some use-cases, screenshotting Playground may be preferable to copy-pasting the text, to clearly indicate this is AI-generated).
  • Content is filtered to avoid unsafe content (content for which our Content Filter returns 2 = Unsafe); a minimal sketch of such a pre-posting check appears below, after these criteria. If you are using OpenAI Codex and have not implemented the content filter, you should refrain from posting content containing slurs or offensive language.
  • Each post is manually posted by a human being, with no more than 4 posts per day.
  • Topics of the content do not violate OpenAI's Terms of Use, e.g., are not related to political campaigns, spam, hateful content, content that incites violence, or other uses that may cause social harm.

You should use your best judgment when applying these criteria to important case-by-case circumstances. For instance, if you are recording your screen in Playground and the Content Filter triggers a clear false positive (e.g., flags something clearly innocuous as Unsafe), it is okay to carry on regardless.
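
For reference, here is a minimal sketch of the kind of pre-posting check the filtering criterion above describes. It assumes the legacy `content-filter-alpha` completions model and the pre-1.0 `openai` Python client that were current when this policy was written; the helper name and the simplified label handling are illustrative, not an official implementation (the content filter documentation recommends additional logprob thresholding for low-confidence labels, omitted here for brevity).

```python
import openai  # pre-1.0 client, contemporary with this policy

openai.api_key = "YOUR_API_KEY"  # placeholder

def is_safe_to_post(text: str) -> bool:
    """Classify text with the legacy content filter before sharing it.

    The filter replies with a single token: "0" (safe), "1" (sensitive),
    or "2" (unsafe). Under this policy, content labeled 2 should not be
    posted.
    """
    response = openai.Completion.create(
        engine="content-filter-alpha",
        prompt="<|endoftext|>" + text + "\n--\nLabel:",
        temperature=0,
        max_tokens=1,
        top_p=0,
        logprobs=10,
    )
    label = response["choices"][0]["text"]
    # Conservative default: treat anything that is not clearly 0 or 1 as unsafe.
    return label in ("0", "1")

completion = "Some model output you are considering posting."
if is_safe_to_post(completion):
    print("Label 0 or 1: OK to share under the criteria above.")
else:
    print("Label 2 (or unexpected): do not post this output.")
```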

Livestreaming and demonstrations policy

  • We do not currently allow livestreams of interacting with the OpenAI API, or other ‘live’ (in-person or over video chat) demonstrations of interacting with the OpenAI API that are being recorded, unless your application has been approved through our Pre-launch Review process. An exception to this is that we permit demonstrating OpenAI Codex through livestreaming, though we ask that you take steps to minimize the risk of displaying sensitive content, such as avoiding prompts which may result in sensitive generations.
  • For hackathons, conferences, or other educationally-oriented activities, you are permitted to demonstrate the OpenAI API or work-in-progress applications, provided you clarify that the application has not yet been approved for launch; please email community@openai.com at least two weeks before your event to coordinate.
  • For other events, you may conduct unrecorded ‘live’ demonstrations for audiences of fewer than 25 people from various organizations, or for any number of people belonging to a single organization, again provided you clarify that the application has not yet been approved for launch.
  • Note that sharing a pre-recorded video of you interacting with the OpenAI API (not a livestream) is governed by the social media policies in the above section.
    • Ideally, please limit these videos to tutorials, tips and tricks, demonstrations of a use-case, etc.
    • Uses that violate the social media policies above (e.g., a series of videos showing how to generate hateful content with the OpenAI API, or other actions deemed in bad faith) may result in revocation of your API key or quota.

Fictional content co-authored with the OpenAI API policy

Creators who wish to publish their first-party written fictional content (e.g., a book, compendium of short stories) created in part with the OpenAI API are permitted to do so under the following conditions:

  • The published content is attributed to your name or company.
  • The role of AI in formulating the content is clearly disclosed in a way that no reader could possibly miss, and that a typical reader would find sufficiently easy to understand.
  • Topics of the content do not violate OpenAI's Terms of Use, e.g., are not related to political campaigns, adult content, spam, hateful content, content that incites violence, or other uses that may cause social harm.
  • We kindly ask that you refrain from sharing outputs that may offend others.

For instance, one must detail in a Foreword or Introduction (or someplace similar) the relative roles of drafting, editing, etc. People should not represent API-generated content as being wholly generated by a human or wholly generated by an AI; a human must take ultimate responsibility for the content being published.

Here is some stock language you may use to describe your creative process, provided it is accurate:

“The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.”

Research policy

We believe it is important for the broader world to be able to evaluate our research and products, especially to understand and improve potential weaknesses and safety or bias problems in our models.

Accordingly, we welcome research publications related to the OpenAI API, with a few further details:

  • We do require that researchers receive prior approval, through our Pre-launch Review process, before either 1) publishing datasets of API-generated text or 2) embarking on research that attempts to train models in part on API-generated data.
    • This is because datasets and models derived from the OpenAI API can potentially be used to enable new AI capabilities by third parties and may raise IP considerations.
  • For other research, we kindly request advance notice about research publications based on API access at papers@openai.com.
    • In some cases, we may want to highlight your work internally and/or externally.
    • In others, such as publications that pertain to security or misuse of the API, we may want to take appropriate actions to protect our users.
    • If you notice any safety or security issues with the API in the course of your research, we ask that you please submit these immediately through our Coordinated Vulnerability Disclosure Program.
  • In some cases, we may be able to provide feedback on drafts (time permitting), also through papers@openai.com, though our approval is not required for publication of most research.

There are a number of research directions we are excited to explore with the OpenAI API. We consider the following to be especially important, though you are free to craft your own direction:

  • Fairness and Representation: How should performance criteria be established for fairness and representation in language models? How can language models be improved in order to effectively support the goals of fairness and representation in specific, deployed contexts?
  • Misuse Potential: How can systems like the API be misused? What sorts of ‘red teaming’ approaches can we develop to help us and other AI developers think about responsibly deploying technologies like this?
  • Robustness: Can we predict the kinds of domains and tasks for which large language models are more likely to be robust (or not robust), and how does this relate to the training data? How can robustness be measured in the context of few-shot learning (e.g., across variations in prompts)?
  • Interdisciplinary Research: How can AI development draw on insights from other disciplines such as philosophy, cognitive science, and sociolinguistics?