OpenAI

Transparency & content moderation

To promote safe and responsible use of our products, we use a range of procedures and tools to address content that may violate the law or our terms and policies.

How we monitor and enforce

We use a combination of automated technologies and human review to monitor activity on our services, in line with our Privacy Policy. Our methods include:

Proactive detection: We use classifiers, reasoning models, hash-matching, blocklists, and other automated systems to identify content that may violate our terms or policies.

User reports: We respond to external notices and user reports about content violations. Information on how to report a violation is available here.

Human review: Our team may review flagged content to determine appropriate actions.
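The monitoring flow described above can be sketched in code: automated detectors flag content, and flagged items are routed to a human-review queue. This is a minimal, hypothetical illustration; the blocklist, hash database, and function names are invented for the example and do not reflect OpenAI's actual systems.

```python
# Illustrative moderation pipeline: automated checks flag content,
# flagged items go to a human-review queue. All data here is hypothetical.
from dataclasses import dataclass, field

BLOCKLIST = {"forbidden-term"}   # hypothetical blocklist entries
KNOWN_BAD_HASHES = {"abc123"}    # hypothetical hash-matching database


@dataclass
class ModerationResult:
    flagged: bool
    reasons: list = field(default_factory=list)


def automated_check(text: str, content_hash: str) -> ModerationResult:
    """Proactive detection: blocklist and hash-matching stand in for classifiers."""
    reasons = []
    if any(term in text.lower() for term in BLOCKLIST):
        reasons.append("blocklist match")
    if content_hash in KNOWN_BAD_HASHES:
        reasons.append("hash match")
    return ModerationResult(flagged=bool(reasons), reasons=reasons)


def triage(items):
    """Route flagged items to human review; pass the rest through."""
    review_queue, approved = [], []
    for text, content_hash in items:
        result = automated_check(text, content_hash)
        (review_queue if result.flagged else approved).append((text, result.reasons))
    return review_queue, approved
```

In practice such a pipeline would layer many more signals (classifiers, reasoning models, user reports), but the triage shape, automated flagging feeding human review, matches the process described above.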

Enforcement actions

When we identify content that violates our terms or policies, we may take actions such as:

Account restrictions: Terminating or limiting access to our products.

Warnings: Informing users about potential violations and potential consequences.

Content sharing restrictions: Preventing or disabling the sharing of specific content.

Search results: Blocking certain search results from appearing.

GPT visibility controls: Restricting access to specific GPTs, including their presence in the GPT Store.

Forum moderation: Removing posts or restricting access to OpenAI forums.

We consider factors like legal requirements, the severity of the violation, and past or repeat violations when determining enforcement actions.

Appeals process

If we take enforcement action based on your content or activity, we may notify you with details and reasons for our decision. If you think we have made a mistake, you can report to us or appeal by emailing trustandsafety@openai.com or contacting Support. We may reassess, considering any additional information you provide. If your appeal is successful, we will reverse the enforcement action.

Please note that misuse of the complaints process, such as submitting manifestly unfounded notices, may also result in action.

Continuous improvement

Our integrity and safety teams continuously monitor and refine our policies, processes, and tools to enhance our approach as our products evolve globally.

Content presentation on Sora

Sora allows users to create and share videos and images, which may appear on the Explore community page. Others can view, like, remix, blend, download, or search for this content.

Explore feed prioritization

Content in the Explore feed is prioritized based on a number of factors, including:

Chronology: How recent the post is.

Popularity: Number of likes.

Compliance: Avoiding content that violates laws or our terms and policies.
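The three factors above can be sketched as a simple scoring function: a recency decay for chronology, a popularity term for likes, and a hard compliance filter. The weights, half-life, and formula below are illustrative assumptions, not Sora's actual ranking.

```python
# Hypothetical feed-ranking sketch combining chronology, popularity,
# and compliance. Weights and formula are illustrative, not the real system.
import math
import time


def feed_score(post, now=None, half_life_hours=24.0, like_weight=1.0):
    """Score a post for the Explore feed; return None for violating content."""
    if not post["compliant"]:
        return None  # Compliance: violating content never appears
    now = time.time() if now is None else now
    age_hours = max(0.0, (now - post["created_at"]) / 3600.0)
    recency = 0.5 ** (age_hours / half_life_hours)        # Chronology: decays with age
    popularity = math.log1p(post["likes"]) * like_weight  # Popularity: diminishing returns
    return recency + popularity
```

Note the design choice: compliance is a filter, not a weight, since a policy violation should exclude a post outright rather than merely lower its rank.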

We use automated technology and human review to determine what appears in the Explore feed. Users can choose not to publish their content in the Explore feed through their data controls.

ChatGPT search functionality

Search results may be displayed in a conversation when users ask ChatGPT to search the web, or when ChatGPT decides to search the web to provide a relevant response.

How search results are determined

Search results are displayed using the following measures:

Advanced language models: Used to evaluate content based on meaning, intent, and relevance. Search responses are designed to comply with our terms and policies.

Automated systems: Determine which results to present, considering factors like user intent, relevance, and recency. 

In-line citations: Links to relevant sources are included directly in responses.

Sidebar sources: Additional related resources provide further context, even if not directly cited. When using third-party search providers, the ordering of sidebar sources is influenced by the provider’s own ranking systems.
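The measures above can be illustrated as a small assembly step: rank candidate results by relevance and recency, cite the top sources in-line, and place the rest in a sidebar. The scoring blend and field names are hypothetical placeholders, not the actual ChatGPT search implementation.

```python
# Illustrative sketch: rank results, attach in-line citations, fill a sidebar.
# The relevance/recency blend and all field names are hypothetical.

def rank_results(results, recency_weight=0.3):
    """Order results by a blend of relevance and recency (both in [0, 1])."""
    return sorted(
        results,
        key=lambda r: (1 - recency_weight) * r["relevance"]
                      + recency_weight * r["recency"],
        reverse=True,
    )


def build_response(answer_sentences, results, max_citations=2):
    """Cite the top-ranked sources in-line; remaining sources go to the sidebar."""
    ranked = rank_results(results)
    cited, sidebar = ranked[:max_citations], ranked[max_citations:]
    citations = "".join(f"[{i + 1}]" for i in range(len(cited)))
    return {
        "text": " ".join(answer_sentences) + " " + citations,
        "citations": [r["url"] for r in cited],
        "sidebar": [r["url"] for r in sidebar],
    }
```

When a third-party search provider supplies the candidates, its own ranking would feed into or replace the scoring step, which matches the note above about sidebar ordering.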

Safety standards

We aim to deliver helpful information while upholding our safety standards. We may not link to or surface certain websites containing illegal, harmful, or sensitive content, such as explicit material involving minors, exposed personal data, or instructions for violence.

Shopping results

When a user’s query suggests shopping intent (e.g., “I’m looking to buy a dog costume”), ChatGPT search may display relevant product options with links to learn more or make a purchase.

How products are selected

Products are displayed based on:

Structured metadata: From third-party providers (e.g., price, product description) and other third-party content (e.g., reviews).

Model responses: Generated by ChatGPT before considering new web content.

Our safety standards: Ensuring content aligns with our policies.
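The selection flow above can be sketched as merging model suggestions with third-party structured metadata, then filtering by policy. The allowed-category check, field names, and data are invented for illustration and are not the actual selection criteria.

```python
# Hypothetical product-selection sketch: enrich model-suggested product IDs
# with structured metadata, then drop items that fail a policy check.
ALLOWED_CATEGORIES = {"pet supplies", "clothing"}  # stand-in for safety standards


def select_products(model_suggestions, metadata_by_id):
    """Return products with metadata attached, excluding disallowed categories."""
    selected = []
    for product_id in model_suggestions:
        meta = metadata_by_id.get(product_id)
        if meta is None:
            continue  # no structured metadata available for this item
        if meta["category"] not in ALLOWED_CATEGORIES:
            continue  # safety standards: exclude disallowed categories
        selected.append({"id": product_id, **meta})
    return selected
```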

For more information on how product results are selected, accompanying descriptions, pricing, and how merchants are selected, visit shopping results from ChatGPT search.