Updated: October 10, 2023

Supplier Security Measures

These Supplier Security Measures apply to Supplier when it provides services (“Services”) to OpenAI and are incorporated into the applicable agreement between Supplier and OpenAI (“Agreement”). Terms used but not defined here have the meanings given in the Agreement.

1. Policies and Codes of Conduct

Supplier maintains information security-related policies, including the following measures:

  • Supplier maintains a documented information security program consistent with industry standard practices and complies with Data Protection Laws, including policies, procedures, and technical and physical controls designed to ensure the security, availability, integrity, and confidentiality of OpenAI Data. Supplier reviews its information security program at least annually, and in any case after any major changes in applicable law or regulatory guidance. Supplier assigns responsibility for information security management to senior personnel or dedicated individuals.

  • Supplier maintains a Code of Conduct and communicates the policy to all relevant staff. Supplier implements processes designed to ensure ongoing compliance with these policies and to identify, and enable Supplier to take action against, any areas of non-compliance. Failure to comply with these policies is addressed through appropriate disciplinary action.

2. Personnel

Supplier maintains industry best practices for vetting, training, and managing personnel, including:

  • Background checks, where legally permissible, of employees and subcontractors who have access to OpenAI Data or who support other aspects of the Services.

  • Annual security and privacy training for employees and Subcontractors, and supplemental security training as appropriate.

  • Supplier requires all Supplier employees and Subcontractors to execute a confidentiality agreement as a condition of employment or engagement and to follow policies on the protection of OpenAI Data.

3. System and Workstation Control

Supplier maintains industry best practices for securing Supplier’s corporate systems, such as laptops, mobile devices, and on-premises infrastructure, including:

  • Endpoint management of devices handling OpenAI Data;

  • Automatic application of security configurations to workstations and mandatory patch management;

  • Mandatory full-disk encryption on all corporate workstations; and

  • Restrictions on use of portable or removable media.

4. Identity, Authentication, and Authorization Controls

Supplier maintains industry best practices for authenticating and authorizing internal employee, subcontractor, and service access, including the following measures:

  • Internal policies and procedures governing access management;

  • Supplier maintains an accurate and up-to-date list of all personnel who have access to Systems and has a process designed to disable an individual’s access promptly, and in any case within one business day of that individual’s transfer or termination;

  • Supplier uses single sign-on (SSO), or equivalent, to authenticate to third-party services used in the delivery of the Services. Role-based access control (RBAC) is used when provisioning internal access to the Services (see the illustrative sketch following this list);

  • Mandatory multi-factor authentication is used for authenticating to Supplier’s identity provider;

  • Non-privileged users are prohibited from executing privileged functions, including, but not limited to, disabling, circumventing, or altering implemented security safeguards;

  • Privileged accounts (e.g., administrator, root) are only used when technically required under change control procedures and not for day-to-day operation;

  • When privileged account access is used, that access is logged and monitored and can be attributed to a named individual;

  • Established review and approval processes for any access requests to services storing OpenAI Data;

  • Periodic access audits designed to ensure access levels are appropriate for the roles each user performs;

  • Established procedures for reporting and revoking compromised credentials, such as passwords and API keys, if Supplier becomes aware that an account has been compromised; and

  • Established password reset procedures, including procedures designed to verify the identity of a user prior to issuing a new, replacement, or temporary password.
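
By way of illustration only, the following sketch shows how role-based provisioning, prompt access disablement, and a periodic access review of the kind described above might be modeled. The role names, permissions, and helper functions are hypothetical and do not describe any particular Supplier implementation.

```python
from dataclasses import dataclass

# Hypothetical role-to-permission mapping (illustrative only).
ROLE_PERMISSIONS = {
    "support_engineer": {"read:tickets"},
    "service_admin": {"read:tickets", "write:config", "read:audit_logs"},
}

@dataclass
class Account:
    user: str
    role: str
    active: bool = True

def is_authorized(account: Account, permission: str) -> bool:
    """RBAC check: access is granted only through the account's role."""
    if not account.active:
        return False
    return permission in ROLE_PERMISSIONS.get(account.role, set())

def disable_access(account: Account) -> None:
    """Disable access promptly on transfer or termination."""
    account.active = False

def access_review(accounts: list[Account], expected_roles: dict[str, str]) -> list[str]:
    """Periodic audit: flag active accounts whose provisioned role no longer
    matches the role expected for that user."""
    return [a.user for a in accounts if a.active and expected_roles.get(a.user) != a.role]

if __name__ == "__main__":
    alice = Account("alice", "service_admin")
    print(is_authorized(alice, "write:config"))   # True while active
    disable_access(alice)
    print(is_authorized(alice, "write:config"))   # False after termination
```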

5. Logging, Audit, and Accountability

Supplier will create and retain audit records to enable the monitoring, analysis, investigation, and reporting of unlawful, unauthorized, or inappropriate activity. Supplier will review and analyze audit records on a regular basis to detect significant unauthorized activity with respect to systems that process OpenAI Data. Supplier will provide audit logs to OpenAI upon request.
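
By way of illustration only, the sketch below shows one way structured, attributable audit records of the kind described above could be produced and retained. The field names and log destination are assumptions, not a description of Supplier’s actual logging pipeline.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger writing structured, append-only records to a file.
audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("audit.log"))

def record_event(actor: str, action: str, resource: str, outcome: str) -> None:
    """Emit one attributable audit record (who, what, when, outcome)."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # named individual, per the access controls above
        "action": action,        # e.g. "read", "export", "delete"
        "resource": resource,
        "outcome": outcome,      # e.g. "allowed", "denied"
    }))

record_event("alice@example.com", "export", "dataset/openai-data", "denied")
```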

6. Customer Identity, Authentication, and Authorization Controls

Supplier maintains industry best practices for authenticating and authorizing customers to the Services, including the following measures:

  • Use of a third-party identity access management service to manage OpenAI users’ identities, meaning Supplier does not store user-provided passwords on users’ behalf; and

  • Logically separating OpenAI Data by organization account using unique identifiers. Within an organization account, unique user accounts are supported.
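
By way of illustration only, the following sketch shows how logical separation of OpenAI Data by organization account could be enforced using unique identifiers. The in-memory store and function names are hypothetical assumptions, not a description of Supplier’s actual data model.

```python
import uuid

# Hypothetical in-memory store keyed by organization ID (illustrative only).
RECORDS: dict[str, dict[str, dict]] = {}

def create_record(org_id: str, payload: dict) -> str:
    """Every record is written under its organization's unique identifier."""
    record_id = str(uuid.uuid4())
    RECORDS.setdefault(org_id, {})[record_id] = payload
    return record_id

def get_record(org_id: str, record_id: str) -> dict:
    """Reads are scoped to the caller's organization; cross-tenant IDs do not resolve."""
    try:
        return RECORDS[org_id][record_id]
    except KeyError:
        raise PermissionError("record not found in this organization") from None

rid = create_record("org_a", {"note": "example"})
get_record("org_a", rid)            # returns the record
# get_record("org_b", rid)          # would raise PermissionError
```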

7. Cloud Infrastructure and Network Security

Supplier maintains industry best practices for securing and operating its cloud infrastructure, including the following measures:

  • Separate production and non-production environments;

  • Primary backend resources are deployed behind a VPN;

  • The Services are routinely audited for security vulnerabilities;

  • Application secrets and service accounts are managed by a secrets management service;

  • Network security policies and firewalls are configured for least-privilege access against a pre-established set of permissible traffic flows. Non-permitted traffic flows are blocked (see the illustrative sketch following this list); and

  • Services logs are monitored for security and availability.
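
By way of illustration only, the sketch below shows a default-deny evaluation against a pre-established set of permissible traffic flows, as described in the list above. The networks, ports, and flow definitions are assumptions, not Supplier’s actual firewall configuration.

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow-list of permissible traffic flows (illustrative only).
# Anything not matched below is denied by default (least privilege).
ALLOWED_FLOWS = [
    {"source": ip_network("10.0.1.0/24"), "dest": ip_network("10.0.2.0/24"), "port": 443},
    {"source": ip_network("10.0.2.0/24"), "dest": ip_network("10.0.3.0/24"), "port": 5432},
]

def is_permitted(src: str, dst: str, port: int) -> bool:
    """Default-deny evaluation against the pre-established set of permissible flows."""
    return any(
        ip_address(src) in flow["source"]
        and ip_address(dst) in flow["dest"]
        and port == flow["port"]
        for flow in ALLOWED_FLOWS
    )

print(is_permitted("10.0.1.5", "10.0.2.9", 443))   # True: explicitly allowed
print(is_permitted("10.0.1.5", "10.0.3.9", 22))    # False: blocked by default
```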

8. Availability Control

Supplier maintains industry best practices for protecting the availability of the Services against accidental or malicious disruption, including:

  • Ensuring that systems may be restored in the event of an interruption;

  • Ensuring that systems are functioning and faults are reported; and

  • Maintaining an availability management program that continuously and iteratively monitors, analyzes, and evaluates the performance and availability of the System.

9. Segregation Control

Supplier maintains industry best practices for separate processing of data collected for different purposes, including:

  • Logical segregation of OpenAI Data;

  • Restriction of access to data stored for different purposes according to staff roles and responsibilities;

  • Segregation of business information system functions; and

  • Segregation of testing and production information system environments.

10. Vulnerability Management

Supplier maintains a vulnerability management program designed to ensure the prompt remediation of vulnerabilities affecting the Services, including:

  • Supplier regularly scans for vulnerabilities, subscribes to a vulnerability notification service, has a method for prioritizing vulnerability remediation based on risk, and has established remediation timeframes based on risk rating (see the illustrative sketch following this list).

  • Once a patch is released, and the associated security vulnerability has been reviewed and assessed for its applicability and importance, the patch is applied and verified in a timeframe which is commensurate with the risk posed to Systems.

  • Supplier deploys a log management solution and retains logs produced by intrusion detection systems for a minimum period of one year.
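
By way of illustration only, the sketch below shows how remediation deadlines might be derived from risk ratings and used to prioritize open findings. The ratings, timeframes, and finding identifiers are assumptions, not Supplier’s actual remediation SLAs.

```python
from datetime import date, timedelta

# Illustrative risk-rating-to-remediation-deadline mapping (assumed values only).
REMEDIATION_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def remediation_deadline(risk_rating: str, discovered: date) -> date:
    """Derive a remediation due date from the vulnerability's risk rating."""
    return discovered + timedelta(days=REMEDIATION_DAYS[risk_rating.lower()])

def prioritize(findings: list[dict]) -> list[dict]:
    """Order open findings by due date so the riskiest items are remediated first."""
    return sorted(findings, key=lambda f: remediation_deadline(f["risk"], f["discovered"]))

findings = [
    {"id": "CVE-EXAMPLE-1", "risk": "high", "discovered": date(2023, 10, 1)},
    {"id": "CVE-EXAMPLE-2", "risk": "critical", "discovered": date(2023, 10, 5)},
]
for f in prioritize(findings):
    print(f["id"], remediation_deadline(f["risk"], f["discovered"]))
```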

11. Business Continuity and Disaster Recovery

  • Supplier implements and maintains business continuity and disaster recovery plans to address emergencies or other occurrences that could damage or destroy Systems or OpenAI Data, including a disaster recovery plan that is tested at least annually. Supplier may not modify such plans to provide materially less protection to OpenAI without OpenAI’s prior written consent, which may not be unreasonably conditioned or withheld.

  • Backups are taken on a regular basis.

12. Third Party Risk Management

Supplier maintains a program for managing third party security risks, including with respect to any subprocessor or subcontractor to whom Supplier provides OpenAI Data, including the following measures:

  • Written contracts designed to ensure that any subcontractor agrees to maintain reasonable and appropriate safeguards to protect OpenAI Data using at least the same degree of care outlined in this Exhibit.

  • Supplier Security Assessments: All third parties undergo a formal supplier assessment process maintained by Supplier’s Security team.

13. Data Encryption

Encryption keys are protected from unauthorized use, disclosure, alteration, and destruction, and have a backup and recovery process. If a private key is compromised, all associated certificates will be revoked.
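
By way of illustration only, the sketch below uses the open-source `cryptography` package to show key generation, encryption, and key rotation following a suspected compromise. The flow is an assumption for illustration and does not describe Supplier’s actual key management or certificate revocation process.

```python
from cryptography.fernet import Fernet, MultiFernet

# Illustrative only: generate a data-encryption key and encrypt a record.
current_key = Fernet.generate_key()   # in practice, held in a secrets manager with backup/escrow
f = Fernet(current_key)
token = f.encrypt(b"example OpenAI Data record")

# If the current key is suspected compromised, introduce a new key and
# re-encrypt (rotate) existing ciphertexts; the old key material is then retired.
new_key = Fernet.generate_key()
rotator = MultiFernet([Fernet(new_key), Fernet(current_key)])
rotated_token = rotator.rotate(token)

# Only the new key is required to decrypt after rotation.
assert Fernet(new_key).decrypt(rotated_token) == b"example OpenAI Data record"
```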

14. Data Retention

At the expiry or termination of the Agreement, Supplier will, at OpenAI’s option, delete or return all OpenAI Data (excluding any back-up or archival copies which shall be deleted in accordance with Supplier’s data retention schedule), except where Supplier is required to retain copies under applicable laws, in which case Supplier will isolate and protect that OpenAI Data from any further Processing except to the extent required by applicable laws. Supplier will provide OpenAI the ability to configure data retention periods within the product, if applicable to the Services.
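
By way of illustration only, the sketch below shows how configurable retention periods might be applied to identify records due for deletion. The data categories and retention values are assumptions, not Supplier’s actual retention schedule.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical, configurable retention periods (in days) per data category.
RETENTION_DAYS = {"chat_logs": 30, "support_tickets": 365}

def expired(records: list[dict]) -> list[dict]:
    """Return records whose configured retention period has elapsed and which are due for deletion."""
    now = datetime.now(timezone.utc)
    return [
        r for r in records
        if now - r["created_at"] > timedelta(days=RETENTION_DAYS[r["category"]])
    ]

records = [
    {"id": 1, "category": "chat_logs", "created_at": datetime(2023, 8, 1, tzinfo=timezone.utc)},
    {"id": 2, "category": "support_tickets", "created_at": datetime(2023, 9, 1, tzinfo=timezone.utc)},
]
print([r["id"] for r in expired(records)])   # e.g. [1] once 30 days have elapsed
```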

15. Secure Disposal

  • Supplier implements controls designed to ensure the secure disposal of OpenAI Data in accordance with applicable law taking into account available technology so that OpenAI Data cannot be read or reconstructed. 

  • Media will be securely erased electronically before disposal using methods described in the NIST SP 800-88 standard, such as overwriting or degaussing, or physically destroyed prior to disposal or reassignment to another system.
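
By way of illustration only, the sketch below shows a simple overwrite-then-delete routine loosely modeled on the overwriting technique referenced above. Actual media sanitization should follow NIST SP 800-88 using purpose-built, verified tooling appropriate to the storage medium.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Illustrative single-file overwrite-then-delete; real sanitization must use
    verified tooling appropriate to the medium (SSDs in particular may require
    device-level sanitize commands rather than file overwrites)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as fh:
        for _ in range(passes):
            fh.seek(0)
            fh.write(secrets.token_bytes(size))   # overwrite with random data
            fh.flush()
            os.fsync(fh.fileno())
    os.remove(path)
```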

16. Risk Assessments

  • Supplier maintains a risk assessment program that includes regular risk assessments and controls for risk identification, analysis, monitoring, reporting, and corrective action.

  • At least annually, Supplier will perform risk assessments (either internally or with contracted, independent resources) to identify risks to OpenAI Data, risks to Supplier’s business assets (e.g., technical infrastructure), threats against those elements (both internal and external), the likelihood of those threats occurring, and the impact upon the organization.

  • Threat modeling of risks to OpenAI Data to document and triage sources of security risk for prioritization and remediation.

17. Security Incidents

Supplier maintains a security incident response plan for responding to and resolving events that compromise the confidentiality, availability, or integrity of the Services or OpenAI Data, including the following measures:

  • Upon becoming aware of a Security Incident, Supplier agrees to provide written notice without undue delay and, in any case, within 48 hours of becoming aware of the Security Incident. Where possible, such notice will include all available details required under Data Protection Laws for OpenAI to comply with its own notification obligations to regulatory authorities or individuals affected by the Security Incident.

  • Supplier will take reasonable measures to mitigate the risks of further Security Incidents. Where the Security Incident is due to Supplier’s breach of this Exhibit, Supplier will reimburse (subject to the limitations of liability included in the Enterprise Agreement) OpenAI for its actual, out-of-pocket remediation costs and expenses incurred as a result of actions required to be taken under Data Protection Laws or agreed upon between the parties with respect to a Security Incident, including, where applicable: (i) the creation and transmission of legally required notices to affected individuals; (ii) call center support to respond to inquiries; and (iii) legally required credit monitoring services for affected individuals. OpenAI shall have sole discretion to control the timing, content, and manner of any notices provided under this paragraph.

18. Security Evaluations

Supplier performs regular security and vulnerability testing to assess whether key controls are implemented properly and are effective as measured against industry security standards and its own policies and procedures, and to ensure continued compliance with obligations imposed by law, regulation, or contract with respect to the security of OpenAI Data as well as the maintenance and structure of Supplier’s information systems.

19. Security Control Testing

Supplier will engage with a qualified, independent external auditor at least annually to conduct periodic review of Supplier’s security practices against recognized audit standards, such as SOC 2 Type 2 and/or ISO 27001 certification (including surveillance and recertifications), as applicable. Upon request, Supplier agrees to make reports available to OpenAI.

20. Right to Audit

Supplier will keep and maintain complete and accurate books, records, and accounts relating to the Agreement. During the term of this Agreement, and for a period of one year thereafter, and subject to confidentiality obligations, OpenAI may audit Supplier’s relevant records to confirm Supplier’s compliance with these Supplier Security Measures and the Agreement. OpenAI’s auditor will only have access to those books and records of Supplier which are reasonably necessary to confirm such compliance.

21. Annual Testing

If Supplier provides software-as-a-service, platform-as-a-service, or any similar hosted or online services to OpenAI (“Hosted Services”), Supplier shall, at least once per year, perform a suite of independent third-party tests. These tests will be performed upon: (i) the Hosted Services; (ii) all aspects of Supplier’s internet-facing perimeter; and (iii) Supplier’s internal corporate network and internal systems. Supplier will supply OpenAI with details of all third-party tests from the previous year, including the names of the third-party testers and the number of person-hours used. Supplier will, upon OpenAI’s request and under confidentiality obligations, share with OpenAI confirmation that the tests were performed and the test results. Supplier will fix all critical and high severity vulnerabilities of which it becomes aware that could affect the security of OpenAI confidential information or data within 60 days of becoming aware of the vulnerability. If Supplier cannot fix a vulnerability within 60 days, Supplier will promptly inform OpenAI, including all details of the risk to OpenAI arising from Supplier’s inability to fix the vulnerability.

22. Verification Rights

Supplier will use commercially reasonable efforts to respond to appropriately scoped questionnaires from OpenAI that are designed to verify Supplier’s security practices. Questionnaire responses are provided for informational purposes only.