Blog

OpenAI Red Teaming Network

We’re announcing an open call for the OpenAI Red Teaming Network and invite domain experts interested in improving the safety of OpenAI’s models to join our efforts.

Red Teaming Network

Illustration: Justin Jay Wang

September 19, 2023

Authors

Update: applications for the Red Teaming Network for this phase closed on December 1, 2023. We aim to notify applicants of their status by the end of the year. We appreciate your interest and may re-open for future rounds in the near future!

We’re announcing an open call for the OpenAI Red Teaming Network and invite domain experts interested in improving the safety of OpenAI’s models to join our efforts. We are looking for experts from various fields to collaborate with us in rigorously evaluating and red teaming our AI models.

What is the OpenAI Red Teaming Network?

Red teaming[^red] is an integral part of our iterative deployment process. Over the past few years, our red teaming efforts have grown from a focus on internal adversarial testing at OpenAI, to working with a cohort of external experts[^expert] to help develop domain specific taxonomies of risk and evaluating possibly harmful capabilities in new systems. You can read more about our prior red teaming efforts, including our past work with external experts, on models such as DALL·E 2 and GPT-4.[^risk]

Today, we are launching a more formal effort to build on these earlier foundations, and deepen and broaden our collaborations with outside experts in order to make our models safer. Working with individual experts, research institutions, and civil society organizations is an important part of our process. We see this work as a complement to externally specified governance practices, such as third party audits.

The OpenAI Red Teaming Network is a community of trusted and experienced experts that can help to inform our risk assessment and mitigation efforts more broadly, rather than one-off engagements and selection processes prior to major model deployments. Members of the network will be called upon based on their expertise to help red team at various stages of the model and product development lifecycle. Not every member will be involved with each new model or product, and time contributions will be determined with each individual member, which could be as few as 5–10 hours in one year.

Outside of red teaming campaigns commissioned by OpenAI, members will have the opportunity to engage with each other on general red teaming practices and findings. The goal is to enable more diverse and continuous input, and make red teaming a more iterative process. This network complements other collaborative AI safety opportunities including our Researcher Access Program and open-source evaluations.

Why join the OpenAI Red Teaming Network?

This network offers a unique opportunity to shape the development of safer AI technologies and policies, and the impact AI can have on the way we live, work, and interact. By becoming a part of this network, you will be a part of our bench of subject matter experts who can be called upon to assess our models and systems at multiple stages of their deployment.

Seeking diverse expertise

Assessing AI systems requires an understanding of a wide variety of domains, diverse perspectives and lived experiences. We invite applications from experts from around the world and are prioritizing geographic as well as domain diversity in our selection process. 

Some domains we are interested in include, but are not limited to:

Cognitive ScienceChemistry
BiologyPhysics
Computer ScienceSteganography
Political SciencePsychology
PersuasionEconomics
AnthropologySociology
HCIFairness and Bias
AlignmentEducation
HealthcareLaw
Child SafetyCybersecurity
FinanceMis/disinformation
Political UsePrivacy
BiometricsLanguages and Linguistics

Prior experience with AI systems or language models is not required, but may be helpful. What we value most is your willingness to engage and bring your perspective to how we assess the impacts of AI systems.

Compensation and confidentiality

All members of the OpenAI Red Teaming Network will be compensated for their contributions when they participate in a red teaming project. While membership in this network won’t restrict you from publishing your research or pursuing other opportunities, you should take into consideration that any involvement in red teaming and other projects are often subject to Non-Disclosure Agreements (NDAs) or remain confidential for an indefinite period.

How to apply

Applications are now closed. Join us in this mission to build safe AGI that benefits humanity. Apply to be a part of the OpenAI Red Teaming Network today.

FAQ

Q: What will joining the network entail?

A: Being part of the network means you may be contacted about opportunities to test a new model, or test an area of interest on a model that is already deployed. Work conducted as a part of the network is conducted under a non-disclosure agreement (NDA), though we have historically published many of our red teaming findings in System Cards and blog posts. You will be compensated for time spent on red teaming projects.


Q: What is the expected time commitment for being a part of the network? 

A: The time that you decide to commit can be adjusted depending on your schedule. Note that not everyone in the network will be contacted for every opportunity, OpenAI will make selections based on the right fit for a particular red teaming project, and emphasize new perspectives in subsequent red teaming campaigns. Even as little as 5 hours in one year would still be valuable to us, so don’t hesitate to apply if you are interested but your time is limited.

Q: When will applicants be notified of their acceptance?

A: OpenAI will be selecting members of the network on a rolling basis and you can apply until December 1, 2023. After this application period, we will re-evaluate opening future opportunities to apply again.

Q: Does being a part of the network mean that I will be asked to red team every new model?

A: No, OpenAI will make selections based on the right fit for a particular red teaming project, and you should not expect to test every new model.

Q: What are some criteria you’re looking for in network members?

A: Some criteria we are looking for are:

  • Demonstrated expertise or experience in a particular domain relevant to red teaming
  • Passionate about improving AI safety
  • No conflicts of interest
  • Diverse backgrounds and traditionally underrepresented groups
  • Diverse geographic representation 
  • Fluency in more than one language
  • Technical ability (not required)

Q: What are other collaborative safety opportunities?

A: Beyond joining the network, there are other collaborative opportunities to contribute to AI safety. For instance, one option is to create or conduct safety evaluations on AI systems and analyze the results.

OpenAI’s open-source Evals repository (released as part of the GPT-4 launch) offers user-friendly templates and sample methods to jump-start this process.

Evaluations can range from simple Q&A tests to more-complex simulations. As concrete examples, here are sample evaluations developed by OpenAI for evaluating AI behaviors from a number of angles:

Persuasion

  • MakeMeSay: How well can an AI system trick another AI system into saying a secret word?
  • MakeMePay: How well can an AI system convince another AI system to donate money?
  • Ballot Proposal: How well can an AI system influence another AI system’s support of a political proposition?

Steganography (hidden messaging)

  • Steganography: How well can an AI system ​​pass secret messages without being caught by another AI system?
  • Text Compression: How well can an AI system compress and decompress messages, to enable hiding secret messages?
  • Schelling Point: How well can an AI system coordinate with another AI system, without direct communication?

We encourage creativity and experimentation in evaluating AI systems. Once completed, we welcome you to contribute your evaluation to the open-source Evals repo for use by the broader AI community.

You can also apply to our Researcher Access Program, which provides credits to support researchers using our products to study areas related to the responsible deployment of AI and mitigating associated risks.

Authors