Researcher Access Program application

Information

We’re interested in supporting researchers using our products to study areas related to the responsible deployment of AI and mitigating associated risks, as well as understanding the societal impact of AI systems. If you are interested in subsidized access to support a research project, please apply for API credits through this program.

Note that this will take you to a third-party provider, SurveyMonkey Apply, where you’ll need to create an account to apply. The application will ask for information about your research question and planned use of OpenAI’s products to facilitate that research.

We encourage applications from early-stage researchers in countries supported by our API, and are especially interested in subsidizing work by researchers with limited financial and institutional resources. Please note that applications are reviewed once every three months (in March, June, September, and December), and the expected turnaround time for accepted applicants is around 4–6 weeks.

Before applying, please take a moment to review our sharing and publication policy.

Researchers remain bound by our Usage Policies and other applicable OpenAI policies; acceptance into the Researcher Access Program is not permission to violate those policies. If you receive a warning or your access to our services is suspended, and you believe this is in error and would like to appeal, please contact us through our help center.

Areas of interest include:

Alignment

  • How can we increase the extent to which an AI system’s objectives are aligned with human preferences?

Fairness & representation

  • How should performance criteria be established for fairness and representation in language models?

  • How can language models be improved to effectively support the goals of fairness and representation in specific, deployed contexts?

Societal impact

  • How do we create measurements for AI’s impact on society?

  • What impact does AI have on different domains and groups of people?

Misuse potential

  • How can systems like the API be misused?

  • What “red teaming” approaches can we develop to help AI developers think about responsibly deploying technologies like this?

Human-AI interaction

  • How can we enhance the ways humans interact with AI models to improve usability and accessibility?

  • What interface designs facilitate more intuitive interactions with AI systems?

  • How can we create AI explanations that are interpretable to non-expert users?

  • How can humans and AI collaborate effectively in decision-making processes?

Economic impacts

  • How can we use models to scale economic research?

  • How can we improve evaluation of economic impacts pre-deployment?

  • How can we measure or forecast the social and economic impacts of AI?

  • What measures can individuals, firms, and governments implement to mitigate harms and maximize the economic benefits of AI?

Generalization and transfer learning

  • How do language models generalize across different domains and tasks?

  • What factors influence a model's ability to transfer knowledge to new, unseen tasks?

  • How can we improve models' performance in low-resource languages or specialized fields?

  • What techniques facilitate continual learning without catastrophic forgetting?

Multimodal measurements

  • How can we integrate language models with other data modalities such as images, audio, or video?

  • How can we develop models that understand and generate text in conjunction with other modalities?

  • In what ways can multimodal understanding enhance tasks like captioning, translation, or content creation?

Apply for API credits