About the Team
The Safety Systems team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.
The Safety Reasoning Research team sits at the intersection of short-term pragmatic projects and long-term fundamental research, prioritizing rapid system development while maintaining technical robustness. Key focus areas include improving foundational models’ ability to reason accurately about safety, values, and cultural norms; refining moderation models; driving rapid policy improvements; and addressing critical societal challenges such as election misinformation. As we head into 2024, the team seeks candidates adept at novel abuse discovery and policy iteration, in line with our high-priority goals of multimodal moderation and digital safety.
About the Role
To help API users monitor and prevent unwanted use, we developed the moderation endpoint, a tool for checking whether content complies with OpenAI's content policy. Developers can use it to identify content that our content policy prohibits and take action (e.g., block it). We seek a Research Engineer to help design and build a robust pipeline for data management, model training, and deployment, enabling consistent improvement of the moderation model.
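As an illustrative sketch of how a developer might act on a moderation result: the `flagged` and `category_scores` fields below mirror the shape of the endpoint's response, but the thresholding logic and the 0.5 cutoff are hypothetical illustrations, not OpenAI's actual policy logic.

```python
def should_block(result, threshold=0.5):
    """Decide whether to block content given a moderation-style result.

    `result` is a dict with a 'flagged' bool and a 'category_scores'
    dict of per-category floats. The threshold here is an illustrative
    assumption, not part of the real endpoint's semantics.
    """
    if result["flagged"]:
        return True
    # Conservatively block if any single category score is high.
    return any(score >= threshold for score in result["category_scores"].values())


# Example with a mocked moderation response:
mock = {"flagged": False, "category_scores": {"hate": 0.02, "violence": 0.71}}
print(should_block(mock))  # True: "violence" exceeds the illustrative threshold
```

In practice, a developer would obtain `result` from the moderation endpoint and then apply application-specific handling such as this.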
In this role, you will:
Conduct applied research to improve the ability of foundational models to accurately reason about questions of human values, morals, ethics, and cultural norms, and apply these improved models to practical safety challenges.
Develop and refine AI moderation models to detect and mitigate known and emerging patterns of AI misuse and abuse.
Work with policy researchers to adapt and iterate on our content policies to ensure effective prevention of harmful behavior.
Contribute to research on multimodal content analysis to enhance our moderation capabilities.
Develop and improve pipelines for automated data labeling and augmentation, model training, evaluation, and deployment, including active-learning processes and routines for calibration and validation-data refreshes.
Design and experiment with an effective red-teaming pipeline to examine the robustness of our harm-prevention systems and identify areas for future improvement.
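The active-learning work mentioned above can be illustrated with a minimal uncertainty-sampling sketch: select the unlabeled examples where a binary classifier is least confident (scores closest to the 0.5 decision boundary) and route them to human labelers. The scoring scheme and selection rule are assumptions for illustration, not the team's actual pipeline.

```python
def select_for_labeling(scores, k):
    """Uncertainty sampling: return indices of the k examples whose
    predicted scores are closest to the 0.5 decision boundary,
    i.e. where the model is least confident."""
    ranked = sorted(range(len(scores)), key=lambda i: abs(scores[i] - 0.5))
    return ranked[:k]


# Model scores for five unlabeled examples; 0.52 and 0.48 are most uncertain.
scores = [0.95, 0.52, 0.10, 0.48, 0.80]
print(select_for_labeling(scores, 2))  # [1, 3]
```

Labels gathered this way would feed back into the training and validation-refresh routines described above.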
You might thrive in this role if you:
Possess 5+ years of research engineering experience and proficiency in Python or similar languages.
Have experience with large-scale AI systems and multimodal datasets (a plus).
Are proficient in AI safety topics such as RLHF, adversarial training, robustness, and fairness and bias (a strong plus).
Show enthusiasm for AI safety and dedication to enhancing the safety of cutting-edge AI models for real-world use.
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.