About the Team
The Safety Systems team is responsible for the safety work needed to ensure our best models can be safely deployed to the real world and benefit society. The team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.
The Human-AI Interaction (HAI) team is responsible for pursuing effective and efficient use of human expertise for safety, policy, cultural, and social topics via novel approaches to Human-AI collaboration. Our goal is to ensure the model is aligned with human values by studying when to provide human feedback and how to do so efficiently and effectively with the model’s assistance.
About the Role
As one of the founding Research Engineers on the Human-AI Interaction team, you will play a crucial role in pioneering methodologies and implementing systems that integrate human feedback into AI models and systems. You will have the opportunity to shape the team's vision, work at the cutting edge of AI research, and collaborate closely with cross-functional teams to improve the safety, fairness, and transparency of AI systems by leveraging human expertise.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Experiment with novel methods for model-assisted instruction, rules, policy development, and prompt engineering.
Design and implement systems for collecting and integrating high-quality human feedback into AI systems.
Explore how various forms of human feedback can best benefit model training.
Design evaluation practices for model safety behavior using human data.
Study approaches to human-AI collaboration on challenging tasks, including ambiguous or controversial topics and AI in sensitive or regulated domains.
Contribute to the development of tooling and workflows for offline iteration and feedback integration.
You might thrive in this role if you:
Have a strong belief in and passion for the value of human feedback in developing safe AI for real-world use.
Bring 3+ years of experience in the field of AI safety, especially in areas like RLHF, human-AI collaboration, and human feedback collection.
Hold a Ph.D. or other degree in computer science, machine learning, or a related field.
Have an in-depth understanding of deep learning research and/or strong engineering skills.
Stay goal-oriented instead of method-oriented, and are not afraid of tedious but high-value work when needed.
Are a team player who enjoys collaborative work environments.
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.