GPT-4o contributions
Language
Pre-training leads
Post-training leads
Architecture leads
Optimization leads
Long-context lead
Pre-training Data leads
Tokenizer lead
Human data leads
Eval lead
Data flywheel lead
Inference lead
Inference Productionization lead
Post-training infrastructure leads
Pre-training organization lead
Pre-training program lead
Post-training organization leads
Post-training program lead
Core contributors
Multimodal
Multimodal lead
Post-training Multimodal lead
Audio Pre-training leads
Audio Post-training leads
Visual perception leads
Visual generation leads
Science leads
Data acquisition leads
Data infrastructure leads
Human data lead
Encoders leads
Decoders leads
Interruptions leads
Inference lead
Real-time AV platform leads
Front-end leads
Post-training Multimodal Infrastructure leads
Applied Eng lead
Audio manager
Multimodal organization lead
Program lead
Core contributors
Platform
Data Systems lead
Model distribution leads
ML leads
Runtime lead
Systems lead
Kernels lead
Hardware health leads
Supercomputing leads
Preparedness, Safety, Policy
Safety lead
Audio safety lead
Preparedness lead
Red-teaming lead
Core contributors
Model Launch and Deployment
Lead
Additional contributions
Additional Leadership
Legal
Blog post authorship
Demo content + production
Communications + Marketing
Resource Allocation & Problem Solving
Inference Compute
Security and privacy
GTM, Pricing, Finance
We also acknowledge and thank every OpenAI team member not explicitly mentioned above, including the amazing people on the executive assistant, finance, go-to-market, human resources, legal, operations, and recruiting teams. From hiring everyone in the company, to making sure we have an amazing office space, to building the administrative, HR, legal, and financial structures that allow us to do our best work, everyone at OpenAI has contributed to GPT-4o.
We thank Microsoft for their partnership, especially Microsoft Azure for supporting model training with infrastructure design and management, and the Microsoft Bing team and Microsoft's safety teams for their partnership on safe deployment.
We are grateful to our expert adversarial testers and red teamers who helped test our models at early stages of development and informed our risk assessments as well as the system card. Participation in this red-teaming process is not an endorsement of OpenAI's deployment plans or policies.
*Contributors listed in alphabetical order