Zico Kolter Joins OpenAI’s Board of Directors
We’re strengthening our governance with expertise in AI safety and alignment. Zico will also join the Safety & Security Committee.
We’re announcing the appointment of Zico Kolter to OpenAI’s Board of Directors. A professor and the Director of the Machine Learning Department at Carnegie Mellon University, Zico focuses predominantly on AI safety, alignment, and the robustness of machine learning classifiers. His research and expertise span new deep network architectures, innovative methodologies for understanding the influence of data on models, and automated methods for evaluating AI model robustness, making him an invaluable technical director for our governance.
Zico will also join the Board’s Safety and Security Committee alongside directors Bret Taylor, Adam D’Angelo, Paul Nakasone, Nicole Seligman, and Sam Altman (CEO), as well as OpenAI technical experts. The committee is responsible for making recommendations on critical safety and security decisions for all OpenAI projects.
Bret Taylor, Chairman of the Board, welcomed Zico, remarking, “Zico adds deep technical understanding and perspective in AI safety and robustness that will help us ensure artificial general intelligence benefits all of humanity.”
Zico Kolter is a Professor of Computer Science and the Director of the Machine Learning Department at Carnegie Mellon University, where he has been a faculty member for 12 years. Zico completed his Ph.D. in computer science at Stanford University in 2010, followed by a postdoctoral fellowship at MIT from 2010 to 2012. Throughout his career, he has made significant contributions to machine learning, authoring numerous award-winning papers at conferences such as NeurIPS, ICML, and AISTATS.
Zico’s research includes developing the first methods for creating deep learning models with guaranteed robustness. He pioneered techniques for embedding hard constraints into AI models using classical optimization within neural network layers. More recently, in 2023, his team developed methods for automatically assessing the safety of large language models (LLMs), demonstrating that existing model safeguards can be bypassed through automated optimization techniques. Alongside his academic work, Zico has collaborated closely with industry throughout his career, formerly as Chief Data Scientist at C3.ai, and currently as Chief Expert at Bosch and Chief Technical Advisor at Gray Swan, a startup specializing in AI safety and security.