Our structure

We designed OpenAI’s structure—a partnership between our original Nonprofit and a new capped profit arm—as a chassis for OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.

Updated June 28, 2023

We announced our “capped profit” structure in 2019, about three years after founding the original OpenAI Nonprofit.

Since the beginning, we have believed that powerful AI, culminating in AGI—meaning a highly autonomous system that outperforms humans at most economically valuable work—has the potential to reshape society and bring tremendous benefits, along with risks that must be addressed safely. The increasing capabilities of present-day systems make it more important than ever for OpenAI and other AI companies to share the principles, economic mechanisms, and governance models that are core to our respective missions and operations.

Overview

We founded the OpenAI Nonprofit in late 2015 with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity. A project like this might previously have been the province of one or more governments—a humanity-scale endeavor pursuing broad benefit for humankind.

Seeing no clear path in the public sector, and given the success of other ambitious projects in private industry (e.g., SpaceX and Cruise), we decided to pursue this project through private means bound by strong commitments to the public good. We initially believed a 501(c)(3) would be the most effective vehicle to direct the development of safe and broadly beneficial AGI while remaining unencumbered by profit incentives. We committed to publishing our research and data in cases where we felt it was safe to do so and would benefit the public.

We always suspected that our project would be capital intensive, which is why we launched with the goal of $1 billion in donation commitments. Yet over the years, OpenAI’s Nonprofit received approximately $130.5 million in total donations, which funded the Nonprofit’s operations and its initial exploratory work in deep learning, safety, and alignment.

It became increasingly clear that donations alone would not scale with the cost of the computational power and talent required to push core research forward, jeopardizing our mission. So we devised a structure that preserves our Nonprofit’s core mission, governance, and oversight while enabling us to raise the capital needed to pursue it:

  • The OpenAI Nonprofit would remain intact, with its board continuing as the overall governing body for all OpenAI activities.
  • A new for-profit subsidiary would be formed, capable of issuing equity to raise capital and hire world-class talent, but still at the direction of the Nonprofit. Employees working on for-profit initiatives were transitioned to the new subsidiary.
  • The for-profit would be legally bound to pursue the Nonprofit’s mission, and would carry out that mission by engaging in research, development, commercialization, and other core operations. Throughout, OpenAI’s guiding principles of safety and broad benefit would be central to its approach.
  • The for-profit’s equity structure would include caps that limit the maximum financial returns to investors and employees, incentivizing them to research, develop, and deploy AGI in a way that balances commerciality with safety and sustainability rather than focusing on pure profit maximization.
  • The Nonprofit would govern and oversee all such activities through its board in addition to its own operations. It would also continue to undertake a wide range of charitable initiatives, such as sponsoring a comprehensive basic income study, supporting economic impact research, and experimenting with education-centered programs like OpenAI Scholars. Over the years, the Nonprofit also supported a number of other public charities focused on technology, economic impact and justice, including the Stanford University Artificial Intelligence Index Fund, Black Girls Code, and the ACLU Foundation.

In that way, the Nonprofit would remain central to our structure and control the development of AGI, and the for-profit would be tasked with marshaling the resources to achieve this while remaining duty-bound to pursue OpenAI’s core mission. The primacy of the mission above all is encoded in the operating agreement of the for-profit, which every investor and employee is subject to:

OpenAI investor disclaimer

The structure in more detail

While investors typically seek financial returns, we saw a path to aligning their motives with our mission. We achieved this with a few key economic and governance provisions:

  • First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.
  • Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
  • Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
  • Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity. (A simplified numerical sketch of how such a cap works appears below.)
  • Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
Org Structure
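
To make the cap concrete, here is a minimal sketch, in Python, of how a capped-return split could work. The function name, the 10x cap multiple, and the dollar figures are hypothetical assumptions chosen for illustration; they are not OpenAI’s actual commercial terms.

```python
# Illustrative sketch only: the cap multiple and figures below are
# hypothetical assumptions, not OpenAI's actual commercial terms.

def distribute_returns(invested: float, cap_multiple: float, value_created: float):
    """Split value between an investor (up to a fixed cap) and the Nonprofit.

    invested:      capital the investor contributed
    cap_multiple:  hypothetical cap on returns (e.g., 10x the investment)
    value_created: total value attributable to that investment
    """
    cap = invested * cap_multiple               # the most the investor can ever receive
    to_investor = min(value_created, cap)       # returns are capped at that amount
    to_nonprofit = max(value_created - cap, 0)  # all residual value flows to the Nonprofit
    return to_investor, to_nonprofit


# Example: a $100M investment under a hypothetical 10x cap, with $5B of value created.
investor, nonprofit = distribute_returns(100e6, 10, 5e9)
print(f"Investor receives ${investor / 1e9:.1f}B (capped); "
      f"Nonprofit receives ${nonprofit / 1e9:.1f}B")
```

However the numbers play out, everything above the cap flows back to the Nonprofit, which is the property the structure is designed to guarantee.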

We strive to preserve these core governance and economic components of our structure when exploring opportunities to accelerate our work. Indeed, given the path to AGI is uncertain, our structure is designed to be adaptable—we believe this is a feature, not a bug.

Microsoft

Shortly after announcing the OpenAI capped profit structure (and our initial round of funding) in 2019, we entered into a strategic partnership with Microsoft. We subsequently extended our partnership, expanding both Microsoft’s total investment and the scale and breadth of our commercial and supercomputing collaborations.

While our partnership with Microsoft includes a multibillion dollar investment, OpenAI remains an entirely independent company governed by the OpenAI Nonprofit. Microsoft is a non-voting board observer and has no control. And, as explained above, AGI is explicitly carved out of all commercial and IP licensing agreements.

These arrangements exemplify why we chose Microsoft as our compute and commercial partner. From the beginning, they accepted our capped equity offer and our request to leave AGI technologies and governance for the Nonprofit and the rest of humanity. They have also worked with us to create and refine our joint safety board that reviews our systems before they are deployed. Harkening back to our origins, they understand that this is a unique and ambitious project that requires resources at the scale of the public sector, as well as the same commitment to sharing the ultimate results with everyone.

Our board

OpenAI is governed by the board of the OpenAI Nonprofit, currently composed of Independent Directors Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.