October 24, 2024

OpenAI’s approach to AI and national security

Today, the White House released a National Security Memorandum (NSM) on Artificial Intelligence outlining how the U.S. government can responsibly harness AI to advance national security while establishing essential guardrails for its use. The NSM also recognizes the importance of increasing the supply of, and access to, semiconductor chips, power generation, and data center capacity – all of which we agree are essential to continued U.S. leadership on AI.

At OpenAI, we’re building AI to benefit as many people as possible. Supporting U.S. and allied efforts to advance AI in a way that upholds democratic values is essential to our mission of ensuring AI’s benefits are widely shared. We view the NSM as an important step forward in that effort – here is how we’re currently thinking about national security and our role in it.

Delivering on our mission through democratic AI leadership

We believe a democratic vision for AI is essential to unlocking its full potential and ensuring its benefits are broadly shared. AI is a transformational technology that can be used to strengthen democratic values or to undermine them. That’s why we believe democracies should continue to take the lead in AI development, guided by values like freedom, fairness, and respect for human rights. And it’s why we think countries that share these values should understand how, with the proper safeguards, AI can help protect people, deter adversaries, and even prevent future conflict.

There are important national security use cases that align with our mission. For example, we already collaborate with DARPA to help cyber defenders better protect critical networks. We also work with the U.S. Agency for International Development, which is using ChatGPT to reduce administrative burdens for staff. And we see opportunities to deepen our collaboration with the U.S. National Laboratories, building on our bioscience research partnership with Los Alamos National Laboratory.

At the same time, we need clear guardrails and policies around how AI can be used. That’s why we’re taking a careful, measured approach to our national security partnerships.

Our policies and values

OpenAI’s usage policies prohibit anyone from using our technology to harm people, destroy property, or develop weapons. Over the past several months, we’ve also developed a framework for assessing potential national security partnerships—including a set of values to guide this work. Each potential use case is evaluated through a formal process led by our Product Policy and National Security teams for alignment with both our policies and our values.

The values that guide our work on national security include:

  • Democratic Values: We believe that AI should be developed and used in ways that promote freedom, protect individual rights, and foster innovation. We believe this will require taking tangible steps to democratize access to the technology and maximize its economic, educational, and societal benefits.

  • Safety: Our goal is to protect people from harm. We want AI to be used to mitigate risks, enhance security, and safeguard human rights, and we rigorously evaluate all potential applications to make sure they align with this principle.

  • Responsibility: We believe AI should be used for the common good. Our policies prohibit the use of AI to cause harm or infringe on basic rights, and we apply them strictly to all potential partnerships, especially in sensitive areas like national security.

  • Accountability: AI systems must be developed and deployed with accountability at their core. We believe that all AI applications, especially those involving government and national security, should be subject to oversight, clear usage guidelines, and ethical standards.

Looking ahead

The new framework released by the White House opens up the potential to support more national security work in the U.S. and allied countries in a way that stays true to our mission. For example, we could apply our technology to advance scientific research, enhance logistics, streamline translation and summarization tasks, and study and mitigate civilian harm. Any work we do in this space will continue to go through a rigorous internal review process.  

We believe the U.S. government and U.S. companies like ours have an opportunity to take the lead in setting norms around how AI is safely and responsibly used in the national security context, just as we lead the development of the technology itself. As we explore potential partnerships with the U.S. government and allies, we want to help set those norms with transparency and care.

Author

OpenAI