
February 20, 2018

Preparing for malicious uses of AI


We’ve co-authored a paper that forecasts how malicious actors could misuse AI technology and outlines potential ways to prevent and mitigate these threats. This paper is the outcome of almost a year of sustained work with our colleagues at the Future of Humanity Institute, the Centre for the Study of Existential Risk, the Center for a New American Security, the Electronic Frontier Foundation, and others.

AI challenges global security because it lowers the cost of conducting many existing attacks, creates new threats and vulnerabilities, and further complicates the attribution of specific attacks. Given the changes to the threat landscape that AI seems to bring, the report makes some high-level recommendations that companies, research organizations, individual practitioners, and governments can act on to ensure a safer world:

  • Acknowledge AI’s dual-use nature: AI is a technology capable of immensely positive and immensely negative applications. We should take steps as a community to better evaluate research projects for the risk of misuse by malicious actors, and engage with policymakers to understand areas of particular sensitivity. As we write in the paper: “Surveillance tools can be used to catch terrorists or oppress ordinary citizens. Information content filters could be used to bury fake news or manipulate public opinion. Governments and powerful private actors will have access to many of these AI tools and could use them for public good or harm.” Some potential solutions to these problems include pre-publication risk assessments for certain areas of research, selectively sharing some types of research with a significant safety or security component among a small set of trusted organizations, and exploring how to embed norms into the scientific community that are responsive to dual-use concerns.

  • Learn from cybersecurity: The computer security community has developed practices that are directly relevant to AI researchers, and we should consider adopting them in our own research. These range from “red teaming” by intentionally trying to break or subvert our own systems (a minimal sketch appears after this list), to investing in tech forecasting to spot threats before they arrive, to conventions around the confidential reporting of vulnerabilities discovered in AI systems.

  • Broaden the discussion: AI is going to alter the global threat landscape, so we should involve a broader cross-section of society in these discussions. Parties could include civil society, national security experts, businesses, ethicists, the general public, and other researchers.
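
To make the red-teaming point above more tangible, here is a minimal, illustrative sketch of one way to probe an AI system by intentionally trying to subvert it: computing a small adversarial perturbation against a toy classifier. The model, weights, and inputs below are assumptions made up for the example, not anything from the report.

```python
# A minimal red-teaming sketch (illustrative only): probing a toy
# logistic-regression classifier with an FGSM-style adversarial perturbation.
# The model, weights, and inputs are made up for this example.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=100)                          # hypothetical trained weights
b = 0.1
x = 0.05 * w + rng.normal(scale=0.01, size=100)   # a benign input the model scores highly
y = 1.0                                           # its true label

def predict(x):
    return sigmoid(w @ x + b)

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# Fast-gradient-sign step: nudge each input feature in the direction that
# increases the loss, within a small budget epsilon.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(f"confidence on clean input:     {predict(x):.3f}")
print(f"confidence on perturbed input: {predict(x_adv):.3f}")
```

In a real red-teaming exercise the same spirit applies at larger scale: a dedicated team tries to break or subvert the deployed system before an adversary does, and the findings feed back into training and into confidential vulnerability reporting.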

Like our work on concrete problems in AI safety, we’ve grounded some of the problems posed by the malicious use of AI in concrete scenarios, such as: persuasive ads generated by AI systems being used to target the administrator of a security system; cybercriminals using neural networks and “fuzzing” techniques to create computer viruses with automatic exploit generation capabilities; malicious actors hacking a cleaning robot so that it delivers an explosives payload to a VIP; and rogue states using omnipresent AI-augmented surveillance systems to pre-emptively arrest people who fit a predictive risk profile.

We’re excited to start having this discussion with our peers, policymakers, and the general public; we’ve spent the last two years researching and solidifying our internal policies at OpenAI, and we’re now going to begin engaging a wider audience on these issues. We’re especially keen to work with more researchers who see themselves contributing to the policy debates around AI as well as making research breakthroughs.