
June 20, 2024

Empowering defenders through our Cybersecurity Grant Program

Highlighting innovative research and AI integration in cybersecurity.


We’re sharing more about the work we have sponsored over the past year under our Cybersecurity Grant Program.

In 2023, we launched the Cybersecurity Grant Program with a bold vision: to equip cyber defenders with the most advanced AI models and to empower groundbreaking research at the nexus of cybersecurity and artificial intelligence. The enthusiastic response from the community has exceeded our expectations: we have received over 600 applications, underscoring both the need for and the impact of sustained research dialogue between OpenAI and the cybersecurity community.

Selected projects

Since its inception, the program has supported a diverse array of projects. We are excited to highlight a few of them. 

Wagner Lab from UC Berkeley

Professor David Wagner’s security research lab at UC Berkeley is pioneering techniques aimed at defending against prompt-injection attacks in large language models (LLMs). The group is working with OpenAI to enhance the trustworthiness of these models and protect them against cybersecurity threats.

Coguard

Albert Heinle, co-founder and CTO at Coguard, uses AI to reduce software misconfiguration, a common cause of security incidents. Software configuration is complex, and that complexity is compounded when software is connected across networks and clusters. Current solutions rely on outdated, rules-based policies. AI can help automate the detection of misconfigurations and keep detection rules up to date.

Mithril Security

Mithril has developed a proof-of-concept to fortify inference infrastructure for LLMs, including open-source tools to deploy AI models on GPUs with secure enclaves based on Trusted Platform Modules (TPMs). This project aims to demonstrate that data can be sent to AI providers without any data exposure, even to administrators. Their work is publicly available on GitHub, along with a whitepaper detailing their architecture.
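
As a rough illustration of the attestation idea behind this design (this is not Mithril’s code; the measurement value and helper functions below are hypothetical), a client could refuse to send data unless the enclave reports an expected measurement:

```python
import hmac
import hashlib

# Hypothetical expected measurement of the approved enclave image,
# e.g. published by the AI provider alongside its deployment tooling.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-image-v1").hexdigest()

def is_attestation_trusted(reported_measurement: str) -> bool:
    """Return True only if the enclave reports the expected measurement.

    compare_digest avoids timing side channels when comparing digests.
    """
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

def send_if_trusted(payload: bytes, reported_measurement: str) -> None:
    """Gate data release on a successful attestation check (illustrative only)."""
    if not is_attestation_trusted(reported_measurement):
        raise RuntimeError("Enclave attestation failed; refusing to send data.")
    # In a real deployment this would be an encrypted channel terminated
    # inside the enclave; here we only illustrate the gating decision.
    print(f"Sending {len(payload)} bytes to attested enclave.")

send_if_trusted(b"confidential prompt", EXPECTED_MEASUREMENT)
```

In a TPM-based flow the measurement would come from a hardware-signed quote rather than a value the client computes itself; the sketch only shows the client-side decision to release data after verification.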

Gabriel Bernadett-Shapiro

An individual grantee, Gabriel Bernadett-Shapiro, created the AI OSINT workshop and AI Security Starter Kit, offering technical training on the basics of LLMs and free tools for students, journalists, investigators, and information-security professionals. In particular, Gabriel has emphasized training for international atrocity crime investigators and intelligence-studies students at Johns Hopkins University, to help ensure they have the best tools to leverage AI in critical and challenging environments.

Breuer Lab at Dartmouth

Neural networks are vulnerable to attacks in which adversaries reconstruct private training data by interacting with the model. Defending against these attacks typically requires costly tradeoffs in model accuracy and training time. Professor Adam Breuer’s lab at Dartmouth is developing new defense techniques that prevent these attacks without compromising accuracy or efficiency.

Security Lab at Boston University (SeclaBU)

Identifying and reasoning about code vulnerabilities is an important and active area of research. Ph.D. candidate Saad Ullah, Professor Gianluca Stringhini from SeclaBU, and Professor Ayse Coskun from the Peac Lab at Boston University are working to improve the ability of LLMs to detect and fix vulnerabilities in code. This research could enable cyber defenders to catch and prevent code exploits before they are used maliciously.
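
As a simple illustration of this kind of workflow (a sketch only, not SeclaBU’s methodology; the model name and prompt are assumptions), an LLM can be asked to review a snippet via the OpenAI API:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A deliberately vulnerable snippet: user input is concatenated into SQL.
snippet = '''
def find_user(db, username):
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return db.execute(query)
'''

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice for illustration
    messages=[
        {"role": "system",
         "content": "You are a code reviewer. List any security vulnerabilities "
                    "in the code and suggest a fix."},
        {"role": "user", "content": snippet},
    ],
)

print(response.choices[0].message.content)
```

The snippet concatenates user input directly into a SQL string, a classic injection pattern that a model-based reviewer should flag and propose parameterized queries for.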

CY-PHY Security Lab from the University of California, Santa Cruz (UCSC)

Professor Alvaro Cardenas’ research group at UCSC is exploring how foundation models can be used to design agents that respond autonomously to computer-network intruders, otherwise known as autonomous cyber defense agents. The project will compare the advantages and disadvantages of foundation-model agents against counterparts trained with reinforcement learning (RL), and then explore how the two approaches can work together to improve network security and the triage of threat information.

MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL)

Stephen Moskal, Erik Hemberg, and Una-May O’Reilly from the MIT Computer Science and Artificial Intelligence Laboratory are exploring how to automate the decision process and perform actionable responses using prompt-engineering approaches in a plan-act-report loop for red-teaming. Additionally, the group is exploring LLM-agent capabilities in Capture-the-Flag (CTF) challenges: exercises aimed at discovering vulnerabilities in a controlled environment.
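
A bare-bones version of such a loop might look like the following sketch (a conceptual outline rather than the group’s implementation; the model choice, prompts, and simulated act step are assumptions):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Single LLM call used for the plan and report steps (model is an assumption)."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def act(plan: str) -> str:
    """Placeholder for executing the plan inside an authorized test range.
    A real harness would run tools (e.g. a scanner) and capture their output."""
    return f"Simulated execution log for plan:\n{plan[:200]}"

objective = "Enumerate exposed services on an in-scope lab host and summarize findings."
report = "No actions taken yet."

for step in range(3):  # fixed number of iterations for the sketch
    plan = ask(f"Objective: {objective}\nPrevious report: {report}\nPropose the next step.")
    observations = act(plan)
    report = ask(f"Plan: {plan}\nObservations: {observations}\nWrite a short status report.")
    print(f"--- Step {step + 1} ---\n{report}\n")
```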

Empowering defenders with ChatGPT

ChatGPT has emerged as one of the most popular and frequently used tools among cybersecurity professionals. The most common uses for cyber defenders include translating and rephrasing technical jargon or log events into simpler language, writing code to analyze artifacts during investigations, creating log parsers, and summarizing an incident’s status under strict time constraints.
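
For instance, a defender might ask ChatGPT to draft a small parser like the one below (a minimal sketch; the sshd-style auth-log format is assumed for illustration):

```python
import re
from collections import Counter

# Matches syslog-style sshd failures, e.g.:
# "Jun 20 03:14:07 host sshd[1234]: Failed password for root from 203.0.113.5 port 52144 ssh2"
FAILED_LOGIN = re.compile(
    r"Failed password for (?:invalid user )?(?P<user>\S+) from (?P<ip>\S+)"
)

def count_failed_logins(lines):
    """Return a Counter of (user, source IP) pairs seen in failed-login lines."""
    counts = Counter()
    for line in lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[(match.group("user"), match.group("ip"))] += 1
    return counts

sample = [
    "Jun 20 03:14:07 host sshd[1234]: Failed password for root from 203.0.113.5 port 52144 ssh2",
    "Jun 20 03:14:09 host sshd[1234]: Failed password for invalid user admin from 203.0.113.5 port 52150 ssh2",
]
for (user, ip), n in count_failed_logins(sample).most_common():
    print(f"{n} failed logins for {user!r} from {ip}")
```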

To amplify its benefits, we've granted free access to ChatGPT Plus to many in the cybersecurity community, seeing this as a key opportunity to enhance AI adoption in cyber defense.

We will continue offering free ChatGPT Plus accounts and are extending this initiative to include ChatGPT Team and ChatGPT Enterprise. Our expansion begins with our partners at the Research and Education Network for Uganda (RENU).

Apply now!

If you share our vision for a secure and innovative AI-driven future, we invite you to submit a proposal and join us in our effort to advance defensive cybersecurity technologies.

Submit your proposal here