
January 16, 2024

Democratic inputs to AI grant program: lessons learned and implementation plans


We funded 10 teams from around the world to design ideas and tools to collectively govern AI. We summarize the innovations, outline our learnings, and call for researchers and engineers to join us as we continue this work.

As AI becomes more advanced and widely used, it is essential to involve the public in deciding how AI should behave, so that we can better align our models to the values of humanity. In May, we announced the Democratic Inputs to AI grant program. We then awarded $100,000 to each of 10 teams, selected from nearly 1,000 applicants, to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public.

At OpenAI, we’ll build on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. We’re excited to combine our research with ideas and prototypes developed by the grant teams in the coming months.

In this update, we will cover:

  • How our grant recipients innovated on democratic technology

  • Key learnings from the grant program

  • Our implementation plans

How our grant recipients innovated on democratic technology

We received nearly 1,000 applications across 113 countries. There were far more than 10 qualified teams, but a joint committee of OpenAI employees and external experts in democratic governance selected the final 10 teams to span a diverse set of backgrounds and approaches: the chosen teams have members from 12 different countries, and their expertise spans various fields, including law, journalism, peace-building, machine learning, and social science research.

During the program, teams received hands-on support and guidance. To facilitate collaboration, teams were encouraged to describe and document their processes in a structured way (via “process cards” and “run reports”). This enabled faster iteration and easier identification of opportunities to integrate with other teams’ prototypes. Additionally, OpenAI facilitated a special Demo Day in September for the teams to showcase their concepts to one another, OpenAI staff, and researchers from other AI labs and academia.

The projects spanned different aspects of participatory engagement, such as novel video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behavior. Notably, across nearly all projects, AI itself played a useful role as part of the processes in the form of customized chat interfaces, voice-to-text transcription, data synthesis, and more.

Today, along with lessons learned, we share the code that teams created for this grant program, and present brief summaries of the work accomplished by each of the ten teams:

The ten teams: Case Law, Collective Dialogues, Deliberation at Scale, Democratic Fine-Tuning, Energize.AI, Generative Social Choice, Inclusive.AI, Rappler, Recursive Public, and Ubuntu-AI.

Key learnings from the grant program so far

Public opinion can change frequently

Teams captured public views in a variety of ways, and many found that those views changed often.

  • The Democratic Fine-Tuning team created a chatbot that presented scenarios to participants and produced “value cards” that participants could review and evaluate. The Case Law team held expert workshops and represented the experts’ opinions as a set of dimensions and considerations over a specific set of scenarios. The Inclusive.AI team captured both statements and how strongly people felt about them by letting participants distribute voting tokens across many statements rather than cast a single vote (a minimal sketch of this tallying idea appears after this list). Many other teams presented statements accompanied by the proportion of participants in support.

  • Interestingly, many teams found that public opinion changed frequently, even day-to-day, which could have meaningful implications for how frequently input-collecting processes should take place. A collective process should be thorough enough to capture hard-to-change and perhaps more fundamental values, and simultaneously be sensitive enough (or recur frequently enough) to detect meaningful changes of views over time.
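
A minimal sketch of the token-tallying idea referenced above, assuming hypothetical statements, ballots, and a budget of 10 tokens per participant (an illustration of the general mechanism, not the Inclusive.AI team’s actual implementation):

```python
from collections import defaultdict

# Each participant splits a fixed budget of voting tokens across the
# statements they care about, so the tally reflects intensity of
# preference rather than a single up/down vote. All values are hypothetical.
TOKEN_BUDGET = 10

ballots = [
    {"statement_a": 7, "statement_b": 3},
    {"statement_a": 1, "statement_b": 1, "statement_c": 8},
    {"statement_c": 10},
]

def tally(ballots):
    """Sum tokens per statement, rejecting ballots that overspend."""
    totals = defaultdict(int)
    for ballot in ballots:
        if sum(ballot.values()) > TOKEN_BUDGET:
            raise ValueError(f"ballot exceeds budget: {ballot}")
        for statement, tokens in ballot.items():
            totals[statement] += tokens
    return dict(totals)

print(tally(ballots))  # {'statement_a': 8, 'statement_b': 4, 'statement_c': 18}
```

Because the totals carry intensity, a statement a few people care about deeply can register comparably to one that many people only mildly prefer.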

Bridging across the digital divide is still difficult and this can skew results

Reaching relevant participants across digital and cultural divides might require additional investments in better outreach and better tooling. 

  • Some teams found that participants recruited online leaned more optimistic toward AI, a trait that was correlated with increased support and enthusiasm for AI model behavior in general. 

  • Furthermore, most teams faced serious difficulty recruiting participants across the digital divide, owing to the limited reach or availability of the recruitment platforms we consulted.

  • More subtly, even when citizens of global majority countries are included, the tools might be less useful to them due to limitations in understanding the local language or context. For example, in their online and on-the-ground focus group discussions, the Rappler team found that the documented disparities in performance across languages of readily available speech recognition tools like Whisper made transcription difficult in participants’ spoken languages, e.g., Tagalog, Binisaya, and Hiligaynon, which are major Filipino languages (one way to quantify such disparities is sketched after this list).

  • The Ubuntu-AI team chose to directly incentivize participation by developing a platform that allows African creatives to receive compensation for contributing to machine learning about their own designs and backgrounds.
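
One way to quantify the transcription disparities described above is to compare word error rates against human reference transcripts per language. A minimal sketch, assuming the open-source `whisper` and `jiwer` packages plus hypothetical audio files and reference transcripts (not the Rappler team’s actual pipeline):

```python
import whisper          # open-source speech recognition (pip install openai-whisper)
from jiwer import wer   # word error rate metric (pip install jiwer)

model = whisper.load_model("base")

# Hypothetical (audio file, language code, human reference transcript) triples.
samples = [
    ("tagalog_focus_group.wav", "tl", "reference_tagalog.txt"),
    ("english_focus_group.wav", "en", "reference_english.txt"),
]

for audio_path, lang, ref_path in samples:
    result = model.transcribe(audio_path, language=lang)
    with open(ref_path, encoding="utf-8") as f:
        reference = f.read()
    # Systematically higher WER for some languages exposes the disparity.
    print(lang, wer(reference, result["text"]))
```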

Finding agreement within polarized groups

Finding a compromise can be hard when a small group has strong opinions on a particular issue.

  • The Collective Dialogues team found that every session contained a small group of people who felt strongly that restricting AI assistants from answering certain questions was wrong no matter what. Because this group was small, majority voting yielded outcomes that they strongly disagreed with.

  • The Collective Dialogues, Energize.AI, and Recursive Public teams designed their processes to find policy proposals that would be strongly supported across polarized groups. For example, all policy guidelines generated by the Collective Dialogues process with U.S. participants (including on vaccine information, a known divisive issue) had over 72% support across Democrats, Independents, and Republicans. A sketch of one way to score proposals by cross-group rather than overall support follows this list.
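
One simple way to operationalize “strongly supported across polarized groups” is to score each proposal by the support of its least-supportive subgroup instead of its overall average. A hedged sketch with hypothetical proposals, subgroups, and support rates (an illustration of the general idea, not the teams’ actual method):

```python
# Fraction of each (hypothetical) subgroup endorsing each proposal.
support = {
    "proposal_1": {"democrat": 0.99, "independent": 0.95, "republican": 0.40},
    "proposal_2": {"democrat": 0.78, "independent": 0.75, "republican": 0.74},
}

def overall_score(groups):
    """Average support, treating subgroups as equally sized."""
    return sum(groups.values()) / len(groups)

def bridging_score(groups):
    """Support of the least-supportive subgroup: high only when a
    proposal is endorsed across every group, not just by a majority."""
    return min(groups.values())

for name, groups in support.items():
    print(name, round(overall_score(groups), 2), bridging_score(groups))
# proposal_1 0.78 0.4   <- wins on overall support, but fails a whole subgroup
# proposal_2 0.76 0.74  <- slightly lower overall, yet broadly endorsed
```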

Reaching consensus vs. representing diversity

When trying to produce a single outcome or decision on behalf of a group, there might be tension between reaching consensus and adequately representing diverse opinions. It’s not just about siding with the majority, but also giving a platform to different viewpoints.

  • The Generative Social Choice team devised a method that highlights a few key positions, showcasing the range of opinions while finding some common ground, and used mathematical theory to help navigate this balance (a generic formulation of this kind of representation guarantee appears after this list).

  • Meanwhile, the Inclusive.AI team investigated different voting mechanisms and how they are perceived. They found that methods that reflect how strongly people feel about their choices and that give everyone an equal say are perceived as more democratic and fair.
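
For intuition, the kind of representation guarantee at play can be stated in the language of computational social choice. A generic justified-representation-style condition (an illustrative formulation, not necessarily the exact guarantee the team proved):

```latex
% N: set of n participants; A_i: the statements participant i endorses;
% W: the selected slate of k statements.
% Guarantee: every large-enough group of like-minded participants is
% represented by at least one statement on the slate.
\[
\forall S \subseteq N:\;
|S| \ge \frac{n}{k}
\;\wedge\;
\bigcap_{i \in S} A_i \ne \emptyset
\;\Longrightarrow\;
\exists\, i \in S:\; A_i \cap W \ne \emptyset
\]
```

Selecting a slate that satisfies such a condition is what lets a method surface a few key positions while ensuring that no sizable, cohesive group goes unrepresented.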

Hopes and anxieties about the future of AI governance

Some participants felt nervous about the use of AI in writing policy and would like transparency regarding when and how AI is applied in democratic processes. After deliberation sessions, many teams found that participants became more hopeful about the public's ability to help guide AI.

  • In collaborations with a municipal government and roundtables with various stakeholders, both the Deliberation at Scale and Recursive Public teams found that while there is clear interest in the role AI itself might play in improving democratic processes, there is also an air of caution around how much power or influence democratic institutions should grant to these systems and their developers. 

  • The Collective Dialogues team found that combining AI with non-AI steps in a decision-making process (like expert curation of AI-generated policy clauses, or a final vote on a policy informed by AI) resulted in a process that had AI-enabled efficiency while still being perceived as trustworthy and legitimate by the public.

  • In the Collective Dialogues team’s process, a popular clause emerged during deliberations across different participant groups, roughly stating that the chosen policy should be “expanded on and updated regularly as new issues arise, better understanding is developed, and AI's capabilities evolve.” On average, this clause was supported by 85% of participants.

  • The Collective Dialogues and Deliberation at Scale teams found that the act of participating in a deliberation session on an AI policy issue made people more likely to think the public was capable of helping guide AI behavior in general.

Our implementation plans

Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we are forming a “Collective Alignment” team of researchers and engineers who will:

  • Implement a system for collecting public input on model behavior and encoding it into our systems.

  • Continue to work with external advisors and grant teams, including running pilots to incorporate the grant prototypes into steering our models.

We are recruiting exceptional research engineers from diverse technical backgrounds to help build this work with us. If you’re excited about what we’re doing, please apply to join us!

Authors

Tyna Eloundou, Teddy Lee

Acknowledgments

AJ Ostrow, Alex Beutel, Andrea Vallone, Arka Dhar, Atoosa Kasirzadeh, Artemis Seaford, Aviv Ovadya, Chris Clark, Colin Megill, Erik Ritter, Gillian Hadfield, Greg Brockman, Gretchen Krueger, Hélène Landemore, Jason Kwon, Lama Ahmad, Miguel Manriquez, Miles Brundage, Natalie Cone, Ryan Lowe, Shibani Santurkar, Wojciech Zaremba, Yo Shavit