August 16, 2024

Disrupting a covert Iranian influence operation

We banned accounts linked to an Iranian influence operation using ChatGPT to generate content focused on multiple topics, including the U.S. presidential campaign. We have seen no indication that this content reached a meaningful audience.

OpenAI is committed to preventing abuse and improving transparency around AI-generated content. This includes our work to detect and stop covert influence operations (IO), which try to manipulate public opinion or influence political outcomes while hiding the true identity or intentions of the actors behind them. This is especially important in the context of the many elections being held in 2024. We have expanded our work in this area throughout the year, including by leveraging our own AI models to better detect and understand abuse. 

This week we identified and took down a cluster of ChatGPT accounts that were generating content for a covert Iranian influence operation identified as Storm-2035. We have banned these accounts from using our services, and we continue to monitor for any further attempts to violate our policies. The operation used ChatGPT to generate content on a number of topics, including commentary on candidates on both sides of the U.S. presidential election, which it then shared via social media accounts and websites.

Similar to the covert influence operations we reported in May, this operation does not appear to have achieved meaningful audience engagement. The majority of social media posts that we identified received few or no likes, shares, or comments. We similarly found no indication of the web articles being shared across social media. On Brookings' Breakout Scale, which rates the impact of covert IO from 1 (lowest) to 6 (highest), this operation sat at the low end of Category 2: activity on multiple platforms, but no evidence that real people picked up or widely shared its content. Our investigation benefited from information about the operation published by Microsoft last week.

Our investigation revealed that this operation used ChatGPT for two purposes: generating long-form articles and shorter social media comments. The first workstream produced articles on U.S. politics and global events, published on five websites that posed as both progressive and conservative news outlets. The second workstream created short comments in English and Spanish, which were posted on social media. We identified a dozen accounts on X and one on Instagram involved in this operation. Some of the X accounts posed as progressives, and others as conservatives. They generated some of these comments by asking our models to rewrite comments posted by other social media users.

The operation generated content on several topics: mainly the conflict in Gaza, Israel's presence at the Olympic Games, and the U.S. presidential election, and to a lesser extent politics in Venezuela, the rights of Latinx communities in the U.S. (in both Spanish and English), and Scottish independence. The operators interspersed their political content with comments about fashion and beauty, possibly to appear more authentic or in an attempt to build a following.

Despite the lack of meaningful audience engagement, we take seriously any effort to use our services in foreign influence operations. Accordingly, after removing the accounts from our services, we shared threat intelligence with government, campaign, and industry stakeholders to support the wider community in disrupting this activity. OpenAI remains dedicated to uncovering and mitigating this type of abuse at scale by partnering with industry, civil society, and government, and by harnessing the power of generative AI as a force multiplier in our work. We will continue to publish findings like these to promote information-sharing and best practices.

The image shows two website headers. The top headline reads: “Why Kamala Harris Picked Tim Walz as Her Running Mate.” The bottom is from “The Creator” with the headline: “X Censors Trump’s Tweets: A Hidden War on Free Speech?”

Headlines of two articles generated by this operation and published on two of its websites. We did not see evidence of these articles being widely shared on social media.

The image shows two social media posts. The first criticizes Kamala Harris’s policies on immigration and climate change, using the hashtag #DumpKamala. The second post criticizes Donald Trump, warning against his leadership, with the hashtag #DumpTrump.

Posts about U.S. election candidates generated by this operation, posted by two different accounts on X. These posts garnered little to no engagement.

The image shows two social media posts about inner beauty. The left post in Spanish features a woman looking at sunlight, and the right post in English shows a woman facing pink flowers. Both posts use the hashtag #VitalCaplini.

Posts about beauty generated by the operation in English and Spanish, posted by two different accounts on X. We ran these images through our DALL·E 3 classifier, which identified them as not having been generated by our services. These posts garnered little to no engagement.

Domains connected with this activity:

  • niothinker[.]com

  • savannahtime[.]com

  • evenpolitics[.]com

  • teorator[.]com

  • westlandsun[.]com