DALL·E 3

DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.

About DALL·E 3

DALL·E 3 is now available to all ChatGPT Plus, Team and Enterprise users, as well as to developers through our API.
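For developers, here is a minimal sketch of generating an image with DALL·E 3 through the API, assuming the official openai Python package (v1 or later) and an API key in the environment; it reuses the basketball prompt shown later on this page.

    # Minimal sketch: generating one image with DALL·E 3 via the Images API.
    # Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    response = client.images.generate(
        model="dall-e-3",
        prompt=(
            "An expressive oil painting of a basketball player dunking, "
            "depicted as an explosion of a nebula."
        ),
        size="1024x1024",
        n=1,  # DALL·E 3 generates one image per request
    )

    print(response.data[0].url)  # URL of the generated image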

Modern text-to-image systems have a tendency to ignore words or descriptions, forcing users to learn prompt engineering. DALL·E 3 represents a leap forward in our ability to generate images that exactly adhere to the text you provide.
Even with the same prompt, DALL·E 3 delivers significant improvements over DALL·E 2.
DALL·E 2 vs. DALL·E 3, same prompt: “An expressive oil painting of a basketball player dunking, depicted as an explosion of a nebula.”

DALL·E 3 is built natively on ChatGPT, which lets you use ChatGPT as a brainstorming partner and refiner of your prompts. Just ask ChatGPT what you want to see in anything from a simple sentence to a detailed paragraph.

DALL·E 3 in ChatGPT

When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image, but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.
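A similar prompt expansion is visible when calling DALL·E 3 through the API: each image in the response carries a revised_prompt field with the detailed prompt that was actually rendered. A minimal sketch, again assuming the openai Python package:

    # Minimal sketch: inspecting the expanded prompt DALL·E 3 actually rendered.
    # Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    response = client.images.generate(
        model="dall-e-3",
        prompt="a cozy reading nook on a rainy afternoon",  # hypothetical example prompt
        n=1,
    )

    print(response.data[0].revised_prompt)  # the detailed prompt used for generation
    print(response.data[0].url)             # the resulting image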

DALL·E 3 will be available to ChatGPT Plus and Enterprise customers in early October. As with DALL·E 2, the images you create with DALL·E 3 are yours to use and you don't need our permission to reprint, sell or merchandise them.


A focus on safety

As with previous versions, we’ve taken steps to limit DALL·E 3’s ability to generate violent, adult, or hateful content.

Preventing harmful generations

DALL·E 3 has mitigations to decline requests that ask for a public figure by name. Working with red teamers, domain experts who stress-test the model, we improved safety performance in risk areas such as the generation of public figures and harmful biases related to visual over- or under-representation; their feedback also helps inform our risk assessment and mitigation efforts in areas like propaganda and misinformation.

Internal testing

We’re also researching the best ways to help people identify when an image was created with AI. We’re experimenting with a provenance classifier, a new internal tool that can help us identify whether an image was generated by DALL·E 3, and we hope to use it to better understand the ways generated images might be used. We’ll share more soon.

Creative control

DALL·E 3 is designed to decline requests that ask for an image in the style of a living artist. Creators can now also opt their images out from training of our future image generation models.

Credits

Core Research and Execution
Gabriel Goh, James Betker, Li Jing, Aditya Ramesh

Research Contributors—Primary
Tim Brooks, Jianfeng Wang, Lindsey Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Prafulla Dhariwal, Casey Chu, Joy Jiao

Research Contributors—Secondary
Jong Wook Kim, Alex Nichol, Yang Song, Lijuan Wang, Tao Xu


Inference Optimization
Connor Holmes, Arash Bakhtiari, Umesh Chand, Zhewei Yao, Samyam Rajbhandari, Yuxiong He


Product—Primary
Yufei Guo, Luke Miller, Joyce Lee, Wesam Manassra, Anton Tananaev, Chester Cho, Rachel Lim, Meenaz Merchant

Product—Secondary
Dave Cummings, Rajeev Nayak, Sriya Santhanam


Safety—Primary
Sandhini Agarwal, Michael Lampe, Katarina Slama, Kim Malfacini, Bilva Chandra, Ashyana-Jasmine Kachra, Rosie Campbell, Florencia Leoni Aleman, Madelaine Boyd, Shengli Hu, Johannes Heidecke

Safety—Secondary
Lama Ahmad, Chelsea Carlson, Henry Head, Andrea Vallone, CJ Weinmann, Lilian Weng


Communications
Alex Baker-Whitcomb, Ryan Biddy, Ruby Chen, Thomas Degry, Niko Felix, Elie Georges, Lindsey Held, Chad Nelson, Kendra Rimbach, Natalie Summers, Justin Wang, Hannah Wong, Kayla Wood


Legal and Public Policy
Che Chang, Jason Kwon, Fred von Lohmann, Ashley Pantuliano, David Robinson, Tom Rubin, Thomas Stasi


Special Thanks
Alec Radford, Mark Chen, Katie Mayer, Misha Bilenko, Mikhail Parakhin, Bob McGrew, Mira Murati, Greg Brockman, Sam Altman