
DALL·E 3

DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.

A woman underneath a cherry blossom tree is setting up a picnic on a yellow checkered blanket around sunset. Behind her, a small, calm body of water containing a boat with four figures on their way to a pagoda in the middle of the water.

DALL·E 3 makes notable improvements over DALL·E 2, even when given the same prompt.

An expressive oil painting of a basketball player dunking, depicted as an explosion of a nebula.

DALL·E 3 is built natively on ChatGPT, which lets you use ChatGPT as a brainstorming partner and refiner of your prompts. Just ask ChatGPT what you want to see in anything from a simple sentence to a detailed paragraph.

When prompted with an idea, ChatGPT will automatically generate tailored, detailed prompts for DALL·E 3 that bring your idea to life. If you like a particular image, but it’s not quite right, you can ask ChatGPT to make tweaks with just a few words.

As with DALL·E 2, the images you create with DALL·E 3 are yours to use, and you don't need our permission to reprint, sell, or merchandise them.

  • Preventing harmful generations

    DALL·E 3 includes mitigations to decline requests that ask for an image of a public figure by name. We also improved safety performance in risk areas such as the generation of public figures and harmful biases related to visual over- or under-representation, working with red teamers (domain experts who stress-test the model) to inform our risk assessment and mitigation efforts in areas like propaganda and misinformation.

  • Internal testing

    We’re also researching the best ways to help people identify when an image was created with AI. We’re experimenting with a provenance classifier, a new internal tool that can help us identify whether an image was generated by DALL·E 3, and hope to use it to better understand how generated images might be used. We’ll share more soon.

Credits

Core research and execution

Gabriel Goh, James Betker, Li Jing, Aditya Ramesh

Research contributors—primary

Tim Brooks, Jianfeng Wang, Lindsey Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Prafulla Dhariwal, Casey Chu, Joy Jiao

Research contributors—secondary

Jong Wook Kim, Alex Nichol, Yang Song, Lijuan Wang, Tao Xu

Inference optimization

Connor Holmes, Arash Bakhtiari, Umesh Chand, Zhewei Yao, Samyam Rajbhandari, Yuxiong He

Product—primary

Yufei Guo, Luke Miller, Joyce Lee, Wesam Manassra, Anton Tananaev, Chester Cho, Rachel Lim, Meenaz Merchant

Product—secondary

Dave Cummings, Rajeev Nayak, Sriya Santhanam

Safety—primary

Sandhini Agarwal, Michael Lampe, Katarina Slama, Kim Malfacini, Bilva Chandra, Ashyana-Jasmine Kachra, Rosie Campbell, Florencia Leoni Aleman, Madelaine Boyd, Shengli Hu, Johannes Heidecke

Safety—secondary

Lama Ahmad, Chelsea Carlson, Henry Head, Andrea Vallone, CJ Weinmann, Lilian Weng

Communications

Alex Baker-Whitcomb, Ryan Biddy, Ruby Chen, Thomas Degry, Niko Felix, Elie Georges, Lindsey Held, Chad Nelson, Kendra Rimbach, Natalie Summers, Justin Wang, Hannah Wong, Kayla Wood

Legal and public policy

Che Chang, Jason Kwon, Fred von Lohmann, Ashley Pantuliano, David Robinson, Tom Rubin, Thomas Stasi

Special thanks

Alec Radford, Mark Chen, Katie Mayer, Misha Bilenko, Mikhail Parakhin, Bob McGrew, Mira Murati, Greg Brockman, Sam Altman