January 5, 2021

DALL·E: Creating images from text

We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.

Illustration: Justin Jay Wang

DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. We’ve found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images.

See also: DALL·E 2, which generates more realistic and accurate images with 4x greater resolution.

GPT-3 showed that language can be used to instruct a large neural network to perform a variety of text generation tasks. Image GPT showed that the same type of neural network can also be used to generate images with high fidelity. We extend these findings to show that manipulating visual concepts through language is now within reach.

Overview

Like GPT-3, DALL·E is a transformer language model. It receives both the text and the image as a single stream of data containing up to 1280 tokens, and is trained using maximum likelihood to generate all of the tokens, one after another.[A]
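
To make the objective concrete, the sketch below shows what maximum-likelihood training over the combined stream might look like in PyTorch. The `model`, `text_tokens`, and `image_tokens` names are illustrative assumptions rather than released code: we assume the caption has already been BPE-encoded and the image has already been mapped to discrete tokens.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the training objective, assuming `model` is a decoder-only
# transformer mapping a token sequence to next-token logits, and that
# `text_tokens` (up to 256 ids) and `image_tokens` (1024 ids) were produced by
# the text BPE and the discrete VAE, respectively. Names are illustrative.
def training_step(model, text_tokens, image_tokens):
    # Concatenate text and image tokens into a single stream of up to 1280 ids.
    stream = torch.cat([text_tokens, image_tokens], dim=-1)   # (batch, <=1280)

    # Next-token prediction: predict token t from all tokens before it.
    inputs, targets = stream[:, :-1], stream[:, 1:]
    logits = model(inputs)                                    # (batch, len, vocab)

    # Maximum likelihood amounts to cross-entropy over every position in the stream.
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    return loss
```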

This training procedure allows DALL·E to not only generate an image from scratch, but also to regenerate any rectangular region of an existing image that extends to the bottom-right corner, in a way that is consistent with the text prompt.
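
Because the image tokens are laid out in raster order, this kind of completion can be sketched as forced decoding: tokens outside the region are copied from the original image, and tokens inside it are resampled conditioned on the caption. The helper below is a hypothetical illustration of that idea; the `sample_next` function, which draws one token from the model given the current stream, is an assumed stand-in, not an actual DALL·E API.

```python
import torch

# Sketch of text-conditional regeneration of a rectangular region that extends
# to the bottom-right corner of the 32x32 token grid, assuming raster-order
# image tokens and a hypothetical sample_next(model, stream) helper.
@torch.no_grad()
def regenerate_region(model, text_tokens, image_tokens, top, left, sample_next):
    grid = 32
    stream = text_tokens
    for pos in range(grid * grid):
        row, col = divmod(pos, grid)
        if row >= top and col >= left:
            token = sample_next(model, stream)        # inside the region: resample
        else:
            token = image_tokens[:, pos:pos + 1]      # outside the region: keep original
        stream = torch.cat([stream, token], dim=-1)
    return stream[:, text_tokens.size(-1):]           # the new 1024 image tokens
```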

We recognize that work involving generative models has the potential for significant, broad societal impacts. In the future, we plan to analyze how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer term ethical challenges implied by this technology.

Capabilities

We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language. We illustrate this using a series of interactive visuals in the next section. The samples shown for each caption in the visuals are obtained by taking the top 32 of 512 after reranking with CLIP, but we do not use any manual cherry-picking, aside from the thumbnails and standalone images that appear outside of them.[B]

Controlling attributes

We test DALL·E’s ability to modify several of an object’s attributes, as well as the number of times that it appears.

Drawing multiple objects

Simultaneously controlling multiple objects, their attributes, and their spatial relationships presents a new challenge. For example, consider the phrase “a hedgehog wearing a red hat, yellow gloves, blue shirt, and green pants.” To correctly interpret this sentence, DALL·E must not only correctly compose each piece of apparel with the animal, but also form the associations (hat, red), (gloves, yellow), (shirt, blue), and (pants, green) without mixing them up.[C]

We test DALL·E’s ability to do this for relative positioning, stacking objects, and controlling multiple attributes.

While DALL·E does offer some level of controllability over the attributes and positions of a small number of objects, the success rate can depend on how the caption is phrased. As more objects are introduced, DALL·E is prone to confusing the associations between the objects and their colors, and the success rate decreases sharply. We also note that DALL·E is brittle with respect to rephrasing of the caption in these scenarios: alternative, semantically equivalent captions often yield no correct interpretations.

Visualizing perspective and three-dimensionality

We find that DALL·E also allows for control over the viewpoint of a scene and the 3D style in which a scene is rendered.

To push this further, we test DALL·E’s ability to repeatedly draw the head of a well-known figure at each angle from a sequence of equally spaced angles, and find that we can recover a smooth animation of the rotating head.

DALL·E appears to be able to apply some types of optical distortions to scenes, as we see with the options “fisheye lens view” and “a spherical panorama.” This motivated us to explore its ability to generate reflections.

Visualizing internal and external structure

The samples from the “extreme close-up view” and “x-ray” style led us to further explore DALL·E’s ability to render internal structure with cross-sectional views, and external structure with macro photographs.

Inferring contextual details

The task of translating text to images is underspecified: a single caption generally corresponds to an infinitude of plausible images, so the image is not uniquely determined. For instance, consider the caption “a painting of a capybara sitting on a field at sunrise.” Depending on the orientation of the capybara, it may be necessary to draw a shadow, though this detail is never mentioned explicitly. We explore DALL·E’s ability to resolve underspecification in three cases: changing style, setting, and time; drawing the same object in a variety of different situations; and generating an image of an object with specific text written on it.

With varying degrees of reliability, DALL·E provides access to a subset of the capabilities of a 3D rendering engine via natural language. It can independently control the attributes of a small number of objects, and to a limited extent, how many there are, and how they are arranged with respect to one another. It can also control the location and angle from which a scene is rendered, and can generate known objects in compliance with precise specifications of angle and lighting conditions.

Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to “fill in the blanks” when the caption implies that the image must contain a certain detail that is not explicitly stated.

Applications of preceding capabilities

Next, we explore the use of the preceding capabilities for fashion and interior design.

Combining unrelated concepts

The compositional nature of language allows us to put together concepts to describe both real and imaginary things. We find that DALL·E also has the ability to combine disparate ideas to synthesize objects, some of which are unlikely to exist in the real world. We explore this ability in two instances: transferring qualities from various concepts to animals, and designing products by taking inspiration from unrelated concepts.

Animal illustrations

In the previous section, we explored DALL·E’s ability to combine unrelated concepts when generating images of real-world objects. Here, we explore this ability in the context of art, for three kinds of illustrations: anthropomorphized versions of animals and objects, animal chimeras, and emojis.

Zero-shot visual reasoning

GPT-3 can be instructed to perform many kinds of tasks solely from a description and a cue to generate the answer supplied in its prompt, without any additional training. For example, when prompted with the phrase “here is the sentence ‘a person walking his dog in the park’ translated into French:”, GPT-3 answers “un homme qui promène son chien dans le parc.” This capability is called zero-shot reasoning. We find that DALL·E extends this capability to the visual domain, and is able to perform several kinds of image-to-image translation tasks when prompted in the right way.
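
One way to pose such an image-to-image task with this architecture, reusing the hypothetical `regenerate_region` sketch from earlier, is to write a caption describing both halves of the desired image and supply only the top half of the source image, letting the model complete the bottom half. The encoder names below are assumptions for illustration, not part of an actual API.

```python
# Hypothetical zero-shot image-to-image prompt: the caption describes the
# transformation, the top 16 rows of image tokens are kept, and the bottom
# half is regenerated. encode_text / encode_image_tokens are assumed wrappers
# around the BPE tokenizer and the discrete VAE encoder.
caption = "the exact same cat on the top as a sketch on the bottom"
text_tokens = encode_text(caption)                    # up to 256 BPE token ids
image_tokens = encode_image_tokens(source_image)      # 1024 ids on the 32x32 grid
sketch_tokens = regenerate_region(model, text_tokens, image_tokens,
                                  top=16, left=0, sample_next=sample_next)
```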

We did not anticipate that this capability would emerge, and made no modifications to the neural network or training procedure to encourage it. Motivated by these results, we measure DALL·E’s aptitude for analogical reasoning problems by testing it on Raven’s progressive matrices, a visual IQ test that saw widespread use in the 20th century.

Geographic knowledge

We find that DALL·E has learned about geographic facts, landmarks, and neighborhoods. Its knowledge of these concepts is surprisingly precise in some ways and flawed in others.

Temporal knowledge

In addition to exploring DALL·E’s knowledge of concepts that vary over space, we also explore its knowledge of concepts that vary over time.

Summary of approach and prior work

DALL·E is a simple decoder-only transformer that receives both the text and the image as a single stream of 1280 tokens (256 for the text and 1024 for the image) and models all of them autoregressively. The attention mask at each of its 64 self-attention layers allows each image token to attend to all text tokens. DALL·E uses the standard causal mask for the text tokens, and sparse attention for the image tokens with either a row, column, or convolutional attention pattern, depending on the layer. We provide more details about the architecture and training procedure in our paper.
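
As a rough illustration of the masking scheme (not the model’s actual sparse kernels), the snippet below builds a dense stand-in: because the 256 text positions precede the 1024 image positions in the stream, an ordinary causal mask already lets every image token attend to all text tokens. The row, column, and convolutional sparse patterns that restrict image-to-image attention per layer are omitted here.

```python
import torch

# Dense stand-in for the attention mask described above: standard causal
# masking over the full 1280-token stream (256 text + 1024 image positions).
# Since text comes first, each image token can attend to every text token.
# The per-layer sparse row/column/convolutional patterns are not reproduced.
def build_dense_mask(n_text=256, n_img=1024):
    n = n_text + n_img
    return torch.tril(torch.ones(n, n, dtype=torch.bool))   # True = attention allowed
```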

Text-to-image synthesis has been an active area of research since the pioneering work of Reed et al. [1], whose approach uses a GAN conditioned on text embeddings. The embeddings are produced by an encoder pretrained using a contrastive loss, not unlike CLIP. StackGAN [3] and StackGAN++ [4] use multi-scale GANs to scale up the image resolution and improve visual fidelity. AttnGAN [5] incorporates attention between the text and image features, and proposes a contrastive text–image feature matching loss as an auxiliary objective. This is interesting to compare to our reranking with CLIP, which is done offline. Other work [2, 6, 7] incorporates additional sources of supervision during training to improve image quality. Finally, work by Nguyen et al. [8] and Cho et al. [9] explores sampling-based strategies for image generation that leverage pretrained multimodal discriminative models.

Similar to the rejection sampling used in VQ-VAE-2 [15], we use CLIP to rerank the top 32 of 512 samples for each caption in all of the interactive visuals. This procedure can also be seen as a kind of language-guided search [16], and can have a dramatic impact on sample quality.
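
A hedged sketch of that reranking step: draw 512 candidates for a caption, score each image–caption pair with CLIP, and keep the 32 highest-scoring samples. The `generate_image` and `clip_score` callables are assumed stand-ins, not actual APIs from either model’s release.

```python
import torch

# Sketch of CLIP reranking: keep the top 32 of 512 samples per caption.
# generate_image(caption) -> image and clip_score(image, caption) -> float are
# hypothetical wrappers around the generator and CLIP similarity scoring.
@torch.no_grad()
def rerank(caption, generate_image, clip_score, n_samples=512, n_keep=32):
    images = [generate_image(caption) for _ in range(n_samples)]
    scores = torch.tensor([clip_score(img, caption) for img in images])
    best = scores.topk(n_keep).indices            # indices of the best-matching samples
    return [images[i] for i in best]
```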

Footnotes

A.

    A token is any symbol from a discrete vocabulary; for humans, each English letter is a token from a 26-letter alphabet. DALL·E’s vocabulary has tokens for both text and image concepts. Specifically, each image caption is represented using a maximum of 256 BPE-encoded tokens with a vocabulary size of 16384, and the image is represented using 1024 tokens with a vocabulary size of 8192.

    The images are preprocessed to 256x256 resolution during training. Similar to VQ-VAE [14], each image is compressed to a 32x32 grid of discrete latent codes using a discrete VAE [10, 11] that we pretrained using a continuous relaxation [12, 13]. We found that training using the relaxation obviates the need for an explicit codebook, EMA loss, or tricks like dead code revival, and can scale up to large vocabulary sizes.
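
For concreteness, here is a minimal sketch of how such a discrete VAE encoder might look with the Gumbel-softmax relaxation. The convolutional stack is an illustrative assumption (any encoder that downsamples 256x256 inputs to a 32x32 grid of logits over the 8192 codes fits the description above), and a full model would also include a decoder and a reconstruction loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative discrete-VAE encoder: maps a 256x256 RGB image to a 32x32 grid
# of logits over 8192 codes, trained with the Gumbel-softmax relaxation so no
# explicit codebook or EMA updates are needed. The architecture is a placeholder.
class DiscreteVAEEncoder(nn.Module):
    def __init__(self, vocab_size=8192):
        super().__init__()
        self.net = nn.Sequential(                          # 256 -> 128 -> 64 -> 32
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, vocab_size, 4, stride=2, padding=1),
        )

    def forward(self, images, tau=1.0):
        logits = self.net(images)                          # (batch, 8192, 32, 32)
        if self.training:
            # Continuous relaxation: differentiable soft one-hot codes.
            return F.gumbel_softmax(logits, tau=tau, hard=False, dim=1)
        # At inference, the argmax gives the 32x32 = 1024 discrete token ids.
        return logits.argmax(dim=1)                        # (batch, 32, 32)
```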

B.

    Further details provided in a later section.

C.

    This task is called variable binding, and has been extensively studied in the literature [17, 18, 19, 20].

References

1. Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., Lee, H. (2016). “Generative adversarial text to image synthesis”. In ICML 2016.

2. Reed, S., Akata, Z., Mohan, S., Tenka, S., Schiele, B., Lee, H. (2016). “Learning what and where to draw”. In NIPS 2016.

3. Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., Metaxas, D. (2016). “StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks”. In ICCV 2017.

4. Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., Metaxas, D. (2017). “StackGAN++: Realistic image synthesis with stacked generative adversarial networks”. In IEEE TPAMI 2018.

5. Xu, T., Zhang, P., Huang, Q., Zhang, H., Gan, Z., Huang, X., He, X. (2017). “AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks”.

6. Li, W., Zhang, P., Zhang, L., Huang, Q., He, X., Lyu, S., Gao, J. (2019). “Object-driven text-to-image synthesis via adversarial training”. In CVPR 2019.

7. Koh, J. Y., Baldridge, J., Lee, H., Yang, Y. (2020). “Text-to-image generation grounded by fine-grained user attention”. In WACV 2021.

8. Nguyen, A., Clune, J., Bengio, Y., Dosovitskiy, A., Yosinski, J. (2016). “Plug & play generative networks: Conditional iterative generation of images in latent space”.

9. Cho, J., Lu, J., Schwenk, D., Hajishirzi, H., Kembhavi, A. (2020). “X-LXMERT: Paint, caption, and answer questions with multi-modal transformers”. In EMNLP 2020.

10. Kingma, D. P., Welling, M. (2013). “Auto-encoding variational Bayes”. arXiv preprint.

11. Rezende, D. J., Mohamed, S., Wierstra, D. (2014). “Stochastic backpropagation and approximate inference in deep generative models”. arXiv preprint.

12. Jang, E., Gu, S., Poole, B. (2016). “Categorical reparameterization with Gumbel-Softmax”.

13. Maddison, C., Mnih, A., Teh, Y. W. (2016). “The Concrete distribution: A continuous relaxation of discrete random variables”.

14. van den Oord, A., Vinyals, O., Kavukcuoglu, K. (2017). “Neural discrete representation learning”.

15. Razavi, A., van den Oord, A., Vinyals, O. (2019). “Generating diverse high-fidelity images with VQ-VAE-2”.

16. Andreas, J., Klein, D., Levine, S. (2017). “Learning with latent language”.

17. Smolensky, P. (1990). “Tensor product variable binding and the representation of symbolic structures in connectionist systems”.

18. Plate, T. (1995). “Holographic reduced representations: Convolution algebra for compositional distributed representations”.

19. Gayler, R. (1998). “Multiplicative binding, representation operators & analogy”.

20. Kanerva, P. (1997). “Fully distributed representations”.